About this Report
This GRI report builds upon the EDGE principles – explainability, data, governance, ethics – for responsible development and deployment of artificial intelligence (AI) systems. The principles are applied to Generative AI (GenAI), a sub-category of AI models with the ability to generate new and creative content that has not been encountered before.
The report goes through each of the EDGE principles, citing real-world examples to illustrate the challenges faced by organizations in implementing GenAI applications, including:
- Explainability – Researchers are in the process of mapping how large language models make decisions, like how they interpret and respond to concepts like “The Golden Gate Bridge” or gender bias. The hope is that eventually this will lead to the ability to explain why a given GenAI model behaves the way it does, and to modify that behaviour if needed.
- Data – A study into Claude 2, a GenAI model, showed that the model demonstrated significant biases in making decisions like loan approvals, and these biases could be traced back to the data used to train the model.
- Governance – The report does not provide a case study to demonstrate the challenges of adhering to this principle. It presents a lower degree of risk for financial institutions than the others given that GenAI governance policies can largely be enforced similarly to other model-based risks.
- Ethics – A current lawsuit calls into question whether copyright law should protect the rights of publications – The New York Times in this case – used to train large language models like ChatGPT.
Ultimately, the report finds three of the four EDGE principles (above) present a high degree of difficulty for the financial services sector to address. The principle of governance is the lone outlier, presenting only a medium degree of difficulty.