Ethics & Artificial Intelligence in Finance

  • Dr. Alex LaPlante, Managing Director, Research, Global Risk Institute

INTRODUCTION

We are standing on the cusp of the fourth industrial revolution—the rise of the “intelligent machine.” At the heart of this revolution is Artificial Intelligence (AI): algorithms that allow machines to mimic human cognitive functions like learning, problem-solving, and decision-making. AI offers a plethora of benefits, including increased speed and efficiency, reductions in labour and resource costs, improved customer experience, and enhanced security. Numerous sectors, including automotive, healthcare, retail, and defense, have already witnessed the game-changing impact of AI, and the financial services sector is no exception. AI has begun to replace or augment human decision-makers in business lines across financial institutions, from predictive tasks like fraud detection and risk management to customer interactions like loan approval and wealth advisory. As technological capabilities continue to improve, data collection grows in scale and scope, and competitive pressure from non-traditional financial institutions increases, the use of AI in finance will only become more widespread.

With the advancement and adoption of AI set to continue for the foreseeable future, concerns have grown about the broader implications of such a transformative technology.[1] Unsurprisingly, this has led to calls for government regulation of AI development and restrictions on the use of AI technologies. Perhaps more telling, however, is that tech leaders themselves are voicing concerns. Elon Musk, for one, has repeatedly claimed that artificial intelligence is the biggest existential threat to humanity and has called for regulatory oversight at both the national and international levels.[2] Other notable tech figures like Bill Gates and Steve Wozniak, as well as leading minds like Stephen Hawking, have voiced similar concerns about the long-term risks of AI.[3][4]

While there may be some merit to the notion that AI could cause the eventual demise of the human race, there is an active debate about a number of more immediate ethical concerns surrounding the technology’s pervasive deployment.[5] These concerns range from growing fears that automation will lead to mass unemployment and extreme income inequality, triggering widespread social unrest, to more philosophical questions like the infamous “trolley problem” in the context of driverless cars.[6] There are, however, even more fundamental concerns around the algorithms themselves and the data used to train them.

Interpretability of a model’s results and transparency about how those results were generated are critical to ensuring that the model is aligned with the problem at hand. AI algorithms, often referred to as black-box approaches, suffer from a lack of transparency and interpretability, making it difficult to determine how and why they reach particular conclusions. As a result, identifying model bias or discriminatory behaviour can be challenging. In fact, even world leaders in AI like Google have had significant missteps with unintentional bias. In 2015, for example, Google found itself in hot water when its image-recognition algorithm labelled photos of Black people as gorillas. Even after this incident, there has been no shortage of high-profile cases of discriminatory AI algorithms.
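
To make the bias-detection challenge concrete, the sketch below shows one simple, model-agnostic check that treats the model purely as a black box: comparing approval rates across demographic groups using the widely cited “four-fifths” disparate-impact heuristic. The decisions, group labels, and threshold are all hypothetical; this is a minimal illustration, not a full fairness audit.

```python
# Hypothetical example: surfacing disparate impact in a black-box model's
# loan-approval decisions without inspecting the model's internals.

def disparate_impact_ratio(decisions, groups, protected, reference):
    """Ratio of approval rates for the protected vs. reference group."""
    def approval_rate(label):
        outcomes = [d for d, g in zip(decisions, groups) if g == label]
        return sum(outcomes) / len(outcomes) if outcomes else 0.0

    return approval_rate(protected) / approval_rate(reference)

# Toy outputs from a hypothetical black-box model (1 = approve, 0 = deny),
# aligned with each applicant's demographic group label.
decisions = [1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B", "B", "A", "B", "B"]

ratio = disparate_impact_ratio(decisions, groups, protected="A", reference="B")
print(f"Disparate impact ratio: {ratio:.2f}")

# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
if ratio < 0.8:
    print("Potential adverse impact on group A; investigate further.")
```

Checks like this can only surface symptoms; understanding why an opaque model disadvantages a group typically requires additional interpretability tooling.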

Adding a further layer of complexity is the use of non-traditional data sources like social media and Internet of Things (IoT) technologies. For one, the use of certain types of data, such as race, disability status, and religious affiliation, may be seen as unethical in and of itself and may raise questions about a customer’s data usage and privacy rights. Moreover, unrepresentative or systematically inaccurate data sets can be another key source of bias.
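
As a rough illustration of the data-side problem, the sketch below compares a training sample’s group composition against external benchmark shares (e.g., census or customer-base proportions) to flag under-representation before a model is ever trained. The groups, shares, and tolerance are invented for the example.

```python
# Hypothetical check for unrepresentative training data: compare each
# group's share of the training sample against a benchmark share and
# flag gaps larger than a chosen tolerance.

from collections import Counter

def representation_gaps(sample_groups, benchmark_shares, tolerance=0.05):
    """Return {group: (observed_share, benchmark_share)} for groups whose
    sample share deviates from the benchmark by more than `tolerance`."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    flagged = {}
    for group, expected in benchmark_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            flagged[group] = (observed, expected)
    return flagged

# Invented sample composition and benchmark shares, for illustration only.
sample = ["urban"] * 70 + ["rural"] * 10 + ["suburban"] * 20
benchmark = {"urban": 0.45, "rural": 0.25, "suburban": 0.30}

for group, (obs, exp) in representation_gaps(sample, benchmark).items():
    print(f"{group}: {obs:.0%} of sample vs. {exp:.0%} benchmark share")
```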

From a financial institution’s perspective, AI offers a wide range of potential benefits, but institutions must ensure that the implementation of this technology is both prudent and ethical. To aid in understanding how this can be achieved, this report will introduce AI and detail issues of bias, interpretability, and data security and privacy as they relate to the ethical use of AI algorithms by financial institutions. It will also discuss the key risks that can arise from the unethical use of AI and the considerations that should be made to manage these risks throughout the development and implementation stages.

FOOTNOTES

[1] Kaushal, M., Nolan, S., (2015) “Understanding Artificial Intelligence”, Brookings Institution

[2] Domonoske, C., (2017) “Elon Musk Warns Governors: Artificial Intelligence Poses ‘Existential Risk’”, NPR

[3] Mack, E., (2015) “Bill Gates Says You Should Worry About Artificial Intelligence”, Forbes

[4] Kharpal, A., (2017) “Stephen Hawking says A.I. could be ‘worst event in the history of our civilization’”, CNBC

[5] Bossmann, J., (2016) “Top 9 ethical issues in artificial intelligence”, World Economic Forum

[6] The trolley problem is a thought experiment in ethics first introduced by philosopher Philippa Foot in 1967. It explores the moral dilemma posed by the following situation: a runaway trolley is moving toward five tied-up people lying on the tracks. You are standing next to a lever that controls a switch. If the lever is pulled, the trolley will be redirected onto a side track, saving the five people on the main track. However, there is a single person lying on the side track. From an ethical standpoint, what should you do?