AI Frontiers: Where We Are Today and the Risks and Benefits of an AI-Enabled Future

  • Mike Durland, Special Advisor, Global Risk Institute
  • Matthew Killi, Vice President, DeepLearni.ng

EDITOR’S NOTE:

“AI Frontiers: Where We Are Today and the Risks and Benefits of an AI-Enabled Future” is the next in a series of Global Risk Institute expert papers on the evolving world of machine learning. The authors, Michael Durland and Matthew Killi, are also preparing a GRI paper on the impact of AI on financial services for the fall of 2017.

EXECUTIVE SUMMARY

“Artificial Intelligence” is a very powerful narrative. Indeed, many leading thinkers today envision a future in which machines surpass humans in intelligence. Many of these individuals worry about the abuses of AI, and although they do not dispute its potential for good, they dwell more on the potential harm. Others have a more constructive vision of the future. They see AI as a powerful set of tools with the potential to significantly augment human productivity, and they regard the risk of a “singularity” as overhyped and distracting. In Part One of our two-part series, we assess the potential near-term risks and benefits of Artificial Intelligence. Later, in Part Two, we explore how AI is expected to impact financial services and what specific use cases we expect to see over the next one to three years.

The paper begins by differentiating between the concepts of automation and innovation. Here we define automation as the use of technology as a substitute for an existing process (for example, one carried out using human labour), and innovation as the use of technology to augment human productivity, enabling humans to do things they could not do before. AI has the potential to both automate and innovate. Although subtle, this distinction is important: automation and innovation have very different implications for both the future of labour and human progress.

We discuss the semantics behind AI, and how the phrase “Artificial Intelligence” creates a powerful fictional image that serves both to inspire innovation and to evoke fear. The inspiration is important: fictional narratives such as “Artificial Intelligence” are a vital component of driving human progress forward. Yet at times this particular narrative acts as a negative force, evoking fears that to date seem mostly unfounded.

We then discuss the optimism surrounding AI today. We provide a brief primer on the most important technology underlying AI today, machine learning, and contrast the various learning paradigms: supervised, unsupervised, and reinforcement learning. These concepts are important because they are at the center of an emerging issue in AI, namely who is accountable for the actions of an AI and why the developer must take a deeper role in curation.
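The learning paradigms contrasted above can be illustrated with a minimal, self-contained sketch. All data and algorithms here are toy illustrations of our choosing, not anything from the paper: a nearest-centroid classifier stands in for supervised learning (learning from labelled examples), and a few iterations of one-dimensional k-means stand in for unsupervised learning (finding structure in unlabelled data). Reinforcement learning, in which an agent learns from trial-and-error reward signals rather than a fixed data set, is omitted for brevity.

```python
# Supervised learning: every example carries a label, and the model
# learns a mapping from inputs to labels. Here: one feature, two classes.
labeled = [(-2.0, "A"), (-1.5, "A"), (1.2, "B"), (2.3, "B")]

def nearest_centroid_fit(data):
    """Compute the mean input per label (a toy supervised model)."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def nearest_centroid_predict(model, x):
    """Predict the label whose centroid is closest to x."""
    return min(model, key=lambda y: abs(x - model[y]))

model = nearest_centroid_fit(labeled)
print(nearest_centroid_predict(model, 1.8))  # -> B

# Unsupervised learning: no labels at all; the algorithm discovers
# structure on its own (here, two clusters via 1-D k-means).
unlabeled = [-2.0, -1.5, 1.2, 2.3]
centers = [min(unlabeled), max(unlabeled)]   # simple initialization
for _ in range(5):
    groups = [[], []]
    for x in unlabeled:
        groups[0 if abs(x - centers[0]) < abs(x - centers[1]) else 1].append(x)
    centers = [sum(g) / len(g) for g in groups if g]
print(centers)  # two cluster centers, one per group
```

The distinction matters for accountability: in the supervised case, the developer curates the labels the model imitates; in the unsupervised case, the developer chooses the algorithm and data but never dictates the groupings it finds.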

Following this section, we introduce the concepts of Narrow AI, General AI, and Artificial Super Intelligence. In Narrow AI, machines perform a narrow set of tasks applied to a narrowly defined problem. Narrow AIs can be integrated to produce highly powerful applications; the autonomous vehicle is one example. General AI refers to a machine capable of performing the broad array of intellectual tasks of a human. In General AI, machines have human-like cognitive abilities: they are capable of reasoning, making decisions, learning and communicating in natural language, and operating in an open system. Creating General AI is a much different and more difficult challenge than creating Narrow AI. Artificial Super Intelligence refers to a computer that is “smarter than a human”, a machine capable of performing more than the broad array of intellectual tasks of a human. In this fictional form of machine intelligence, the computer would have the cognitive ability to outperform human brains across a large number of disciplines, even possessing scientific creativity and social skills. Today, all forms of artificial intelligence are instances of Narrow AI. In a world of Narrow AI, we can set aside the notion of AI as an existential threat and instead focus on the impact that Narrow AI is likely to have on the world in which we live today.

We begin our assessment of the risks and benefits of AI by introducing four key factors that are likely to shape the future of AI in the near term: 1) the identification and application of suitable use cases, 2) access to large data sets, 3) the scarcity of talent, and 4) the lack of platform technologies. In other words, to successfully create AI today you must identify a suitable problem, have access to the data required to train the AI to solve that problem, and have access to the talent and tools required to develop the AI.

We build three broad scenarios that help us think about the future of AI: “AI Winter”, “Winner Takes All”, and “Collaborative AI”. These scenarios are used to assess the potential benefits and risks associated with AI in the near future. We consider two potential benefits: an increase in human productivity and efficiency, and an increase in our ability to drive future innovation. The latter benefit is a broad category, but it is meant to capture the tremendous potential for AI to drive future scientific innovation. We consider six potential risks: scope erosion, unemployment, wealth inequality, the exploitation of data, black-box vulnerability, and the creation of new systemic risk. The results of this scenario analysis are summarized in Table 1.

The “AI Winter” scenario, in which AI does not live up to its hype, provides the least benefits. Although a “full” AI winter is deemed unlikely, we do believe that, given the considerable hype surrounding AI, some form of cooling-off period is likely in the near future.

The “Winner Takes All” scenario, in which a number of companies exploit the potential of AI to achieve an early monopoly position, provides moderate benefits and material risks. We believe these risks are tolerable, and indeed likely necessary for society to further the development of important innovations in AI, innovations that increase the potential for material long-term benefits. However, this scenario should grab our attention. The current discourse of disruption and creative destruction must be understood in the context of a tenuous balance: the outcome we want for society is not disruption but progress. One way to achieve this objective is through the democratization of AI.

The “Collaborative AI” scenario attempts to assess what needs to occur for society to maximize the future benefits of AI while minimizing its future risks. To achieve this objective, it is helpful to perceive artificial intelligence not as the automation of human cognition, but rather as an innovation capable of augmenting human productivity and efficiency. In this scenario, AI is perceived not as a substitute for human intelligence in the possession of a concentrated set of large corporations, but as a complement to human intelligence available to the masses. This scenario can be defined as the state in which we have successfully begun the democratization of the benefits of AI.

ABOUT THE AUTHORS

Mike Durland

Mike is the former Group Head and CEO, Global Banking and Markets for Scotiabank. Mike retired from Scotiabank in 2016 to pursue a variety of business, philanthropic and academic interests. Mike is the CEO of Melancthon Capital, a Professor of Practice at the Munk School of Global Affairs, and a member of the Business Strategy Committee for the Global Risk Institute. Mike is a member of a number of corporate, academic, and philanthropic boards, and holds a B Comm degree from St. Mary’s University, a PhD in Management from Queen’s University and an honorary Doctorate from St. Mary’s University.

Matthew Killi

Matthew is Vice President at DeepLearni.ng, a Toronto-based company that creates bespoke AIs for large enterprises and helps organizations unlock value in their data assets with machine learning. Previously, he was a management consultant at McKinsey & Company. Before joining McKinsey, he researched theoretical physics at the Centre for Quantum Materials at the University of Toronto.