October 29, 2024
AI RISING: Risk vs Reward – The Hinton Lectures™

On October 28 and 29, 2024, GRI partnered with Nobel Prize winner Geoffrey Hinton to present a two-part lecture series on the profound impact of AI, offering essential insights into its trajectory and safety. Hosted by Hinton, the lectures were delivered by Jacob Steinhardt, Assistant Professor at UC Berkeley, who was selected by a panel of experts on AI safety and ethics.
Please see summaries of the lectures below, along with videos of the event.
In the second of his two lectures, UC Berkeley’s Jacob Steinhardt spoke about four key areas of risk in AI.
Misuse
Professor Steinhardt noted that the best-understood risk of artificial intelligence is deliberate misuse by “bad actors”. He likened the irresponsible release of AI models to distributing uranium, and showed how models can be “fine-tuned” to enable cyberattacks, biological weapons development, and social manipulation through deepfakes.
Markets
Steinhardt spoke about the dangers of unhealthy market concentration, citing “recommender” systems that reinforce people’s existing beliefs and virtual companions deliberately built to be addictive. He also noted that because AI models are so expensive to build and train, a few players are bound to dominate, limiting user choice and transparency. He proposed open-source AI models and public oversight as counterweights to these tendencies.
Mediators
AI “mediators” are systems designed to explain complex data and monitor for harmful activity, acting as a check against misuse. Steinhardt suggested putting AI to work monitoring other AI, detecting fake content, and improving cyber defense. He also floated the idea of digital identity verification for high-impact actions as a way to limit misuse, and stressed the need for collaboration among governments, AI developers, and the public to create resilient systems.
Memes
In closing, Steinhardt spoke about AI “memes”: ideas that, like human social memes, replicate and spread, in this case across AI systems. He raised concerns about how “jailbreak” prompts could spread in this way, enabling one AI to manipulate another. He predicted that as AI systems interact, they could develop internal memes of their own, which could manifest as AI-generated ideologies or behaviours.
Download Day 2 – AI in Society: Misuse, Markets, Mediators and Memes slides