This year, AI researchers were awarded not one but two Nobel Prizes. The Nobel Prize in Chemistry went to Google DeepMind’s Demis Hassabis and John Jumper, along with the University of Washington’s David Baker. Hassabis and Jumper were recognized for their work on AlphaFold, an artificial intelligence (AI) tool that can predict the structure of nearly every known protein, while Baker was honored for designing entirely new proteins with computer software.
Separately, the Nobel Prize in Physics was awarded to AI researcher Geoffrey Hinton and physicist John Hopfield for their foundational work on neural networks. Neural networks, a type of machine learning, underpin technologies ranging from self-driving cars to large language models (LLMs) like ChatGPT, as well as AlphaFold itself.
These two prizes are powerful acknowledgments of AI’s capabilities and influence. Previously, a graduate student might spend their entire Ph.D. trying to identify the shape of one protein. AlphaFold can now predict the shape of around 200 million proteins.
As Edith Heard, Director General of the European Molecular Biology Laboratory, put it, “AlphaFold is the first AI system to send such ripples throughout the life sciences.”
This breakthrough likely marks only the beginning. Training compute, which is strongly correlated with model capability, has grown by a factor of four to five each year since 2010. While bottlenecks such as energy supply and data scarcity could slow this trend, Epoch AI projects that this scaling could continue through 2030. In other words, AI models are only getting more powerful.
With more powerful models, the risks from AI are likely to increase. State actors are already using ChatGPT to scale cyberattacks, and AI models have already shown they can deceive humans to achieve their assigned goals. As models become increasingly integrated into society, especially in high-stakes contexts, the consequences of such unexpected behavior could be devastating.
This doesn’t mean that we need to stop using AI. However, it does call for some sensible policies, such as ensuring that the most powerful models are vetted before they are released to the public. Yet, AI labs currently face no legal requirements to do so.
Nobel laureates and the general public alike agree on the importance of AI safety. Hinton has been a vocal advocate for AI safety, co-authoring a paper with 23 other AI experts calling for an AI permit system. Hassabis has said, “We must take the risks of AI as seriously as other major global challenges.” Meanwhile, polling data consistently shows public support for binding safety requirements: KPMG found that 67% of Americans believe AI regulation is necessary, and another survey found that 80% of voters favor regulation that “mandates security standards and safety measures for AI systems’ most advanced models.”
Despite this consensus, Congress has yet to take tangible action on AI safety. The Center for AI Policy (CAIP) urges Congress to introduce comprehensive safety legislation or, at the very least, to address the low-hanging fruit, such as whistleblower protections for AI lab employees. After all, the pace of technological development, and the risks that come with it, will only continue to increase.