Public opinion polls frequently support policies well before they become law.
US polls have shown majority support for ideas like marijuana legalization, term limits, and universal health insurance years before politicians began shaping those ideas into legislation.
We are seeing this same dynamic play out today with AI safety legislation.
As part of the "Trust in Artificial Intelligence: A Global Study," KPMG and the University of Queensland surveyed respondents about trust and attitudes towards AI systems in general and in four rapidly developing application domains: healthcare, public safety and security, human resources, and consumer recommender applications.
The research provided comprehensive and global insights into the public's trust and acceptance of AI systems, including who is trusted to develop, use, and govern AI, the perceived benefits and risks of AI use, community expectations of the development, regulation, and governance of AI, and how organizations can support trust in their AI use.
The survey found that Americans are wary about trusting and accepting AI and feel ambivalent about its benefits and risks. While these findings are similar to those of other Western countries, Americans differ from their Western counterparts in their beliefs about AI regulation and who should regulate it.
Here are the key findings from those surveyed in the United States:
Americans are not swayed by the tech industry's warnings about potential setbacks to the economy and innovation from regulation. Even when geopolitics is involved, voters still clearly favor a measured and secure approach to AI development over racing ahead to build ever more powerful artificial intelligence for the sake of competing with China.
In polling shared with Time, voters showed strong willingness to accept restrictions on AI development in the interest of public safety and security.
The data reveals a surprising level of agreement across party lines. Both Republican and Democratic voters show support for government-imposed limits on AI development, prioritizing safety and national security concerns.
In late June, the AI Policy Institute (AIPI), a US nonprofit advocating for a more measured approach to AI development, polled public attitudes towards AI regulation and development. The results reveal:
Again, this data suggests that many voters prefer a careful, safety-focused approach to AI development over a strategy emphasizing speed and geopolitical competition.
Although Congress has not yet acted on these voters' preferences, AI safety advocates should remain steadfast in their cause. Public support often precedes political action; it takes time and effort to activate and mobilize the public and convert their opinions into irresistible political demands.