When Polling Is Ahead of Politicians

Jason Green-Lowe, October 16, 2024

Public opinion polls frequently show support for policies well before they become law.

US polls showed majority support for ideas like marijuana legalization, term limits, and universal health insurance years before politicians began shaping these ideas into legislation.

We are seeing this same dynamic play out today with AI safety legislation.

As part of the "Trust in Artificial Intelligence: A Global Study," KPMG and the University of Queensland surveyed respondents about trust and attitudes towards AI systems in general and in four rapidly developing application domains: healthcare, public safety and security, human resources, and consumer recommender applications.

The research provided comprehensive, global insight into the public's trust in and acceptance of AI systems: who is trusted to develop, use, and govern AI; the perceived benefits and risks of AI use; community expectations for the development, regulation, and governance of AI; and how organizations can support trust in their use of AI.

The survey found that Americans are wary about trusting and accepting AI and feel ambivalent about its benefits and risks. While these findings are similar to those of other Western countries, Americans differ from their Western counterparts in their beliefs about AI regulation and who should regulate it.

Here are the key findings from those surveyed in the United States:

  • 72 percent feel that the impact of AI on society is uncertain and unpredictable.
  • 67 percent report that AI regulation is necessary, with co-regulation by government and industry being the most popular option (60 percent agree).
  • Just over half (53 percent) believe AI should be regulated by the government and existing regulators.
  • Only 30 percent believe current regulations, laws, and safeguards are sufficient to make AI use safe.
  • 96 percent view the principles and practices of trustworthy AI as necessary for their trust in AI systems.

Americans are not swayed by the tech industry's warnings that regulation would set back the economy and innovation. Even when geopolitics is involved, voters still clearly favor a measured and secure approach to AI development, rather than racing ahead to build ever more powerful artificial intelligence for the sake of competing with China.

In polling shared with Time, voters showed strong willingness to accept restrictions on AI development in the interest of public safety and security.

The data reveals a surprising level of agreement across party lines. Both Republican and Democratic voters show support for government-imposed limits on AI development, prioritizing safety and national security concerns. 

In late June, the AI Policy Institute (AIPI), a US nonprofit advocating for a more measured approach to AI development, polled public attitudes towards AI regulation and development. The results reveal:

  • 50% of surveyed voters support using the US's AI advantage to implement strict safety measures and rigorous testing requirements. These voters believe such actions could prevent any single country from developing an overly powerful AI system.
  • In contrast, only 23% of respondents favor rapidly accelerating AI development in order to outpace China and gain a significant advantage over Beijing.

Again, this data suggests that many voters prefer a careful, safety-focused approach to AI development over a strategy emphasizing speed and geopolitical competition.

Although Congress has not yet acted on these voters' preferences, AI safety advocates should remain steadfast in their cause. Public support often precedes political action; it takes time and effort to activate and mobilize the public and convert their opinions into irresistible political demands.
