On September 12, the Associated Press (AP) released a new poll showing once again that the American public is profoundly skeptical of AI and worried about its risks.
When asked which sources they trust to provide accurate information about the government, 22% of respondents said they trust TV news, 18% said they trust the government itself to be honest about what it is doing, and only 3% said they trust responses from an artificial intelligence chatbot.
Similarly, only 8% of American adults thought that AI responses were always or often “based on factual information.”
Americans aren’t wrong to be skeptical of AI’s accuracy, because even the companies developing it acknowledge that their models frequently hallucinate.
The interesting question is: why aren’t politicians moving faster to address this problem? The federal government might be proverbially untrustworthy, but it’s still six times more trustworthy than AI.
The public wants the government to take action to protect them against the risks from AI, but instead Congress has focused on voluntary standards while leaving AI developers free to integrate AI into more and more of the pillars of our economy. We’re taking the same software programs that routinely make up false information and putting them in charge of our power plants, our automobiles, our reservoirs, and our farms.
That isn’t what Americans want. In the AP poll, 52% of Americans said that on balance they were “more concerned” about the future impact of AI, compared to only 9% who said they were “more excited.”
The Center for AI Policy (CAIP) shares Americans’ concerns about unregulated AI, and we call on Congress to take immediate action. To start with, we’d like to see a floor vote on the responsible AI bills recommended by large bipartisan majorities on the House Committee on Science, Space, and Technology and the Senate Commerce Committee. These issues have been thoroughly debated – it’s time for some action.