The AI Knowledge Paradox

February 3, 2025

A fascinating new study in the Journal of Marketing has revealed a counterintuitive truth about AI adoption that every policymaker needs to understand: the less people know about AI, the more likely they are to embrace it. 

The research, led by Professor Chiara Longoni at Bocconi University, uncovered the "lower literacy-higher receptivity" link. According to Professor Longoni, people with little understanding of AI are more likely to accept it and incorporate it into their workflow because they “perceive AI as magical and experience feelings of awe in the face of AI’s execution of tasks that seem to require uniquely human attributes.”

Think about that for a moment. Our most tech-savvy citizens may be the biggest skeptics of AI adoption, while those with less technical knowledge could be its strongest advocates. The effect is strongest when AI is used for tasks we typically associate with human qualities, like emotional support or counseling.

Understanding these psychological dynamics becomes crucial as American companies make decisions about how and when to incorporate AI into their core products. To succeed, they will need to build safe and reliable AI systems that create a positive experience for the people who use them.

Because America’s biggest consumers of AI are the least likely to be well-informed about it, America’s leaders should take action to help ensure a functional marketplace. Here are three public policy priorities that will promote safe AI:

  1. Firm requirements for AI transparency: Companies should be expected to clearly and conspicuously disclose when and how AI systems are being used, especially in human-centric applications like healthcare, counseling, and educational services. Such transparency addresses the "magical thinking" bias by ensuring consumers understand they are interacting with AI while establishing clear accountability for AI-driven decisions and outcomes.
  2. Whistleblower protections for AI employees: The people who know the most about the risks posed by advanced AI are often the engineers who helped design it. These engineers should not be punished for sharing what they know with the responsible authorities. If a company’s management won’t take action to address AI risks, then engineers need the right to escalate their concerns to their board of directors or (in the case of threats to public safety) a relevant government official.
  3. Mandatory safety planning: The Center for AI Policy (CAIP) recommends that AI developers be required to prepare and publish plans showing how they are addressing some of the most dangerous risks from advanced AI, including the risk that AI could be used to develop chemical, biological, radiological, or nuclear (CBRN) weapons. AI isn’t magic, and it won’t magically avoid these threats unless we have a reliable plan for ensuring that AI stays safe. In particular, Congress should reintroduce and pass the legislation introduced by Senator Romney (R-UT) last Congress, which would create an Artificial Intelligence Safety Review Office to evaluate developers’ safety plans.

These balanced and commonsense measures will not make everyone an AI expert, but they will help create an informed public that can make rational decisions about AI adoption and understand its benefits and risks. Nothing less will do.

Memo: AI Questions for Commerce Secretary Nominee Howard Lutnick

CAIP will be listening for five questions at Lutnick's confirmation hearing.

January 28, 2025

Biden's Final AI Flurry Raises Important Questions for President Trump to Answer

AI is a critical topic for any U.S. president to address.

January 24, 2025

What's at the Other End of Stargate?

If it’s important enough to spend $500 billion on, it’s important enough to have a specific destination in mind.

January 23, 2025