South Dakotans Have a Unique Opportunity to Call for Safe AI Legislation From Congress

Jason Green-Lowe, June 16, 2024

Jason Green-Lowe, Executive Director at the Center for AI Policy (CAIP), recently published an opinion editorial in the Capital Journal, emphasizing the unique position South Dakota holds in the national conversation on AI policy. The text is available below.

South Dakotans have a unique opportunity to call for safe AI legislation from Congress

Artificial intelligence (AI) is rapidly transforming our world, and its impact will only continue to grow in the coming years. In South Dakota, AI will reshape agriculture, logistics, and healthcare.

South Dakota will harness AI for precision farming, using data analytics and machine learning algorithms to optimize crop yields. Once crops are harvested, AI will optimize routing and scheduling for the transportation companies that get them to market. In healthcare, AI-powered medical imaging will improve the accuracy of diagnoses and guide doctors' decisions.

As this powerful technology advances, it is crucial that Congress put in place robust regulations to ensure AI's safe and responsible development and deployment. First, we need more software engineers in the federal government. Instead of complaining that the government is too slow to keep up with technology, we should use direct hiring authority to cut through red tape and recruit top-level talent.

Once we have the right team in place, we must give that team the authority to deal with the most critical threats. Right now, AI risks are entirely managed on an industry-by-industry basis: the Federal Aviation Administration (FAA) regulates airplane AI, the Food and Drug Administration (FDA) regulates pharmaceutical AI, and so on.

As we've seen, some of the most advanced AI is general-purpose: ChatGPT and Claude operate across every industry. In practice, this means that nobody regulates them. We need a dedicated office in the federal government that monitors and responds to threats from advanced, general-purpose AI. That office should be able to hold bad actors accountable for their actions with civil and criminal liability.

Finally, the legislation must include strong whistleblower protection measures to encourage individuals to come forward with concerns or evidence of violations without fear of retaliation. Half of OpenAI's safety team quit over the last few months. Most were intimidated into keeping quiet about why, because OpenAI threatened to confiscate the equity they were counting on for retirement. Those scare tactics should be illegal: we must encourage people to share safety concerns, not shut them down.

Fortunately, South Dakota is uniquely positioned to shape the future of AI policy in America. Senator Mike Rounds (R-SD) serves on the Senate's bipartisan Artificial Intelligence working group, while Senator John Thune (R-SD) holds a key Senate leadership position. That gives South Dakota's elected officials outsized influence on the direction of AI legislation in Congress. South Dakotans should embrace this role, make their voices heard, and demand that their senators champion safe and ethical AI policies.

Senator Rounds just released an AI "roadmap" that proposes $32 billion in Silicon Valley handouts but has little to say about safety. Senator Thune's AI framework calls for companies to "self-certify" that their products are safe, but we already know that many of these Big Tech companies aren't trustworthy. Their business model is built on scraping content from other people's books, movies, and news articles without paying a fee or getting the authors' permission. If you can't trust Sam Altman to tell the truth about whether his app is copying Scarlett Johansson's voice, why would you trust him to tell you that GPT-5 is safe?

As South Dakotans, you have a real opportunity to shape the future of AI policy in America. CAIP encourages you to contact Senators Rounds and Thune and urge them to introduce strong legislation that brings AI companies under meaningful oversight. Do you trust OpenAI's Sam Altman and Google's Sundar Pichai to have the final say about our future? If not, call your senators and ask them to get moving on serious AI safety legislation. You can help ensure that AI is developed and deployed to benefit American society rather than put us at risk.

As AI continues to advance at a breakneck pace, America must put these critical safeguards in place before it's too late. We need our elected officials to act urgently and prioritize the development of safe and responsible AI systems. The consequences of inaction could be severe, ranging from job displacement and privacy violations to existential risks.

This will require collaboration between policymakers, experts, and stakeholders to strike the right balance between innovation and protection. The future of American society depends on how we collectively navigate this transformative technology.

The stakes could not be higher. The decisions Senators Rounds and Thune make on AI regulation will have profound implications for generations to come.
