Reflections on AI in the Big Apple

Jason Green-Lowe, September 20, 2024

AI is traditionally associated with Silicon Valley, but it’s not the only place making money from the technology. Earlier this month, I visited New York City to hear what people at AI-related businesses have to say about AI’s risks and rewards.

My hosts included:

  • a senior software engineer who designs the simulations used to test industrial robots, 
  • an attorney who sells AI-powered identity verification services to banks, 
  • the chief privacy officer for a firm that helps customers comply with the EU AI Act, and
  • a software engineer who designed some of the tools used by Meta to protect against election interference.

All four work in the private sector, so I expected the conversations to revolve around profit. Instead, I was pleasantly surprised to find that all four were deeply interested in the ethics and safety of AI.

The robotics engineer complained that his company isn’t getting enough credit for its efforts to develop socially responsible technology. He explained that the company deliberately designs its robots to replace only the jobs that humans don’t enjoy and can’t do safely.

For example, when he showed the company’s latest box-stacking robots to some workers who used to move heavy boxes for a living, the workers thanked him! There was general agreement that this is a painful job that humans would rather avoid. Speaking as someone who got a hernia the one time I worked in a warehouse, I’m inclined to agree.

The attorney who works in identity verification noted that his customers have widely varying views about how much to invest in know-your-customer (KYC) protocols. Some customers make a serious effort to design an effective system that draws on a broad range of document-verification and biometric tools, whereas others just want to check the box and say they have a KYC system.

He noted that AI can be helpful for detecting fraud, but it also makes fraud easier to commit – so whether commercial transactions become more or less secure depends in part on how much people invest in security.

The chief privacy officer explained that businesses headquartered in America are already buying EU compliance services because they have customers, suppliers, or employees in the EU. The EU is threatening such companies with fines of up to 7% of global revenue. To avoid these fines, businesses with high-risk AI products need a quality control process that’s documented in a “systematic and orderly manner in the form of written policies, procedures and instructions.”

As Congress continues to procrastinate on passing its own AI safety legislation, American businesses are increasingly being regulated by foreign laws like the EU AI Act. Europe is a close enough partner that it would almost certainly be willing to work with us to harmonize international AI policy, but that doesn’t mean it will sit around for years while we take no action at all. If Congress doesn’t pass AI safety legislation, then our businesses will wind up dancing to other players’ tunes.

Finally, the election security specialist shared his frustration with the lack of attention being devoted to the quality of AI-mediated information. He worries that we are passively accepting the harms done by algorithmically curated social media feeds – if these feeds damage public trust, or erode civil society, or undermine mental health on a massive scale, that’s seen as an unfortunate but inevitable consequence of AI “progress.”

The reality, though, is that we as a society could choose to do better. If we want to invest in safe, accurate, well-balanced streams of information, we could do that. The Center for AI Policy (CAIP) believes that we should – and it seems that many of New York’s AI professionals would agree.
