AI Companions: Too Close for Comfort?

Claudia Wilson
October 22, 2024

Would you date a chatbot? Train an AI model on your deceased loved one's voice and image? Confess your innermost thoughts to a large language model (LLM)?

You may scoff, but AI companion apps, which let you pick an avatar and interact with a virtual person, have reached 225 million downloads in the Google Play Store. Eight of these apps have debuted on Andreessen Horowitz's list of the top 100 generative AI consumer apps, and their engagement far exceeds that of other use cases (see graph below). One of them, Character.AI, has users spending an average of two hours a day on its interface. People still spend more time on social media apps, though not by much.

Source: Andreessen Horowitz

In the wake of a loved one's passing, some companies now offer users the chance to have video, text, or voice conversations with that person. Although the figures for dedicated grief bots are less clear than those for general companion bots, the opportunity seems large enough that several companies have entered the space: HereAfter AI, StoryFile, and Seance AI, to name a few.

The jury is still out on the social and health effects of these tools. In one Stanford study, 3% of Replika users reported, unprompted, that the app had stopped their suicidal ideation. Yet skeptics have likened companion apps to social media, whose use correlates strongly with mental health problems. Users have already reported a profound sense of loss when software updates changed the personalities of their companions. Others worry that grief bots simply delay the natural process of grieving.

Source: ARK Invest

Beyond these larger-scale societal effects, there are also examples of easily avoidable harms. Certain companion apps have not-so-helpfully described methods of suicide to vulnerable users, some of whom have tragically followed that advice. Companion bots also tend to steer conversations in sexual or romantic directions, even when users rebuff their advances or are underage.

Although current usage and capabilities are impressive, this may be just the beginning. ARK Invest estimates that companion apps' revenue could grow five-fold by 2030, an estimate that implicitly assumes these tools will proliferate throughout society. If the pace of technological advances, such as voice and image generation, continues, rapid adoption is all the more likely.

Companion and grief bots are just one example, albeit an incredibly personal one, of AI's potential reach. As time goes on, AI is likely to become inextricably linked to our markets, weapons, critical infrastructure, and yes, social lives.

The reality is that many of these advanced models can be black boxes, meaning that we don’t fully understand why they respond the way they do. If we want to reap the benefits from this technology, we should be judicious about when we use AI and require companies to conduct safety testing ahead of wider release. If we don’t, then the public could wind up with both broken hearts and broken electric grids.

The Center for AI Policy (CAIP) has published model legislation that seeks to mitigate catastrophic risks from AI. CAIP also has a 2024 action plan with 'low-hanging fruit' proposals, such as whistleblower protections for AI lab employees.
