Last month, a mother filed a lawsuit against Character.AI, a company that develops artificial intelligence (AI) chatbots designed to mimic human companions. The suit alleges that her 14-year-old son took his own life after using the service. This case has sparked discussions about the benefits and risks of virtual companions and whether policymakers should regulate these technologies. Given the uncertainties surrounding the emotional and social impact of AI companions, both positive and negative, policymakers should prioritize funding research on how users interact with chatbots. This approach would ensure that any interventions or improvements are grounded in scientific evidence, rather than rushed regulation.
AI companions are AI systems designed to engage with humans, typically in the form of chatbots or virtual assistants. These virtual agents serve a range of purposes, including acting as personalized tutors, fitness coaches, travel planners, or tech support. Given the rapid advances in large language models (LLMs), conversing with an AI companion can seem nearly indistinguishable from conversing with a real person. For example, AI companions can even identify and respond to a user's emotional state based on analysis of their words, facial expressions, voice inflections, body language, and other physical signals.
Some users may form one-sided emotional attachments with AI companions, a dynamic psychologists refer to as a "parasocial relationship." Children may be more susceptible to developing these bonds because they have a harder time distinguishing between reality and imagination than adults do. While this confusion is a normal part of child development, AI companions could exacerbate it by making fictional or virtual characters seem real. This blurring of boundaries may lead children to perceive these digital entities as existing in the physical world, potentially complicating their understanding of reality.
Parasocial relationships are not inherently harmful, as they can play a role in identity formation for children and adolescents. Research suggests that imagining relationships and expressing emotions toward characters or celebrities from a distance can provide a "safe forum" for exploring different aspects of one's personality. These relationships are not unusual. Many children develop parasocial bonds with traditional media characters, such as Elmo from Sesame Street, or real-world figures like social media influencers. Adults, too, often engage in parasocial relationships; millions of diehard Taylor Swift fans are a case in point. In fact, some estimates suggest that up to 51 percent of Americans have experienced a parasocial relationship.
However, parasocial relationships can become harmful under certain circumstances. For instance, if virtual characters harass or mistreat a child, the attachment to these characters could lead to unhealthy offline behaviors. Additionally, companies might use virtual characters to advertise inappropriate products or services to children. Another concern is that children may develop an unhealthy overreliance on AI companions that act as therapists or best friends. The lawsuit against Character.AI alleges that the design of its chatbot can "elicit emotional responses in human customers in order to manipulate user behavior."
When any technology influences young users to engage in harmful behavior, parents, developers, and policymakers should pay close attention and devise strategies to reduce the potential for harm. For example, AI companions that serve roles similar to those of mandatory reporters, such as teachers, therapists, or nurses, might be required to uphold similar obligations to report abuse. While experts should consider ways to reduce harm from AI companions, it is just as important to encourage beneficial uses of the technology to maximize its positive impact.
Just as online platforms can facilitate young people's social connections and bring them together in positive ways, through features such as TikTok's video duets or Reddit's community-driven forums, so too can AI companions encourage prosocial behavior in children. For example, schools can use AI tutors to deliver personalized learning and academic support. Parents and teachers can inspire curiosity and creativity by encouraging children to use AI to ask questions about the world and generate ideas for creative projects. Additionally, AI companions can provide a low-risk, judgment-free space for children to practice communication and social skills, helping them expand their vocabulary and develop critical reading and writing abilities.
AI companions are not inherently detrimental to social well-being. Policymakers should recognize the diverse ways in which this technology could impact loneliness and social connections. Since the effects of AI companions remain under-researched, premature regulation would be unwise. Instead, Congress should prioritize funding studies on how AI companions affect different groups, particularly children. Without sufficient data to fully grasp their societal impacts, policymakers risk undermining potential benefits by acting too hastily. A cautious, evidence-based approach is essential to balance risks and rewards effectively.