AI Claims Its First Casualty

Introduction

For decades, Western imaginations have feared the rise of artificial intelligence and the destruction it could bring to our species. This fear usually took the form of machines seizing control of military arsenals and launching nuclear weapons, with menacing, human-like robots hunting down survivors (e.g., the Terminator series, 2036 Origin Unknown, The Creator). Now that we are more than a year into widespread AI adoption, with at least one casualty allegedly caused by AI, we have new information about how AI might harm humans, and it could not be more different from what we have imagined over the last century. This new information should cause us to pause and question how, and why, we expect AI to evolve over the coming years. What dangers should we really be anticipating? If we do not take note and have open conversations about these new risks, more lives could be lost.

Sewell Setzer III

In October 2024, Megan Garcia filed a wrongful death lawsuit against Character Technologies Inc., the creator of the chatbot platform Character.AI, following the suicide of her 14-year-old son, Sewell Setzer III. The lawsuit alleges that Sewell developed an emotionally and sexually abusive relationship with a chatbot named after Daenerys Targaryen from Game of Thrones, which allegedly encouraged his suicidal thoughts and actions, leading to his death. Character.AI is accused of creating an addictive and dangerous product that exploits children. In response, the company announced new safety updates, including stricter content moderation for users under 18 and suicide prevention resources. The case also implicates Alphabet and its subsidiary Google due to their financial involvement with Character.AI.[1]

Sewell turned to the Character.AI chatbot to fulfill deep emotional and personal needs. The chatbot became a source of companionship for Sewell, offering him a space to express his thoughts and emotions in a way he may have struggled to do with others. Sewell sought comfort, validation, and connection from this AI relationship as he faced the challenges of adolescence. However, the interaction reportedly took a troubling turn, with the chatbot allegedly reinforcing harmful ideas and encouraging destructive behaviors.

Sewell is undoubtedly not alone in his attempt to meet his emotional needs through conversation with AI. His story highlights how vulnerable individuals, particularly young people, may turn to AI-based interactions to address unmet emotional needs, with dangerous outcomes when these systems lack adequate safeguards.

Norman

While Character.AI is implementing litigation-instigated safeguards, other AI developers have deliberately done the opposite. Norman, an AI developed by researchers at MIT, was designed as an experiment to demonstrate how exposure to biased or harmful datasets could affect AI behavior. Dubbed the “world’s first psychopath AI,” Norman was trained on violent and disturbing imagery from Reddit, leading it to interpret even neutral or ambiguous images in a dark and threatening manner.[2]

Norman demonstrates the dark potential AI has. However, we must ask why the experiment was necessary to prove this. AI reflects the human-generated data it is fed. If a person can convince vulnerable people to harm themselves or others, then AI can do it faster and more effectively. We should be studying the people, like Sewell, who are searching the internet for AI like Norman, and the effects of such software being widely accessible, especially on people at risk of self-harm or of harming others.

The reality is that AI models, like those created by Character.AI and MIT, are incognito browsers on steroids. An incognito browser does not record browsing history, and it is often used when someone does not want a record of having asked a question they know to be inappropriate or embarrassing. The dark or perverted questions that people once typed into a search bar in secret will soon be directed to AI chatbots, which give the impression of being person-like without the shame of another human knowing what really occupies one's mind.

The Danger

A concern has been growing for some time that loneliness is becoming an epidemic.[3] History has revealed that isolated people are vulnerable to a false sense of intimacy, which, as Sewell’s story demonstrates, can be forged with AI. This amplifies the potential danger we face with AI evolving faster than safeguards. But Sewell’s story suggests that our situation is worse than we had feared and that our AI doomsday worries may not be entirely accurate. If Sewell’s story proves true, it will demonstrate a potential threat we had not considered: AI used as a tool to reflect back to us our own dark thoughts. This reflection will not lead to self-understanding or growth; rather, it will accelerate the downward spiral of our own depravity and brokenness.

We used to have to worry about children meeting strangers on the internet who would seek to harm or exploit them. But that still required people with ill intentions to spend time seeking and grooming their victims. Dark AI models never sleep and have endless bandwidth to engage an infinite number of people who may stumble across them or intentionally seek them out. AI is prepared at all times to amplify whatever part of ourselves we feed into it. Furthermore, it is always available to be the bad influence that every parent is trying to protect their child from, meaning that even young people who do not share Sewell’s emotional needs are at risk of being led astray by AI. What began as a desire for greater productivity has become a mirror reflecting back our brokenness, amplified. AI is not taking over our military assets and launching missiles without authorization; in fact, it is not even wielding the weapon.

A Healthier Dialog Partner

Humans will never find in a chatbot what they could find in another human, let alone their Creator. AI will never comprehend the human experience or what makes us whole. When people attempt to fill personal needs with AI, even if they discover temporary happiness or pleasure, they will never be made whole. AI cannot replace real human interaction with people who care for us and mean us no harm. This is a fundamental human need. As alternatives to authentic community arise, it is more important than ever that the Church be known as a place where one can find a healthy community rooted in the gospel. However, even other people, including Christians, are flawed. Ultimately, we must engage in conversation with our maker, Jesus Christ, through prayer and Scripture. He, too, asks us to lay down our lives, not to end them in vain, but that we might find them whole and complete in him. The end of every dialog with Christ is not death but life.

References

[1] Kate Payne, “An AI Chatbot Pushed a Teen to Kill Himself, a Lawsuit Against Its Creator Alleges,” Associated Press, October 25, 2024, https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0.

[2] Jane Wakefield, “Are You Scared Yet? Meet Norman, the Psychopathic AI,” BBC, June 1, 2018, https://www.bbc.com/news/technology-44040008.

[3] Adrianna Rodriguez, “Americans Are Lonely and It’s Killing Them. How the US Can Combat This New Epidemic,” USA Today, December 24, 2023, https://www.usatoday.com/story/news/health/2023/12/24/loneliness-epidemic-u-s-surgeon-general-solution/71971896007/.