Making Friends: AI and Companionship
Near the climax of Terminator 2, war-weary mother Sarah Connor reflects on the robotic killing machine that has become her son’s protector. “It would never leave him,” she muses, before explaining further:
And it would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him… Of all the would-be fathers who came and went over the years, this thing, this machine, was the only one who measured up.
The titular robot doll in 2022’s M3gan has the same inhuman protective drive; there, though, it’s the source of a murderous rampage.
That idea of a designed persona — gone right, or horribly wrong — is part of what makes AIs such compelling heroes and villains. Increasingly, it also drives AI relationships in the real world. Apps like Replika offer custom-designed companions, dates, or even spouses. Some users credit AI companions with helping them rebuild their lives; others have fared far worse. One lawsuit argues that a chatbot girlfriend encouraged a teen to commit suicide.
We look for the familiar in inanimate objects, imagining faces in the clouds and photocopiers that hold a grudge. AI magnifies these tendencies: even ELIZA, a rudimentary chatbot of the 1960s, was occasionally mistaken for a real human. Today, we face what technologist Derek Schuurman calls “ontological confusion”: the loss of our ability to distinguish genuine personhood from mechanical imitation.
In response, it’s tempting to reject artificial companions wholesale — an urge not helped by their marketing. In one baffling ad for “Friend,” a wearable AI, a young woman trades banter with a (human) romantic interest. The conversation pauses, and her hand drifts automatically to activate the Friend, which she wears like a pendant. At the last moment, she snatches her hand back — but the cultivated habit is clear. Why choose human interaction, with all its messy awkwardness, when the machine always knows what to say? Why pursue people who might fail or hurt you, when you can buy an AI social network that will affirm you to your exact specifications?
Perhaps that’s the greatest risk — not confusion, but a conscious rejection of the work of dealing with other people. And yet that work is vital. Relationships are “people-growing machines.” It’s in dealing with other human beings, with all their fragility and mess, that we learn gentleness and patience. As any parent can attest, there’s no surer way to discover you are not the center of the universe than to meet someone else who thinks they are. If we abandon the labor of human connection, we lose our best tool for refining ourselves.
And yet a sweeping rejection of AI companions seems too broad. Imagine a shut-in who buys a pet for company; surely we don’t condemn that as a cheap substitute for human interaction! A dog is not a person, of course — and maybe that’s the key. We rejoice with the lonely person who buys a dog. We look askance at someone who talks only to dogs, let alone someone who tries to marry one.
As with much of our technology, AI can function in two modes: creating new kinds of action, or replacing existing ones. Whether tech connects or isolates us often depends on which mode is in view. When we’re distant from our families, video chat is a blessing, allowing a closeness that’s otherwise impossible — but as the lockdowns proved, it’s a poor substitute for the rest of human interaction.
In the same way, we might use AI to create new social roles rather than supplant human ones. Artificial pets or caretakers could meet our needs for physical or emotional support in specific ways, without a pretense of human intimacy. AI tutors are already a blessing for students who cannot otherwise afford help. We could imagine something similar for social skills: an AI might help users practice professional norms, or learn the niceties of high-class social events. In a more targeted way, an AI could help build confidence in small talk or basic conversation — say, for children with autism, or for Japan’s growing population of self-isolated hikikomori.
Critically, such systems could be intentionally limited. A guide for autistic children could be designed to ultimately encourage its wards away from itself and toward human interaction. A robotic caretaker might take orders or sit for conversation, but a good design would respect the boundary between “butler” and “friend.” The more an app blurs these lines, the greater the risk of further impoverishing our human connections.
Ultimately, cultivating ontological confusion — as designers, as users — is a choice. The existence of AI does not force us to depend on it for pseudo-companionship. One thing AI has not changed: if all our friendships are fake, it’s because we chose to make them that way.