What’s next in AI: Can we become virtually immortal? Do we want to?
Question: If you could create an AI twin of yourself, would you?
Such an avatar could be trained on your memories and mannerisms. It could converse with friends and colleagues. And it could live on long after you die.
One Pittsburgh AI executive told me recently he uses his digital twin to “gather stories, preferences and general wisdom” from his life experiences. In less than five minutes, Brooks Canavesi — of Problem Solutions in Beaver — can turn a script into a realistic video that looks like he’s talking on a Zoom call.
“It’s pretty slick,” Canavesi said. “People have a difficult time knowing it’s not me.”
Speaking of Zoom, the video company’s CEO recently suggested that digital twins could replace real people on calls, taking notes and learning, then reporting back to their real-life counterpart. But the concept goes far beyond corporate applications.
Artificial intelligence software is helping Holocaust survivors preserve their stories and little-known politicians run for president. People are speaking with AI versions of their loved ones, even after death. And that blurring of reality — with all of its potential and pitfalls — is giving shape to a concept some see as a form of immortality.
Now instead of a passive memoir, you can leave behind a version of yourself that can commune with future generations.
With that potential comes the familiar swirl of questions around AI: Will it be accurate and ethical? What if it’s used to mislead? What will it mean for privacy?
And then there’s a deeper concern: AI could change our relationship to death.
“If we have a virtual copy of our relatives, will we still properly mourn them when they pass away?” asked Vincent Conitzer, an AI researcher at Carnegie Mellon University.
Or, what if that relative has advanced-stage Alzheimer’s disease? “Will we start to ignore the actual relative in favor of a virtual copy that is easier to talk with?”
The questions are no longer theoretical. CNN reported earlier this year on an Illinois woman using Snapchat’s AI to ask her dead husband for cooking advice. Another man with terminal colon cancer spent weeks training an AI version of himself for his wife to talk to after he dies. Some grief experts think holding onto these digital substitutes could make it harder to move on.
The computerized versions of people also make it harder to tell what’s real. People are already using fake Zoom backgrounds — what happens when they start to fake themselves?
In the political sphere, lawmakers have tried to get out ahead of deepfakes, which are made without consent. Pennsylvania representatives are currently trying to outlaw AI impersonation in political campaigns, citing concerns about interference in the 2024 presidential election.
On the business side, some CEOs are diving into the deep end, creating synthetic clones to respond to emails and handle other mundane tasks — and realizing in the process that the digital version sometimes does a better job.
“If you have a good database, your digital clone could represent you better than you can yourself, which is, you know, kind of fun and also scary,” said Jim Kaskade, who built a synthetic version of the venture capitalist Jason Palmer to help him run for president. Voters enjoyed conversing with the AI Palmer enough that the real Palmer went on to beat Joe Biden in the Democratic caucus in American Samoa earlier this year.
Kaskade said it was essential to let people know they were dealing with a fake version of the candidate. But he was also blown away by its capability.
“There’s no technical limitation at this point,” Kaskade said. “To be honest, I think you can create truly realistic and very responsive digital clones now, and they can understand context, emotions, and very nuanced human interactions.”
He’s also enjoying dealing with those synthetics.
“I’d probably rather have board meetings with my board’s digital twins than with the members themselves now,” he said with a laugh. “Just kidding.”
At some level, computerized perfection might perform “better,” but at what cost? Fallibility is part of what makes us human. Humor too. The AI clones, by contrast, are likely to be sanitized versions. Politicians and sales reps will optimize and inflate training data to make their AI an unrealistic version of themselves. Zoom’s CEO, for instance, said he can dial up his personality in certain synthetic versions.
But there could also be a kind of reverse training where people use their AI for self-improvement, Kaskade said. They could use their AI twin as a mentor, coach, therapist and teacher, tapping into not only their own experiences and memories but entire industries of collective knowledge.
Researchers at the University of Oxford argue in a soon-to-be-released article that “we cannot yet achieve immortality through AI,” because a digital replica can’t gain new subjective experience.
“But it could at least partially fulfill the goods associated with one’s life projects and valued social relationships,” Cristina Voinea, a research fellow at Oxford’s Centre for Practical Ethics, wrote to me recently.
What’s clear now is that people are starting to experiment — blurring the line between lives already lived largely online and an entirely synthetic future, where digital twins could potentially last forever.
At least so long as mortal humans find a way to keep the power on at the data centers where all those memories are stored.
(c)2024 the Pittsburgh Post-Gazette. Visit the Pittsburgh Post-Gazette at www.post-gazette.com. Distributed by Tribune Content Agency, LLC.