
Deepfakes of dead will spark Black Mirror-style chaos leaving us questioning life & death, says ex-White House tech boss

DEEPFAKES will spark Black Mirror-style chaos unless the public starts to think twice before sharing data, an ex-White House chief has warned.

Theresa Payton, a former White House Information Officer, said “the world should be concerned about the increasing pace of intelligence gathering”.

It’s feared the rise of deepfakes will produce ‘unprecedented developments’ (Getty)

Former White House Information Officer Theresa Payton warned the public to ‘think twice’ before sharing data online (Getty)

Many states – including China – are ‘rapidly evolving deepfakes and AI technology’ (AFP)

Experts say deepfakes can be used to legitimise war, sow confusion and chaos, divide ranks and allies, discredit leaders, and polarise societies.

And regulation of an exploding AI industry is getting more and more difficult.

In China, deepfake technology has advanced to create lifelike fakes of dead people, Ms Payton said.

Described as “AI resurrection” services, customers can pay just £2.20 to interact with the dead as if they were still alive.

The market for “digital humans” was already worth £1.3billion in 2022 – and is set to soar.

While the avatars could be used in beneficial ways – to help people process grief and preserve a “legacy” – Ms Payton warned the technology poses huge risks.

She told The Sun: “This technology poses risks if misused by criminal syndicates or nation-state cyber operatives.

“Malicious applications include creating fake personas to spread misinformation, manipulate public opinion, and conduct sophisticated social engineering attacks.”

According to Sima Huapeng, CEO and co-founder of Chinese company Silicon Intelligence, a third of their requests involve creating clones of celebrities, religious leaders or other famous people.

In Black Mirror style, Ms Payton warned malicious actors could exploit this technology to sow confusion and chaos by creating deepfakes of world leaders or famous faces – dead or alive.

It could blur the lines between fact and fiction – and make it more difficult for the public to understand whether someone is actually dead or alive.

The new technology also raises concerns about fraud – and how easy it would be to steal someone’s identity, or create new identities with a mash-up of data.

It’s also sparked questions about what rights any of us have after we die.

Ms Payton said: “Convincing deepfake videos of influential figures could undermine trust in authentic communication channels, causing widespread discord and confusion.”

In the sixth season of Black Mirror, the episode “Joan Is Awful” played out this real-world anxiety.

The main character wakes up to find her every move documented on a TV series – exploring the dark side of AI.

Ms Payton said creators of deepfakes will always stay “a step ahead of detection methods” – and warned of “unprecedented developments”.

She said: “The pace of innovation coupled with motivation will yield unprecedented developments.

“AI methods are already difficult for even astute individuals to detect.

“Creators of deep fakes will stay a step ahead of detection methods.

“Most concerning is the lack of awareness among the general public on how much personal information can be collected from afar by well-resourced and technologically adept nation states.

“Think twice before you respond to that social media poll or fill out something like an online personality survey.”

Creators of deep fakes will stay a step ahead of detection methods

Theresa Payton, former White House Information Officer

Ms Payton said many countries are “rapidly evolving deepfakes and AI technology”.

And “misinformation tactics” will pose “large-scale threats” to the world, she warned.

She said: “The world should be concerned about the increasing pace of intelligence gathering as well as misinformation and the ultimate effects on geopolitical concerns, elections and even grass root-level matters.

“Nation states like China will continue to evolve deepfakes and AI to the extent these technologies enable their geopolitical strategies.”

She said these countries will continue to “mine and collect massive amounts of information”.

They will be looking at “preferences and patterns of life… to advance their country’s geopolitical agenda”.

Collecting huge amounts of information on individuals can make it easier for malicious actors to create accurate and lifelike deepfakes.

Malicious actors are already putting AI and deep fakes to use for social engineering purposes

Theresa Payton, former White House Information Officer

But it’s too early to tell who will be the global leader in AI and deep fakes, the former White House chief said.

“What will be interesting to watch is how each round of development plays off the previous and how these technologies are leveraged for good and bad causes,” she said.

“Malicious actors are already putting AI and deep fakes to use for social engineering purposes.

“I expect social engineering fuelled by AI to continue, commoditize, and scale.

“It will be increasingly important for individuals across the world to seek out official sources for information and additional verification before sharing information or conducting transactions.

“Innovation in the area of deep fake detection will be key and establishing an appropriate policy and diplomatic framework is increasingly important.

“It’s likely these tactics will not keep pace in the short term so people across the world should remain vigilant.”

‘REVOLUTIONARY’ DEEPFAKES

It comes as cybersecurity company Trellix warned a new AI method unveiled by researchers in China has “revolutionised the deepfake landscape”.

InstantID – created by researchers at Peking University in Beijing – can generate convincing deepfakes “in a matter of seconds”, the company said.

Security researchers Jambul Tologonov and John Fokke said the tech is easily accessible to malicious actors “with minimal skill sets and hardware resources”.

And this “sets InstantID apart from other models”, they said in a blog post for Trellix.

They explained: “With just a single reference image, individuals can swiftly generate convincing deepfake content in a matter of seconds.

“The capabilities of the InstantID method are indeed impressive, as it can achieve remarkable results in less than a minute with just a single face reference.”

They said the tech “raises concerns” about how cybercriminals could “easily create deepfake videos and images”.

“Given recent AI advancements like InstantID, the creation of deepfake content has become more accessible than ever before, and cybercriminals are keenly observing these developments,” Fokke and Tologonov added.

The creators of InstantID are “pioneers” in the field of deepfakes and AI, they said.

They “consistently experiment with and integrate deepfake GenAI technology to craft highly convincing synthetic media”.

A report from The Jamestown Foundation said Beijing appears to be taking a two-pronged approach to deepfakes.

Xi Jinping has brought in strict laws on deepfakes in his own country – while also weaponising the tech for international smear campaigns and influence operations, it said.

Deepfakes – what are they, and how do they work?

Here's what you need to know...

  • Deepfakes are phoney videos of people that look perfectly real
  • They’re made using computers to generate convincing representations of events that never happened
  • Often, this involves swapping the face of one person onto another, or making them say whatever you want
  • The process begins by feeding an AI hundreds or even thousands of photos of the victim
  • A machine learning algorithm swaps out certain parts frame-by-frame until it spits out a realistic, but fake, photo or video
  • In one famous deepfake clip, comedian Jordan Peele created a realistic video of Barack Obama in which the former President called Donald Trump a ‘dipsh*t’
  • In another, the face of Will Smith is pasted onto the character of Neo in the action flick The Matrix. Smith famously turned down the role to star in flop movie Wild Wild West, while the Matrix role went to Keanu Reeves
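The frame-by-frame swap described above can be illustrated with a toy sketch. This is not a real deepfake pipeline – genuine tools learn the face mapping with neural networks trained on thousands of photos – it simply copies a stand-in “face” patch into each frame of a stand-in “video” (plain numpy arrays) to show the basic mechanic; the function name and sizes are illustrative only.

```python
import numpy as np

def swap_face_region(target_frame, source_face, top, left):
    """Paste a source 'face' patch into a copy of the target frame.

    Real deepfake systems learn this mapping with machine learning;
    this toy version just copies pixels to illustrate the
    frame-by-frame swap described above.
    """
    frame = target_frame.copy()
    h, w = source_face.shape[:2]
    frame[top:top + h, left:left + w] = source_face
    return frame

# Stand-in "video": three frames of an 8x8 grayscale image, all black.
video = [np.zeros((8, 8), dtype=np.uint8) for _ in range(3)]

# Stand-in "face": a 4x4 white patch.
face = np.full((4, 4), 255, dtype=np.uint8)

# Apply the swap to every frame, as a deepfake tool would.
faked = [swap_face_region(frame, face, 2, 2) for frame in video]
```

The point of the sketch is that the output video is built one doctored frame at a time – which is also why detection tools often look for tiny inconsistencies between consecutive frames.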

China is likely to use the deepfakes to “reinforce repression, influence public opinion, and conduct electoral interference in liberal democracies”, the report added.

And it may also be used to “strengthen PRC propaganda… leveraging deepfake personas – from witnesses to news anchors – to influence public perception”.

The report said: “It is only a matter of time until Beijing figures out the best way to integrate these new and evolving tools into its influence operation playbook.”

In May, The New York Times reported on “a flourishing genre on Chinese social media” – AI-manipulated videos using “Russian” women to rally support for closer China-Russia ties.

The videos – often labelled with “Russian wife” and “Russian beauty” – show women singing the praises of Chinese men.

Synced to fluent Mandarin, other videos describe China as “a saviour”.

In a Congressional hearing, Dr Benjamin Jensen, Senior Fellow at CSIS, warned that “cyber espionage campaigns by the Chinese Communist Party are about to increase in scope and severity with the proliferation of generative AI”.

US ELECTION ‘INTERFERENCE’

Payton previously told The Sun how the chilling rise of deepfakes and AI will change the face of the 2024 US presidential elections.

Payton said American voters will face a flood of propaganda in a bid to sway electoral campaigns, blurring the lines between reality and fiction.

“Deepfakes are absolutely going to be front and centre of the US presidential elections,” Theresa told The Sun.

“Propagandists could use AI to create fake documents, audio, and videos about candidates, influencing public opinion and potentially causing division within society.

“Notable people who have influence could be shown saying or doing something that they never did, spreading disinformation and misinformation.”

And earlier this year, Microsoft also warned that China will attempt to disrupt the US presidential elections using AI.

The tech giant said China will leverage generative AI to flood the internet with synthetic media in a bid to influence the elections.

A report from Microsoft’s threat intelligence team said: “China’s increasing experimentation in augmenting memes, videos and audio will continue – and may prove effective down the line.”

China will aim to flood social media with AI-generated content that “benefits their positions in these elections”, they said.

Deepfakes are absolutely going to be front and centre of the US presidential elections

Theresa Payton, former White House Information Officer

The report suggested that Beijing has “doubled down” on targets and increased the sophistication of its influence operations against the electoral campaigns in the US.

Clint Watts, general manager of Microsoft’s Threat Analysis Center, said: “China is using fake social media accounts to poll voters on what divides them most to sow division and possibly influence the outcome of the US presidential election in its favour.

“China has also increased its use of AI-generated content to further its goals around the world.”

The threat centre reported last year that social media accounts “affiliated” with the Chinese government had impersonated US voters to influence midterm elections in 2022.

“They use original videos, memes, and infographics as well as recycled content from other high-profile political accounts,” Watts said.

In May last year, Donald Trump himself shared a deepfake of his interview with CNN.

The doctored clip showed CNN anchor Anderson Cooper saying: “That was President Donald J. Trump ripping us a new a**hole here on CNN’s live presidential townhall.”

While the lip movement of the anchor does not match the words being said, the video is still convincing for those who are not aware of the tell-tale signs of deepfakes.

Defence against deepfakes

Here’s what Sean Keach, Head of Technology and Science at The Sun and The U.S. Sun, has to say…

The rise of deepfakes is one of the most worrying trends in online security.

Deepfake technology can create videos of you even from a single photo – so almost no one is safe.

But although it seems a bit hopeless, the rapid rise of deepfakes has some upsides.

For a start, there’s much greater awareness about deepfakes now.

So people will be looking for the signs that a video might be faked.

Similarly, tech companies are investing time and money in software that can detect faked AI content.

This means social media will be able to flag faked content to you with increased confidence – and more often.

But as the quality of deepfakes grows, you’ll likely struggle to spot visual mistakes – especially in a few years.

Ask if the video is something that would make sense for someone to have faked – and who benefits from you seeing this clip?

If you’re being told something alarming, a person is saying something that seems out of character, or you’re being rushed into an action, there’s a chance you’re watching a fraudulent clip.
