Microsoft warns against ‘deepfake fraud’ and deceptive AI as the company begs government leaders to take action

MICROSOFT is calling on lawmakers to combat dangerous deepfake technology and AI-powered cyberattacks.

On Tuesday, the tech behemoth released a 42-page report detailing ways to protect consumers from “abusive AI-generated content.”

Microsoft has outlined plans to combat the rise of “abusive AI-generated content,” with Vice Chair Brad Smith calling for “new laws to help stop bad actors”

This includes pressing government officials to fix a problem the tech industry itself created.

Vice Chair Brad Smith summed up the call to action in a post on the company blog.

“While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud,” Smith wrote.

“In short, we need new laws to help stop bad actors from using deepfakes to defraud seniors or abuse children.”

The report detailed instances of deepfakes, a portmanteau of “deep learning” and “fake.”

The term was coined in 2017 and originally used to describe videos manipulated by open-source face-swapping technology.

With the advent of artificial intelligence, it has become easier than ever for malicious actors to produce synthetic content, especially as such technology becomes readily available.

The Microsoft report cites instances of deepfakes being used to manipulate election outcomes.

In Slovakia, for instance, a faked audio recording rapidly spread in which candidate Michal Šimečka appeared to brag about rigging the election.

Despite debunking the recording as a fake, Šimečka suffered in an already tight race, with Microsoft suggesting the deepfake may have influenced the result.

“While it is impossible to credit the deepfake for the result, the spread is within the typical statistical error rate, and its impact cannot be easily dismissed,” the report read.

Other instances not covered in the paper have been reported across the United States.

In January, a robocall using an AI clone of President Joe Biden’s voice urged Democrats not to vote in the New Hampshire primary.

The man behind the plot was arrested and indicted on charges of voter suppression and impersonation of a candidate.


The company has released a host of AI tools including Copilot, billed as “your everyday AI companion”

Earlier this week, Elon Musk posted a deepfake video of presumptive Democratic presidential nominee Kamala Harris to X, formerly Twitter.

The tech tycoon was slammed for promoting manipulated content in violation of his own platform’s terms.

Microsoft cited the use of deepfakes to manipulate vulnerable people, specifically senior citizens, into falling for schemes.

The paper referenced a 2023 study in which 37% of organizations worldwide reported encountering attempted voice deepfake fraud.

Voice phishing, or vishing, has exploded in frequency over the past year.

Using AI voice-cloning technology, cybercriminals pose as a victim’s friends or relatives and convince the victim to send money or hand over sensitive bank account details.

Microsoft insists it is doing everything it can, joining other tech companies to further AI education and lay out defenses against the deceptive use of AI

The company insists it is doing all it can to combat the issues, and makes an admittedly strong case for itself.

Initiatives include signing a cross-industry agreement at the Munich Security Conference in February, where Microsoft joined a coalition of 20 companies in launching the Tech Accord to Combat Deceptive Use of AI in 2024 Elections.

The tech giant also partnered with OpenAI in May to launch a $2 million Societal Resilience Fund.

Both companies hope the initiative will “further AI education and literacy among voters and vulnerable communities.”

Smith says the fear is that tech giants and lawmakers “will move too slowly or not at all” when it comes to combatting cybercrime and fraud

However, Microsoft insists the federal government needs to do more to set and enforce laws regarding the improper use of AI technology.

Suggestions for U.S. lawmakers include requiring AI system providers to clearly label synthetic content.

Microsoft proposes using provenance tools, which record a piece of content’s origin, any changes made to it, and its validity.

“This is essential to build trust in the information ecosystem and will help the public better understand whether content is AI-generated or manipulated,” the report read.
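
The report doesn’t spell out a specific implementation, but the basic idea behind provenance can be illustrated with a toy example. The Python sketch below checks a media file against a hypothetical detached manifest recording its hash, origin, and edit history; the manifest format and field names are invented for illustration. Real provenance standards such as C2PA, which Microsoft helped found, go much further, embedding cryptographically signed manifests in the media file itself.

```python
# A minimal sketch of the idea behind provenance metadata: a record of a file's
# origin and edit history that a viewer can verify against the file's contents.
# The manifest layout here (a detached JSON file with a "content_sha256" field)
# is hypothetical, invented purely for illustration.

import hashlib
import json
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Return the hex SHA-256 digest of a file's bytes."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_provenance(media_path: Path, manifest_path: Path) -> bool:
    """Check that the media file still matches the hash in its manifest.

    A mismatch means the file was altered after the manifest was written,
    so the recorded origin and edit history no longer apply to it.
    """
    manifest = json.loads(manifest_path.read_text())
    return sha256_of_file(media_path) == manifest["content_sha256"]


if __name__ == "__main__":
    media = Path("photo.jpg")
    # e.g. {"content_sha256": "...", "origin": "...", "edits": [...]}
    manifest = Path("photo.provenance.json")
    if verify_provenance(media, manifest):
        print("Hash matches manifest: recorded origin and edit history still apply.")
    else:
        print("Hash mismatch: file changed since the manifest was created.")
```

If the file is edited after the manifest is written, the hashes no longer match, which is exactly the signal a viewer needs to stop trusting the recorded origin.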

What are the arguments against AI?

Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:

Loss of jobs – Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, as generative AI tools are trained on their work and wouldn’t function otherwise.

Ethics – When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.

Privacy – Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. There have been legal challenges to this: in 2016, the EU adopted the General Data Protection Regulation to protect personal data, and similar laws are in the works in the United States.

Misinformation – As AI tools pull information from the Internet, they may take things out of context or hallucinate, producing nonsensical answers. Tools like Copilot on Bing and Google’s generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects – such as an AI giving incorrect health advice.

Microsoft is also pushing for a federal “deepfake fraud statute” to protect victims of cybercrime.

“We need to give law enforcement officials, including state attorneys general, a standalone legal framework to prosecute AI-generated fraud and scams as they proliferate in speed and complexity,” the report read.

The company has also advocated for changes to laws at the federal and state levels to keep up to date with developments in AI.

“While it’s imperative that the technology industry have a seat at the table, it must do so with humility and a bias towards action,” Smith wrote in a foreword.

“Ultimately, the danger is not that we will move too fast, but that we will move too slowly or not at all.”
