Indian journalists are on the frontline in the fight against election deepfakes

In October 2023, Karen Rebelo came across a viral video of a stump speech by a former chief minister of the central Indian state of Madhya Pradesh. Rebelo, the deputy editor of Boom Live, a fact-checking publication based in Mumbai, is used to poring over video footage of prominent Indian political figures. But something about this particular recording of the local official, Kamal Nath, wasn’t adding up.

In the 40-second clip, Nath appears to address voters from a pulpit and says that if his party wins the upcoming state elections he plans to end the Ladli Behna program. The popular state welfare scheme was introduced by the opposing Bharatiya Janata Party, or BJP. The comments were more than uncharacteristic; they were suspiciously favorable to the BJP’s election bid.

“I told my teammates there could be only two possibilities: either you have the world’s best impersonator, or you’re using a voice clone,” Rebelo told me on a recent WhatsApp call. “There’s no other third possibility.”

“I’m quite confident that none of them would take up detection if it’s a very contentious, high-stakes election issue,” said Nilesh Christopher, an independent journalist, imagining an incident that could influence an election outcome. “That hinders and pushes journalists behind in the news cycle as we try to report the truth.”

For many reporters, gaps in domestic deepfake testing have forced them to regularly seek out expertise overseas. Christopher, for one, has been working closely with Witness, a Brooklyn-based nonprofit that furthers the use of video and technology to defend human rights. Last year, Witness launched its Deepfake Rapid Response Force, an initiative that seeks to help journalists take on some of “the most difficult deepfake cases.”

For one of Christopher’s investigations, published by Rest of World in July 2023, Witness analyzed two audio clips of a politician from the southern state of Tamil Nadu. The politician claimed the embarrassing clips — in which he accuses his own party officials of corruption — were AI-generated. The Deepfake Rapid Response Force was able to conduct machine tests and bring in Tamil-speaking experts to review the recordings. Ultimately, one of the clips was deemed real by several sources. The deepfake claim appeared to be a cover to save face.

Services like this, run by Western nonprofits, are critical to journalists reporting on election deepfakes in India right now, according to Christopher. But there are still significant downsides. For one, the full review by the Deepfake Rapid Response Force took about three weeks.

“As someone who is in the Global South, without easy access to these tools, even when organizations like Witness are willing to help, the time lag between when you submit a piece of evidence and you get the result is so far out,” he said, noting that even when checks are expedited, there are few academic or nonprofit partners who can get something turned around in less than a few days.

The art of triage

Academics I spoke to confirmed that they are being inundated with requests from journalists in India, with limited time and resources to take on these pro-bono cases. That means as the election unfolds, there is a constant need to triage which potential deepfakes get prioritized.

“I would prefer academic institutions any day of the week. There’s no question in my head at all, because the quality of the analysis will be far more robust than any commercial tool out there,” said Rebelo. In the past, Boom Live has tried to use free, publicly available AI image detectors, with mixed results. In one case Rebelo had to issue a retraction after a faulty result from an AI image testing service, an experience she calls “scarring.”

Given the limited bandwidth of academics, Boom Live invested in a subscription to Loccus, a tool that detects audio deepfakes and is already widely used in the banking sector to verify the voices of customers, among other use cases. Loccus’ profile has risen quickly, particularly after it entered a partnership with ElevenLabs last fall to build better detection tools.

“We’ve definitely seen a surge in demand this year [from journalists],” said Manel Terraza, the CEO and founder of Loccus. Most small fact-checking organizations, including those in India, have opted for the company’s monthly subscription. For larger media companies, Loccus tends to sell its API, which newsrooms use to power in-house tools and which is billed for each minute of audio processed. Loccus has signed one such deal with Agence France-Presse (AFP), the French wire service.

Little information about Loccus’ products is available to prospective customers — including its pricing tiers for journalists. Most subscription deepfake detection products on the market, including competitors Reality Defender and Deep Media, require personal consultations before signup. “When you’re building a solution like ours, you need to be very careful around adversarial attacks,” said Terraza, explaining that bad actors could try to use Loccus to reverse engineer deepfakes that evade detection.

Some journalists told me the cost of Loccus, and similar subscription products, can quickly become prohibitive for already bootstrapped newsrooms, or for independent journalists like Christopher. “I met with a couple of websites that promised to give me a response. They were behind paywalls, though, which, again, creates access friction for a journalist on a deadline,” he said.

One solution to these cost problems though has started to emerge among Indian journalists — pooling newsroom resources.

India’s deepfake “war room”

The Misinformation Combat Alliance (MCA) offers one model for how news organizations in India are working together to address both capacity and cost barriers in deepfake detection. Launched in 2022, the MCA currently includes 12 Indian fact-checking organizations among its ranks, including Boom Live, Logically Facts, India Today, and The Quint.

In February, the coalition launched a dedicated WhatsApp tipline called the Deepfakes Analysis Unit (DAU). The tipline is open to submissions from anyone, including journalists. To subsidize its operating costs, the DAU is receiving direct funding from Meta, which owns WhatsApp and has long been criticized for enabling the flow of political disinformation across India.

The most popular messaging platform in the country, WhatsApp is end-to-end encrypted, meaning false information can be forwarded from private group chat to private group chat without easy detection. The BJP is reported to operate as many as 5 million WhatsApp groups across the country.

The DAU has hired three full-time editorial staff in India to triage requests that come in on WhatsApp. If the team determines something could be a deepfake, they’ll conduct verification using standard AI detection tools and OSINT techniques. If there are signs of manipulation, that same clip may be passed off to one of the DAU’s forensic partners. That includes deepfake detection labs at academic institutions, like Hany Farid’s lab at UC Berkeley. It also includes the developers of for-profit authentication tools, including IdentifAI and Contrails AI, which send back detailed analysis free of charge.

Currently, the DAU only handles video and audio clips in English, Hindi, Tamil, and Telugu; it does not accept images or clips in other regional languages.

“The DAU becomes a sort of central nodal point to coordinate with a lot of different forensic partners. Otherwise, what was happening was all the fact-checkers were reaching out to set up individual partnerships with certain labs and startups. It wasn’t efficient,” said Tarunima Prabhakar, co-founder of the Delhi-based Tattle Civic Technologies, which is a member of the MCA and has been central to launching the DAU tip line.

There are other advantages to newsrooms working as a collective, according to Prabhakar, including bargaining power. “Big AI content-generation companies might not speak to a specific group in India, but as an alliance of 12 fact-checking orgs, they are speaking to us.”

The same can be said for academics. “We tend to try to work with umbrella groups, like when the AP does a fact-check, hundreds of organizations get that fact check,” said Farid, the Berkeley professor, who mentioned the MCA is among the umbrella organizations he works with closely.

The DAU is also able to offer access to commercial detection tools by proxy. With proper coordination, newsrooms can avoid paying repeat charges to test the same media item, either by routing their verification through the DAU or by reviewing public reports on content the DAU has already tested.

The DAU may not be a silver bullet for spotting deepfakes in India’s complex information ecosystem, but it is one innovative model for cross-industry, cross-newsroom collaboration. It’s also a model that could be replicated by other newsrooms around the world as they navigate deepfake coverage during their own upcoming election cycles.

The challenge of finding deepfakes amongst the deluge of political media circulating in India right now is daunting. The tools used to detect them are new. But the dynamics on the ground are, unfortunately, more than familiar.

“I would argue the bigger story is good old-fashioned social media: Twitter and YouTube, Facebook and TikTok, Telegram and WhatsApp,” said Farid. Political deepfakes can only travel so far without being disseminated across group chats and newsfeeds and For You Pages.

“If you send a deepfake to five of your email friends, I’m not worried. It’s how these deepfakes are not just being allowed on platforms, but being promoted and algorithmically amplified and rewarded and monetized,” he added. “That’s an age-old story there. In fact, it’s not even a generative AI story.”

This story has been corrected to accurately describe the Deepfakes Analysis Unit’s internal verification process.

Photo of Bharatiya Janata Party (BJP) supporters gathering to greet Indian Prime Minister Narendra Modi during a roadshow in Varanasi, India, on May 13, 2024 by AP/Rajesh Kumar.