Most Downloaded ‘AI’ Powered News App Routinely Makes Up News

We’ve noted repeatedly how early attempts to integrate “AI” into journalism have proven to be a comical mess, resulting in no shortage of shoddy product, dangerous falsehoods, and plagiarism. It’s thanks in large part to the incompetent executives at many large media companies, who see AI primarily as a way to cut corners, assault unionized labor, and automate lazy and mindless ad engagement clickbait.

The result has been a Keystone Cops-esque parade of scandals in which a long line of traditional brands (CNET, Gannett, Sports Illustrated) have been hollowed out and turned into superficial and badly automated engagement and infotainment mills.

But “AI” has also spurred the creation of countless new pseudo-news websites. Outlets that basically use AI to scour the web for journalism and press releases, then reconstitute them into an aggregated mush that sort of looks like original news. This mindless, automated SEO engagement chasing redirects money that should be going to quality journalism and analysis to a sort of dull simulacrum.

Recently we saw a flood of stories about such companies. One focused on NewsBreak, a Chinese-backed app that's now the most downloaded news app in the U.S. Like these other outlets, NewsBreak basically scours the web for existing stories and press releases, jumbles them together into something that vaguely looks like journalism, then poops it out at incredible scale.

Last Christmas, the app completely made up a story about a shooting in New Jersey that never happened:

“…no such shooting took place. The Bridgeton, New Jersey police department posted a statement dismissing the article – produced using AI technology – as ‘entirely false’.”

“Nothing even similar to this story occurred on or around Christmas, or even in recent memory for the area they described,” the post said. “It seems this ‘news’ outlet’s AI writes fiction they have no problem publishing to readers.”

Real journalism is hard and time-consuming. It requires actually talking to human beings. It requires digging through court filings. It requires comparing the opinions of multiple experts with a litany of different biases to get to the truth. That’s absolutely not what these outlets are doing, and it’s a direct byproduct of basing the entirety of our truth and information systems on ad engagement.

This was already a problem before LLMs. LLMs have just supercharged the industry’s worst tendencies. “AI” certainly could theoretically help journalists with research, analysis, editing, transcription, and administrative work. But instead it’s largely being used as a way to automate engagement clickbait at unprecedented scale, which further undermines trust in — and the financing of — real reporting.

In this case it took NewsBreak four days to correct the story. And the outlet has a history of at least 40 such fabricated stories (that we know of) since 2021. Amusingly, this most recent error came after its AI scraped false stories written by another shoddy AI-driven infotainment mill named FindPlace.xyz, which was itself busted for making up stories under fake author bylines.

Ironically, all of these outlets try to frame themselves as the solution to the very real problem that is a dearth of quality journalism (especially local journalism). Instead they’re making the problem significantly worse. And they’re helped by companies like Google, which are increasingly focused on stock-bumping half-assery instead of maintaining quality control across both Google News and Google search.

Whatever this monstrosity we’re building is, it’s most decidedly the complete opposite of what journalism actually needs. And if it continues apace without some kind of meaningful trajectory shift, the cost to history, collective understanding, and an informed public will be incalculable.
