Tech in 2024: A Musk-fueled X-odus, and the deepfake apocalypse that didn’t happen
We’re rounding up the biggest trends, headlines, and stories from the tech scene in 2024.
In this one, we talk about how a significant number of users fled the now-right-wing X platform, and how the threat of deepfake videos hasn’t fully materialized…yet.
An X-odus from the ‘pro-Trump echo chamber’ formerly known as Twitter
On November 15, horror author Stephen King announced he had left X, calling the platform “too toxic.”
He wouldn’t be the only one. The Guardian had also left, as had several others, including the German football club St. Pauli, actor Jamie Lee Curtis, and the Spanish newspaper La Vanguardia.
NBC News called it a “pro-Trump echo chamber,” thanks to the guidance of new owner Elon Musk, who, as the US elections came nearer at the time, “ramped up his pro-Trump efforts on X with dozens of political posts a day.”
The site wrote, “The shifts have remade X’s user base, consolidating a right-wing elite who drive political debate on the app — similar to conservative talk radio hosts who appeared a generation ago.” Called “newsbrokers,” they were found to lack journalistic standards and propagate conspiracy, while getting a personal boost from Musk.
Seeking to escape that kind of environment, a significant number of X users fled to rival Bluesky, which gained one million new users after the US election, mainly from North America and the UK.
Among those who left was New York University professor Ruth Ben-Ghiat, who, quoted in a Guardian report, warned that X “functions as a Trump propaganda outlet and far-right radicalization machine [that] could be accelerated” once Trump takes office.
Are you still on X? We have a growing community on the Rappler Communities app if you’re looking to strike out.
The deepfake apocalypse will have to wait
At the beginning of 2024, one of the biggest fears was that deepfake videos would wreak havoc on the many elections set to happen that year. That fear didn’t materialize.
As we wrote earlier: “A September article by NPR noted that, while there has been widespread concern on deepfake videos and audio ‘doing or saying something they didn’t,’ the biggest use of AI in disinformation has been in boosting the volume of fake content on social media, fake accounts, and fake news stories, or generating memes and jokes.”
Perhaps the most prominent example, however, was the Trump campaign running an AI-generated image of a fake Taylor Swift endorsement, which the singer eventually addressed when she officially endorsed then-candidate Kamala Harris.
“Recently I was made aware that [an] AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation…. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter,” Swift said then.
AI-generated videos used in scams, politically motivated content
While AI-generated videos have not found much use yet in elections (although eventual Indonesian President Prabowo Subianto used an AI-generated cute version of himself in his campaign), deepfake videos have been used in scams and in potentially dangerous politically motivated scenarios.
In the Philippines this year, we saw deepfake GMA anchors Ivan Mayrina and Susan Enriquez promoting a free “Mama Mary necklace from Vatican, Italy”; a deepfake Maria Ressa promoting Bitcoin; personalities such as Vilma Santos, Lucy Torres, Regine Velasquez, and, again, Maria Ressa, each with their own deepfake videos promoting fake diabetes cures; and President Ferdinand “Bongbong” Marcos Jr., who was targeted by a deepfake audio clip ordering a military attack and a deepfake video alleging the use of a psychoactive substance.
There are still tell-tale signs that can help you spot a deepfake video, as we’ve previously laid out in a guide. But as the technology improves, detecting them may only get harder.
Time wrote about the “underwhelming impact” of AI in this year’s elections, but also warned: “However, it’s likely that the role of generative AI will only expand in future elections. Improved technology will allow campaigns to create messaging and fundraise more quickly and inexpensively…. But improved AI technology will also lead to more believable deepfake video and audio clips, likely leading both to the spread of disinformation and an increasing distrust in all political messaging and its veracity.”
AI for good!
But, speaking of AI, we also rolled out our new Rai chatbot during the 2024 Social Good Summit.
Rai gathers information from the Rappler website, pulling in the latest articles every 15 minutes. This is unlike other chatbots whose data sources include random websites whose content is not necessarily vetted. After all, garbage in, garbage out.
This makes Rai the most up-to-date and reliable chatbot when it comes to news that matters to Filipinos and other citizens interested in the Philippines and the region. This also gives Rai a unique Filipino and Asian perspective when it comes to world events.
It’s currently in beta, and limited to Rappler Plus members, but we hope to do a wider rollout in the future.
Hollywood vs. AI
Taylor Swift wasn’t the only celebrity who had a beef with AI in the past year.
Before that, there was Black Widow actor Scarlett Johansson, who accused OpenAI of creating a voice known as “Sky” that sounded “eerily similar” to her voice, after she had declined earlier to be the voice for the chatbot.
OpenAI CEO Sam Altman told Reuters, “The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers. We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson.”
Some pursued legal action. Sony, Universal, and Warner sued AI music models Suno and Udio; The New York Daily News, The Chicago Tribune, The Denver Post, and five other newspapers sued OpenAI and Microsoft for training their AI systems on their work without consent; authors sued Anthropic over allegedly consent-less AI training as well; and voice actors and a Blade Runner 2049 producer filed their own lawsuits, too.
These legal actions are part of the ongoing ethical debate over AI systems using online data to train their models. The companies face challenges mostly from creatives who believe their work was used without their consent and without proper remuneration.
Social media bans
Just this November, Australia passed a law, the first in the world, banning social media use for individuals under 16 years old. Platforms such as Instagram, Facebook, and TikTok may face fines of up to $32 million if users under the age threshold are able to log in, with enforcement to begin in January 2025.
Called the Social Media Minimum Age Law, it seeks to reduce the mental health issues in young people that may be caused by exposure to social media.
Reuters reported there are similar restriction laws in France and some US states that require parental consent, but Australia’s ban is “absolute.”
In the US, social media giant TikTok is fighting for its survival, as the government seeks to ban the platform unless its Chinese parent company ByteDance sells its ownership stake to another entity that’s not controlled by what the US considers a hostile foreign government.
ByteDance has until January 19, 2025, to divest. Some sources close to incoming President Trump, however, say the new government may attempt to stop the ban.
There is a similar effort in the Philippines with regard to TikTok. In May, Manila Representative Bienvenido Abante Jr. filed House Bill 10489, which seeks to prohibit apps controlled by what is considered a “foreign adversary” of the Philippines, with the President given the power to determine who counts as a foreign adversary.
The Philippine News Agency quoted Abante as saying, “China could therefore use TikTok’s content recommendations to fuel misinformation, a concern that has escalated in the United States and led to the passage of a law…banning TikTok in the US.”
India has banned TikTok, along with 58 other Chinese apps, since 2020. – Rappler.com