
AI Takes Center Stage in Combating Online Abuse From Wimbledon to Main Street

The pristine lawns of Wimbledon are now home to more than just world-class tennis.

This year, the All England Lawn Tennis Club deployed a new ally in its fight against online harassment: artificial intelligence. The move mirrors a broader trend across industries as organizations grapple with the growing challenge of digital abuse.

Wimbledon’s AI system now scans players’ social media accounts, identifying and flagging abusive content in 35 languages, The Guardian reported. This is a response to the experiences of players like Emma Raducanu and Naomi Osaka, who have previously stepped back from social media due to online harassment.

Online businesses are increasingly adopting AI-powered reputation monitoring systems, according to experts. These tools scan social media, review sites and online forums, providing real-time alerts on mentions of a company or product. By quickly identifying negative sentiment or emerging issues, businesses can respond promptly to customer concerns. AI analysis of feedback trends also offers insights for product improvement and customer service enhancements.
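As a rough illustration of how such monitoring works, the sketch below is a minimal, hypothetical Python example, not any vendor's product: the Mention record, the keyword-based score_sentiment heuristic and the alert thresholds are all assumptions standing in for the ingestion pipelines and sentiment models real tools use. It flags mentions that score negatively and raises an alert when several negative mentions land inside a short time window.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical mention record; real monitoring tools ingest these via platform APIs.
@dataclass
class Mention:
    source: str        # e.g. "twitter", "review_site", "forum"
    text: str
    timestamp: datetime

# Illustrative keyword list standing in for a real sentiment model.
NEGATIVE_TERMS = {"scam", "broken", "refund", "terrible", "runs small", "wrong size"}

def score_sentiment(text: str) -> int:
    """Crude keyword score: each negative term found lowers the score by one."""
    lowered = text.lower()
    return -sum(term in lowered for term in NEGATIVE_TERMS)

def negative_mentions(mentions: list[Mention], threshold: int = -1) -> list[Mention]:
    """Keep only mentions whose sentiment score is at or below the threshold."""
    return [m for m in mentions if score_sentiment(m.text) <= threshold]

def should_alert(mentions: list[Mention],
                 window: timedelta = timedelta(hours=1),
                 spike_size: int = 5) -> bool:
    """Alert when enough negative mentions fall inside a single time window."""
    flagged = sorted(negative_mentions(mentions), key=lambda m: m.timestamp)
    for i, start in enumerate(flagged):
        in_window = [m for m in flagged[i:] if m.timestamp - start.timestamp <= window]
        if len(in_window) >= spike_size:
            return True
    return False
```

Fed a stream of mentions, should_alert would trip when, say, five sizing complaints arrive within an hour, the kind of pattern described in the retailer example that follows.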

“Online abuse isn’t just a nuisance, it’s a bottom-line issue,” PeakMetrics CEO Nick Loui told PYMNTS.

For example, an eCommerce clothing retailer might detect a pattern of complaints about sizing inconsistencies, allowing it to address the issue before it damages the brand's reputation. This approach aims to maintain brand image, foster customer trust and potentially prevent sales losses.

Digital Abuse

Online abuse has become a pervasive issue, affecting individuals and businesses alike. Industry estimates suggest that a company’s reputation can account for up to 63% of its market value, underlining the financial implications of unchecked online harassment.

To put the issue in context, the Internet Crime Complaint Center received 9,587 harassment and stalking complaints in 2023.

The problem extends beyond high-profile athletes and large corporations. Local businesses, too, have found themselves vulnerable to digital attacks, particularly fake reviews. In 2023, Google reported blocking over 170 million fake reviews, a 45% increase from the previous year. The surge highlights the escalating nature of online misinformation and its potential to damage reputations.

As the volume and sophistication of online abuse grow, AI has emerged as a key tool in the fight against digital harassment. These systems employ algorithms that analyze patterns and quickly identify suspicious activity.

Google’s approach illustrates the potential of AI in this arena. The company’s new algorithm examines review patterns over time, spotting red flags such as identical reviews across multiple business pages or sudden spikes in extreme ratings.
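Google's actual algorithm is not public, but the sketch below is a minimal illustration of the two red flags the company describes: identical review text appearing across multiple business pages, and a sudden spike in extreme ratings for a single business. The review record layout, field names and thresholds here are assumptions for illustration only.

```python
from collections import defaultdict
from datetime import timedelta

# Assumed review record for illustration:
# {"business_id": str, "text": str, "rating": int, "date": datetime.date}

def duplicate_text_flags(reviews: list[dict], min_pages: int = 3) -> set[str]:
    """Flag review texts that appear verbatim on several different business pages."""
    pages_by_text: dict[str, set] = defaultdict(set)
    for r in reviews:
        pages_by_text[r["text"].strip().lower()].add(r["business_id"])
    return {text for text, pages in pages_by_text.items() if len(pages) >= min_pages}

def extreme_rating_spike(reviews: list[dict], business_id: str,
                         window_days: int = 7, spike_ratio: float = 3.0) -> bool:
    """Flag a business whose recent count of 1- or 5-star ratings is several times
    its prior weekly average."""
    extremes = [r for r in reviews
                if r["business_id"] == business_id and r["rating"] in (1, 5)]
    if not extremes:
        return False
    latest = max(r["date"] for r in extremes)
    cutoff = latest - timedelta(days=window_days)
    recent = sum(1 for r in extremes if r["date"] > cutoff)
    history = [r for r in extremes if r["date"] <= cutoff]
    if not history:
        return recent >= 5  # a burst of extremes with no history is itself suspicious
    weeks = max((cutoff - min(r["date"] for r in history)).days / 7, 1)
    weekly_average = len(history) / weeks
    return recent >= spike_ratio * max(weekly_average, 1)
```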

However, AI systems are not infallible. False positives can occur, and context-dependent harassment may still slip through filters.

“AI can parse information chaos faster than ever, but it’s not foolproof,” Loui said.

Human oversight remains crucial in interpreting AI-flagged content, he added.

Proactive Measures

Some organizations are pushing AI capabilities beyond mere detection to proactive response strategies. These systems can suggest appropriate reactions to negative feedback, effectively giving businesses a round-the-clock crisis management resource.

This proactive approach allows organizations to identify emerging issues before they escalate and to develop strategies that mitigate potential crises. It's particularly valuable in an era when online discourse can rapidly shape public opinion and affect business outcomes.

The battle against online abuse isn't limited to businesses and athletes. Executives and other public figures are increasingly finding themselves in the digital crosshairs. In January, responding to rising online threats, ReputationDefender, a digital privacy brand of Gen, introduced Total Radius, an AI-powered service designed to protect high-profile individuals from risks that could jeopardize their physical safety.

“Today’s executives, professionals and other public-facing people need to make tough decisions in the real world that are often magnified and scrutinized in the online world,” Gen President Ondrej Vlcek said at the time of the launch. “This can lead to both physical and digital threats targeting them, and the people close to them, including family members.”

AI has been used for some time to protect against reputation attacks by identifying fraudulent websites and issuing takedowns before those sites can harm the targeted company, SlashNext Email Security+ CEO Patrick Harr told PYMNTS.

“AI can also be employed as in this case to search the dark web and general web for unfounded reputation attacks and protect executives, VIPs and employees from these attacks by writing countermeasures or devaluing/burying search results of untruths,” he said.

Reality Defender, for example, uses AI to help companies detect deepfakes, Zendata CEO Narayana Pappu told PYMNTS.

“This technology could be used to protect company reputations or employees by verifying media content, detecting impersonation attempts and safeguarding employee privacy by preventing the spread of deepfake content that might violate employee’s privacy or be used for harassment,” he said.


