AI scrapers running out of space as restrictions close the net

AI crawlers are facing increasingly hostile online restrictions, study finds

AI scrapers are increasingly facing hostile online environments as data sources dry up.

Crawling for data, also known as scraping, previously meant that vast troves of text, images, and videos could be pulled from the internet without much trouble. AI models could be trained on this seemingly infinite source, but that is no longer the case.

A study from the AI research think tank the Data Provenance Initiative, titled "Consent in Crisis," has found that a hostile environment now awaits website scrapers, especially those working on the development of generative AI.

Researchers probed the domains used in three of the most important datasets for training AI models and found that this data is now more restricted than ever.

The researchers assessed 14,000 web domains and uncovered an "emerging crisis in consent" as online publishers have reacted to the presence of crawlers and the harvesting of their data. Across the three datasets, known as C4, RefinedWeb, and Dolma, around 5% of all data, and 25% of content from the best sources, now carries enforced restrictions.

In particular, OpenAI's GPTBot and Google's Google-Extended crawlers prompted websites to tighten their robots.txt restrictions. The study found that between 20% and 33% of the top web domains have introduced extensive restrictions on scrapers, compared to a much smaller share at the start of last year.
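Websites express these restrictions through the robots.txt file at their root, which addresses each crawler by its user-agent token. As a rough illustration (not drawn from the study itself), a site that wants to block both of these AI crawlers from its entire domain would add entries along these lines:

    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

Compliance with robots.txt is voluntary on the crawler's part, which helps explain why many publishers have also turned to their terms of service.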

Hard crawls result in full bans

Across the whole base of domains, 5-7% have enforced restrictions, up from just 1% over the same period.

It was noted that many websites had changed their terms of service to completely prohibit crawling and lifting content for use in generative AI, though these changes were not always mirrored in their robots.txt restrictions.

AI companies have possibly wasted time and resources on excessive crawling that was likely not required. The researchers showed that while around 40% of the top sites used across the three datasets were news-related, over 30% of ChatGPT queries were for creative writing, compared to just 1% that involved news.

Other notable requests included translation, coding help, and sexual roleplay.

Image credit: Ideogram
