
Cannibalizing generative AI models could ‘go mad’ over time

Featured image description: a 3D render of AI models personified as colorful, anthropomorphic beings with a mixture of human and machine-like features, feeding each other in a continuous cycle, surrounded by warning symbols and flashing lights that signify the risks of this endless loop.

Generative AI models that just feed off one another could end up ‘going mad’ over time, affecting the quality of the output.

A new study by researchers at Rice University and Stanford University has highlighted how the quality of generative AI could decline if AI engines are trained on machine-made input rather than input from humans. In essence, if AI models learn from one another in a cannibalizing fashion, the long-term quality of the systems could suffer.

How can generative AI models go ‘mad’?

The researchers have dubbed this effect Model Autophagy Disorder (MAD). They parallel it with mad cow disease, a neurological disease in cows that are fed the infected remains of other cattle. Just as those cows are affected by eating cow meat, so too are AI systems affected by taking in information from other systems.

AI models need fresh data from the real world to work effectively, in terms of both quality and diversity. The risk is that future AI models will instead be fed machine-generated data by people trying to speed up the training process.

“Some ramifications are clear: without enough fresh real data, future generative models are doomed to MADness,” computer engineer Richard Baraniuk from Rice University told Science Alert.

The findings came from testing a visual generative AI model trained on three types of data: fully synthetic data, a mix of synthetic data and a fixed set of real data, and a mix of synthetic data and fresh real data. In the first two scenarios, the model's output became progressively more flawed.

For example, noticeable marks appeared on computer-generated faces, and generated numbers became harder to read. Faces also began to look more and more alike, highlighting the loss of diversity over time in models taught on synthetic data.

“Our group has worked extensively on such feedback loops, and the bad news is that even after a few generations of such training, the new models can become irreparably corrupted,” said Baraniuk.
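To make that feedback loop concrete, here is a minimal, purely illustrative sketch (not the researchers' code or models): a toy "generative model" that simply fits a Gaussian to its training data and samples from it. Retrained generation after generation on nothing but its own output, the spread (diversity) of its samples collapses toward zero; mixing in fresh real data each generation keeps it stable. All names and parameters below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100            # samples per generation
GENERATIONS = 500  # how many times the toy model is retrained on its own output

def fresh_real_data():
    # Fresh "real-world" data: standard normal samples (spread of 1.0).
    return rng.normal(loc=0.0, scale=1.0, size=N)

def run(mix_in_fresh_real):
    data = fresh_real_data()
    for _ in range(GENERATIONS):
        mu, sigma = data.mean(), data.std()        # "train" the toy model
        synthetic = rng.normal(mu, sigma, size=N)  # generate from the model
        if mix_in_fresh_real:
            data = np.concatenate([synthetic, fresh_real_data()])
        else:
            data = synthetic
    return data.std()

print("Fully synthetic loop:", round(run(False), 4))        # spread collapses toward 0
print("Synthetic + fresh real data:", round(run(True), 4))  # spread stays close to 1
```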

Although this experiment focused on image generation, the findings can be extrapolated to far-reaching consequences. As AI is used in more and more systems, the researchers warn of a 'doomsday scenario.'

“If left uncontrolled for many generations, MAD could poison the data quality and diversity of the entire internet,” said Baraniuk. “Short of this, it seems inevitable that as-to-now-unseen unintended consequences will arise from AI autophagy even in the near term.”

Featured image: Ideogram

The post Cannibalizing generative AI models could ‘go mad’ over time appeared first on ReadWrite.
