The New York Times highlighted recent research from Richard G. Baraniuk's group on the risks of training generative AI models with AI-generated data, a phenomenon explored in their paper "Self-Consuming Generative Models Go MAD." The team, including Sina Alemohammad and others, warns that when models are repeatedly trained on their own synthetic outputs without enough fresh real data, they enter a degenerative cycle the authors call Model Autophagy Disorder (MAD), in which quality (precision) and diversity (recall) progressively decline. This feedback loop, now attracting widespread attention, raises urgent concerns about the future robustness and reliability of generative AI systems.
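The self-consuming loop described above can be illustrated with a toy experiment: a one-dimensional Gaussian "model" repeatedly refit to its own samples with no fresh real data mixed in. This is a minimal sketch for intuition only, not the paper's experimental setup (which studies deep generative image models); the sample count, generation count, and variable names are illustrative assumptions.

```python
import numpy as np

# Toy illustration of a fully synthetic ("self-consuming") training loop.
# A Gaussian stands in for a generative model; each generation is fit to
# samples produced by the previous generation.

rng = np.random.default_rng(0)

n_samples = 200       # synthetic samples drawn per generation (assumed)
n_generations = 50    # number of self-consuming generations (assumed)

# Generation 0: "real" data from a unit Gaussian.
data = rng.normal(loc=0.0, scale=1.0, size=n_samples)

for gen in range(1, n_generations + 1):
    # "Train" the model: fit a Gaussian to the current training set.
    mu, sigma = data.mean(), data.std()

    # Generate the next training set purely from the model's own samples,
    # with no fresh real data added.
    data = rng.normal(loc=mu, scale=sigma, size=n_samples)

    if gen % 10 == 0:
        # The fitted standard deviation, a rough proxy for diversity,
        # drifts toward zero as the loop consumes its own outputs.
        print(f"generation {gen:3d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Running the sketch shows the fitted spread shrinking over generations, a simple analogue of the diversity loss the paper documents; as the authors note, mixing enough fresh real data into each generation's training set counteracts this decline.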