As artificial intelligence accelerates into every corner of modern life, the dominant narrative remains one of inevitability. With smarter models and total automation, many feel catapulted toward a future reshaped by machines that outperform humans at almost everything. But increasingly, tech insiders are vocalising alternative views: that the trajectory is neither healthy, sustainable, nor inevitable. Dan Houser – co-founder of Rockstar Games – recently compared the current boom to Mad Cow disease, saying it’s a system that’s feeding on itself and will eventually become fundamentally unstable. It’s a provocative analogy that opens a deeper question, one that may bring relief to some – what if the flaws emerging in AI are not bugs to be fixed, but structural limitations that will prevent the technology from ever truly “taking over the world”?

The Mad Cow Analogy: Is AI Eating Itself?
Houser’s comparison hinges on a specific historical lesson. Mad Cow disease spread when cattle were fed processed remains of other cattle, creating a closed loop of degraded biological material that eventually produced catastrophic neurological failure. His argument is that artificial intelligence, rather than becoming invincible and taking over the world, is drifting into a similar pattern. Models are increasingly being trained on synthetic outputs previously generated by other AI systems – not on human-created knowledge.
Essentially, as automated models continue growing, more of what we see on the internet is generated by the same systems. Thus, as new and existing models train further, they are increasingly digesting their own outputs. Researchers have already documented a phenomenon known as model collapse, where generative systems trained repeatedly on AI-created data become less accurate, less diverse, and more detached from reality over time. Instead of their intelligence compounding, the systems end up hollowing themselves out, reinforcing their original errors and flattening nuance.
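To make that feedback loop concrete, here is a deliberately minimal sketch of the dynamic researchers describe. The “model” is nothing more than a Gaussian fitted with NumPy, and every number in it is an illustrative assumption rather than a simulation of any real AI system. Each generation is trained only on synthetic data produced by the generation before it, and over many rounds its spread tends to shrink while its centre drifts away from the original human data – a statistical miniature of the “hollowing out” described above.

```python
# Toy sketch of the "model collapse" loop: each generation of a trivial
# model (a Gaussian) is fitted only to samples produced by the previous
# generation. Purely illustrative; not a simulation of any real AI system.
import numpy as np

rng = np.random.default_rng(seed=0)

# Generation 0 is "trained" on human data with genuine variety.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(201):
    # "Train" the model: estimate its centre and spread from current data.
    mu, sigma = data.mean(), data.std()
    if generation % 20 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, spread={sigma:.3f}")
    # The next generation sees only the synthetic output of this one --
    # the closed loop that the Mad Cow analogy points at.
    data = rng.normal(loc=mu, scale=sigma, size=50)
```

Run for enough generations, the spread typically dwindles towards zero and the estimated centre wanders, because every round compounds the estimation error of training on a finite, self-generated sample instead of fresh human data.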
A Growing Problem Tech Leaders Don’t Talk About
Public-facing AI marketing focuses on scale: more data, more integration, more parameters. What’s not being talked about, however, is the growing scarcity of high-quality, human-generated training material. Much of the open internet has already been ingested by existing models, meaning what’s left is increasingly polluted by spam, automated noise, and other forms of AI content.
Large language models without access to continuously renewed human input – art, reasoning, writing, and genuine lived experience – are at serious risk of stagnation or regression. The irony is stark: the more automated content floods the web, the less reliable the web becomes as a training source.
Houser’s criticism cuts deeper than technical architecture. He argues that those pushing hardest for complete AI adoption are often insulated from the intellectual and cultural costs, prioritising efficiency over proper understanding. In his own words, these executives are “not fully-rounded humans”, and their presence narrows the range of perspectives inside decision-making circles.
What Video Games Teach Us About AI
Rockstar Games – which Houser co-founded – built its reputation on human-crafted complexity: satire, cultural texture, and sheer creativity. These are exactly the qualities that generative AI struggles to reproduce convincingly.
While models can generate dialogue, textures, and code snippets, they lack an internal sense of meaning, motivation or consequence. These qualities are essential to storytelling and world-building, and game developers have already encountered AI’s limits in practice. They highlight a broader issue: AI can imitate form, but it doesn’t understand context. It can predict what should come next, but not why it should come next at all.
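As a hypothetical illustration of what “predicting what should come next” means in practice, the sketch below builds the crudest possible next-word predictor – it simply counts which word follows which in a made-up three-sentence corpus, then generates text from those counts. The corpus and output are invented for illustration; real models are vastly larger and more sophisticated, but the underlying operation is still statistical continuation rather than comprehension.

```python
# Bare-bones "what comes next" predictor: count word-to-word transitions
# in a tiny corpus, then always emit the most frequent continuation.
from collections import Counter, defaultdict

# A tiny, made-up corpus; real training data runs to billions of words.
corpus = (
    "the hero entered the city . the hero drew a sword . "
    "the crowd cheered the hero ."
).split()

# "Training": count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

# "Generation": repeatedly choose the most common continuation.
word = "the"
output = [word]
for _ in range(8):
    word = following[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # fluent-looking surface form, quickly stuck in a loop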
Others Are Sounding the Same Alarm
Houser is just one of a growing number of concerned tech executives voicing similar sentiments. They often warn that AI systems are brittle, overhyped, and fundamentally misaligned with how intelligence really works.
Confident but false outputs – often called “hallucinations” – are a sign that these systems don’t actually know anything in a human sense. There are also concerns about skyrocketing energy costs, data bottlenecks, and diminishing returns as models scale. Rumours are circulating that brute-force scaling – simply expanding as rapidly as possible – is approaching economic and physical limits.
Reassuringly, perhaps, the fear of runaway super-intelligence starts to look less like an imminent threat and more like a distraction from the real risks: cultural homogenisation, misinformation, and institutional over-reliance on systems that can never work like human beings.
Are These AI Limitations a Good Thing?
This structural weakness may be precisely what prevents catastrophe. If AI systems degrade when isolated from human input, then they can never become self-sustaining forms of intelligence. They remain parasitic on human creativity and judgement, and that dependence undermines the popular science-fiction image of machines autonomously improving themselves beyond human control.
In that sense, AI may be more like an amplifier than a replacement. It can be a powerful tool, but fundamentally constrained. Perhaps it can accelerate patterns already present in society, but it cannot generate meaning, ethics, or purpose on its own. It may not be harmless, but it does start to appear limited.
The Real Risk Behind It All
The most serious danger in this case would not be AI itself, but rather how institutions respond to it. Corporations, media organisations, and even governments are increasingly treating AI outputs as authoritative, even when accuracy is uncertain. Over time, this degrades human expertise, accountability, and critical thinking.
If AI-generated material becomes the default reference point in law, journalism, education, or policy, for example, then errors stop being isolated mistakes and start being systemic failures. This is the true “mad cow” risk: not that machines rebel, but that humans outsource judgement until the feedback loop implodes.
Houser simply asks whether society is confusing automation with wisdom, and speed with progress.
Final Thought
If AI is truly entering its “mad cow” phase, then the fantasy and the fear of total machine dominance both look less convincing. That may disappoint futurists and alarmists, but it should reassure everyone else.
The future certainly needs human judgement, creativity, and understanding. If we take arguments like Houser’s seriously, the danger isn’t that AI will replace everybody – it doesn’t look like AI could ever take over the world. But that doesn’t mean we won’t end up handing the world over voluntarily by relying on automated models too much in the meantime.