The Tracinski Letter

Amor Vincit Omnia

Robert Tracinski
Mar 21, 2024

Love triumphant over all—as imagined by Caravaggio in 1602.

I recently put up a new piece at the newsletter for my Prophet of Causation book: some in-depth background research on the relationship between Ayn Rand and the Stoics. There’s a lot of interesting stuff there, but you have to subscribe and support the book project to read the whole thing.

The Prophet of Causation: "The Pain and the Passion"

I’ve been posting a bit lightly to this newsletter in the last week or so because I’ve been writing a few articles for other publications that haven’t quite been published yet. I’ll let you know when they go up.

The Real AI Apocalypse

A while back, I speculated about an AI doom loop, but perhaps not the one you were expecting.

In order to train programs on good-quality text and images, AI platforms rely on an abundance of human-generated material from existing media, and not just amateur bloggers or social media, but well-funded, carefully edited publications. If AI-generated text and images were to completely flood the internet—as they may soon do—then an AI that goes out to “scrape” the material it is designed to emulate would be working off of text and images generated by other AI. This raises the prospect of a doom loop where AI is copying AI and getting farther and farther away from anything that is useful or meaningful to humans.

Well, this is bolstered by several recent studies.

AI is eating its own tail. In what can best be described as a terrible game of telephone, AI could begin training on error-filled, synthetic data until the very thing it was trying to create becomes absolute gibberish. This is what AI researchers call "model collapse."

One recent study, published on the pre-print arXiv server, used a language model called OPT-125m to generate text about English architecture. After training the AI on that synthetic text over and over again, the 10th model’s response was completely nonsensical and full of a strange obsession with jackrabbits.

Another recent study, similarly posted to the pre-print arXiv server, studied AI image generators trained on other AI art. By the AI’s third attempt to create a bird or flower with only a steady diet of AI data, the results came back blurry and unrecognizable.

So, in order to train new AI models effectively, companies need data that’s uncorrupted by synthetically created information….

For now, engineers must sift through data to make sure AI isn’t being trained on synthetic data it created itself. For all the hand-wringing regarding AI’s ability to replace humans, it turns out these world-changing language models still need a human touch.
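The dynamic those studies describe can be sketched with a toy simulation (my own illustration, not code from either paper): fit a simple one-dimensional Gaussian "model" to data, then repeatedly retrain each new generation only on samples drawn from the previous generation's model. Because each refit is done on a small, noisy synthetic sample, the fitted distribution drifts and tends to narrow over generations until it bears little resemblance to the original data.

```python
# Toy illustration of "model collapse" (hypothetical sketch, not the
# researchers' actual setup): each generation fits a Gaussian to data
# sampled from the PREVIOUS generation's fitted model, never the real data.
import numpy as np

def simulate_collapse(generations=200, n_samples=50, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the "real" data distribution
    history = [sigma]
    for _ in range(generations):
        # Train on purely synthetic output of the current model...
        synthetic = rng.normal(mu, sigma, n_samples)
        # ...and refit. The small-sample (maximum-likelihood) std estimate
        # is biased low, so the distribution tends to narrow generation
        # after generation, drifting away from the original data.
        mu, sigma = synthetic.mean(), synthetic.std()
        history.append(sigma)
    return history

history = simulate_collapse()
print(f"spread of the original data:      {history[0]:.3f}")
print(f"spread after 200 generations:     {history[-1]:.3f}")
```

In this sketch the model's spread typically shrinks dramatically, which is the statistical analogue of the jackrabbit-obsessed gibberish in the language-model study: each generation captures less of the original distribution's variety.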

This report seems to be based in part on a previous article with a more suggestive title: “AI Is an Existential Threat to Itself.” The most intriguing detail is that researchers initially described the phenomenon of “model collapse” as “model dementia.”

This led me to speculate on Threads that our science fiction has gotten the AI apocalypse all wrong:

Keep reading with a 7-day free trial

Subscribe to The Tracinski Letter to keep reading this post and get 7 days of free access to the full post archives.

© 2025 Robert Tracinski