The Myth of Prometheus
Top Stories of the Year: #1
This past week, I’ve been counting down the top stories of 2023, as covered in The Tracinski Letter. And seriously, it’s a lot of coverage. If you already subscribe, you should make sure you’re set to renew—and if you’re not subscribed, you really ought to do something about that.
Or you should give a gift subscription, which is a great way to get something at the last minute for a friend or loved one.
Three months for $23.
Six months for $45.
At #5 in my countdown is the rediscovery of some basic ideas of pro-free-market economics by a bunch of center-left writers. (The latest: Noah Smith reminding everyone that nations don’t get rich through conquest.)
At #4 is the global contest between political systems in Ukraine, Israel, and elsewhere.
Top story #3 is the complete takeover of the Republican Party by its Trumpist faction.
At #2 is a conservative counterculture that defines itself by negation of the entire modern world.
Let’s set all of those aside now for the #1 top story of the year: The emergence of a stridently and explicitly pro-growth, pro-innovation, techno-optimist movement.
Why is this at #1? Because this is objectively more important than anything else that is happening. I have found it astonishing that when you look at graphs of economic and technological growth and search for big global cataclysms like World War II, you find that they are barely a blip. They don’t really register compared to the larger story of human progress. Growth and innovation have repeatedly proven to be more important than the latest ups and downs of our politics—even the worst of them.
The top story of this year is precisely that realization: the supreme importance of growth and innovation and the need to celebrate it and clear the way for more of it.
Part of the impetus for the newly prominent pro-progress movement is a series of breakthroughs in the past few years in artificial intelligence. Chatbots that can answer questions plausibly and more or less accurately have gotten most of the attention, as well as AI rendering programs that can produce polished illustrations in response to a prompt. But there are more clearly productive uses for this technology, including Copilot, a service that can write routine code for computer programs.
“According to one estimate, with this AI assistance, a programmer can complete a task in half the time it would otherwise take.” Then there is a recently announced $6 billion deal in which a major pharmaceutical company bought the rights to a molecule (the basis for a drug to treat autoimmune disorders) that was discovered using AI. If we’re looking for AI applications that are about “atoms not bits,” then here it is, and it is just the beginning.
The way AI will add to productivity is described as a “work sandwich,” where a human is driving the results at the beginning (by giving the AI prompts) and at the end (by evaluating the results), with AI as the sandwich filling in the middle, augmenting human work.
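For readers who think in code, the “work sandwich” can be sketched as a simple pipeline. This is only an illustration of the pattern, not anyone’s actual product: `model_call` here is a stand-in for whatever AI service does the middle step, and the example functions are placeholders.

```python
def work_sandwich(prompt, model_call, human_review):
    """Human prompt -> AI draft -> human evaluation.

    The human supplies the bread at both ends: the prompt going in,
    and the judgment coming out. The AI only fills the middle.
    """
    draft = model_call(prompt)      # AI does the routine middle work
    return human_review(draft)      # a human evaluates and finalizes it

# Example with stand-in functions (no real AI service is called):
draft_model = lambda p: f"DRAFT: {p.upper()}"
review = lambda d: d if d.startswith("DRAFT") else None

result = work_sandwich("summarize q3 sales", draft_model, review)
```

The point of the pattern is in the signature: the human appears twice, the machine once.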
But I pointed to a wider possibility.
What the new generation of AI makes possible is the automation of automation. Up to now, when we want to automate something, that means designing a machine that will perform that task, and only that one task. If we want to make something or do something different, a human has to come in to redesign the machine or reprogram it.
But we’re on the cusp of technology that will allow us simply to decide what new thing we want to do, and as with Copilot, the AI will figure out how to do it. This is why there’s so much excitement surrounding the new AI technology: It has the potential for a great leap forward in human productivity. If a programmer can write code twice as fast, how much more work will everyone else be able to do with AI?
We are still in the very early stages of this, and advances have a tendency to arrive more slowly than we expect in the first flush of excitement. I just came across an interesting bearish case for AI which notes that much of the work it is replacing (so far) is just not that profitable.
The issue is that taking the job of a human illustrator just… doesn’t make you much money. Because human illustrators don’t make much money! While you can easily use Dall-E to make art for a blog, or a comic book, or a fantasy portrait to play an RPG, the market for those things is vanishingly small, almost nonexistent.
The interesting argument here is that this is inherent in the way AI is “trained.” AI is good at doing things when it can draw on a very large number of existing examples that can be accessed easily—in other words, when there is already a glut on the market.
Call it the supply paradox of AI: The easier it is to train an AI to do something, the less economically valuable that thing is. After all, the huge supply of the thing is how the AI got so good in the first place.
So what will remain most valuable is that which is not easy for a machine to copy because it is more rare and unusual to begin with. Hopefully that still includes at least some forms of writing.
In one of my more philosophical pieces, I addressed the limits of AI by describing what I think are the unique and underrated characteristics of human intelligence.
AI lacks three things we have that make us special and that a machine by its very nature cannot have: consciousness, motivation, and volition.
But I still acknowledge the power of AI as an extension of human intelligence: “If, as Ayn Rand put it, a machine is ‘the frozen form of a living intelligence,’ then AI is human intelligence stored in liquid form: more mobile and flexible and capable of reshaping itself for new tasks.”
As exciting as some of the new AI breakthroughs are, I predict that their actual implementation will eventually seem unexceptional.
[A] pocket calculator is AI, and so are autocorrect, autocomplete, and spell check… Heck, Google search is already a version of AI, since it parses users’ prompts in an attempt to figure out what information they’re looking for.
AI has already been growing so gradually and incrementally that we don’t even notice it as such. I don’t think we will ever reach a single point at which we declare that we have achieved the creation of artificial intelligence. We will simply keep adding these aids to our thinking, bit by bit and task by task, and then take them for granted as just the way things work. But they will be adjuncts and aids to our thinking, because they have no independent existence or teleology in the way that a living human brain does.
There is a model to follow from science fiction.
In 2023, the world's most popular programming language will be English. In other words, you just tell the computer in plain English what you want a program to do, and it creates the code…. This is the long-standing “Star Trek computer” ideal—replicating the kind of performance we have long fantasized about in our fictional future computers.
So it’s strange, when you think about it, that the latest developments are freaking everybody out. I quoted a similar observation from an online discussion.
Sci-fi writers in the 60s viewed the future of computers as providing the computer with prompts and the computer providing an output to try to match that prompt. Then we got used to personal computers as they were since the 80s. So now we see AI as an unnatural advancement.
We need to recapture a mindset where we view innovation and a high-tech future as exciting and desirable, rather than as an object onto which to project a personal sense of existential dread.
That turned out to be the big question for the rest of the year.
If this has been the year of AI advances, it has also been the year for a surprisingly well-financed network of “AI Doomers” who have seen one too many James Cameron films. It’s an extension of a long-standing pattern.
Back when I started RealClearFuture, this was the big trend I noticed in media reporting on emerging technology. The pattern for every new article was: “Here’s this innovative and fast-growing new technology—but nobody’s regulating it!” The authors rarely paused to consider that perhaps the reason the technology is innovative and fast-growing is because nobody is regulating it or imposing a moratorium on new research. But much of the media coverage seemed to be trying to wish such a regulatory moratorium into existence.
This takes on extravagant science-fiction forms in scenarios about killer robots. But it also takes a more prosaic form in predictions of “technological unemployment.” In answer, I cribbed from one of the great free-market economists and coined Say’s Law of Robots: “The sum total of goods produced by automation constitutes the demand for everything that is not automated.”
Once again, we should expect not to be put out of work but to get richer. It is only the doomers who will have to look for new jobs—though I expect that they, too, will find a way to adapt, seeking out all the downsides in the next new technological leap to come along.
But the doomers are not the only voices. Early in the year, I noted “a recent and very interesting string of commentary on the wider causes of the suppression of growth.” What takes this beyond the more limited efforts of the Supply Side Progressives is that it is far wider and more philosophical.
[T]he most interesting entry in this new genre is Brink Lindsey’s description of the anti-energy, anti-growth mentality as an “Anti-Promethean Backlash.” Ayn Rand frequently invoked the metaphor of Prometheus, who in Greek mythology gave man the power of fire—the first breakthrough in energy usage—and was punished for it. It seems all the more appropriate today. Here is Lindsey….
The revolution I’m talking about…can be described as the anti-Promethean backlash—the broad-based cultural turn away from those forms of technological progress that extend and amplify human mastery over the physical world. The quest to build bigger, go farther and faster and higher, and harness ever greater sources of power was, if not abandoned, then greatly deprioritized in the United States and other rich democracies starting in the 1960s and 70s. We made it to the moon, and then stopped going. We pioneered commercial supersonic air travel, and then discontinued it. We developed nuclear power, and then stopped building new plants….
[W]hat changed wasn’t just the imposition of new regulations. What changed…was the whole culture’s orientation toward the future…. Progress was redefined to mean cleaning up our messes, learning to live within limits by using resources more efficiently, and sharing what we have more equitably. In particular, the development of new energy sources like solar and wind was viewed simply as a means of replacing fossil fuels—not as a way of pushing past current limits toward energy abundance.
What I didn’t quote from this piece is something that is a constant refrain in all the others: a whole lot of throat-clearing about how environmentalism isn’t really to blame, and it’s good and necessary, and on and on and on. These writers remind me of people who become disillusioned with religion, but because they have been so deeply propagandized for their entire lives that it is a sin to doubt and you’re a bad person if you’re an atheist, they can only bring themselves to make limited criticisms of “organized religion” and can’t bring themselves to question faith as such. Come to think of it, that’s not really an analogy. That is literally what is happening here, as people struggle with the prospect of questioning the new green religion.
I have been trying to extend our culture’s ability to grasp the underlying philosophical issues.
We tend to view the problem as one of distorted politics and misguided policies, but we can’t really address it until we understand it as the expression of a worldview….
The root of the current anti-progress, anti-innovation outlook lies in the rebellion of 19th-century intellectuals against the legacy of the Enlightenment. I like to cite Mary Shelley’s “Frankenstein,” which was written at the dawn of the Industrial Revolution but cast the scientist who pursues technological breakthroughs as a man driven by hubris to create monsters. The original version of the story begins when Victor Frankenstein, in pursuit of his monster, is found by the captain of a sailing ship that is attempting to break through the ice to reach the North Pole. The rest of the book is a long flashback as Frankenstein recounts his tale of woe. After hearing his story, the captain decides to turn back, learning from Frankenstein that he should moderate his ambitions and limit the quest for knowledge.
This all came to a head by the end of the year.
In October, Web pioneer turned venture capitalist Marc Andreessen released a “Techno-Optimist Manifesto.”
I provided a summary of his case.
Here is the central idea of Andreessen’s argument: “We believe the cornerstone resources of the techno-capital upward spiral are intelligence and energy—ideas, and the power to make them real.”
By “intelligence,” he means human intelligence, citing Julian Simon and his argument that the human mind is “the ultimate resource.” But he also hails the potential of artificial intelligence—“we are literally making sand think”—and he anticipates a combination of the two: augmented intelligence. “Intelligent machines augment intelligent humans, driving a geometric expansion of what humans can do.”
And I noted the obvious influence on Andreessen’s arguments.
The “romance of industry” and the “eros of the train and the skyscraper”? If we didn’t already know it from his Twitter feed—where he has posted excerpts from Ayn Rand—I think we can all guess what book he’s been reading. (He is strangely coy about it, though. The end of the manifesto has a long list of authors to read, one of whom is “John Galt,” an indirect way of referring to Rand.)
As I said, this has been a growing genre, and Andreessen’s manifesto puts in place the conditions necessary to turn it into a fully fledged movement.
This is a manifesto that gathers together many different intellectual strains and starts a discussion that may help those ideas coalesce into a well-defined pro-progress movement. It begins the process by performing the integrative function of putting all of those ideas together in one place.
I also looked at the reaction against Andreessen, particularly from the “supply-side progressives” I talked about earlier in this countdown, whom we might expect to embrace a pro-progress agenda. But their “complaint is that Andreessen’s pro-progress arguments are disreputable. They are too combative and confrontational. He challenges ‘progressive’ opponents of progress by taking them bluntly head-on.”
This raises the wider issue of whether we really want growth and innovation, and whether we want it enough to challenge some preconceived notions.
The sort of people who build and innovate can be, like Andreessen, rough-hewn and combative. Maybe they will quote Nietzsche, and I hope they make references to Ayn Rand. They may not be eager to explain themselves to grandstanding politicians or soothe the prejudices of political activists. They will be disreputable.
But we have to decide that growth and innovation are important enough to override these prejudices, that we don’t want people to be able to come up with new ideas and put them into practice only if they make it through a long approval process, don’t pose any risk, and don’t ruffle anyone’s feathers.
By arguing for the overriding importance of growth to human life and prosperity, and by poking at some of the delicate sensitivities that would get in the way of growth, the Techno-Optimist Manifesto has helped remind us of this.
Andreessen’s manifesto is only a beginning. This debate will continue—as will my role, I hope, in encouraging the new field of “progress studies.”
While there are many other important stories I will be covering in 2024, the constant improvement of human life through growth and innovation will always be the most important and interesting story of our time.