Gilligan’s Island Economics
Why We Need AI Zoomers Instead of Doomers
I’ve already addressed the idea that the robots are going to kill us, a fear drummed into us by decades of science fiction, but not well grounded in science fact. I was delighted to see that piece linked to in Jim Pethokoukis’s pro-progress newsletter, “Faster, Please.” Pethokoukis then had me on for one of his brief “5 Questions” interviews, which you can read here.
This time I downshift to the less apocalyptic but more widely believed threat of “technological unemployment,” i.e., that the robots are going to take our jobs.
I answer this with an explanation of the poorly understood principle of Say’s Law. I do the usual thing economists like to do of imagining a simplified two-man economy—but with a twist.
Imagine two men washed ashore on a remote island. Let’s say that one decides to specialize in hunting wild pigs, while the other specializes in picking the island’s wild-growing fruits and vegetables. What would the two men trade? The first man would trade meat for vegetables, and the second man would trade vegetables for meat. Each man’s supply, the amount he produces, is also the demand he has to offer for trade. The hunter’s supply of meat is, from another perspective, his demand for vegetables, and the planter’s supply of vegetables is his demand for meat. What each man produces (over and above what he consumes himself) is what he has to trade for the other man’s product.
The key point is that the more each man produces, the greater the benefit to the other person. One man’s prosperity doesn’t take away from the other’s—nor, as Say concluded, is one country’s prosperity earned at the expense of its neighbors. Instead, each person’s prosperity adds to that of his neighbors.
How does this apply to automation? Well, suppose we have another island where two men wash ashore. Let’s call one of them “the professor.” He’s a scientific genius and prolific inventor who devotes his time to creating gadgets that do various bits of useful work and provide goods that would not normally be obtainable on an undeveloped island. Let’s call the second man Gilligan. (If you are over a certain age, you will start to get the joke now. Look it up, kids.) Not being quite as bright as the professor, but still industrious and eager to help, Gilligan does most of the manual labor: pedaling a bicycle to run a generator for electricity or gathering fruit from the island’s trees. In this two-man economy, whatever is produced by the professor’s machines constitutes his demand for Gilligan’s time and labor.
So what will happen to Gilligan as the professor builds more machines and makes them better and more sophisticated? Will he be better off or worse off? Will the professor’s inventions increase or decrease the demand for Gilligan’s labor? Well, remember that the total amount produced by machines constitutes the demand for Gilligan’s labor—so as the machines get better, that demand increases, and does so sharply. The more the professor is able to make and do with his gadgets, the more goods he will have to trade for Gilligan’s work.
This is the application of Say’s Law to automation. Call it Say’s Law of Robots: The sum total of goods produced by automation constitutes the demand for everything that is not automated.
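If it helps to see the mechanism in numbers, here is a minimal sketch in Python. The quantities are entirely hypothetical, chosen only to make the logic visible: whatever the machines produce beyond the professor's own consumption is, by construction, what he offers in trade for Gilligan's labor, so better machines mean more demand for Gilligan, not less.

```python
# A toy illustration of Say's Law of Robots in a two-person barter economy.
# All numbers are hypothetical; they exist only to show the mechanism.

GILLIGAN_HOURS = 8          # hours of labor Gilligan offers each day
PROFESSOR_CONSUMPTION = 10  # units of machine output the professor keeps

def gilligan_real_wage(machine_output_per_day: float) -> float:
    """Goods offered per hour of Gilligan's labor.

    Everything the machines produce beyond the professor's own
    consumption is traded to Gilligan, so that surplus constitutes
    the demand for his labor.
    """
    surplus = max(machine_output_per_day - PROFESSOR_CONSUMPTION, 0)
    return surplus / GILLIGAN_HOURS

# As the professor's gadgets improve, the demand for Gilligan's labor
# (his real wage, measured in goods per hour) rises rather than falls.
for output in (20, 40, 80, 160):
    print(f"machines produce {output:>3}/day -> "
          f"Gilligan earns {gilligan_real_wage(output):.1f} goods/hour")
```

Doubling the machines' output more than doubles the goods available to trade for Gilligan's time, which is the "sharply" in the paragraph above: the professor's consumption is fixed, so all of the gains flow into the surplus he has to offer.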
I go on to explain more of the principles behind this, and I also appeal to two centuries of history that demonstrates the results.
The Moratorium on Digital Brains
I’m very excited about finally publishing this piece, because it’s one I’ve been thinking about for six or seven years and never had a chance to get down on paper. I originally conceived it back when I was editing RealClearFuture, as part of a series on policy ideas for the era of automation. (It was supposed to be Part 3 of the series. I got as far as Part 2 before RCF got caught up in a wave of layoffs.)
The policy implication I wanted to draw out of this is that AI will certainly cause some disruption to the economy. Some jobs will change and some may begin to fade away. But if you want to make it easier for people to adjust, the best approach is not to get in the way of the new innovation, but to encourage it to happen as quickly as possible. The faster a new technology is implemented, the greater the growth it produces—and, thanks to Say’s Law, the greater the demand it creates for other labor, making it easier for displaced workers to find new jobs and train themselves with the skills needed to perform them.
What is most striking about the reaction to recent progress in artificial intelligence is that it has gone in exactly the opposite direction.
The media is dominated right now by the “AI doomers.”
Techdirt provides an overview.
When a British tabloid headline screams, “Attack of the psycho chatbot,” it’s funny. When it’s followed by another front-page headline, “Psycho killer chatbots are befuddled by Wordle,” it’s even funnier. If this type of coverage stayed in the tabloids, which are known to be sensationalized, that would be fine.
But recently, prestige news outlets have decided to promote the same level of populist scaremongering: The New York Times published “If we don’t master AI, it will master us” (by Harari, Harris & Raskin), and TIME magazine published “Be willing to destroy a rogue datacenter by airstrike” (by Yudkowsky).
In just a few days, we went from “governments should force a 6-month pause” (the petition from the Future of Life Institute) to “wait, it’s not enough, so data centers should be bombed.” Sadly, this is the narrative that gets media attention and shapes our already hyperbolic AI discourse.
Yes, they really are planning a moratorium on digital brains, with some proposing a 6-month pause in AI research and some taking it to its logical conclusion—does anyone think a mere six months would satisfy the doomers?—and proposing a longer one. I am sure China will agree to abide by this limitation on its research.
As usual, the Pessimists Archive is here to remind us that the doomers have been at it since 1863, and all along, the basis for their views has been grounded less in fact than in fiction—literally.
Wiener would cite old fables—such as genies in bottles—that he said contained a powerful lesson: if we have the power to realize our wishes “we are more likely to use it wrongly than to use it rightly, more likely to use it stupidly than to use it intelligently.” … Wiener also predicted mass unemployment and an “industrial revolution of unmitigated cruelty” by making “possible the factory substantially without employees.”
Or consider the centerpiece of the recent freakout: a breathless report that in an Air Force simulation, an AI-powered drone had turned and attacked its own operator. The story was widely broadcast as confirmation of the doomers’ fears—but it never happened. The basis for the story was “a ‘thought experiment’ rather than anything which had actually taken place.” And then a story in Wired rushed to explain to us “Why the Story of an AI Drone Trying to Kill Its Operator Seems So True.” It’s not true, you see, but the fact that it seems true reinforces the need for regulation.
That’s the AI doomer loop in a nutshell. Fiction begets more fiction, and then we try to talk ourselves into believing that it’s fact.
Doomers Versus Zoomers
Back when I started RealClearFuture, this was the big trend I noticed in media reporting on emerging technology. The pattern for every new article was: “Here’s this innovative and fast-growing new technology—but nobody’s regulating it!” The authors rarely paused to consider that perhaps the reason the technology is innovative and fast-growing is that nobody is regulating it or imposing a moratorium on new research. But much of the media coverage seemed to be trying to wish such a regulatory moratorium into existence.
In this case, there are some voices opposing the doomers. Yann LeCun, of Facebook/Meta, has made some interesting comments, as has Marc Andreessen. I’ll be looking at those in more detail soon. But it strikes me that we need a more consistent effort in defense of technological progress in general and AI innovation in particular. To counter the AI doomers we need more AI zoomers—people advocating for advancing this new technology as fast as possible.
I’m considering going back into this field a bit and reviving some of my old articles from RCF, which have aged surprisingly well.
In one of those old articles, I mentioned that an overlooked positive portrayal of AI is the ship’s computer from the Star Trek franchise.
[T]he closest thing in Star Trek to what we’re doing with artificial intelligence right now is the Star Trek computer, which is capable of communicating in normal spoken English. It responds to commands, gives relevant answers to requests for information, and can even perform some fairly complex (in a few cases implausibly complex) analysis.
It is well known that this is the inspiration and goal for Google: to let users ask a question in normal English and get an accurate, relevant answer. We’re not there yet, but we’re headed in exactly the direction imagined by Star Trek.
So I was delighted to see that echoed recently in this observation from Coleman Mulkerin on Twitter.
Sci-fi writers in the ’60s viewed the future of computers as providing the computer with prompts and the computer producing an output to match that prompt. Then we got used to personal computers as they have been since the ’80s. So now we see AI as an unnatural advancement.
We need to recapture a mindset where we view innovation and a high-tech future as exciting and desirable, rather than as an object onto which to project a personal sense of existential dread.