Back to the Futurism
The New York Times's Farhad Manjoo recently called for more of us to "pick up the torch" from the recently deceased Alvin Toffler, author of the 1970 book Future Shock and one of the pioneers of what used to be called "futurism."
All around, technology is altering the world: Social media is subsuming journalism, politics, and even terrorist organizations. Inequality, driven in part by techno-abetted globalization, has created economic panic across much of the Western world. National governments are in a slow-moving war for dominance with a handful of the most powerful corporations the world has ever seen—all of which happen to be tech companies.
But even though these and bigger changes are just getting started—here come artificial intelligence, gene editing, drones, better virtual reality and a battery-powered transportation system—futurism has fallen out of favor. Even as the pace of technology keeps increasing, we haven't developed many good ways, as a society, to think about long-term change.
The issue here isn't a lack of people writing about technology. It's an absence of people looking at the big picture and trying to project the profound ways in which new technology is going to change the way we live.
Yes, well, we're on that.
In a technological era, we need, more than ever, people who will project and analyze the implications of new technology and its impact on our lives. But I don't agree with what appears to be Manjoo's vision of why we need futurism.
His article mostly becomes a pitch for reinstatement of the Office of Technology Assessment, because without such a government agency, "we risk rushing into tomorrow headlong, without a plan." Without a plan? Surely a consideration of past predictions about the future should inspire a little more skepticism about our ability to plan it.
See, for example, a more balanced assessment of Alvin Toffler's predictive record, which describes the kind of solution Toffler and Manjoo favor. Toffler called it "anticipatory democracy."
While "anticipatory democracy" sounds salubrious, Toffler defined it very specifically: "Anticipatory" meant making use of long-term forecasting techniques, especially the computer-based modeling of economic, demographic, and other trends that were just coming into vogue in the late 1960s and early 1970s; "democracy" meant the kind of intimate town halls of Norman Rockwell mythology that would restore to individual citizens a modicum of control over their careening, caroming lives.
Actually, this sounds a lot less like Norman Rockwell and a lot more like another old-fashioned artifact of the 20th century: central planning.
If we're going to learn from the futurism of the past, we ought to have learned something about its failures. Remember when the guy who was jockeying to be in charge of "industrial policy" for Bill Clinton's administration had just come off a job as a publicist for cold fusion? Well, you probably don't. But it happened.
Given that history, what we need to emphasize is not government planning, but the individual's efforts to plan for the impact of future technology.
Individual planning—by corporations, entrepreneurs, and the common man—is how we got all of this rapid technological change in the first place, without the need for a government office to plan it. No government agency anticipated or designed the personal computer, or the smartphone, or social media, or any of a million other innovations. Often the big corporations that developed the basis for these technologies—Xerox, IBM, AT&T—had no idea how to develop or commercialize them, leaving that task to unknown college dropouts working out of their garages.
This, by the way, turned out to be the ultimate answer to Toffler's problem of "future shock," the challenge of keeping up with the rapid pace of change. Some of the biggest innovations since 1970 have focused precisely on the job of enabling and managing an overwhelming flow of information, which is what things like "news aggregation" and "social media" are supposed to do. If rapid technological change is the challenge, then technological change, moved forward by entrepreneurs and by the users who embrace their products, also produces its own solutions, ones not necessarily anticipated by futurists or forecasters.
Government is by design unwieldy and slow-moving. At its best, it responds not to the pet theories of experts, but to the broadly shared views of the people as a whole. At its worst, it responds to the uninformed passions of the moment. (In this election year, I don't think we need to look very far for examples.)
So the best and most effective plans for dealing with radical new technology and its impact on our lives are going to be made by individuals. That should be the target audience for a new generation of futurists.
I'm not just talking about writing for visionary billionaires and the people who love them. After all, the Peter Thiels and Elon Musks of the world already have the knowledge and resources to benefit from new technology, often because they're helping to make it. The person who is in far greater need of a little futurism is the average worker. That is the person who ought to be—but generally isn't—planning for what happens when his factory job gets taken over by a robot. Or, more and more frequently, when his cushy middle-class desk job gets taken over by an app.
Rapid technological change is full of dangers that can disrupt the course of the average person's life, but it's also full of new opportunities. Everyone is going to need help avoiding the dangers and finding the opportunities.
This individual planning is the only kind we're really likely to get, and we need a new 21st-century futurism that will take up that crusade.