I recently commented on an article about the economic rise of Africa. A more recent follow-up on that story puts the Africa boom in a very useful historical perspective.
The rise of Africa’s long forlorn economies...represents the final phase of a global economic transformation that began over 200 years ago as agrarian societies saddled with absolute rulers began their journey through industrialization into the pluralistic middle-class societies increasingly driven by the information age we know today.
For many reasons, Africa largely missed out on this journey. But no longer: while the process will not be complete by 2050, a changing set of global and local realities suggest that Africa is set to be the final beneficiary of this revolution.
From this perspective, it is not any kind of surprise that Africa, which everyone had long ago written off as a basket case, is finally joining the race to First World prosperity. The only surprise is that it took so long. The economic, technological, political, and cultural forces at work here are so vast—spanning the whole globe over a period of centuries—that their impact on Africa is, in retrospect, inevitable and inexorable.
In a sense, this simplifies the big “mystery” of development economics, the question of how you get a society to achieve economic “takeoff” and progress toward the new modern standard set by the West. Two centuries of progress is so big a phenomenon that the relevant question may not be how a society joins it, but how anyone manages to get left out. Within a few decades, I think we’re going to look back and see that the only nations to escape prosperity were those who took extraordinary measures to seal themselves off and sabotage progress. (Which unfortunately does not prevent some societies—particularly old Communist holdouts like North Korea, Cuba, Venezuela, and Zimbabwe—from doing so.)
I think we’re going to see a similar effect here in the US, as well. We’re in the middle of a decade-long economic depression, and we’re in an era of retrograde statist politics. (The two developments are not exactly a coincidence.) But as with the first Great Depression, scientific and technological progress has not ceased.
The forces at work are too big to be halted merely by one incompetent or malevolent leader. The Scientific Revolution of the 16th and 17th centuries and its result, the Industrial Revolution of the 18th and 19th centuries, have already radically transformed human life. But they are only just getting started.
At the beginning of the year, I noted that politics is going to be boring for a while, but there are issues that are going to be truly interesting and compelling.
“One of those is a new set of innovations in technology—the Internet, robotics, ‘artificial intelligence,’ biotechnology, and how they are all integrating together—which I think are about to launch a new era. I’ve been storing up links on this issue for more than a year, but I haven’t had the time to integrate them together into a big picture. And the big picture is very, very big, so it will take a whole series of articles to deal with it.”
This is the first installment in that series, and it begins with a growing consensus that one wave of technological innovation is playing itself out.
Over the past decade, an enormous investment has been poured into the growth of “social-mobile” information technology, which is also sometimes called “Web 2.0.” This is a term for the transformation of the Internet from static websites accessed on a PC to richly interactive sites, particularly social media like Facebook, that are accessed on a variety of devices like smartphones.
Which is what we’re all doing already, implying that the social-mobile revolution has been accomplished and it’s time to start asking, “What’s next?” As one tech blogger puts it:
Decades ago, the answer was, “Build the Internet.” Fifteen years ago, it was, “Build the Web.” Five years ago, the answers were probably, “Build the social network” or “Build the mobile web.” And it was around that time, in 2007, that Facebook emerged as the social networking leader, Twitter got known at SXSW, and we saw the release of the first Kindle and the first iPhone. There are a lot of new phones that look like the iPhone, plenty of e-readers that look like the Kindle, and countless social networks that look like Facebook and Twitter. In other words, we can cross that task off the list. It happened.
What we’ve seen since have been evolutionary improvements on the patterns established five years ago. The platforms that have seemed hot in the last couple of years—Tumblr, Instagram, Pinterest—add a bit of design or mobile intelligence to the established ways of thinking....
That paradigm has run its course.
Similarly, PayPal co-founder Peter Thiel warns that many of the big, successful technology companies of the past decade are sitting on giant stores of cash, which he takes as an implicit admission that they don’t know what new innovations to invest their money in.
Google is a great company. It has 30,000 people, or 20,000, whatever the number is. They have pretty safe jobs. On the other hand, Google also has 30, 40, 50 billion in cash. It has no idea how to invest that money in technology effectively. So it prefers getting zero percent interest from Mr. Bernanke, effectively the cash sort of gets burned away over time through inflation, because there are no ideas that Google has how to spend money.
I think this is a little unfair, because we’re not done exploiting the capabilities created by the social-mobile boom. As I have argued, “if you can build a $100 billion company by using the Internet to replace the college yearbook—imagine what you can do if you use the Internet to replace college.” All of the social-mobile technology we have developed is there waiting to be applied to a revolutionary change in how we learn—and how we work. There is now a fast-growing field of “ed tech” that is trying to do this, and they’re just getting started. It reminds me a lot of the Internet in 1995: they’re still just figuring out how to take something that has always been done in brick-and-mortar classrooms and do it online. Note also that I said this would change how we learn and how we work. The obsolescence of the university is only the first step in the revolution.
That said, I very much appreciate the spirit of a call by programmer and venture capitalist Paul Graham for “frighteningly ambitious start-up ideas,” instead of just another variant on the social-mobile model. But I think Silicon Valley is already starting to move on the next frighteningly ambitious idea: breaking down the barrier between the “information economy” and the clunky old manufacturing economy.
Back during the Internet boom of the 1990s, someone made the wry observation that if you walk into the headquarters of a big manufacturing firm, it is likely to look sleek and modern—but if you walk into the headquarters of a high-tech startup, it is likely to be in a renovated old factory or warehouse, with rough brick walls and heavy steel beams. The more industrial the company, the more it seeks to compensate by looking modern. The more high-tech, the more it seeks to compensate by looking gritty and industrial. But those two worlds are now starting to come together.
The basic concept that is the springboard for this integration is the “Internet of Things.” Up to now, the Internet has mostly been used to move around and manipulate information—anything that can be reduced to ones and zeroes and digitized. But that is a big limitation. Imagine how much greater the power of this information technology would be if it could also be used to move, alter, and manipulate actual physical objects in the real world. In other words, what if we stopped playing around with “virtual reality” and started doing things in real reality? But to do this, the Internet needs to be able to connect to physical objects, to sense them and monitor them and move them around.
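To put that in software terms: a “thing” joins the Internet once it can report its own state and accept commands back from the network. Here is a minimal sketch in Python; the device, its sensor reading, and its command are simulated stand-ins for illustration, not any real product’s interface.

```python
import json
import random
import time


class ConnectedDevice:
    """A physical object the network can sense, monitor, and act on."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.valve_open = False  # an actuator the network can command

    def read_sensor(self):
        # Stand-in for real sensor hardware.
        return round(random.uniform(18.0, 25.0), 1)

    def report(self):
        # In a real deployment this would be an HTTP POST or an MQTT
        # publish to a central service; here we just print the payload.
        payload = {"id": self.device_id,
                   "time": time.time(),
                   "temperature_c": self.read_sensor()}
        print(json.dumps(payload))

    def handle_command(self, command):
        # The network reaching back into the physical world.
        if command == "open_valve":
            self.valve_open = True


device = ConnectedDevice("boiler-7")
device.report()                      # the network sensing the object
device.handle_command("open_valve")  # the network moving the object
```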
Some of the most interesting applications of this idea are industrial, but to give you an idea of how revolutionary this is, let me start with an example that is agricultural: a long and fascinating interview with the inventor of a “virtual fence” system for cattle ranchers. It operates on something of the same principle as an “invisible fence” for dogs, or in this case, “GPS-equipped free-range cows that can be nudged back within virtual bounds by ear-mounted stimulus-delivery devices.”
Not only will this eliminate the expense of building miles and miles of fencing, it will also make it possible for ranchers to make better use of their land, for example by directing cattle from an overgrazed section of land to one that is underutilized.
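The interview doesn’t spell out the system’s internals, but the core logic is simple enough to sketch. Assuming the ear tag reports GPS fixes and can deliver a graded stimulus, and simplifying the paddock boundary to a circle, a toy version in Python looks like this:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def cue_intensity(cow_fix, fence_center, fence_radius_m):
    """Return stimulus level 0.0-1.0 for one GPS fix from the ear tag.

    Zero inside the virtual fence; ramps up over 50 m outside it.
    (The circular boundary and the 50 m ramp are assumptions.)
    """
    overshoot = distance_m(*cow_fix, *fence_center) - fence_radius_m
    return 0.0 if overshoot <= 0 else min(1.0, overshoot / 50.0)

# A cow about 67 m beyond the edge of a 200 m paddock: full-strength cue.
print(cue_intensity((44.0024, -72.0), (44.0, -72.0), fence_radius_m=200))
```

The real problem is harder, of course: animal behavior, battery life, and irregular boundaries all complicate things. But the kernel of the idea really is just a distance check running on a networked device.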
So apparently the Internet of Things will include the Internet of Cows.
It will also, eventually, include the Internet of us. Read this article on the growing field of telemedicine—the ability to get a diagnosis and a prescription remotely, at a lower cost and without having to travel to the doctor’s office—and project how this will create an incentive to develop tools for remotely examining the body: measuring vital signs, looking at the back of your throat, or running blood tests and throat cultures.
But the biggest, most immediate application of the Internet of Things is in manufacturing.
Here is how this is described in a profile of a start-up that is focused on improving the capabilities of factory-floor digital cameras.
While the guys who are working the lines might pull out their iPhones when they leave the gates of the plant, mobile and web technology is not heavily used at many factories. Here’s an example. This is a part that one of their customers makes that goes into engines. [Follow the link to see the image.] And the marks on this metal tell the story of the process that went into forming it: what condition the metal was in at what temperature, etc. Each of the little lines and the distances between those lines are significant. This metal contains data, in other words, that its manufacturer would like to track. The first step is to scan the product into a computer. Then Sight Machine takes it from there.
“We do a bunch of machine vision. We analyze the image Photoshop-style and pull out all the different features and measurements they want,” said director of R&D Katherine Scott. “We give them numbers. We can show them historically how they are doing with different data sets.”
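Sight Machine’s actual pipeline is proprietary, so take the following only as a toy illustration of the measurement step: find the dark marks along one scan line of an image and report the spacings between them. The scan line is synthesized so the sketch is self-contained, and the threshold and millimeters-per-pixel calibration are made-up values.

```python
import numpy as np

# Synthetic scan line: bright metal (~200) with narrow dark marks (~40).
scan = np.full(300, 200)
for pos in (50, 120, 210):           # mark positions, in pixels
    scan[pos:pos + 4] = 40

dark = scan < 100                    # threshold: True where a mark is
# Indices where runs of dark pixels begin and end
edges = np.flatnonzero(np.diff(dark.astype(int)))
starts, ends = edges[::2] + 1, edges[1::2] + 1
centers = (starts + ends - 1) / 2.0  # center of each mark, in pixels

# With a known camera calibration (assumed 0.05 mm/pixel here), pixel
# spacings become physical measurements that can be tracked over time.
spacings_mm = np.diff(centers) * 0.05
print("mark centers (px):", centers)
print("spacings (mm):", spacings_mm)
```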
All of this naturally connects to the increasing use of a wider variety of industrial robots. See this profile of a company that develops inventory robots for warehouses.
If you haven’t been watching the logistics space, Kiva makes squat little robots that work in vast teams in e-commerce fulfillment centers. Instead of humans wandering into vast stacks of merchandise, robots bring that merchandise to the workers. The robots carry out the work according to constantly evolving algorithms that maximize the efficiency of the operation.
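Kiva’s actual algorithms are not public, but the basic dispatch problem they solve is easy to sketch: pair each shelf (“pod”) that a picking station is waiting for with the nearest idle robot. A toy version in Python, using Manhattan distances since the robots travel a grid of aisles:

```python
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def dispatch(robots, pods):
    """Greedily pair each requested pod with the closest idle robot.

    robots: dict of robot name -> (x, y) grid position
    pods:   dict of pod name   -> (x, y) grid position
    """
    assignments = {}
    free = dict(robots)
    for pod, pod_pos in pods.items():
        if not free:
            break  # more requests than robots; the rest wait a cycle
        robot = min(free, key=lambda r: manhattan(free[r], pod_pos))
        assignments[robot] = pod
        del free[robot]
    return assignments

robots = {"R1": (0, 0), "R2": (8, 3), "R3": (2, 9)}
pods = {"pod-17": (7, 2), "pod-42": (1, 1)}
print(dispatch(robots, pods))  # {'R2': 'pod-17', 'R1': 'pod-42'}
```

A production system would have to layer on congestion avoidance, battery management, and much else; greedy assignment like this is only the natural starting point.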
The link is worth following if only for the accompanying video, entitled “The Nutcracker performed by Dancing Kiva Order Fulfillment Robots,” but watch this video to get a more exact idea of how the system works. The kicker to this story is that the company’s founder “was unable to find funding in Silicon Valley.” Expect that to change—especially since the firm’s backers just sold the start-up to Amazon for $775 million.
These companies are working on helping the Internet of Things “see” and measure physical objects and move them around. The next step is to be able to manipulate those objects, to work on them and perform the kind of detailed assembly and fine adjustments for which we now rely on human labor.
The New York Times recently published a long and very interesting overview of the development of agile industrial robots.
At the Philips Electronics factory on the coast of China, hundreds of workers use their hands and specialized tools to assemble electric shavers. That is the old way.
At a sister factory here in the Dutch countryside, 128 robot arms do the same work with yoga-like flexibility. Video cameras guide them through feats well beyond the capability of the most dexterous human.
One robot arm endlessly forms three perfect bends in two connector wires and slips them into holes almost too small for the eye to see. The arms work so fast that they must be enclosed in glass cages to prevent the people supervising them from being injured. And they do it all without a coffee break—three shifts a day, 365 days a year.
All told, the factory here has several dozen workers per shift, about a tenth as many as the plant in the Chinese city of Zhuhai.
The key issue is that the cost of this kind of robotic technology is rapidly going down, even as its capabilities are expanding, making robot technology economical for a wider and wider range of jobs.
The technology is also getting more flexible. Another company has developed a kind of all-purpose small robot for manufacturing, which can (it claims) be reprogrammed easily by a small company to perform simple repetitive motions.
Rethink’s goal is simple: that its cheap, easy-to-use, safe robot will be to industrial robots what the personal computer was to the mainframe computer, or the iPhone was to the traditional phone. That is, it will bring robots to the small business and even home and enable people to write apps for them the way they do with PCs and iPhones—to make your robot conduct an orchestra, clean the house or, most important, do multiple tasks for small manufacturers, who could not afford big traditional robots, thus speeding innovation and enabling more manufacturing in America.
This is just scratching the surface of a whole revolution in robotics, which I will cover in Part 2 of this series. But what is important for our current purpose is the broader “hardware renaissance,” the switch in emphasis—for inventors, entrepreneurs, and venture capitalists—from purely digital manipulation of data to using information technology to do things and make things. As the New York Times puts it, “hardware is the new software.”
Rich Karlgaard argues that “social media is already passé,” and that the new frontiers for high technology are “transportation, energy, and manufacturing,” i.e., the old domain of low-tech heavy industry. It is a trend you can already see in the number of hardware start-ups being funded on the “crowdfunding” platform Kickstarter.
At the center of all of this are 3-D printers: machines that can take a digital file and translate it directly into a physical object by depositing layers of material on top of one another, much in the same way that a 2-D printer forms text on a piece of paper by laying down one line of ink at a time.
The term applies to a variety of different manufacturing systems, which are also known as “additive manufacturing” because an object is built by adding layers of material rather than carving the object out of a larger block of material, as you would with a digitally controlled milling machine.
Many different technological routes can be taken to reach the same goal. In one variation, nozzles spray liquid material into layers. Another method, which produces even better results, aims laser beams at finely powdered material, causing the grains to fuse together at precisely the spot where the beam hits. All 3-D printing techniques, however, follow the same principle: The object grows layer by layer, each one just a few hundredths of a millimeter thick, until it acquires the desired shape. This technique can be applied to steel, plastic, titanium, aluminum and many other materials.
Assembling, screwing together, adhering, welding—all these processes are rendered obsolete when even the most complex shapes can be produced by a single machine using this casting technique. The end result can be an artificial hip, a hearing aid, a cell phone case, customized footwear or even the Urbee, a prototype car that has been making a splash....
The printing of electronic components is even in the works. American corporation Xerox, for example, has developed a silver ink that functions as an electrical conductor and can be printed directly onto plastic or other materials, making it possible to integrate simple circuits into printed objects.
The implications are broad, from the economical production of products in small quantities, to a quick way of obtaining replacement parts (which is how 3-D printers are already being used by the US military in Afghanistan), to the ability to design products with a complexity and internal structure—such as spheres within spheres—that could not be produced through any other method.
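The underlying principle, building an object as a stack of thin cross-sections, is simple enough to express in a few lines of code. Here is a toy slicer in Python that reduces a solid sphere to printable layers, using the layer height of a few hundredths of a millimeter mentioned above:

```python
from math import pi, sqrt

def slice_sphere(radius_mm, layer_height_mm):
    """Yield (z, cross_section_radius) for each printed layer of a sphere."""
    z = -radius_mm
    while z <= radius_mm:
        # Radius of the circle where the plane at height z cuts the sphere
        yield z, sqrt(max(0.0, radius_mm**2 - z**2))
        z += layer_height_mm

layers = list(slice_sphere(radius_mm=10, layer_height_mm=0.03))
print(f"{len(layers)} layers")       # about 667 layers for a 20 mm sphere
area_mm2 = pi * layers[333][1] ** 2  # cross-section area near the equator
print(f"mid-layer area: {area_mm2:.1f} mm^2")
```

A real slicer works on arbitrary 3-D models and also plans the tool path within each layer, but the layer-by-layer decomposition is the whole trick.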
Many of the widely available 3-D printers—including table-top home versions—build layers out of plastic, but the capabilities of these printers are rapidly increasing, and the next step is to develop 3-D printers for metal.
NASA recently used a technique called selective laser melting (SLM) with great success to build rocket motor components out of steel. NASA’s engineers have been able to produce parts with complex geometry previously only imagined, and with dimensional accuracy beyond what is possible with traditional fabrication methods.
The latest development is a British scientist’s discovery of a new, far cheaper method for refining currently expensive metals like tantalum and titanium, one that produces a metal powder that can be used directly for 3-D printing.
[C]heap titanium has many potential uses, such as making car components that would be lighter than steel ones, thus saving fuel. This would require the powdered metal produced by Metalysis’s process to be made into ingots or sheets of the sort currently used in factories. The powder itself, however, could be employed directly in what is known as additive manufacturing, which uses 3D printers to build up objects a layer at a time. Cheaper metal powders would make 3D printing much easier.
In a different direction, one professor has developed a system for 3-D printing a house by laying down layers of concrete. This is still a bit ahead of its time, but 3-D printing with colored plastics is already becoming standard for creating stunning architectural models. And the most practical version of this kind of technology at the moment is a precision layout robot for construction projects.
[Sam Stathis of Theometrics] described a remarkable digital divide between architects and engineers, who’ve been using computers for years to assist their designs, and the construction industry, which still uses chalk-laden string to mark lines on the ground—a technique handed down from the pyramid-building Egyptians, noted Techonomy host David Kirkpatrick. After the designers finish their work on computers, it gets reduced to printouts that building contractors use at the job site. Accuracy sometimes gets lost in the translation from bits to blueprints, as plans are misread or measurements are botched.
Stathis said his company has prototypes of robots that can do the sort of layout-marking tasks that workers now do with tape measures, laser pointers and string.
We can sum up all of these developments as a Third Industrial Revolution.
I came across that term in a blog post attempting to refute the notion that we are entering a low-innovation era in which advances in productivity are likely to peter out. Maybe they will, but if so it will be for reasons having to do with politics and its suppression of growth and investment, not because of any scientific or technological limits to innovation.
But the concept of a Third Industrial Revolution as presented here is rather ill-defined. (And that’s leaving aside the appropriation of the term for Jeremy Rifkin’s goofy “renewable energy” fantasies.)
[Bob] Gordon divides our progress over the past 250 years into not one, but three Industrial Revolutions. IR #1 was from 1750 to 1830 and gave us steam power and railroads. IR #2 ran from 1870 to 1900 and yielded electricity, internal combustion, running water, indoor toilets, communications, entertainment, chemicals, and petroleum. IR #3 started in 1960 and gave us computers, the Internet, and mobile phones.
I like the idea of breaking the Industrial Revolution into stages, but I would define them in more fundamental terms. The first Industrial Revolution was the harnessing of large-scale man-made power, which began with the steam engine. The internal combustion engine, electric power, and other sources of energy are just further refinements of this basic idea. The second Industrial Revolution would be the development of interchangeable parts and the assembly line, which made possible inexpensive mass production with relatively unskilled labor. The Third Industrial Revolution would not be computers, the Internet, or mobile phones, because up to now these have not been industrial tools; they have been used for moving information, not for making things. Instead, the rise of computers and the Internet is just a warm-up for the real Third Industrial Revolution, which is the full integration of information technology with industrial production.
The effect of the Third Industrial Revolution will be to collapse the distance between the design of a product and its physical manufacture, in much the same way that the Internet has eliminated the distance between the origination of a new idea and its communication to an audience.
The point of this overview of the new revolution in manufacturing technology is not to boost any of these particular companies or any particular technology. Some of them are undoubtedly overhyped, others will be superseded, many will turn out very differently between now and when they make it to market—and there will be other, even better innovations that we won’t see coming. I am reminded of a fascinating series of ads from 1993 predicting just about all of the technology we have 20 years later, from e-books to tablet computers. But then again, the ads get many of the details wrong, most notably the prediction that all of these things would be brought to us by the ads’ sponsor, AT&T. From my vantage point in the publishing industry, what I find even more ironic is that the ads were published on a CD-ROM distributed by Newsweek—their view of the future of magazine distribution. But in the real future, AT&T turned out to be a very minor player, and as of this year, Newsweek turned up dead.
So I write all of this with the qualification that we are treading in the uncertain waters of the “futurist,” who attempts to predict what future innovations will occur and what form they might take. This is inherently impossible, because future innovations, by their nature, are what is not yet known or conceived today.
But as Conor Sen puts it, “we are all futurists now.” There is such a wide convergence of new technology that we are compelled to recognize the vast possibilities that are opening up. We may not be able to predict exactly what this future will look like, any more than Watt and Boulton could have predicted all of the myriad uses of their steam engines. But we can sense that we are at the beginning of a Third Industrial Revolution that has the potential to transform our lives as much as the first two.
This is just a first attempt to sketch out that future. I will continue it in the next installment of this series with a look at new developments in robotics.