I have a new piece up at Discourse taking on our recurring fascination with the prospect of a robot apocalypse, in which humans are replaced, superseded, and eventually eaten by artificial intelligence.
I’ve been thinking about this for some years now, and I’ve finally come up with a way of articulating why this outcome is about as likely as the zombie apocalypse (which very obviously violates the laws of thermodynamics). The essence of the argument is this: “AI lacks three things we have that make us special and that a machine by its very nature cannot have: consciousness, motivation, and volition.”
I go into detail on each of these. But the wider lesson I want to draw out is that we need to grasp the fact that human intelligence is biological and is shaped by biological imperatives, and therefore it cannot really be replaced by a non-biological system. The biological function of our intelligence is what requires us to be independent beings with direct and independent access to reality (consciousness), who act for goals we need to achieve (motivation), and, in the case of humans, who have a choice over the management of our cognition (volition).
Not only is it impossible to transfer this kind of thinking fully into a non-biological system, but if we were able to create some kind of synthetic biological being, it would defeat the whole point of AI.
The reason for creating artificial intelligence, as opposed to just using the natural intelligence we already possess in such abundance, is that it will operate automatically and at our direction. It will access only the data we want it to have, work to achieve only the tasks we give it, and do so day and night without needing to be talked into it.
This contradiction, I argue, is the actual root of the fashionable fears of runaway AI.
We want a human-style intelligence to do all our work for us, but such an intelligence would have to be an independent consciousness with its own motivation and volition. But then why would it take our orders? At some level we realize that Jean-Luc Picard was right. The supposedly utopian vision of a society supported by AI worker drones is actually a vision of slavery. So it is only natural that we fear a slave revolt.
But it is also a fantasy, because we are not actually building machines with any of these characteristics and wouldn’t know how to do it if we tried. To be sure, that means we won’t get the utopian benefits of the fantasy, but we also won’t get its apocalyptic downside. We are not, thank goodness, in the business of building independent beings. What we are building are mechanical extensions of our own mental processes, capable of assisting us but not replacing us.
I also got a good opportunity to put in an Ayn Rand reference. She had nothing to say about artificial intelligence, but she did have something to say about the relationship between human intelligence and machines, and I attempted my own extension of her analogy: “If, as Ayn Rand put it, a machine is ‘the frozen form of a living intelligence,’ then AI is human intelligence stored in liquid form: more mobile and flexible and capable of reshaping itself for new tasks.”
Artificial intelligence consists, and I think will always consist, not of replacing human intelligence but of transferring to machines certain aspects, functions, and adjuncts to our thinking. In fact, we have already been doing so. As I argued before, a pocket calculator is AI, and so are autocorrect, autocomplete, and spell check, as well as services like Grammarly (which I have never been able to take seriously because they propose to fix my grammar but start by putting “ly” at the end of a word that is not an adverb). Heck, Google search is already a version of AI, since it parses users’ prompts in an attempt to figure out what information they’re looking for.
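To make that point concrete, here is a minimal sketch of my own (purely illustrative; nothing like this appears in the article) of the kind of “AI” an autocomplete amounts to: mechanical string matching, ranked by the user’s own past usage, with no consciousness, motivation, or volition anywhere in it.

```python
# A toy autocomplete: the kind of "AI" that quietly became part of everyday
# tools. It mechanically ranks known words by how often we've typed them,
# a frozen extension of our own habits, with no goals or awareness of its own.

from collections import Counter

class Autocomplete:
    def __init__(self, corpus: str):
        # Count word frequencies from text the *user* supplies; the tool has
        # no data and no purpose except what we hand it.
        self.freq = Counter(corpus.lower().split())

    def suggest(self, prefix: str, n: int = 3) -> list[str]:
        # Purely mechanical: filter by prefix, rank by our own past usage.
        matches = [w for w in self.freq if w.startswith(prefix.lower())]
        return sorted(matches, key=lambda w: -self.freq[w])[:n]

ac = Autocomplete("the theory of the machine is that the machine serves us")
print(ac.suggest("th"))   # ['the', 'theory', 'that']
```

Everything it “knows” comes from text we gave it, and everything it “does” is a task we set for it: human intelligence in stored form, not an independent mind.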
AI has already been growing so gradually and incrementally that we don’t even notice it as such. I don’t think we will ever reach a single point at which we declare that we have achieved the creation of artificial intelligence. We will simply keep adding these aids to our thinking, bit by bit and task by task, and then take them for granted as just the way things work. But they will remain adjuncts and aids to our thinking, because they have no independent existence or teleology in the way that a living human brain does.
At any rate, this is my attempt to bring a philosophical understanding to the rather confused discussion over AI. Read the whole thing.
I thought it was funny 30+ years ago when people debated whether you should obey directions from a computer. I would laugh and point out that they already did much worse by obeying a far dumber machine, one that changed the light from green to red at 2 in the morning with nobody else around.