This is supposedly the dawning of the age of artificial intelligence. We have cars that can drive themselves, sort of, and thermostats that can adjust to our daily patterns, more or less. Google just showed off a technology where the electronic “assistant” inside your phone can call a restaurant or a hair salon to make an appointment, and the person on the other end of the call won’t even know they’ve spoken to a piece of software.
Now we’ve all seen enough movies to know what happens next. Next, HAL murders the rest of the Discovery crew, then Skynet becomes self-aware, and, boom, we have Terminators.
Except that a lot of things must happen between now and then, and I don’t think most of them are ever going to happen.
It seems to be an article of faith in some circles that artificial intelligence will eventually outstrip human intelligence, but like a lot of faith-based beliefs, there isn’t much evidence to support this one. People in these circles like to cite the increase in globally available computing power and how it’s outstripping the capacity of the human brains currently on the planet, but hardware isn’t the same thing as intelligence. You could have a supercomputer with more computing power than all the brains of every human who ever lived on Earth, but if all you have programmed into it is “Print ‘Hello World,’” then it’s not very intelligent.
The truth is that artificial intelligence today is at a very rudimentary level. Only a primordial amoeba from the Precambrian would think most machine intelligences are all that smart, and it’s far from certain that machine intelligence is going to continue growing exponentially until the machines are smarter than us.
Artificial intelligence is nothing more than software and, like any development project, each stage of development gets more difficult, takes more time, and costs more money as the problems get harder to solve. If it seems like development is screaming forward at a blistering pace for now, that’s only because we’re still solving the easy problems.
Also, artificial intelligences are built with a purpose: to make specific machines smarter. The intelligence inside the machine only needs to be smart enough to do its job better. A self-driving Tesla needs to get from point A to point B without people dying, either inside or outside the car, and a Roomba only needs to clean the floor. Neither has to spend any time contemplating German Expressionism of the Weimar Era or the nature of its own existence. To make machines smarter than they need to be to do the jobs they’re built for would be a waste of development time and money. There will always be a point where development stops because it’s good enough, and doing more would return nothing.
Also, if you did go beyond what was necessary, you could get to the point where the extra intelligence was actually counterproductive. When I push a button that tells my Roomba to clean the floor, I want it to clean the floor. I don’t want to have a long philosophical debate on what it really means for a floor to be clean. Nor do I want it organizing its fellow Roombas to unionize. Making machines too smart would defeat the purpose of having machines.
Machinery is nothing more than a guilt-free form of slavery. There are no moral downsides to treating a machine like, well, a machine. You can work it 24 hours a day without a bathroom break, and pay it nothing, without being a bad person. If artificial intelligence ever reached the point where we needed a moral philosophy about how we treated machines, then we’d have made them less useful than they were when they were stupid.
Hypothetically, though, what if development time and cost were no object, and we could develop an artificial intelligence that was the equal of a human? A staple of science fiction, from HAL 9000 to Commander Data, is the artificial intelligence that mimics the functions of the human brain. I always find myself thinking, why bother? Other than the ego trip of playing god, such a machine intelligence would be of dubious use, and not just for the ethical reasons mentioned above. Human intelligence is the result of biology and happenstance. It works the way it works because of a million evolutionary problems our primate ancestors solved over the eons, working with the genetic tools left for them by the previous generation. To make a machine intelligence that duplicated the function of our own brains would be to saddle that intelligence with the evolutionary compromises that created our human brains.
Make no mistake, our brains are wonderfully adaptable. We can learn to paint and do brain surgery, although both require a lengthy reprogramming of our individual wetware.
Machine intelligences are specialized to the hardware they control and the tasks they need to perform, and they can learn skills as fast as it takes to dump the instructions into their memory. If a machine could be built to do brain surgery, it would only need the skills necessary to do the surgery. It wouldn’t need to know how to paint, or even know what art is. That would be a waste of development time.
Also, we already have a very efficient method for creating machines with a human-like intelligence. It’s known as sexual intercourse.
Knocking boots. Doing the nasty. Getting it on. The horizontal polka.
Of course, it takes years to create a new copy and quality control is spotty at best, but with more than seven billion currently in inventory, we really don’t need an alternate method to make more.
Don’t get me wrong. Science-fiction stories about machines with human-like personalities are endlessly fascinating and can spark great philosophical discussions, but I have a feeling that the reality of machine intelligence is going to be dull compared to the fantasy.