Ultron walks into a bar and orders a drink. The bartender says, “We don’t serve robots!” Ultron replies, “You will.”
Artificial intelligence (AI) is a reliable theme in science fiction. In Avengers: Age of Ultron, the AI arises from one of several alien Infinity Stones, with a bit of dabbling by Tony Stark and Bruce Banner, who are trying to protect the planet from potential invaders. The evil AI, named Ultron, seizes on a directive to protect the planet as a green light for human extinction.
In the comics, Ultron’s origin is very different from that of the movie. It’s more frightening, starting as an experiment in artificial intelligence that, through rapid self-improvement, quickly escapes human control. Ultron is portrayed with a rather human-like ego and imbued with outlandish motivations – not surprising for a comic book villain designed to appeal to a mass audience. Neither the movie nor the comics get the threat quite right.
Artificial intelligence is generally classified as “general” or “super” intelligence. While an AI of general intelligence would be somewhat like us, an artificial super intelligence can be expected to possess some useful but troubling characteristics. It will operate on the basis of mathematics and probability, without necessarily possessing the biases that cloud our own actions. And unlike Ultron, its actions will not be transparent, and may in fact be impenetrable to human logic and motivations.
An AI that means us harm may even leverage time itself to its advantage, making use of the enhanced processing speed and distributed nature of electronic systems to outthink us and, like Ultron, to improve itself. To an advanced and empowered AI, a day might be as a human lifetime. Conversely, it might also lay out extended plans that take many human lifetimes to come to fruition – after all, you have all the time in the world when you are immortal and without a biological imperative for reproduction. Human beings are good at detecting rapid changes, but we are bad at reacting to longer-term extinction threats (think global warming). With unlimited time, subtle watching and waiting and gentle nudges could be an effective strategy for a rogue AI to exploit our blind spots.
Artificial intelligence is often portrayed as biologically based intelligence, and we imagine the crowning achievement of AI to be recognisably human (such as Ultron). The truth is that AI could operate very differently from anything our imagination allows.
How close are we to a dangerous super intelligence? It’s hard to say for sure. But several pieces are coming together.
The Human Brain Project, a cutting-edge scientific research effort, has as a stated goal the simulation of a human brain within its massive banks of computers. If the effort is successful and the simulation runs, humanity may have created an artificial consciousness. “May have”, because we might not know whether this AI is conscious, and it might take some effort to determine this (think of how hard it is to determine whether a coma patient is conscious or not), or whether it possesses artificial general intelligence, or super intelligence.
Super intelligence could quickly move past humanity, and would be to us as we are to monkeys. And one must ask, how would an AI develop an ethical system that would protect humanity from the sort of exploitation that we, in the darkest regions of the human soul, are capable of visiting on the less powerful? Would an AI be less apt to destroy an ecosystem, or an entire world? We won’t know for sure until it arrives.
How do we protect ourselves? We could stop working on AI, but the potential benefits are enormous, from self-driving cars to medical diagnosis, and a ban would be impossible to enforce. Isolated computing environments that are less connected to the physical world, careful restrictions of power, restricted connectivity, limited access to manufacturing infrastructure, or limits on the clock speed of the architecture on which it depends may be partial solutions, but one must wonder whether we could devise a containment vessel that a super intelligence could not overwhelm.
Another AI presence in the movie, the Vision, may provide a clue that points to another solution. The Vision was based on Tony Stark’s JARVIS AI, his constant companion throughout the series of movies. JARVIS’s ethics were learned over many years, more similar to the way a human brain learns. Maybe part of the solution isn’t so much Asimov’s Laws of Robotics as something more mundane: raise your children well. Perhaps AIs “raised” by us would outgrow us and move on, but would be less likely to hate us. One thing is clear: solving the problem of rogue AI will be terribly difficult before it exists, but it may be impossible afterward. I’m hopeful that expert dialogue will deal with the ethical implications of our technological advances in a way that allows humanity to benefit from AI without being endangered by it.
Ultron should have been remote and alien, but in the movie he either occupies a fancy body or distributes his AI across many drones – once he placed himself in a bottle, the Avengers could defeat him. Ultimately, Ultron’s brand of intelligence wasn’t enough to avoid head-to-head verbal and physical conflicts with his enemies, and it was only capable of devising a rather brute-force extinction scheme that human intelligence could easily comprehend and defeat.
If rogue AI with evil motivations is the stuff of our cinematic nightmares, we should be doubly wary of superintelligent AI that doesn’t play by our storytelling rules.