July 29, 2015
Despite rapid advances in machine learning, androids remain a distant prospect
That is what I call a killer robot — a being that can hold an intelligent conversation with you before wiping you out. It was science fiction in 1982, when Blade Runner, based on Philip K Dick’s dystopian novel Do Androids Dream of Electric Sheep?, came out. It is now faintly plausible — sufficiently so for artificial intelligence researchers to warn this week of the dangers of an autonomous arms race.
The killer machines feared by those such as Elon Musk, the founder of Tesla Motors, and Stephen Hawking, the theoretical physicist, are crude terminators compared with the Nexus-6 replicants of Blade Runner. No one would fall in love with an armed quadcopter that blows up enemy soldiers, as the hero of Blade Runner does with Rachael, the female android who does not realise that she is a replicant.
Robots can murder us but they cannot understand us. Autonomous killing machines are becoming reality — Israel already has its Harpy anti-radar drone, which loiters in the sky before choosing and destroying targets itself. A sentient, sophisticated machine with common sense and the capacity to grasp people’s moods and predict behaviour is still a distant prospect.
In theory, it will be created. Artificial intelligence researchers see no barrier in principle to robots developing higher reasoning powers, or the kind of physical dexterity that humans possess. The last remaining workers on car assembly lines are people who can attach screws nimbly and reach inside the body shells for electrical wiring in a way that has defeated robots to date.
Machines also possess some advantages. They do not have to constrict their processing units to fit into skulls, and they do not need to supply them with oxygen, an energy-hogging technology. Nor are they limited by an evolutionary edict to reproduce, rather than purely to get cleverer.
But despite rapid advances in machine learning, visual and voice recognition, neural network processing — all the elements that are now transforming the potential of artificial intelligence — androids are not with us. Computers can beat humans easily at chess, but poker at the highest level is beyond them — they would need to see through the other players’ bluffs.
“Computers are becoming better and better at perception tasks,” says Fei-Fei Li, director of Stanford University’s artificial intelligence laboratory. “Algorithms can identify thousands of types of cars while I can only tell three of them. But at the cognitive, empathetic, and emotional level, machines are not even close to humans.”
I have also experienced something you people would not believe — Google’s self-driving car. The thing that struck me as it toured Mountain View in California recently was that it felt human. It accelerated from junctions confidently, even assertively, closing the gaps with vehicles in front so others could not rush in. We would be safer if all drivers were equally calm and rational.
Inside the car, you can see what it perceives with its sensors and rooftop radar. The outlines of objects around, including pedestrians, buses and other cars, are displayed like hollow, moving shapes on the screen of a laptop held by a Google engineer. The objects are categorised by different colours, so the vehicle knows it should react to them and how far to steer clear.
A self-driving vehicle would, in other words, be a perfectly capable killer robot if you attached a missile launcher to its roof, and machine guns to its sides (not that Google would do such a thing, of course). It could cruise through cities, scanning for warm, slow-moving, pink-coloured objects to destroy.
So it is not scaremongering for scientists to warn of artificial intelligence research being tainted by association with autonomous weapons. The internet itself emerged from research funded by the US Department of Defense in the 1960s, and military and space programmes have the deepest pockets and the keenest interest in developing cutting-edge technology. What would be foolish would be to think that the advent of killer robots means machines are ready to take over the world.
Destroying things is easier than understanding or creating them. Artificial intelligence — the ability to scan, process and analyse large data sets — is not the same as the capacity to perform most human tasks (known as artificial general intelligence).
Even those who warn of machines taking jobs that are now performed by humans accept that managerial, professional and artistic jobs demanding high-level reasoning, empathy and creativity are still safe. A robot that can scan a set of features to identify a woman, but cannot grasp her mood or use common sense to solve an unexpected puzzle, remains very limited.
“Quite an experience to live in fear, isn’t it? That’s what it’s like to be a slave,” Roy Batty remarks to the human bounty hunter he has defeated in combat, before reaching out and saving him from falling to his death. Let us not enslave ourselves yet.