CARMAKERS like to talk about autonomous vehicles (AVs) as if they will be in showrooms in three or four years’ time. The rosy scenarios suggest people will soon be whisked from place to place by road-going robots, with little input from those on board. AVs will end the drudgery of driving, we are told. With their lightning reactions, tireless attention to traffic, better all-round vision and respect for the law, AVs will be safer drivers than most motorists. They won’t get tired or drunk, fly into fits of road rage, or be distracted by texting, chatting, eating or fiddling with the entertainment system.
The family AV will ferry children to school; adults to work, malls, movies, bars and restaurants; the elderly to the doctor’s office and back. For some, car ownership will be a thing of the past, as the cost of ride-hailing services like Uber and Lyft tumbles once human drivers are no longer needed. Going driverless could cut hailing costs by as much as 80%, say optimists. Welcome to the brave new world of mobility-on-demand.
All these things may come to pass one day. But they are unlikely to do so anytime soon, despite the enthusiasm of people like Elon Musk. Within two years, says the Tesla boss, people will be napping as driverless vehicles pilot them to their destinations. Mr Musk has defied conventional wisdom before, and proved critics and naysayers wrong. In this case, however, too many obstacles lie ahead that are not amenable to brute-force engineering. It could take a decade or two before AVs can transport people anywhere, at any time, in any condition—and do so more reliably and safely than human drivers.
Consider how long it has taken something as simple as battery-powered vehicles to carve a niche for themselves. After a couple of decades, hybrid and electric vehicles still account for no more than 2% of new-car sales in most countries. Battery prices and storage capacities are finally approaching a point where sales could feasibly take off. But even under the most optimistic of assumptions (say, electrics accounting for half of all new-car sales), it would be 2035 at the earliest before they represented half the vehicles on American roads. Expect fully autonomous vehicles to face an equally long and winding road.
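To see why, a back-of-the-envelope fleet-turnover sketch helps. The numbers below (fleet size, annual sales, start date) are round illustrative assumptions, not figures from this article:

```python
# Back-of-the-envelope fleet turnover: even if electric vehicles captured
# half of all new-car sales overnight, replacing half the fleet takes a
# long time. All numbers are round, illustrative assumptions.

FLEET_SIZE = 260_000_000      # assumed US light-vehicle fleet
ANNUAL_SALES = 17_000_000     # assumed new-vehicle sales per year
EV_SHARE_OF_SALES = 0.5       # the optimists' assumption in the text

evs_on_road = 0.0
year = 2020                   # assumed year 50%-electric sales begin
while evs_on_road < FLEET_SIZE / 2:
    # New EVs join the fleet; scrappage removes a matching slice of
    # (mostly older, non-electric) vehicles, keeping the fleet constant.
    evs_on_road += ANNUAL_SALES * EV_SHARE_OF_SALES
    year += 1

print(f"EVs reach half the fleet around {year}")   # roughly the mid-2030s
```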
To put matters in perspective, most cars on the road today require the driver to do practically everything—signaling, steering, accelerating, braking, watching the traffic ahead, to the sides and the rear. This is Level 0 motoring on the scale of autonomous vehicles devised by the Society of Automotive Engineers (SAE) in America. Vehicles equipped with rudimentary forms of driver-assistance, such as cruise control or reversing sensors, are classed as Level 1.
Fitted with wide-angle cameras, GPS sensors and short-range radars, Level 2 vehicles can automatically adapt their speed to the surrounding traffic, maintain a safe distance from the vehicle ahead, keep within their own lane, and even park themselves on occasion. For short stretches of time, the driver can take both hands off the steering wheel and feet off the pedals. But the driver must be ready to take full control of the vehicle at any instant. Tesla’s Autopilot system is classed as Level 2 technology, or was until it was recently rolled back to Level 1 for safety reasons.
In the accident that killed a Tesla driver in Florida last year, the driver either failed to respond in time to avert the crash, or mistakenly assumed that Autopilot, as its name implied, offered more than mere driver-assistance. Tesla continues to include the Autopilot sensors and software in its cars, but has deactivated the system while further testing is undertaken. The company plans to re-activate it in 2019 or thereafter.
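To make the division of labour concrete, here is a toy sketch of the kind of following-distance logic a Level 2 system automates. The function, its two-second time gap and its simple proportional slowdown are illustrative assumptions, not any carmaker’s actual control code:

```python
# Toy adaptive cruise control: hold a set speed, but never get closer
# than a time-based gap to the car ahead. Real Level 2 systems fuse
# radar and camera data and are far more sophisticated.

def target_speed(set_speed_mps: float,
                 gap_m: float,
                 lead_speed_mps: float,
                 min_time_gap_s: float = 2.0) -> float:
    """Return the speed to aim for in this control cycle."""
    # Distance needed at the set speed to preserve the time gap.
    required_gap = set_speed_mps * min_time_gap_s
    if gap_m >= required_gap:
        return set_speed_mps             # road clear: cruise
    # Too close: track the lead car, slowing further as the gap shrinks.
    return min(set_speed_mps, lead_speed_mps * (gap_m / required_gap))

# Cruising at 30 m/s with a car 45 m ahead doing 25 m/s:
print(target_speed(30.0, 45.0, 25.0))    # 18.75: slow down to rebuild the gap
```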
Level 3 autonomous driving is even more controversial. The main difference is that, while the driver must still remain vigilant and ready to intervene in an emergency, responsibility for all the critical safety functions shifts from the driver to the car. This has a lot of engineers worried. Experience has not been good with control systems that relegate the operator to a managerial role outside the feedback loop, left with the sole task of intervening when an emergency strikes.
It was this sort of thinking that allowed an accident at a nuclear power plant at Three Mile Island, in 1979, to escalate into a full-blown meltdown. Plant operators failed to react correctly when a valve stuck open and caused the reactor to lose cooling water. They then made matters worse by overriding the automatic emergency cooling system, thinking there was too much water in the reactor rather than too little. The accident report blamed inadequate operator training and a poorly designed computer interface.
Similar human failings have led to countless airline accidents—most recently, the Asiana Airlines crash at San Francisco in 2013. Over-reliance on automation, and the pilots’ poor understanding of the systems when they needed to intervene, were cited as major factors contributing to the Asiana crash. Some carmakers fear that—even more than reactor operators or professional pilots—untrained motorists may only compound the problem when suddenly required to take control of an otherwise fully automated system. Ford believes it is better to skip Level 3 altogether, and go straight to Level 4, even if it takes longer.
In theory, Level 4 technology should at least be safer. Such vehicles will carry out all the critical driving functions on their own, from the start of the journey to the end. The only proviso is that they will be restricted to roads they have been designed for. That means routes that have been mapped in three dimensions and geofenced by GPS signals, to prevent them straying outside their designated zones. Ride-sharing services like Lyft and Uber are likely to be the first to operate Level 4 vehicles.
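Geofencing itself is conceptually simple: before accepting a journey, the vehicle checks that its GPS fix falls inside a mapped polygon. A toy sketch using the standard ray-casting test; the coordinates are made up:

```python
# Toy geofence check: is the GPS fix inside the approved operating zone?
# Standard ray-casting point-in-polygon test. Production systems would
# use proper geodetic maths; this is illustrative only.

def inside_zone(lat: float, lon: float,
                zone: list[tuple[float, float]]) -> bool:
    """Return True if (lat, lon) lies inside the polygon `zone`."""
    inside = False
    n = len(zone)
    for i in range(n):
        lat1, lon1 = zone[i]
        lat2, lon2 = zone[(i + 1) % n]
        # Count how many polygon edges a ray from the point crosses.
        if (lon1 > lon) != (lon2 > lon):
            if lat < (lat2 - lat1) * (lon - lon1) / (lon2 - lon1) + lat1:
                inside = not inside
    return inside

# A hypothetical rectangular service area (corner coordinates invented):
zone = [(37.70, -122.52), (37.70, -122.35), (37.83, -122.35), (37.83, -122.52)]
print(inside_zone(37.77, -122.42, zone))   # True: ride can proceed
print(inside_zone(37.90, -122.42, zone))   # False: outside the geofence
```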
In fully autonomous Level 5 motoring, the vehicles have to perform in all respects at least as well as human drivers—in short, be capable of going anywhere, in every conceivable condition, and of coping with the most unpredictable of situations. That means traveling on dirt tracks off the map, in blizzards, thunderstorms or pitch darkness, with animals bursting out of bushes, children chasing runaway balls, crazy people doing crazy things. To fulfill their promised role, self-driving cars and trucks will have to do all these things and more.
The most crucial piece of technology needed to make this happen is lidar (light detection and ranging). Lidar uses pulses of laser light from a rotating mirror arrangement on the vehicle’s roof to scan the surroundings for potential obstacles. Unlike video cameras, lidar can be neither dazzled by bright light nor blinded by the dark. It is far more accurate than radar at measuring the distance and speed of objects. Better still, it provides an image in three dimensions. Clever mathematics can even allow lidar sensors to tell whether an object is hard or soft—in other words, whether it is another vehicle or a wayward pedestrian. The Autopilot in Tesla’s cars does not rely on lidar, though it may yet have to if it is to match the resolution of other ranging and detection systems.
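The ranging principle, at least, is plain arithmetic: the distance to an object is half the pulse’s round-trip time multiplied by the speed of light. A minimal sketch:

```python
# Lidar ranging in one line of physics:
# distance = (speed of light x round-trip time) / 2.

C = 299_792_458.0                          # speed of light, m/s

def range_from_pulse(round_trip_s: float) -> float:
    """Distance in metres to the object that reflected the pulse."""
    return C * round_trip_s / 2.0

# A return arriving after ~200 nanoseconds means an obstacle ~30 m away:
print(f"{range_from_pulse(200e-9):.1f} m")   # 30.0 m
```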
While not cheap, lidar sensors are available from a number of suppliers. But as far as autonomous vehicles go, the real value lies not so much in the laser hardware as in the software that combines lidar images with signals from radar detectors and video cameras, and overlays the resulting 3D map with GPS data. Waymo, an autonomous-vehicle firm set up by Alphabet (Google’s parent), is believed to have the most sophisticated lidar systems of all. They are expected to be the jewel in the crown when Waymo comes to license its autonomous technology to carmakers around the world.
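What such fusion software does can be caricatured in a few lines: weight each sensor’s estimate of an object’s range by how much that sensor is trusted, the inverse-variance average at the heart of Kalman-style fusion. A toy sketch, with invented readings and no claim to resemble Waymo’s designs:

```python
# Toy sensor fusion: combine lidar, radar and camera range estimates of
# the same object by inverse-variance weighting, the building block of
# Kalman-style fusion. Readings and variances are invented.

def fuse(estimates: list[tuple[float, float]]) -> float:
    """Each item is (distance_m, variance); trust low-variance sensors more."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(d * w for (d, _), w in zip(estimates, weights))
    return total / sum(weights)

readings = [
    (42.1, 0.05),   # lidar: very precise
    (41.0, 1.00),   # radar: noisier on position
    (44.5, 4.00),   # camera depth estimate: noisiest of the three
]
print(f"fused range: {fuse(readings):.1f} m")   # ~42.1 m, dominated by lidar
```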
Hence the court battle that has raged in San Francisco, with Waymo accusing Uber of having stolen its intellectual property and copied its lidar designs. In a ruling on May 15th, the judge ordered Uber to return the stolen property and the alleged culprit to cease work on the disputed hardware. There is now a real possibility of criminal charges being filed. That would seriously hamper Uber’s plans for fielding a fleet of ride-sharing cars without costly drivers.
Before that can happen, however, numerous issues need resolving. Local governments will have to spend scarce resources on making road infrastructure more AV-friendly. Someone will have to build and operate the low-latency vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) wireless networks needed to manage driverless traffic. The courts have yet to determine how AVs should share the road with unpredictable human drivers, and who will be held liable when the inevitable accidents happen: the AV’s owner, its manufacturer or its software supplier. And how will AVs be shielded from cyber-attack? Meanwhile, the legal and ethical discussion of letting algorithms resolve life-and-death dilemmas has barely begun.
The enthusiasts are right on one thing: testing is paramount. Waymo has logged more than 2m miles of autonomous driving. But that is nowhere near enough to gauge whether AVs are safe enough to let loose on the public.
According to the Bureau of Transportation Statistics, there were about 35,000 fatalities and over 2.4m injuries on American roads in 2015. Those numbers sound high, but given that Americans drive three trillion miles a year, the accident rates are remarkably low—1.12 deaths and 76 injuries per 100m miles. Because accidents are so rare (compared with miles traveled), autonomous vehicles “would have to be driven hundreds of millions of miles, and sometimes hundreds of billions of miles, to demonstrate their reliability in terms of fatalities and injuries,” says Nidhi Kalra of RAND Corporation, a think-tank in California. At present, there is no practical means of testing the safety of AVs prior to widespread use. For many, that is a scary thought.
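The arithmetic behind her point is easy to reproduce. With zero fatalities observed, the 95% confidence upper bound on a failure rate is roughly three divided by the number of trials, the statisticians’ “rule of three”; matching the human benchmark above therefore demands a vast number of fatality-free miles. A sketch, in which the fleet size and monthly mileage are illustrative assumptions:

```python
# Why testing takes so long: with zero fatalities observed, the 95%
# confidence upper bound on the fatality rate is roughly 3/n for n miles
# driven (the "rule of three").

HUMAN_FATALITY_RATE = 1.12 / 100_000_000    # deaths per mile, from the text

miles_needed = 3 / HUMAN_FATALITY_RATE      # fatality-free miles required
print(f"{miles_needed / 1e6:.0f}m miles")   # ~268m miles, with zero deaths

# At, say, 100 test vehicles each driving 8,000 miles a month:
months = miles_needed / (100 * 8_000)
print(f"about {months / 12:.0f} years of testing")   # roughly 28 years
```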
Nor is there any consensus on how safe AVs should be. “Safer than human drivers” ought to be a minimum requirement. Some would go further and require road-going robots to present no threat whatsoever to human life. That would imply it is acceptable for humans to make mistakes, but not for machines to do so. Such safety issues will have to be resolved before any regulatory framework for autonomous vehicles can be put in place.
All of which raises questions for consumers to ponder. The most obvious one is economic. According to the financial-ratings agency Fitch, the average car spends 96% of its usable life parked in a garage or on the street. When maintenance, depreciation, insurance and running costs are totted up, cars are the most underutilised asset most consumers own. So, what happens when people have an alternative that is cheaper than owning a car but just as convenient? Clearly, some will opt to hail driverless vehicles instead of having a car of their own. Carmakers could thus find themselves selling fewer vehicles to consumers, and more to operators of driverless fleets, who will run them 24 hours a day, seven days a week, and scrap them after a year or two. The coming era of “transport-as-a-service” suggests motor manufacturers will need to rethink the way they do business.
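The underutilisation arithmetic is easy to sketch with round numbers, all of them illustrative assumptions rather than Fitch’s figures:

```python
# Why utilisation matters: the same fixed costs spread over more miles.
# All numbers are round, illustrative assumptions, not Fitch's figures.

def cost_per_mile(purchase: float, life_years: float,
                  running_per_year: float, miles_per_year: float) -> float:
    yearly = purchase / life_years + running_per_year
    return yearly / miles_per_year

owned = cost_per_mile(30_000, 10, 2_000, 10_000)    # lightly used private car
fleet = cost_per_mile(30_000, 2, 2_000, 100_000)    # run hard, scrapped early
print(f"owned: ${owned:.2f}/mile  fleet: ${fleet:.2f}/mile")   # $0.50 vs $0.17
```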
So as not to be blindsided, General Motors has invested $500m in Lyft and forked out nearly $1 billion to acquire Cruise Automation, an AV developer in Silicon Valley. Meanwhile, Ford has replaced its boss, an automotive-industry stalwart, with an outsider brought in last year to oversee its experiments with self-driving cars and ride-sharing services. Fiat-Chrysler has been using 100 minivans to test autonomous technology supplied by Waymo.
Most carmakers have plans to start testing the market with Level 3 or possibly Level 4 autonomous vehicles around 2021. Such AVs may still have steering wheels and pedals, and be able to drive autonomously only on designated roads. The majority are likely to be bought by ride-hailing services. Consumers wanting the flexibility and freedom of full Level 5 vehicles will have to wait a good deal longer. Come that day, though, the choice will be drive or be driven. The betting is that a surprising number of people will still want to drive themselves—whether out of mistrust of the machine or the satisfaction that comes from having total control.