Human error is unforgivable when we shun infallible algorithms

Financial Times
Technology beats people, especially when they are sick or tired, writes David Siegel

Most of us like to make our own choices on matters of life, death and money — and if we cannot decide for ourselves, we would rather turn to a human expert for guidance than to an impersonal program.

Professionals of flesh and blood are supposed to have a healthy sense of responsibility and, when caught in a dark corner, they can often illuminate the way out with a creative spark. That our fate might one day lie with deterministic algorithms is for many people a frighteningly dystopian vision.

It is a prejudice we would do well to overcome. In a world awash with digital information, algorithms are better than people at analysing complex interactions. What they lack in creativity, they more than make up for in consistency and speed.

Consider aviation, where mistakes are often deadly. In the 1960s, engineers figured out a way to address a particularly common and catastrophic type of pilot error, now known as controlled flight into terrain. Pilots at the controls of mechanically sound aircraft, flying at the right speed and in full command, would crash into a mountainside or the sea, apparently unaware that anything had gone wrong.

Such accidents, which often resulted in the loss of everyone on board, have since been all but eliminated. The ground proximity warning systems now fitted to commercial aircraft can see obstacles that pilots cannot, thanks to comprehensive terrain maps and sensors that work even in darkness and bad weather. Just as important, they keep track of the aircraft’s position and predict its path, so they are not caught off guard by hard-to-detect dangers such as terrain that rises faster than a climbing aircraft. And, unlike pilots, they are never distracted by other urgent matters, such as a failing engine or an instruction from air traffic control.
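The prediction at the heart of such a system is simple enough to sketch. The Python below is a toy illustration only, nothing like certified avionics software; the terrain profile, safety margin and look-ahead horizon are all invented. The logic is the same in spirit, though: project the current trajectory forward and warn if it is predicted to lose clearance.

    # Toy sketch of a predictive terrain-clearance check. All numbers and the
    # terrain profile are invented for illustration; real systems rely on
    # certified terrain databases and far more sophisticated logic.
    from dataclasses import dataclass

    @dataclass
    class State:
        x: float         # horizontal position along track (m)
        altitude: float  # altitude above sea level (m)
        vx: float        # ground speed (m/s)
        vz: float        # climb rate (m/s)

    def terrain_elevation(x: float) -> float:
        """Stand-in for a terrain-database lookup: flat ground, then a ridge."""
        return 0.0 if x < 3_000 else 0.15 * (x - 3_000)

    def predicted_conflict(s: State, horizon_s: float = 60.0,
                           margin_m: float = 150.0, step_s: float = 1.0) -> bool:
        """Project the trajectory forward; warn if clearance is predicted to be lost."""
        t = 0.0
        while t <= horizon_s:
            x = s.x + s.vx * t
            altitude = s.altitude + s.vz * t
            if altitude < terrain_elevation(x) + margin_m:
                return True  # projected path penetrates the safety margin
            t += step_s
        return False

    # Even a climbing aircraft conflicts with terrain that rises faster than it does.
    climbing = State(x=0.0, altitude=500.0, vx=120.0, vz=2.0)
    print("PULL UP" if predicted_conflict(climbing) else "clear")

Even in this toy form, the check runs on every cycle, never tires and never looks away, which is precisely the point.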

In fields as wide-ranging as medical diagnosis, meteorology and finance, dozens of studies have found that algorithms at least match — and usually surpass — subjective human analysis. Researchers have devised algorithms that estimate the likelihood of events such as a particular convict reoffending after release from custody, or a particular business start-up going bust. When they pitted the predictive powers of these programs against those of human observers, the humans did worse.

And that, presumably, was on a good day. Aside from their systematic failings, people get sick, tired, distracted and bored. We get emotional. We can retain and recall only a limited amount of information, even under the very best of circumstances. Most of these quirks we cherish, but in a growing number of domains we no longer need to tolerate the limitations they entail. Nor do we have much to gain from doing so. Yet we seem determined to persevere, tending to forgive “human error” while demanding infallibility from algorithms.

Witness the hand-wringing over the safety of driverless cars, even though the National Highway Traffic Safety Administration finds that human error — not mechanical failure — represents the “critical factor” in nearly all of the traffic accidents that occur in the US each year. People, it seems, would rather place their trust in other humans — whose logic and behaviour they know to be flawed — than in hardware and software that operate in a bias-free way.

This bias is, of course, just another instance of the kind of logical error that humans struggle to avoid. There is even a name for it: “algorithm aversion”, a term three University of Pennsylvania researchers coined in a 2014 study. Even after watching algorithms make more accurate predictions than humans working from identical data, they found, test subjects were quicker to lose trust in the programs than in the human forecasters. Logically, the reverse should have been true. Apparently, we are more willing to live with our own predictable mistakes than to place our trust in a demonstrably superior method.

Humans have a long record of rejecting new evidence as paradigms shift in uncomfortable ways. It probably does not help that we were raised on dystopian science fiction, or that public intellectuals keep asserting that artificial intelligence — shorthand, an old joke goes, for almost anything a computer cannot yet do — could cause the demise of the human race.

The sooner we learn to place our faith in algorithms to perform the tasks at which they demonstrably excel, the better off we humans will be. If the fear of the unknown really is driving sceptics’ irrational bias against algorithms, then it is the task of practitioners who do understand their power (and limitations) to make the case in their favour.