When Is the Singularity? Probably Not in Your Lifetime

Actually: You won’t be obsolete for a long time, if ever, most researchers say.

In March, when AlphaGo, the Go-playing program developed by Google’s DeepMind subsidiary, defeated the champion Go player Lee Se-dol, some in Silicon Valley proclaimed the event a precursor of the imminent arrival of genuine thinking machines.

The achievement was rooted in recent advances in pattern recognition technologies that have also yielded impressive results in speech recognition, computer vision and machine learning. The progress in artificial intelligence has become a flash point for converging fears about the smart machines that increasingly surround us.

However, most artificial intelligence researchers still discount the idea of an “intelligence explosion.”

The idea was formally described as the “Singularity” in 1993 by Vernor Vinge, a computer scientist and science fiction writer, who posited that accelerating technological change would inevitably lead to machine intelligence that matched and then surpassed human intelligence. In his original essay, Dr. Vinge suggested that machines would attain superhuman intelligence sometime between 2005 and 2030.

Ray Kurzweil, an artificial intelligence researcher, extended the idea in his 2005 book “The Singularity Is Near: When Humans Transcend Biology,” in which he argued that machines would outstrip human capabilities in 2045. The idea was popularized in movies such as “Transcendence” and “Her.”

Recently several well-known technologists and scientists, including Stephen Hawking, Elon Musk and Bill Gates, have issued warnings about runaway technological progress leading to superintelligent machines that might not be favorably disposed to humanity.

What has not been shown, however, is scientific evidence for such an event. Indeed, neuroscientists and the vast majority of artificial intelligence researchers treat the idea with skepticism.

For starters, biologists acknowledge that the basic mechanisms of biological intelligence are still not completely understood, and as a result there is no good model of human intelligence for computers to simulate.

Indeed, the field of artificial intelligence has a long history of over-promising and under-delivering. John McCarthy, the mathematician and computer scientist who coined the term artificial intelligence, told his Pentagon funders in the early 1960s that building a machine with human levels of intelligence would take just a decade. Even earlier, in 1958, The New York Times reported that the Navy was planning to build a “thinking machine” based on the neural network research of the psychologist Frank Rosenblatt. The article forecast that it would take about a year to build the machine and cost about $100,000.
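For readers curious what Rosenblatt’s model actually did, the sketch below shows a perceptron in the spirit of that research: a single layer of weights that gets nudged whenever the machine misclassifies an example. This is a minimal illustration, not historical code; the AND-function training data and the parameter values are purely illustrative.

```python
# A minimal sketch of Rosenblatt's perceptron learning rule, the kind of
# single-layer neural network behind the 1958 "thinking machine" forecast.
# The training data (a logical AND) and parameters below are illustrative only.

def train_perceptron(samples, labels, learning_rate=0.1, epochs=20):
    """Learn weights and a bias for a linear threshold unit."""
    n_features = len(samples[0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction
            # Rosenblatt's update: shift the weights toward correcting the error.
            weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
            bias += learning_rate * error
    return weights, bias

if __name__ == "__main__":
    # Logical AND is linearly separable, so a single perceptron can learn it.
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    y = [0, 0, 0, 1]
    w, b = train_perceptron(X, y)
    for x in X:
        out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        print(x, "->", out)
```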

The notion of the Singularity is predicated on Moore’s Law, the 1965 observation by the Intel co-founder Gordon Moore that the number of transistors that can be etched onto a sliver of silicon doubles at roughly two-year intervals. This has fostered the notion of exponential change, in which technology advances slowly at first and then ever more rapidly with each succeeding technological generation.
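To make the arithmetic of that exponential curve concrete, here is a minimal sketch of the doubling rule. The starting count and the time spans are hypothetical placeholders, not historical transistor counts; the point is only how quickly repeated doubling compounds.

```python
# Illustrative sketch of the exponential growth implied by Moore's Law.
# The starting count is a hypothetical placeholder, not real chip data.

DOUBLING_PERIOD_YEARS = 2

def transistor_count(initial_count: float, years_elapsed: float) -> float:
    """Project a count forward under a fixed two-year doubling period."""
    return initial_count * 2 ** (years_elapsed / DOUBLING_PERIOD_YEARS)

if __name__ == "__main__":
    start = 1_000  # hypothetical starting count
    for years in (0, 10, 20, 40):
        print(f"after {years:2d} years: {transistor_count(start, years):,.0f} transistors")
```

After 40 years of doubling every two years, the count has grown by a factor of roughly a million, which is why small changes in the assumed doubling period lead to wildly different forecasts.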

At this stage, Moore’s Law seems to be on the verge of stalling. Transistors will soon reach fundamental physical limits as they shrink to just handfuls of atoms. That is further evidence that there will be no quick path to thinking machines.