— Kevin B Korb†
The Turing Test has been passed yet again, report the news feeds. This is the "first time", according to "expert" Professor Warwick of the University of Reading, and it was achieved by a chatbot ("Eugene Goostman") mimicking a 13-year-old boy from Ukraine, fooling 10 of 30 interrogators after five minutes of questioning.
The Turing Test came about when Alan Turing wrote a provocative and persuasive article advocating for the possibility of artificial intelligence ("Computing machinery and intelligence," Mind, 1950). Having spent the prior decade arguing with psychologists and philosophers about that possibility, as well as helping to crack the Nazi Enigma machine at Bletchley Park in England, he became frustrated with the difficulty of actually defining intelligence, so he proposed instead a behavioral criterion: if a machine could fool humans into thinking it was human, then it must be at least as intelligent as a normal human.
Turing, being an intelligent man, was more cautious than most of the AI researchers who came after him (e.g., Herbert "10 Years" Simon, predicting in 1957 success by 1967): Turing predicted that by the year 2000 a program could be made which would fool the "average interrogator" 30% of the time after five minutes of questioning. Those figures were not meant as defining conditions on his test, but merely as an expression of caution, and it is an absurd abuse of that caution to invoke it in this specious claim of having passed the Turing Test. And it turns out that Turing himself was insufficiently cautious, getting it right only at the end of his article: "we can see plenty there that needs to be done."
Arguably the first time the test was passed was when Joseph Weizenbaum's secretary was fooled by his program ELIZA into thinking she was communicating with Weizenbaum himself: he had left the program running at a terminal, which she assumed was remotely connected to him (see his book Computer Power and Human Reason). Subsequent programs which fooled humans include "PARRY", which "pretended" to be a paranoid schizophrenic. Similarly, one of our students at Monash, Andy Bulhak, produced in 1996 a "postmodern" web page generator which gives you a new page of pomo gibberish every time you refresh the page. The essays it produces are probably good enough to publish in many a pomo journal; you can see for yourself at www.elsewhere.org/pomo. In general, it has not gone unnoticed that programs are more likely to fool their audience when they mimic personas with a limited behavioral repertoire, limited knowledge or understanding, or when they engage interrogators predisposed to accept their authenticity. But none of this engages the real issues: (1) what criterion would establish something close to human-level intelligence, (2) when will we achieve it, and (3) what are the consequences? I will sketch a few answers.
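First, though, it is worth seeing just how little machinery such mimicry needs. ELIZA worked by keyword matching and by reflecting the user's own words back as questions; the following minimal sketch captures that mechanism (the rules here are my own illustrative inventions, not Weizenbaum's original script):

```python
import random
import re

# Illustrative ELIZA-style rules: each pairs a keyword pattern with
# canned response templates. Invented for this sketch, not Weizenbaum's.
RULES = [
    (re.compile(r"\bi am (.*)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.*)", re.I),
     ["Why do you feel {0}?", "Does feeling {0} trouble you?"]),
    (re.compile(r"\bmy (.*)", re.I),
     ["Tell me more about your {0}."]),
]
DEFAULTS = ["Please go on.", "I see.", "What does that suggest to you?"]

# ELIZA echoed the speaker's words back with pronouns swapped.
PRONOUNS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

def reflect(fragment):
    return " ".join(PRONOUNS.get(word.lower(), word) for word in fragment.split())

def reply(utterance):
    """Answer with the first matching rule, or a contentless default."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(reflect(match.group(1).rstrip(".!?")))
    return random.choice(DEFAULTS)

print(reply("I am unhappy with my work"))
# e.g. -> "Why do you say you are unhappy with your work?"
```

A program this simple understands nothing, yet in the right setting, facing a user already primed to see a person at the other end, it can pass for one.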
(1) The Turing Test (née the Imitation Game), even as envisaged by Turing, let alone as manipulated by publicity seekers, has limitations. As John Searle ("Minds, brains, and programs," 1980) and Stevan Harnad ("Minds, machines and Searle," 1989) have pointed out, anything like human intelligence must be able to engage with the real world ("symbol grounding"), and the Turing Test doesn't test for that. My view is that they are right, but that passing a genuine Turing Test would nevertheless be a major achievement, sufficient to launch the Technological Singularity (see below).
(2) We will achieve it in 1967 (Simon), or 2000 (not really Turing), or 2014, or 2029 (Ray Kurzweil). All the dates before 2029 are in my view just silly. Ray Kurzweil at least has a real argument for 2029, based on Moore's Law-type progress in technological improvement (his "Law of Accelerating Returns", in his book The Singularity is Near, 2005). His arguments don't really work for software, however: progress in improving our ability to design, generate and test software has been painfully slow by comparison. As Fred Brooks famously wrote in 1986, there is "No Silver Bullet" for software productivity; nor is there one now. Modeling (or emulating) the human brain, with something like 10^14 synapses, would be a software project many orders of magnitude larger than the largest software project ever undertaken (a back-of-envelope comparison follows below). I consider the prospect of organizing and completing such a project by 2029 to be remote. Since this would appear to be a project of greater complexity than any human project undertaken so far, an estimate of 500 years to complete it seems to me far more reasonable. Of course, I might have said the same thing about some hypothetical "Internet" were I writing in Turing's time. In general, scheduling (that is, predicting) software development remains one of the great mysteries.
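To make the scale claim concrete, here is the promised back-of-envelope comparison. Both figures are loudly flagged assumptions: roughly 10^14 synapses in a human brain, and roughly 10^9 lines of code as the order of magnitude of the largest existing codebases; the one-line-per-synapse conversion is purely illustrative.

```python
import math

# Back-of-envelope scale comparison; both constants are rough
# assumptions chosen only to exhibit the order-of-magnitude gap.
SYNAPSES = 1e14          # approximate synapse count in a human brain
LARGEST_CODEBASE = 1e9   # assumed scale (in lines) of the largest codebases

# Suppose, very charitably, one line of code sufficed per synapse:
gap = SYNAPSES / LARGEST_CODEBASE
print(f"Brain emulation would be ~10^{math.log10(gap):.0f} times "
      f"the largest software project ever undertaken.")
# -> ~10^5 times: five orders of magnitude beyond anything yet built.
```

Even granting that charitable conversion, the gap is around five orders of magnitude, and nothing in the history of software engineering suggests we can close gaps like that on Kurzweil's schedule.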
(3) The consequences of passing the true Turing Test and achieving a genuine Artificial Intelligence will be massive. As I. J. Good, a mathematician and colleague of Turing's at Bletchley Park, pointed out in 1965, a general AI could be put to the task of improving itself, leading recursively to ever faster improvement, so that "the first ultraintelligent machine is the last invention that [humans] need ever make." This is the key to launching the Technological Singularity, the stuff of Hollywood nightmares and Futurists' dreams. While I am skeptical of any near-term Singularity, I fully agree with David Chalmers (2010) that the consequences are sufficiently large that we should concern ourselves now with the ethics of the Singularity. Famously, Isaac Asimov (implicitly) advocated enslaving AIs, binding them through his Three Laws of Robotics to do our bidding. I consider enslaving intelligences far greater than our own to be of dubious merit, ethically and practically. More promising would be building a genuine ethics into our AIs (Korb, 2007), so that they would be unlikely to fulfill Hollywood's fantasies.
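Good's argument can be made vivid with a toy model. Everything here is my own illustration, not Good's formalism, and the gain factor of 0.5 is arbitrary; the point is only the shape of the curve.

```python
# Toy model of recursive self-improvement: each generation of AI
# designs its successor, and better designers achieve proportionally
# bigger gains (assumed dynamics, for illustration only).
capability = 1.0  # human-level baseline, in arbitrary units
for generation in range(1, 8):
    capability *= 1.0 + 0.5 * capability  # gain grows with capability itself
    print(f"generation {generation}: {capability:,.1f}x human level")
# Ordinary technologies improve at a roughly constant rate; here the
# rate of improvement itself improves, so growth outruns any fixed
# exponential. That runaway is the Singularity in miniature.
```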
† This is a version of an article published by The Conversation, after some fairly heavy-handed editing.