A New Turing Test
A common theme that’s been explored in this column for some time is the idea that, at the current state of the art, machine intelligence is clearly inferior to human intelligence, to the point that the term machine intelligence should perhaps be regarded as an oxymoron. That isn’t to say that an actual thinking or sentient machine wouldn’t be welcome, or that it should be greeted with fear and shunned as an abomination. Rather, it is based on a cold-eyed, unemotional assessment of where we stand and of the huge gap in time and technology that separates us from the apocalyptic stories featured in The Terminator or Demon Seed.
As a case in point that illustrates this gap, consider the evolution of machine automation.
The concept of machine automation is nothing new; it dates back as far as humans have made machines. But the perceived threat of machine automation as harmful to mankind seems to have its genesis shortly after the beginning of the Industrial Revolution.
While historians don’t agree on exactly when and how the Industrial Revolution began, it is clear that it got its start in Great Britain sometime around 1780 and eventually completed its spread to most of the Western world by 1840. During that time, machines of all varieties were invented to perform activities that had originally been performed by hand. Of course, there was a backlash from certain segments of society, perhaps the most notable of which came from the Luddites in England.
The Luddites were a group of textile artisans who felt threatened by the invention of knitting, spinning, and weaving machines that made textile manufacturing accessible to lower-skilled workers. During the period from about 1811 to 1817, they were known for destroying factory machinery and protesting the encroachment of machines into their economic sphere. And while it is true that these machines probably weakened the textile artisans’ position in society, nowhere do we hear a claim that the machines dislodged the clothing designer. Nor do we hear that the designers of the machines didn’t know how to weave, spin, or knit. What we hear is simply that the machines did the same job as the humans, faster and with fewer errors (and, of course, ultimately cheaper), using techniques developed by human beings.
The idea of machine automation as a threat has waxed and waned over the years. Another example can be found starting in the late 1950s, when workers in office settings felt threatened by the rise of the computer as a business machine. The delightful movie Desk Set, starring Spencer Tracy and Katharine Hepburn, is a comical romp through the fears and realities of machines displacing human beings through automation. Tracy plays Richard Sumner, a ‘Methods Engineer’ (computer scientist) who has been hired to provide a computer for the research department of a major New York television network in advance of a corporate merger. Hepburn plays Bunny Watson (I’m not making that name up), the head of the research department, whose entire staff is made up of smart, witty, and attractive women. Comic hijinks ensue, and Sumner and Watson end up falling in love, but the movie delivers a very level-headed message toward the end, when Sumner explains that the purpose of his computer is to store and retrieve data so as to free the women for tasks best suited to a human being since, as he likes to say, ‘no machine can evaluate.’
Fast forward to today. We don’t just have machines that weave fabric or collate research data. We have machines that act as AI players in video games; that automate complex robotic manufacturing; that print 3-D objects; that control computer updates and traffic lights and billing notices and hundreds of other things. But they only do what we ourselves have taught them to do.
Nowhere is there a record of a single machine inventing a new process, designing a new object, or developing a new idea. True, they assist us in all of these tasks, but they do so using the well-defined methodologies that we taught them. True, they allow us to comb through vast amounts of data and gain deeper insight than we could have gained by going through the data by hand. But the patterns they search for and the insights they help bring to light are fundamentally what we put in.
In other words, they do what we tell them to do, precisely the way we tell them to do it. That we are sometimes surprised by their results shouldn’t come as a shock. It is well known that any set of rules of reasonable complexity, each seemingly understandable and sensible on its own, can sometimes produce unforeseen results when the rules interact. Ask any person harmed by the unintended consequence of a law written by people, administered by people, and acting on people. This doesn’t mean that the law itself somehow became intelligent or achieved sentience, but rather that we were too busy or too rushed or simply too stupid to think it all the way through.
So I would propose a new type of Turing test to mark the beginning of the era when machines can think on their own. For those who don’t know, the Turing test consists of a remote dialog between a human and a second party that the human knows may be either another human or a machine. If, after some period of time, the human is unable to determine whether the second party is a machine, then the machine has passed the test, successfully mimicking human responses in the dialog.
In my test, the human would go to the second party with a vague set of requirements for a new thing (a process, a widget, a tool, whatever). If the second party can come back with a design that meets those requirements, or an explanation of why they can’t be met, then that second party is intelligent, whether or not it is human. When that happens, let me know… I would like to hire it.