Monthly Archive: December 2014

A New Turing Test

A common theme that’s been explored in this column for some time is the idea that, at the current state of the art, machine intelligence is clearly inferior to human intelligence, to the point that the term machine intelligence should perhaps be regarded as an oxymoron.  That isn’t to say that an actual thinking or sentient machine wouldn’t be welcome, or that it should be greeted with fear and shunned as an abomination.  Rather, it is based on a cold-eyed, unemotional assessment of where we stand and of the huge gap in time and technology that separates us from the apocalyptic stories featured in The Terminator or Demon Seed.

As a case in point that illustrates this gap, consider the evolution of machine automation.

The concept of machine automation is nothing new; it dates back as far as man has been making machines.  But the perceived threat of machine automation as harmful to mankind seems to have its genesis shortly after the beginning of the Industrial Revolution.

While historians don’t agree on exactly when and how the Industrial Revolution began, it is clear that it got its start in Great Britain sometime around 1780, and eventually completed its spread to most of the Western World by 1840.  During that time, machines of all varieties were invented to perform activities that were originally performed by hand.  Of course, there was a backlash from certain segments of society, perhaps most notably from the Luddites in England.

The Luddites were a group of textile artisans who felt threatened by the invention of knitting, spinning, and weaving machines that made textile manufacturing accessible to lower-skill workers.  During the period from about 1811 to 1817, they were known for destroying factory machinery and protesting the encroachment of machines into their economic sphere.  And while it is true that these machines probably weakened the textile artisans’ position in society, nowhere do we hear a claim that the machine dislodged the clothing designer.  Nor do we hear that the designers of the machines didn’t know how to weave, spin, or knit.  What we hear is simply that the machines did the same job as the humans, faster and with fewer errors (and of course, ultimately cheaper), using techniques developed by human beings.

The idea of machine automation as a threat has waxed and waned over the years.  Another example can be found starting in the late 1950s, when workers in office settings felt threatened by the rise of the computer as a business machine.  The delightful movie Desk Set, starring Spencer Tracy and Katharine Hepburn, is a comical romp through the fears and realities of machines displacing human beings through automation.  Tracy plays the role of Richard Sumner, a ‘Methods Engineer’ (computer scientist) who has been hired to provide a computer for the research department at a major television network in New York in advance of a corporate merger.  Katharine Hepburn plays the role of Bunny Watson (I’m not making that name up), who is the head of the research department, and whose entire contingent is filled with smart, witty, and attractive women.  Comic hijinks ensue, and Sumner and Watson end up falling in love, but there is a very level-headed message given in the movie towards the end, when Sumner explains that the purpose of his computer is to store and retrieve data so as to free the women for tasks best suited to a human being since, as he likes to say, ‘no machine can evaluate.’

Fast forward to today.  We don’t just have machines that weave fabric or collate research data.  We have machines that act as AI players in video games; that automate complex robotic manufacturing; that print 3-D objects; that control computer updates and traffic lights and billing notices and hundreds of other things.  But they only do what we ourselves have taught them to do.

Nowhere is there a record of a single machine inventing a new process, designing a new object, or developing a new idea.  True, they assist us in all of these tasks, but they do so using the well-defined methodologies that we taught them.  True, they allow us to comb through vast amounts of data and gain deeper insight than we could have gotten by going through the data by hand.  But the patterns they search for and the insights they help bring to light are fundamentally what we put in.

In other words, they do what we tell them to do precisely how we told them to do it.  That we are sometimes surprised by their results shouldn’t come as a shock.  It is well known that any set of rules of reasonable complexity, which are seemingly understandable and sensible on their own, can sometimes produce unforeseen results when they interact.  Ask any person harmed by the unintended consequence of a law written by people, administered by people, and acting on people.  This doesn’t mean that the law itself somehow became intelligent or achieved sentience, but rather we were too busy or too rushed or simply too stupid to think it all the way through.

So I would propose a new type of Turing test to mark the beginning of the era when machines can think on their own.  For those who don’t know, the Turing test consists of a remote dialog between a human and a second party that the human knows may be either another human or a machine.  If, after some period of time, the human is unable to determine whether the second party is a machine, then the machine has passed the test, successfully mimicking human responses in the dialog.

In my test, I would say that if the human were able to go to the second party with a vague set of requirements for a new thing (a process, a widget, a tool, whatever), and the second party could come back with a design that meets those requirements, or an explanation of why they can’t be met, then that second party is intelligent, whether or not it is human.  When that happens, let me know… I would like to hire it.

Black Swan Science

We are now 55 years removed from the publication of Karl Popper’s The Logic of Scientific Discovery (1959), in which Popper argued that falsifiability is an essential character of science, and yet it seems that very few of us in society actually embrace this notion or understand how and why we should be skeptical of ‘studies that show…’.

Popper’s argument goes something like this.  Before the Age of Discovery, when explorers from Europe journeyed to the four corners of the globe, Europeans held the belief that all swans were white.  This seemed to be a natural conclusion.  After all, every swan that had been observed and reported had been white, and it was reasonable to assume that every swan that was, is, or ever will be, is white.  On January 10, 1697, the Dutch explorer Willem de Vlamingh found a habitat sporting a large number of black swans in and around the Swan River on the west coast of Australia.  His observation put an end to any validity of the claim that ‘all swans are white’.

In Popper’s logical structure, de Vlamingh’s observation falsified the hypothesis that whiteness is an essential property of swans.  Using this example as a prototype, Popper then generalized a guiding principle: no scientific theory can be proven; it can only be falsified.

Now, I expect that most educated people in society would reply, if asked, that they are aware that a scientific theory can never be proven, only disproved, and that they accept this as an essential facet of the scientific enterprise.  Some of the more knowledgeable may even point out the radical modifications to Newtonian physics required by the observations that eventually led to the birth of Quantum Mechanics and General Relativity.

And yet, these very same people seem to blindly believe any ‘fact’ that comes to light, as long as its sales pitch starts with ‘scientists have discovered’.  The media continually bombards us with stories of this kind, telling us how a new study shows that eating this or that raises or lowers the risk of our contracting some horrible malady.  That kids who play video games or watch too much TV (however ‘too much’ is defined) are being driven to greater levels of violence or obesity or whatever.  Regardless of the subject matter, a vast component of our society is gullible and quite ready to believe, as proven cause-and-effect, any set of statistical correlations that a scientist may happen to discover in a set of data.

Before I am accused of being unfair and overly critical, I do want to state that I recognize that real life is never as simple as the idealized situation portrayed in the black swan anecdote.  For example, imagine ourselves as contemporaries of de Vlamingh who stayed at home in Holland.  After he arrives home, we happen to be at a meeting where he presents his black swan observations.  Why should we give up on the ‘all swans are white’ hypothesis solely on his say-so?  Perhaps the birds he observed are not actually swans but birds that look similar.  Perhaps they were actually white, and a recent fire had covered them with soot.  But suppose he came back with the body of a black swan, and all our tests and examinations indicate that the bird is indeed black and indeed a swan.  Does this mean that we have, with certainty, disproved the white swan hypothesis?  I think the answer is a qualified yes.  That is to say, we can no longer cling to the notion that all swans are white, even if we later narrow the definition of ‘swan’ so that the hypothesis again becomes acceptable.

On the surface, it may seem that the preceding argument invalidates Popper’s approach, and that this whole enterprise is self-contradictory.  But some careful thought and identification of what is essential versus what is accidental in the scientific method assures us that we are on firm ground.

The essential aspects of the scientific enterprise are the beliefs that the world is understandable and that logic and the scientific method work as tools to reach this understanding.  These beliefs are metaphysical in that they rise above the accidents of any particular scientific hypothesis, theory, or test, and they are not ‘provable’ or ‘disprovable’.  We simply identify them as essential aspects of the world and how we interact with it.  The accidental aspects are all that remains.

To try to illustrate this, let’s return one final time to these annoying black swans and to de Vlamingh, who caused so much trouble.  The essential aspect of this historical narrative is that Europe held its belief in white swans based on a very large number of observations.  To hold a belief about the world is to tacitly assume that the world is understandable and that reason is a tool for understanding it.  The accidentals of the narrative are that 1) prior to de Vlamingh’s observations every reported swan was white and 2) after his observations that belief could no longer be held unchallenged or unmodified.  The fact that we can argue about whether de Vlamingh’s birds are really swans or really black or whatever is only a discussion about the accidentals of the birds and not the essentials of understanding the world.

So what I am criticizing in society is not that people can be confused about how to draw conclusions from scientific data, nor am I criticizing them for drawing conclusions I would not.  What I am criticizing is the acceptance of scientific conclusions without skepticism.  I am criticizing misplaced faith, which focuses on the accidental observations of a given study, and loses sight of the essential unprovable nature of the scientific method.  I worry about a society that uncritically accepts the term ‘Settled Science’ and turns its back on Black Swan Science.