Monthly Archive: September 2015

A Heap of Equivocation

As I write this week’s entry for Aristotle to Digital, I am reflecting on the life and times of Yogi Berra, who just died at the ripe old age of 90.  I fervently hope that he is resting in peace.  In my opinion, he earned it.

In an earlier column, published about a year ago, I wrote about Yogi Berra Logic, as I termed the legendary witticisms of one of the greatest catchers ever to play the game of baseball.  In tribute to his life and passing, I thought I would revisit that whimsical posting with some more thoughts on what gives Yogisms their timeless attraction and talk a little about some other playful uses of natural language.

Before I go deeply into these points, I would like to correct the record about Yogisms.  Several people have used the word malapropism to describe the various nuggets of thought that he would utter.  This is an incorrect application of the term, which is defined as:

malapropism – the mistaken use of a word in place of a similar-sounding one, often with unintentionally amusing effect, as in, for example, “dance a flamingo” (instead of flamenco).

I’m not saying that Yogi never used a malapropism in his life.  I am saying that most, if not all, of his Yogisms don’t fall into this category.  Rather, they fall into the category of equivocal speech.  The meaning of the words changes, often extremely quickly, from one part of the Yogism to another, and one has to read them with the various contexts that they span in mind.

A host of Yogisms can be found at the Yogi Berra Museum and Learning Center’s list of Yogisms.  To illustrate the point of equivocation in some detail, take the Yogism

The future ain’t what it used to be

– Yogi Berra

On the surface, this expression doesn’t seem to have any meaning and would surely throw any natural language analysis software for a loop in trying to assign one.  And yet, there actually is at least a little meaning as evidenced by the smile, chuckle, chortle, snicker, or belly-laugh that each of us has as a reaction upon reading it.

But surface impressions are rarely more than skin deep (a Yogism of my own, perhaps?) and with a bit of imagination we can easily parse out some meaning and, perhaps, even profound meaning.  I base this expectation on the fact that Yogi Berra was not a stupid man by any measure – his accomplishments alone should testify to that – and that his Yogisms strike a chord in so many people’s minds.

There are, at least, two fairly poignant meanings that can be mined with a fair amount of confidence from the Yogism above.  The first is that the hopes and aspirations for the future that filled his head at a younger age are now replaced with far less hopeful ideas for what the future holds now that he has grown older.  In other words, the next 20 years looked brighter to him when he was younger compared to how he perceives the same 20-year span into the future now that he is an older man.  The second is that when he was younger, say 25 years old, and looking forward to what the world would offer when he was 40, he had huge dreams of what might come true.  Now that he has turned 40, he’s found that ‘the future’ wasn’t as wonderful as he imagined it might be.

Notice the structure of this particular Yogism. It invokes these two ideas compactly and with humor in a way that a plainer and more logical composition that avoided equivocation cannot do.  It’s a masterpiece of natural language, if not of pure predicate logic, and I think we should be thankful for that.

I don’t know with certainty but I suspect that the next example of natural language gymnastics would have likely captured Yogi’s fancy as well.  It is known as the continuum fallacy.

In the continuum fallacy, natural language is used to allow one to cross a fuzzy line without even knowing one is doing it.  One form of the continuum fallacy (really the sorites paradox, but they are essentially the same thing) reads something like this:

  • We can all agree that 1,000,000 grains of sand can be called a heap
  • We can also agree that if we take 1 grain away from this heap, it’s still a heap
  • Then we can also agree that 999,999 grains of sand can also be called a heap
  • And in continuing in this fashion we can soon arrive at the idea that a heap of sand need not have any sand in it at all.

In the general explanations of why this line of argumentation is a fallacy, analysts cite the vague nature of the definition of heap (the vagueness of predicates).  Certainly it is true that some poorly defined (or undefinable?) line exists where the heap turns into a non-heap.
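
To make the role of the predicate concrete, here is a minimal Python sketch (the threshold value and the function names are my own, purely illustrative).  A sharp, two-valued predicate has to flip from ‘heap’ to ‘non-heap’ at exactly one grain, which is precisely where the inductive premise fails; a graded predicate never flips abruptly, which is why the step-by-step argument feels so compelling.

```python
# Illustrative sketch only: a sharp versus a graded notion of 'heap'.
# HEAP_THRESHOLD is an arbitrary, hypothetical cutoff.

HEAP_THRESHOLD = 10_000

def is_heap(grains: int) -> bool:
    """Classical two-valued predicate: a pile either is or is not a heap."""
    return grains >= HEAP_THRESHOLD

# The premise "removing one grain from a heap leaves a heap" fails for the
# sharp predicate at exactly one point...
flips = [n for n in range(1, 1_000_001) if is_heap(n) and not is_heap(n - 1)]
print(flips)  # [10000] -- the one count where taking away a grain destroys the heap

# ...whereas a graded 'degree of heapness' never jumps, so no single grain
# can be blamed, and the argument slides all the way down to zero.
def heapness(grains: int) -> float:
    """Fuzzy degree of heapness between 0.0 and 1.0 (illustrative only)."""
    return min(grains / HEAP_THRESHOLD, 1.0)

print(heapness(10_000), heapness(9_999))  # 1.0 vs. 0.9999 -- barely different
```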

This ‘paradox’ is not confined to linguistics.  Take the image below.

[Image: continuum_paradox]

The color gradient from red to yellow is so gradual that it is hard to say that any single color is really different from its neighbors and yet red is not yellow and yellow is not red.  And where does the orange begin and end?
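
The same point can be made numerically.  Here is a small Python sketch (the RGB endpoints and the step count are arbitrary choices of mine, not a perceptual model): interpolating from red to yellow in a thousand steps, no neighboring pair of swatches differs by more than one unit in any channel, yet the endpoints are unmistakably different colors.

```python
# Illustrative sketch: a red-to-yellow gradient in which adjacent swatches are
# (nearly) indistinguishable but the endpoints clearly are not.

RED, YELLOW = (255, 0, 0), (255, 255, 0)
STEPS = 1000

def lerp(a, b, t):
    """Linear interpolation between two RGB triples at parameter t in [0, 1]."""
    return tuple(round(x + (y - x) * t) for x, y in zip(a, b))

gradient = [lerp(RED, YELLOW, i / STEPS) for i in range(STEPS + 1)]

# Largest single-channel change between neighboring swatches:
max_step = max(max(abs(c2[k] - c1[k]) for k in range(3))
               for c1, c2 in zip(gradient, gradient[1:]))
print(max_step)                    # 1 -- each step is imperceptible
print(gradient[0], gradient[-1])   # (255, 0, 0) (255, 255, 0) -- red is not yellow
```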

This vagueness seems to be a universal feature that is built into most everything.  And while it may throw linguists, logicians, perceptual psychologists, and computers into a tizzy, I suspect that Berra, the playful king of vagueness, would have had as much fun with this as with uttering Yogisms.

Fallacies, Authority, and Common Sense

Logical fallacies are everywhere.  Just search online using the string ‘logical fallacies list’ (e.g., here, here, and here), and you’ll come across many lists cataloging the many fallacies that an arguer can employ, and why they are wrong, bad, or otherwise socially unacceptable.  The authors of such lists argue that it is desirable, when crafting a valid argument, to avoid as many of these fallacies as possible and, when consuming an argument, to be sensitive to their presence.

And yet the number of fallacies in day-to-day discourse never seems to diminish.  So, clearly, people aren’t getting the message.

Of course, not everyone making an argument is really interested in making their argument valid.  Certainly, politicians are more interested in getting votes or passing their particular bills into law than they ever are in logic and logical fallacies.  Advertisers also bend the rules of good logic to make their product stand out so that potential customers will select it over a competitor’s.  So people who fall into these classes reject the message because embracing it would compromise their goals.

But there is another facet worth considering as well.  There is a possibility that people do get the message and simply reject it since they judge that the message itself is flawed.  Is it possible that some people’s common sense allows them, perhaps unconsciously, to see that some arguments about fallacies are themselves fallacious?  Is it possible that some people who argue about avoiding fallacies are engaging in fallacies about fallacies?

Now, before I explain how some arguments about fallacies can be fallacious, I would like to clarify a couple of points.  First, I think the best definition of a fallacy is provided by the Stanford Encyclopedia of Philosophy, which says that a fallacy is a deceptively bad argument: an argument whose conclusion does not follow from the premises being offered, and where it is not manifestly obvious why.  Second, that definition, while being the best out there, is still fairly inadequate.  The reason is that, if one can detect the fallacy, how deceptive is it actually?  The point here is that the very concept of a fallacy is a slippery one and, in fact, there is substantial controversy about the nature of fallacies, as can be seen from the long discussion here.

So, for the sake of this post, I am going to argue that a fallacy is a bad argument that is deceptive for people who are not trained in detecting and correcting it.

Some fallacies are relatively easy to detect and fix.   The simplest ones seem to originate in deductive reasoning.  The following example of the fallacy of the undistributed middle comes from syllogistic logic:

All dogs have fur
My cat has fur
Therefore my cat is a dog
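
Rendered in predicate-logic notation (my own shorthand, not part of the original example), the trouble becomes easier to see: the middle term ‘has fur’ appears only as the consequent of the universal premise, so nothing ever links fur-bearers back to dogs.

$\forall x\,(\mathrm{Dog}(x) \rightarrow \mathrm{HasFur}(x)),\;\; \mathrm{HasFur}(\mathrm{myCat}) \;\not\vdash\; \mathrm{Dog}(\mathrm{myCat})$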

Errors of this type are easy to see even if they are not always easy to explain.  Such fallacies are relatively benign precisely because they aren’t very deceptive.

A much more common and truly deceptive fallacy comes in the form of equivocation, where the meaning of a term changes mid-argument and, if one isn’t careful, one misses it and becomes either confused or, worse, convinced of an invalid conclusion.

When the argument is simple, equivocation is fairly easy to find, as in this example:

The end of life is death.
Happiness is the end of life.
So, death is happiness.

Clearly the word ‘end’ in the first line means termination or cessation, whereas the word ‘end’ in the second means goal or purpose.   When the argument is much longer or involves an emotional subject, it is much harder to detect equivocation.  As an example on that front, I once read a blog post (unfortunately I can’t source it anymore) where the author was celebrating a story in which an Amish man boarded a bus and challenged the people onboard about television.  As the story goes, the Amish man asked how many of the passengers had a TV and every hand went up.  He then asked how many of them thought TV was bad and almost every hand went up as well.  He then asked why, if they thought it was bad, they tolerated a TV in their homes.  The blogger obviously didn’t notice or care that the definition of TV had changed from the first question, where it meant the device, to the second question, where it meant the programming.  All that mattered was the emotional delivery.

Perhaps the trickiest kind of fallacy concerns appeals to authority.  And it is here that we find the fertile ground in which the fallacy of fallacies grows.

An appeal to authority can actually be a reasonable thing to do when dealing with custom, or policy, or doctrine.  As long as the authority is proper, the appeal can be a solid piece in an argument.  When the appeal is to the authority of the public or to someone whose motives are questionable, then the appeal to authority becomes a fallacy.

That said, an appeal to authority is never valid when it comes to science.  Nonetheless, it is a commonplace appeal offered by those who talk about ‘settled science’.  They tell us that a scientific conclusion is valid based solely on the idea that ‘X percent of the scientists in the world agree on proposition Y’.  They also tell us that anyone who objects is necessarily engaged in a logical fallacy by either ignoring a proper appeal to authority (the X percent of scientists who believe proposition Y) or by making an incorrect appeal to authority (the 100 – X percent of scientists who reject proposition Y).

To my way of thinking, as a physicist, this type of argument goes against common sense and is just wrong.  Consider the case in physics at the turn of the 20th century.  A majority of scientists felt that mankind had basically all the rules in place.  The science of mechanics was well understood in terms of Newton and his 3 laws, and the science of electricity, magnetism, and optics had just been united by Maxwell.  Sure, there was this pesky little problem with the ultraviolet catastrophe, but the majority of scientists were willing to ignore it or believe that a small tweak was all that was needed to fix things.   Of course, that ‘small tweak’ ushered in the science of quantum mechanics that forever changed the way we think about science and philosophy.

Now a careful reader may argue that I indulged in a logical fallacy of my own about the majority of scientists when I pronounced that they were willing to ignore the problem or believe that a small tweak was all that was needed.  After all, was I there to interview each and every one of them?  But that assessment is backed up by an overwhelming amount of evidence showing that Planck’s advance came as a surprise to the physics community.

So what to make of those ‘settled science’ folk? Well, they seem to want to ignore the logic underpinning the scientific method by appealing to authority, as if scientific conclusions are immutable as long as they are based on a kind of popularity.  They also use the form and structure developed to explain logical fallacies as an additional appeal to authority (in this case to the community of logicians rather than scientists) to dismiss anyone who believes contrary to their doctrine as being illogical.   And here they commit a two-fold error.  By failing to recognize that there is no certainty encompassing either science or logic in their entirety, these individuals use the machinery of avoiding fallacies as a logical fallacy itself. They look on those who support the doctrine as pure in motive and look upon those who reject it as either corrupt or unqualified and stupid.  They heap on layer after layer of emotionalism while telling their critics that they are mired in emotional thinking.

Fortunately, it seems, the human mind has a built-in safety valve in the form of common sense that allows us to reject these fallacies of fallacies even if we don’t know why we do it.  I suppose, intrinsic to the human condition, is a natural skepticism about just how far logic can take us.  After all, it is a tool, not a god, and we should treat it as such.

The Power of Imagination

A couple of weeks ago, I wrote about the subtle difficulties surrounding the mathematics and programming of vectors.  The representation of a generic vector by a column array made the situation particularly confusing, as one type of vector was being used to represent another type.  The central idea in that post was that the representation of an object can be very seductive; it can cloud how you think about the object, or use it, or program it.

Well, this idea about representations has, itself, proven to be seductive, and has led me to think about the human capacity that allows imagination to imbue representations of things with a life of their own.

To set the stage for this exploration, consider the well-known painting entitled The Treachery of Images by the Belgian painter Magritte.

[Image: Magritte_pipe]

The translation of the text in French at the bottom of the painting reads “This is not a pipe.”  Magritte’s point is that the image of the pipe is a representation of the idea of a pipe but is not a pipe itself; hence his choice of the word ‘treachery’ in the title of his painting.

Of course, this is exactly the point I was making in my earlier post, but a complication in my thinking arose that sheds a great deal of light on the human condition and has implications for true machine sentience.

I was reading Scott McCloud’s book Understanding Comics when I came to a section on what makes sequential art so compelling.  In that section, McCloud talks about the inherent mystery that allows a human, virtually any human old enough to read, to imagine many things while reading a comic.  Some of the things that the reader imagines include:

  • Action takes place in the gutters between the panels
  • Written dialog is actually being spoken
  • Strokes of pencil, pen, and color are actually things.

You, dear reader, are also engaging in this kind of imagining.  The words you are reading – words that I once typed – are not even pen and pencil strokes on a page.  The whole concept of page and stroke is, of course, virtual: tracings of different flows of current and voltage humming through micro-circuitry in your computer.

Not only is that painting of Magritte’s shown above not a pipe, it’s not a painting.  It is simply a play of electronic signals on a computer monitor and a physiological response in the eye.  And yet, how is it that it is so compelling?

What is the innate capacity of the human mind and the human soul to be moved by arrangements of ink on a page, by the juxtaposition of glyphs next to each other, by the movement of light and color on a movie screen, by the modulated frequencies that come out of speakers and headphones?  In other words, what is the human capacity that breathes life into the signs and signals that surround us?

Surely someone will rejoin “it’s a by-product of evolution” or “it’s just the way we are made”.  But these types of responses, as reasonable as they may be, do nothing to address the root faculty of imagination.  They do nothing to address the creativity and the connectivity of the human mind.

As a whimsical example, consider this take on Magritte’s famous painting, inspired by the world of videogames.

[Image: Mario_pipe]

Humans have an amazing ability to connect different ideas by some tenuous association and find a marvelous (or at least funny) new thing.  The connections that lead from the ‘pipe’ you smoke to the virtual ‘pipe’ in Mario Brothers are obvious to anyone who has been exposed to both of them in context.  And yet, how do you explain them to someone who hasn’t?  Even more interesting:  how do you enable a machine to make the same connection, to find the imagery funny?  In short, how can we come to understand imagination and, perhaps, imitate it?

Maybe we really don’t want machines that actually emulate human creativity, but we won’t know or understand the limitations of machine intelligence without more fully exploring our own.  And surely one vital component of human intelligence is the ability to flow through the treachery of images into the power of imagination.

Balance and Duality

There is a commonly used device in literature whereby big, important events start small.  I don’t know if that’s true.  I don’t know if small things are heralds of momentous things, but I do know that I received a fairly big shock from a small, almost ignorable footnote in a book.

I was reading through Theory and Problems in Logic, by John Nolt and Dennis Rohatyn, when I discovered the deadly aside.  But before I explain what surprised me so, let me say a few words about the work itself.  This book, for those who don’t know, is a Schaum’s Outline.  Despite that, it is actually a well-constructed outline on logic.  The explanations and examples are quite useful and the material is quite comprehensive.  I think that the study of logic lends itself quite nicely to the whole Schaum’s approach, since examples seem to be the heart of learning logic, and the central place where logicians tangle is over some controversial argument or curious sentence like ‘this sentence is false’.

As I was skimming Nolt and Rohatyn’s discussion about how to evaluate arguments, I came across this simple exercise:

Is the argument below deductive?

Tommy T. reads The Wall Street Journal
$\therefore$ Tommy T. is over 3 months old.

– Nolt and Rohatyn, Theory and Problems in Logic

Their answer (which is the correct one) is that the argument above is not deductive.  At the heart of their explanation for why it isn’t deductive is the fact that while it is highly unlikely that anyone 3 months old or younger could read The Wall Street Journal, nothing completely rules it out.  Since the concept of probability enters into the argument, it cannot be deductive.

So far so good.  Of course, this is an elementary argument so I didn’t expect any surprises.

Nolt and Rohatyn go on to say that this example can be made deductive by the inclusion of an additional premise.  This is the standard fig-leaf of logicians, mathematicians, and, to a lesser extent, scientists the world over.  If at first your argument doesn’t succeed, redefine success by axiomatically ruling out all the stuff you don’t like.  Not that that approach is necessarily bad; it is a standard way of making problems more manageable, but it usually causes confusion in those not schooled in the art.

For their particular logical legerdemain, they amend the argument to read

All readers of The Wall Street Journal are over 3 months old.
Tommy T. reads The Wall Street Journal
$\therefore$ Tommy T. is over 3 months old.

– Nolt and Rohatyn, Theory and Problems in Logic
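
Written in predicate-logic notation (my rendering, not Nolt and Rohatyn’s), the added universal premise turns the inference into a routine instance of universal instantiation followed by modus ponens:

$\forall x\,(\mathrm{ReadsWSJ}(x) \rightarrow \mathrm{OverThreeMonths}(x)),\;\; \mathrm{ReadsWSJ}(\mathrm{Tommy}) \;\vdash\; \mathrm{OverThreeMonths}(\mathrm{Tommy})$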

This argument is now deductive because they refuse to allow the possibility (no matter how low in probability) that anyone amongst us who is 3 months old or younger can read The Wall Street Journal.  By simple pronouncement, they elevate to metaphysical certitude the idea that such youngsters can’t.

Again, there are really no surprises here, and this technique is a time-honored one.  It works pretty well when groping one’s way through a physical theory, where one may make a pronouncement that nature forbids or allows such and such, and then one looks for the logical consequences of that pronouncement.  But a caveat is in order.  This approach is most applicable when a few variables have been identified and/or isolated as being the major cause of the phenomenon that is being studied.  Thus it works better the simpler the system under examination is.  It is more applicable to the study of the electron than it is to the study of a molecule.  It is more applicable to the study of the molecule than to an ensemble of molecules, and so on.  By the time we are attempting to apply it to really complex systems (like a 3-month-old), its applicability is seriously in doubt.

Imagine, then, my surprise at the innocent little footnote associated with this exercise, which reads

There is, in fact, a school of thought known as deductivism which holds that all of what we are here calling “inductive arguments” are mere fragments which must be “completed” in this way before analysis, so there are no genuine inductive arguments

– Nolt and Rohatyn, Theory and Problems in Logic

Note the language used by the pair of logicians.  Not that the deductivism school of thought wants to minimize the use of inductive arguments or maximize the use of deductive ones.  Not that its adherents want to limit the abuses that occur in inductive arguments.  Nothing so cautious as that.  Rather the blanket statement that “there are no genuine inductive arguments.”

A few minutes of exploring on the internet led me to a slightly deeper understanding of the school of deductivism, but only marginally so.  What could be meant by no genuine inductive arguments?  A bit more searching led me to some arguments due to Karl Popper (see the earlier column on Black Swan Science).

These arguments, as excerpted from Popper’s The Logic of Scientific Discovery, roughly summarized, center on his uneasiness with inductive methods as applied to the empirical sciences.  In his view, an inference is called inductive if it proceeds from singular statements to universal statements.  As his example, we again see the black-swan/white-swan discussion gliding to the front.  His concern is for the ‘problem of induction’ defined as

[t]he question whether inductive inferences are justified, or under what conditions…

-Karl Popper, The Logic of Scientific Discovery

Under his analysis, Popper finds that any ‘principle of induction’ that would solve the problem of induction is doomed to failure, since it would necessarily be a synthetic statement, not an analytic one, and so would itself stand in need of justification.  From this observation, one would then need a ‘meta principle of induction’ to justify the principle of induction and a ‘meta-meta principle of induction’ to justify that one, and so on, to an infinite regress.

Having established this initial work, Popper jumps into his argument for deductivism with the very definite statement

My own view is that the various difficulties of inductive logic here sketched are insurmountable.

-Karl Popper, The Logic of Scientific Discovery

And off he goes. By the end, he has constructed an argument that banishes inductive logic from the scientific landscape, using what, in my opinion, amounts to a massive redefinition of terms.

I’ll not try to present any more of his argument.  The interested reader can follow the link above and read the excerpt in its entirety.  I would like to ask a related but, in my view, more human question.  To what end is all this work leading?  I recognize that it is important to understand how well a scientific theory is supported.  It is also important to understand the limits of knowledge and logic.  But surely, human understanding and knowledge are not limited by our scientific theories, nor are they adequately described by formal logic.  Somehow, human understanding is a balance between intuition and logic, between deduction and induction.

Popper’s critiques sound too much like someone obsessing over getting the thinking just so without stopping to ask whether such a task is worth it.  Scientific discovery happens without the practitioners knowing exactly how it happens and what to call each step.  Should that be enough?

Of course, objectors to my point-of-view will be quick to point out all the missteps that logicians can see in the workings of science – all the black swans that fly in the face of a white-swan belief.  My retort is simply “so what?”

Human existence is not governed solely by logic nor should it be.  If it were, a part of the population would be frozen in indecision because terms were not defined properly, another part would be stuck in an infinite loop, and the last part would be angrily arguing with itself over the proper structure.  There is a duality between induction and deduction that works for the human race – a time to generalize from the specific to the universal and a time to deduce from the universal to the specific.

Perhaps someday, someone will perfect deductivism in such a way so that scientific discovery can happen efficiently without all the drama and controversy and uncertainty.  Maybe… but I doubt it.  After all, we know that we humans aren’t perfect – why should we expect one of our enterprises to be perfectible?