Knowledge and Uncertainty

The disciplines of the natural sciences and philosophy enjoy a rich, complicated, and, at times, subtle relationship. Philosophic pursuits help to guide and inform the scientific enterprise, while the phenomena that science discovers, categorizes, and explains expand and ground philosophic thought. Nowhere is this interaction more interesting and, perhaps, more important than in the area of knowledge and uncertainty.

Epistemological ideas dealing with what is knowable, unknown, and unknowable have played a large role since the earliest days of philosophy. In the Platonic dialog The Meno, Socrates puts forward the idea that much (or perhaps all) human learning is really a kind of remembrance of knowledge attained in past incarnations of the soul (anamnesis). How exactly the cycle starts, what knowledge the proto-soul possesses, and whether Plato/Socrates actually worried about an infinite regress are not clear.

Questions of knowledge continue for thousands of years without much change in focus or tenor until the rise of quantitative scientific methods in the post-Renaissance world. Almost overnight, there is a way to discuss three vital components of knowing, at least within the context of physical systems:

  • Knowledge characterized by measurement
  • Uncertainty characterized by error
  • Mathematical description of how the two propagate their influence

These new ingredients are not developed to shed light on age-old debates but rather to determine just how to deal with these new models of the physical world: differential equations. In differential equations, man had an operational model for cause and effect, a laboratory wherein the ideas of what is known and unknown/unknowable could be made the subject of experimentation. Nature's own fabric helped to shape and mold how mankind saw knowledge.

These ideas matured in many different directions subject to need and taste.  The three most interesting ones are:

  • Control theory
  • Quantum mechanics
  • Statistical mechanics

In control theory, the basic notion is one of a state whose evolution is subject to a set of differential equations that describe the influence of the natural environment and the man-made controls used to guide the evolution into a desired behavior. The physical world is divided into pieces known and unknown. Generally, the known pieces are considered to be deterministic and the unknown pieces are random. The random variables are assigned probability distributions that describe what sort of state realizations can occur and how often they are likely to come on the scene. Sometimes, there is a further division of the random variables as either being aleatory or epistemic. The former term, aleatory, is best described as saying the randomness is somehow intrinsic to the system being modeled. In contrast, the latter term, epistemic, refers to randomness that is due to lack of measurement precision. The errors in the knowledge of the initial state of a system are often thought of as epistemic, while the uncertainties in the evolution of the differential equation are often thought of as aleatory. The distinction is that the initial state knowledge may be improved by better measurements, while the evolution model for the system, the so-called right-hand side of the differential equation, will never be able to accurately represent the true dynamics due to random fluctuations in the forces that cause the motion.
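To make that split concrete, here is a minimal sketch in Python; the scalar model dx/dt = a·x + u + w(t) and every constant in it are purely illustrative assumptions, not drawn from any particular system. The initial state is drawn from a distribution because our measurement of it is imprecise (epistemic), while a random forcing term perturbs the right-hand side at every step (aleatory).

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical scalar system: dx/dt = a*x + u + w(t), with made-up constants.
a, u, dt, n_steps = -0.5, 1.0, 0.01, 1000

# Epistemic uncertainty: the true initial state is a fixed number, but our
# measurement of it is imprecise, so we model our knowledge with a distribution.
x0_estimate, x0_sigma = 2.0, 0.3
x = rng.normal(x0_estimate, x0_sigma)

# Aleatory uncertainty: random fluctuations in the forcing enter the
# right-hand side of the differential equation at every step.
w_sigma = 0.1
for _ in range(n_steps):
    w = rng.normal(0.0, w_sigma) * np.sqrt(dt)   # Brownian-like increment
    x = x + (a * x + u) * dt + w                 # Euler-Maruyama step

print(f"state after {n_steps} steps: {x:.3f}")
```

Better sensors shrink x0_sigma toward zero, but no amount of measurement removes the w term from the dynamics, which is exactly the epistemic/aleatory distinction drawn above.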

Generally, the control systems community does not delve too deeply into the ontological nature of these uncertainties, contenting itself with the need to model them operationally. And this approach is reasonable, since it isn't nearly as important to understand where 'noise' comes from as it is to determine how to deal with it.

Nonetheless, the very concept of noise and randomness and the study of how they arise can guide the techniques used to control and mitigate their presence.  This is where the two disciplines in physics, statistical mechanics and quantum mechanics, shine.

These two disciplines are, properly speaking, two sides of the same coin, but it is often convenient to sort the randomness into two separate bins, one dealing with the quantum nature and the other with the many-particle nature of the system being studied. Although the terminology is rarely used by physicists, the descriptions of aleatory and epistemic fit these bins nicely, at least at the conceptual level. However, hard pushing on these concepts will soon show that the divisions are not as clear cut as they might first appear.

First, consider quantum mechanics. By the very nature of the quantum wave function, the state of a system at any time cannot be determined with infinite precision, so complete knowledge of conjugate pairs of variables (e.g. position and momentum) is impossible. In some sense the system is aleatory. But the evolution of the wave function is mediated by the Hamiltonian, whose nature is considered known. The state evolution is completely deterministic, and the only insertion of randomness comes in the measurement step, where the wave function collapses into an eigenstate of the measured observable. Thus the measurement process is aleatory, but this randomness can be turned to an advantage, since the initial state of the system can be prepared so that it is exactly an eigenstate of the measured observable and hence has no state uncertainty.
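A toy qubit sketch makes the division of labor explicit; the Hamiltonian H = 0.7 X, the evolution time, and the choice of measuring Z are all invented for illustration, using only textbook formulas. The Schrodinger propagator exp(-iHt) is completely deterministic, and randomness appears only in the Born-rule collapse onto an eigenstate of the measured observable.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(seed=1)

# Pauli matrices: H is a made-up Hamiltonian, Z is the observable we measure.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.7 * X

# Deterministic part: unitary (Schrodinger) evolution of the wave function.
psi0 = np.array([1, 0], dtype=complex)      # start in |0>, an eigenstate of Z
psi_t = expm(-1j * H * 0.5) @ psi0          # state at t = 0.5 (hbar = 1)

# Aleatory part: measuring Z collapses psi_t onto one of Z's eigenstates,
# chosen at random with Born-rule probabilities |<eigenstate|psi_t>|^2.
eigvals, eigvecs = np.linalg.eigh(Z)
probs = np.abs(eigvecs.conj().T @ psi_t) ** 2
outcome = rng.choice(len(eigvals), p=probs)
print("measured eigenvalue:", eigvals[outcome])
print("collapsed state    :", np.round(eigvecs[:, outcome], 3))
```

If the evolution step is skipped, psi_t remains the prepared eigenstate of Z and the measurement outcome is certain, which is the preparation trick mentioned above.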

Statistical mechanics deals with the added complication of having an enormous number of degrees of freedom (e.g. many particles), so that a complete description of the state is practically impossible. (It is interesting to note that not all systems with enormous or even infinite degrees of freedom are intractable; a common field theory, say the wave equation, has an infinite number of Fourier modes that all behave in a describable fashion.) In classical statistical mechanics, the state of the system is not limited by the uncertainty principle, so the specification of the initial state is probabilistic only due to our ignorance; thus it is epistemic. Since separately tracking the individual motions of the particles, and hence their interactions, is also intractable, the evolution is subject to 'noise', but of an epistemic nature as well, since in principle, if the individual states could be tracked (e.g. on a computer), then complete state knowledge would be possible.
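That "epistemic in principle" claim can be caricatured in a few lines; the toy below uses hypothetical non-interacting particles in a one-dimensional box, with the particle count, box size, and velocity distribution all invented for illustration. Probability enters only through the sampled initial conditions, while the evolution itself is exactly computable.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Hypothetical toy: N non-interacting particles in a 1-D box of length L.
N, L, dt, n_steps = 100_000, 1.0, 0.001, 500

# Epistemic uncertainty: we never measured the individual positions and
# velocities, so we draw them from an assumed distribution.
x = rng.uniform(0.0, L, size=N)
v = rng.normal(0.0, 1.0, size=N)

# Deterministic evolution: given the sampled initial microstate, every
# trajectory is exactly computable; no randomness enters this loop.
for _ in range(n_steps):
    x += v * dt
    v = np.where((x < 0.0) | (x > L), -v, v)   # elastic reflection at the walls
    x = np.clip(x, 0.0, L)

# Macroscopic summaries are sharp even though the microstate was never known.
print("mean kinetic energy per particle:", 0.5 * np.mean(v**2))
```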

Statistical mechanics becomes richer when combined with quantum mechanics. The initial state of the system can be statistically distributed across multiple eigenstates. For example, 10 percent of the system can be in one quantum state while 90 percent is in another. The density matrix formalism is designed to handle this case, where epistemic uncertainty is layered on top of aleatory uncertainty.
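A minimal numeric sketch of that layering, with the two pure states and the 10/90 split made up for illustration: the density matrix of the mixture folds the classical (epistemic) weights and the quantum (aleatory) measurement statistics into a single object, with expectation values given by Tr(rho A).

```python
import numpy as np

# Two made-up pure states of a qubit.
psi_a = np.array([1, 0], dtype=complex)               # |0>
psi_b = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |+>

# Epistemic layer: 10 percent of the ensemble is in psi_a, 90 percent in psi_b.
rho = 0.1 * np.outer(psi_a, psi_a.conj()) + 0.9 * np.outer(psi_b, psi_b.conj())

# Aleatory layer: even a perfectly known pure state gives random measurement
# outcomes; the density matrix combines both layers via <A> = Tr(rho A).
Z = np.array([[1, 0], [0, -1]], dtype=complex)
print("Tr(rho)          =", np.trace(rho).real)        # 1.0: the weights add up
print("<Z>              =", np.trace(rho @ Z).real)    # 0.1*(+1) + 0.9*(0) = 0.1
print("purity Tr(rho^2) =", np.trace(rho @ rho).real)  # < 1 signals a mixed state
```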

All this is well and good, but things become complicated when these concepts are pushed to their logical boundaries by asking some ontological questions about the nature of uncertainty. The most intriguing questions deal with the boundary between the epistemic and the aleatory. Many researchers are fascinated with the idea that the aleatory uncertainty of quantum mechanics may give way to hidden variables, pilot waves, and the like. The unspoken goal is to eliminate or otherwise get around the uncertainty principle. But the more interesting question flows the other way. Is our ignorance a physical manifestation of aleatory rather than of epistemic uncertainty? Buried deep under these distinctions is the notion of a human who can possess knowledge of the physical world; an observer, in the language of quantum mechanics. But no matter how the knowledge possessor is named, it is still a physical object. Its knowledge is represented by physical patterns of matter and energy. Its ability to measure and interact is still mediated materially. So where does the actual boundary lie? Just how separate is the measurer from the measured? The answer is, to close with a pun, completely uncertain.
