Having introduced the chaos game and analyzed how a single iteration works, it is worth taking stock of what we actually know.  Repeated application of the affine maps causes points to collect only in certain places in the plane, creating the self-similar fractals often called the attractors of the map, or strange attractors.  We’ve also established what the parameters of the affine maps do in the particular case of the Sierpinski triangle, getting a sense of how those parameters correspond to the rules discussed in the Numberphile video.  But the exact mechanism for why only certain points are hit rests on some specialized mathematics, and little intuition is gained from that study without first devoting considerable effort to understanding the ‘nuts and bolts’.

So, this post is devoted to an experimental approach to the chaos game that produces the Sierpinski triangle.  Each experiment performed involves changing one or more of the parameters that make up the affine maps used in the game and then observing what happens to the pattern produced.  Doing so will help us understand the stability of the map.

Stability can be defined in numerous ways.  There are definite schools of thought with different interpretations.  For this exploration of the chaos game, I will define two types of stability:  deterministic and stochastic.

Deterministic stability asks over what range of parameters the chaos game yields fractal-like results.  The parameters are chosen at the beginning of the game and remain fixed during its progression.   The idea is to answer questions such as:  How accurate do the parameters need to be?  How many different shapes or patterns result?  How many different types of games are there?

To get a sense of what answers may result, and to clarify what is meant by ‘types of games’, consider the investigation detailed in the last post.  As a direct byproduct of understanding a single iteration, the significance of the constant terms $f^{(p)}$ and $g^{(p)}$ used in the three Sierpinski affine maps became clear: multiplying these parameters by two gives the coordinates of the vertices of the triangle.  The same Sierpinski triangle results even if these values are changed.  Thus, the family of all possible values for $f^{(p)}$ and $g^{(p)}$ forms an equivalence class in which the scale and overall geometry may change but the shape is undeniably a Sierpinski triangle.  I term such an equivalence class a single game.
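One way to see why doubling the constant terms recovers the vertex coordinates is a quick fixed-point calculation: each Sierpinski map has the form $x_{new} = x/2 + f^{(p)}$ (and similarly for $y$), so its fixed point satisfies

\[ x^{*} = \frac{1}{2} x^{*} + f^{(p)} \quad \Rightarrow \quad x^{*} = 2 f^{(p)} \; , \]

which is to say that each map pulls points toward its own fixed point, and that fixed point is one of the triangle’s vertices.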

In contrast, stochastic stability involves regarding the parameters of the affine maps as random variables that vary at each iteration.  The time history of each random variable forms a unique realization that is different each time the game is played.  This aspect can be quite important because noise can creep into the game in a variety of scenarios.  Computationally, the finite-precision floating-point representations of the rational numbers in the game are not exact, and small errors arise as the game progresses through its iterations.  Physically, an analog of the game in the material world is a messy affair, with the parameters of the maps only inexactly realized due to all the small but unmodelled effects.

One possible way of implementing a stochastic game would be to generalize the deterministic expression for a new point $(x,y)$ in the Sierpinski triangle game,

\[ \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{cc} 1/2 & 0 \\ 0 & 1/2 \end{array} \right] \left[ \begin{array}{c} x_{current} \\ y_{current} \end{array} \right] + \frac{1}{2} \left[ \begin{array}{c} x_{vertex} \\ y_{vertex} \end{array} \right] \; , \]

(where $(x_{vertex},y_{vertex})$ are the coordinates of one of the three vertices of the triangle, chosen randomly, and $(x_{current},y_{current})$ are the current point’s coordinates) to a physical system by making the transformation matrix look like

\[ \left[ \begin{array}{cc} 1/2 & 0 \\ 0 & 1/2 \end{array} \right] \rightarrow \left[ \begin{array}{cc} 1/2 + \eta_{11} & \eta_{12} \\ \eta_{21} & 1/2 + \eta_{22} \end{array} \right] \; , \]

where the $\eta_{ij}$ are random variables, with some specified moments, reflecting noise that is ubiquitous in the world.
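As a concrete illustration, here is a minimal Python sketch of the game with this noisy transformation matrix.  The function name `chaos_game`, the particular vertex coordinates, and the choice of Gaussian noise for the $\eta_{ij}$ are assumptions made for the example, not part of the original game; setting the noise to zero recovers the deterministic expression above.

```python
import numpy as np

def chaos_game(n_points=20000, scale=0.5, noise_std=0.0, seed=0):
    """Play the Sierpinski chaos game.

    scale     -- diagonal entries of the transformation matrix (1/2 in the standard game)
    noise_std -- standard deviation of the eta_ij perturbations (0 gives the deterministic game)
    """
    rng = np.random.default_rng(seed)
    # Vertices of the triangle; any non-degenerate choice works (this one is assumed).
    vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3.0) / 2.0]])
    current = rng.random(2)                 # arbitrary starting point
    points = np.empty((n_points, 2))
    for i in range(n_points):
        vertex = vertices[rng.integers(3)]  # pick a vertex uniformly at random
        A = scale * np.eye(2)               # deterministic transformation matrix
        if noise_std > 0.0:
            A = A + noise_std * rng.standard_normal((2, 2))  # add the eta_ij noise terms
        current = A @ current + 0.5 * vertex  # new point = A * current + (1/2) * vertex
        points[i] = current
    return points
```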

For this post, we’ll content ourselves with exploring deterministic stability for the Sierpinski triangle, looking at what results from changing the values in the transformation matrix and the relative probabilities with which each vertex is selected.  To facilitate the experiments, each game’s results will be over-plotted (in black) on the results from the standard Sierpinski triangle (red).
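A small plotting helper, sketched below using the `chaos_game` function assumed above together with matplotlib, can produce exactly this kind of black-over-red comparison; the helper name `overlay_game` is purely illustrative.

```python
import matplotlib.pyplot as plt

def overlay_game(**kwargs):
    """Over-plot a modified game (black) on the standard Sierpinski triangle (red)."""
    standard = chaos_game()            # scale = 1/2, uniform vertex selection
    modified = chaos_game(**kwargs)    # experiment with altered parameters
    plt.scatter(standard[:, 0], standard[:, 1], s=0.1, c='red')
    plt.scatter(modified[:, 0], modified[:, 1], s=0.1, c='black')
    plt.gca().set_aspect('equal')
    plt.show()
```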

In the first experiment, we’ll change the transformation matrix so that the new point is moved only one third of the way along the line from the current point to the vertex (this was actually suggested in the Numberphile video).  This modification amounts to the following change to the transformation matrix

\[ \left[ \begin{array}{cc} 1/2 & 0 \\ 0 & 1/2 \end{array} \right] \rightarrow \left[ \begin{array}{cc} 1/3 & 0 \\ 0 & 1/3 \end{array} \right] \; . \]
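In terms of the illustrative sketch above, this experiment is just a different scale value:

```python
overlay_game(scale=1.0 / 3.0)   # 1/3-scaling game over-plotted on the standard triangle
```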

Running the game gives

At first glance, lowering the values in the transformation matrix seems to have simply shrunk the scale of the triangle, but closer inspection shows that there is a fundamental change in the geometry.  The missing ‘center piece’ is now a squat hexagon instead of a triangle, as is more easily seen in the following annotated figure.

This small change has resulted in a new, but closely related game.

Scaling up the values in the transformation matrix from $1/2$ to $3/4$ leads to no discernible structure (with the original triangle now peeking through in the lower left corner).

This result suggests that scaling the values in the transformation matrix above $1/2$ causes the points to overlap, making the value of $1/2$ some sort of divider or separatrix between non-overlapping and overlapping points.  We might expect that a value slightly above one half, say $0.55$, would begin to blur the Sierpinski triangle by creating a small amount of overlap.  Running the game with this value yields

Some of the overlap regions are highlighted with red circles; the remaining ones can be deduced from the symmetry exhibited by this self-similar fractal.

Likewise, a value of $0.45$ produces a triangle whose sub-triangles no longer join together smoothly but are instead separated by gaps.

If this process were to occur in a physical system (and there are researchers constructing such systems – see Physicists wrangled electrons into a quantum fractal), then the scales selected in the transformation matrix must stay confined to a relatively narrow range of parameter space (roughly $0.45-0.55$); outside that range the results become so different as to suggest that the system is fundamentally different.  That said, the geometry produced by the game, regarded strictly in a visual sense, is remarkably resilient.  Triangles within triangles result over a fairly wide range of values.

The results of the final experiment show that the game is even more tolerant to changes in the probabilities that determine how often each vertex is picked.  In the original implementation of the game, each vertex had a $1/3$ chance of being selected.  To see how sensitive the game is in this respect, the relative probabilities were adjusted so that one vertex had an 80% chance of being selected while each of the remaining two had only a 10% chance.  As the figure below shows, the result of this change is simply to ‘ghost out’ the vertices that are less likely.
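For completeness, this experiment corresponds to replacing the uniform vertex selection in the earlier sketch with a weighted draw; a minimal, assumed modification looks like the following.

```python
import numpy as np

rng = np.random.default_rng(0)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3.0) / 2.0]])
probs = [0.8, 0.1, 0.1]   # one favored vertex; the other two get ‘ghosted out’

# Inside the iteration loop, the uniform draw rng.integers(3) becomes a weighted one:
vertex = vertices[rng.choice(3, p=probs)]
```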

The conclusion here is straightforward.  The chaos game method of producing the Sierpinski triangle is remarkably stable to a range of deterministic changes in the parameter set.  This robustness makes the chaos game a convenient numerical laboratory for exploring other emergent fractals, some of which we will see in the next post.