

Lecture 14: Information, Chaos and Order, Part 2

In this lecture, I would like to discuss rabbits. One of the things rabbits are proverbially good at is making more rabbits. Left undisturbed, rabbits display population growth similar to that of the human race, but on a shorter time scale. Fortunately for us, however, there are natural forces that reduce the rabbit population. One such force is the fox. Where there are many rabbits, there will shortly be many foxes, leading to fewer rabbits.

This can be formalised: if there are x(n) rabbits in the current generation, there will be x(n+1) rabbits in the next generation, where

x(n+1) = (1+r) x(n) - r x(n)^2
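As a quick check on the formula, it can be iterated numerically. Here is a minimal sketch (the variable names are illustrative, and the population x is measured as a fraction of the maximum sustainable level):

```python
def next_generation(x, r):
    """One generation of the rabbit population: x(n+1) = (1+r)*x(n) - r*x(n)**2."""
    return (1 + r) * x - r * x * x

# Iterate from an arbitrary starting population with a modest growth rate.
x = 0.2
r = 1.5
for n in range(50):
    x = next_generation(x, r)

# For small r the population settles to the steady state x = 1,
# where the growth term and the fox term exactly balance.
print(x)
```

Setting x(n+1) = x(n) in the formula gives rx - rx^2 = 0, so the steady states are x = 0 and x = 1; for small positive r the iteration above converges to x = 1.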

We now want to ask about the long-term behaviour of the rabbit-fox ecology. Do the rabbits or the foxes eventually triumph, or does the system reach steady state? We can investigate this graphically, using the graphs distributed at the beginning of the lecture.

Select an arbitrary starting point on the x-axis. Go up to the parabola, and read off the number of rabbits in the next generation. Now move forward in time until that next generation becomes the current generation -- that is, go to the diagonal line and drop back to the x-axis. Repeat the whole process until you reach steady state.
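The graphical procedure just described can also be written down directly as code. This is a sketch: it records the corner points of the cobweb construction, treating the parabola and the diagonal exactly as in the hand construction.

```python
def cobweb(x0, r, steps):
    """Trace the graphical construction: up to the parabola, across to the
    diagonal y = x, and repeat for the given number of generations."""
    f = lambda x: (1 + r) * x - r * x * x
    points = [(x0, 0.0)]        # arbitrary starting point on the x-axis
    x = x0
    for _ in range(steps):
        y = f(x)
        points.append((x, y))   # go up to the parabola: next generation's size
        points.append((y, y))   # across to the diagonal: it becomes the current one
        x = y
    return points

path = cobweb(0.2, 1.5, 10)
```

Plotting these segments over the parabola and the diagonal reproduces the cobweb pattern on the distributed graphs; with r = 1.5 the path spirals in towards the steady state.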

Using different values for r, we find that one of several things can happen: the population can attain a steady state, or it can cycle between two or more values, or it can vary with no apparent pattern. Let's look at this last case in greater detail. Put the value of r on the horizontal axis. Suppose that for each value of r we take an arbitrary starting point, allow several hundred generations to pass so that the effect of the starting point is washed out, then plot the next several hundred values of the population on the vertical axis. What we get is the Verhulst diagram.
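The sampling procedure for one value of r can be sketched in a few lines (the starting point, transient length, and the two parameter values below are arbitrary choices for illustration):

```python
def long_run_values(r, transient=300, keep=100):
    """Iterate from an arbitrary start, discard the transient generations,
    then record the population values that follow."""
    x = 0.3
    for _ in range(transient):
        x = (1 + r) * x - r * x * x
    values = []
    for _ in range(keep):
        x = (1 + r) * x - r * x * x
        values.append(x)
    return values

steady = long_run_values(1.5)   # collapses to a single value: steady state
cycle = long_run_values(2.3)    # alternates between two values: a 2-cycle
```

Plotting the recorded values against r, for a fine grid of r values, gives the Verhulst diagram.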

As r increases, there are values for which the initial choice never gets washed out. The population varies between fixed limits, but never approaches any kind of steady state or limiting cycle. As r increases further, there are occasional returns to stability, but these again give way to chaos. This can be better visualized with the help of a short video:

All this reminds us of last week's discussion of Newton's method. And in fact, there is a correspondence between the Verhulst diagram and the Mandelbrot set. [This was to have been illustrated by a neat slide I didn't have time to show.] The gaps of simplicity in between the chaos correspond to `midgets': miniature replicas of the Mandelbrot set strung out along the x-axis. Note that the midgets are surrounded by contours, and that the contours form `petals' around each midget. The number of petals doubles at intervals as you get closer to the midget; the number of contours included in each period-doubling increases as one moves to the left.

The transition from orderly to chaotic behaviour as the parameter r passes a critical value reminds us of the transition from laminar to turbulent flow as the Reynolds number passes its critical value. And in fact turbulence is a chaotic phenomenon, like the population variation in fast-breeding rabbits. What are the characteristics of chaos?

Let's look at a short section of the graph of rabbit population. We see a gradual rise, punctuated by a dip. If we ask, ``What is the reason for this dip?'', we have no useful answer to give; it just happens. And yet there's nothing unknown about the system; we've written down the equation fully describing its behaviour. So there's nothing left to discover, yet some features are still inexplicable.

The next thing to notice is that initial conditions are not washed out: because the system doesn't converge towards a single value, the current position always remains sensitive to the starting point. In fact, the system may magnify initially minor differences in position. When we combine this with Gelbart's Law of Engineering -- ``No measurement has more than six significant figures'' -- we get the conclusion that the results of a chaotic process may be impossible to predict. Differences too small for us to measure may grow into entirely different destinies. And this unpredictability has nothing to do with randomness: a chaotic process may be wholly determinate.
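This sensitivity is easy to demonstrate numerically. A sketch, taking r = 2.8 as an assumed value in the chaotic range, and two starting populations that differ only in the seventh significant figure:

```python
def step(x, r):
    return (1 + r) * x - r * x * x

r = 2.8                      # assumed to lie in the chaotic range
x, y = 0.3, 0.3 + 1e-6       # identical to six significant figures

max_gap = 0.0
for generation in range(100):
    x, y = step(x, r), step(y, r)
    max_gap = max(max_gap, abs(x - y))

# The two populations, indistinguishable under Gelbart's Law,
# soon follow entirely different histories.
print(max_gap)
```

Note that both runs use exactly the same deterministic rule; the divergence comes entirely from the unmeasurably small difference in the starting point.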

If chaos is not randomness, what distinguishes chaos from order?

The pictures of chaos that we've been looking at appear interesting. They are clearly pictures of something; some are symmetric, others display self-similarity. If I cover up part of a picture, you would have a reasonable chance of filling it in by extending patterns in other parts of the image. This is not what we'd expect of a picture with maximal information content. In such a picture, each pixel's colour would be independent of its neighbours; so most such pictures would resemble a close-up of mud, and be quite uninteresting -- though very informative. Such a picture could be generated by choosing the colour of each pixel at random. So chaos is something very different from randomness, although it may appear superficially similar.

We know that our slides of chaos can be described in less than a megabyte of information. They can be described very briefly by giving the formula used to generate them. A formula for the whole Mandelbrot set can be given in 120 characters. Suppose each character can take any of the 256 values of the extended ASCII character set. Then the total number of possible messages of this length is 256^120, or 2^960. So the information content of any one such message is 960 bits -- a long way short of the 1-megabyte maximum. Indeed, the fact that the fractal images have less than maximal information content seems to be a necessary precondition of their being interesting to look at.
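The arithmetic in the passage above can be checked directly:

```python
import math

characters = 120
alphabet = 256                       # possible values per character

possible_messages = alphabet ** characters
# 256 = 2^8, so 256^120 = 2^(8*120) = 2^960
bits = math.log2(possible_messages)
print(bits)                          # 960 bits, i.e. 120 bytes -- far below a megabyte
```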

Our final conclusion seems to be, then, that chaotic situations occupy a middle ground between linear and random systems; they require more information to describe than do purely linear systems, but can nevertheless embody hidden levels of order, which mean that they can be described more concisely than wholly random systems. Given a situation governed by chaotic dynamics, in which we do not know the initial conditions in full detail, we will not be able to give a detailed description of the final state. But we may still be able to say that the final state is one of a large class of possible states, all of which have certain properties in common. For example, we do not know the positions or velocities of the 10^23 gas molecules inside a balloon, but we are still able to predict the balloon's shape, because virtually all of the possible states of the molecules correspond to the same macroscopic behaviour.

To take another example, we cannot predict in detail the flow of rainwater down a newly logged slope. But we may be able to predict that, whatever the initial conditions, the outline of the final drainage system will have a fractal dimension between 1.5 and 1.6. (Mandelbrot, ``The Fractal Geometry of Nature''). And this prediction may be all we need for calculating what we want to know, such as rate of erosion.

What does chaos mean for engineers? Chaos puts limits on what can usefully be modelled: the purpose of a model is to make predictions, but in a chaotic situation, the prediction may vary wildly as the inputs to the model change in the ninth decimal place. And, by Gelbart's Law, we can't measure past the sixth decimal place. So when designing a system, engineers try to eliminate chaos where they can. Dr Saif, in his lecture on control, mentioned that control engineers seldom tackle black boxes with non-linear models. It's easier to linearise the system, which eliminates any possibility of chaotic behaviour.

However, in the last two decades engineers have found that they can make use of chaos; some of the work in this area is summarised in Nature, Vol. 364, 19 August 1993 (in the resource files). Chaotic systems may possess hidden order, and we are still in the process of discovering tools, such as the concept of fractal dimension, which can allow us to detect this order and use it as a basis for design.





John Jones
Tue Dec 02 15:01:50 PST 2003