
Chaos and Order (Part 1)

I first encountered a pocket calculator as an undergraduate. One of the profs had one; it cost four hundred dollars, and it looked like this:

I now have my own calculator. It cost twenty dollars, and it looks like this. Let me draw your attention to one particular button on the calculator, the one labelled `solve'. This allows you to enter any equation and have the calculator find a solution. How do you think it does this?

For some equations it's easy; for a quadratic equation, for example, there's a well-known formula, and there is a similar, though more complex, formula for the solution of a cubic equation. There is even a formula for the solution of a quartic equation, though it occupies several pages. For quintic and higher-order equations, there is no general formula. Yet the calculator can handle a quintic equation with no apparent effort. What method is it using?

Research in the back of the instruction manual discloses that it's using something called the secant method. What's this?

Let me illustrate it with a graphical example. (I've distributed copies of this graph to the class, so you can follow along with me.) This is a graph of f(x) = x^3 - 1; we want to find its zeroes, that is, the solutions of x^3 - 1 = 0. Given the graph, of course, this is easy. Suppose we didn't have the graph, and suppose we couldn't factor the equation. Is there a blind algorithm that will find the zeroes? Try this: guess an answer, x(0). Evaluate the left-hand side of the equation. Does that work? No. Guess again: x(1). Evaluate it again. Does that work? No. Now draw the secant cutting through (x(0), f(x(0))) and (x(1), f(x(1))). Note where this cuts the x-axis. Call this x(2). Evaluate f(x(2)). Does that work? No, but it's an improvement. Draw a new secant through (x(1), f(x(1))) and (x(2), f(x(2))). If we continue in this way, we eventually close in on the zero.
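For anyone who wants to try this at a keyboard, here is a minimal sketch of the procedure in Python; the names f and secant, the tolerance, and the iteration cap are my own choices rather than anything prescribed above.

    def f(x):
        """Left-hand side of the equation we want to solve: x^3 - 1."""
        return x**3 - 1

    def secant(f, x0, x1, tol=1e-10, max_iter=1000):
        """Return (root, iterations) found by the secant method, or (None, max_iter).

        Each step draws the secant through (x0, f(x0)) and (x1, f(x1)) and takes
        the point where it cuts the x-axis as the next guess.
        """
        f0, f1 = f(x0), f(x1)
        for i in range(max_iter):
            if f1 == f0:                              # secant is horizontal; give up
                break
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)      # where the secant cuts the x-axis
            if abs(x2 - x1) < tol:
                return x2, i + 1                      # close enough: report the zero
            x0, f0 = x1, f1
            x1, f1 = x2, f(x2)
        return None, max_iter                         # no convergence within the cap

    print(secant(f, 0.5, 2.0))                        # closes in on the real zero at x = 1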

Does this always work, or does it depend on being lucky with the first two guesses? Let's try some other guesses. The next slide shows the series of successive approximations, {x(i)}, for three pairs of initial guesses. This seems to show that it sometimes works and sometimes doesn't. Can we get a more global picture?
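Using the f and secant sketched above, one can print the outcome for a few pairs of starting guesses; the particular pairs below are illustrative, not the ones plotted on the slide.

    # Illustrative starting pairs; some converge quickly, others wander for a while.
    for x0, x1 in [(0.5, 2.0), (-2.0, -1.0), (-0.3, 0.3)]:
        root, n = secant(f, x0, x1)
        if root is None:
            print(f"guesses ({x0}, {x1}): no solution after {n} iterations")
        else:
            print(f"guesses ({x0}, {x1}): root {root:.6f} after {n} iterations")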

We could try a greater number of pairs. But this slide is already rather crowded. How many more pairs can we investigate before the slide becomes unreadable? This is really a question about the quantity of information we can fit on the slide. How much do we have on it now?

Well, the graph has 64 data points, each representing a pair of numbers, and each number is represented to 3-digit precision. (I may have plotted them to higher precision, but you can't tell this from looking at the graph.) We saw from the `mystery number' example that one decimal digit is worth about 3.32 bits. So the whole graph is worth about 3.32 * 64 * 2 * 3 = 1,275 bits.
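A quick check of that arithmetic, using the fact that a decimal digit carries log2(10) bits:

    import math

    points, numbers_per_point, digits_per_number = 64, 2, 3
    bits_per_digit = math.log2(10)                    # about 3.32 bits per decimal digit
    print(bits_per_digit * points * numbers_per_point * digits_per_number)   # about 1275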

This is not very good; we should be able to pack 10,000 times this much information onto the slide. How can we do that? What we want to know is how successful the secant method is in solving an equation, given an initial pair of guesses. In those cases where it does succeed, it would also be interesting to know how long it takes. So, let the horizontal axis represent the first guess, the vertical axis represent the second guess. Now each pixel on the screen represents a starting point for the secant method. Run the method for each pixel and see how long it takes to work, if it works at all. Then colour the pixel according to the time taken to reach a solution. If we haven't reached a solution after 1,000 iterations, give up and colour the point violet.
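Here is one way such a picture could be generated, sketched in Python with numpy and matplotlib; the 200-by-200 grid, the range of guesses from -2 to 2, and the generic colour map are my own choices, and the loop is written for clarity rather than speed.

    import numpy as np
    import matplotlib.pyplot as plt

    def secant_steps(x0, x1, tol=1e-8, max_iter=1000):
        """Iterations the secant method needs to solve x^3 - 1 = 0 from (x0, x1),
        or max_iter if it hasn't converged by then."""
        f0, f1 = x0**3 - 1, x1**3 - 1
        for i in range(max_iter):
            if f1 == f0:
                return max_iter
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
            if abs(x2) > 1e100:                       # wandered off; avoid overflow
                return max_iter
            if abs(x2 - x1) < tol:
                return i + 1
            x0, f0, x1, f1 = x1, f1, x2, x2**3 - 1
        return max_iter

    n = 200                                           # pixels per side
    grid = np.linspace(-2.0, 2.0, n)
    # Horizontal axis: first guess; vertical axis: second guess.
    image = np.array([[secant_steps(a, b) for a in grid] for b in grid])

    # Colour by iteration count; pixels that never converge all get the extreme colour.
    plt.imshow(image, extent=(-2, 2, -2, 2), origin='lower', cmap='viridis')
    plt.xlabel('first guess')
    plt.ylabel('second guess')
    plt.show()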

Complex Numbers

At this point, it will turn out to be useful to be able to discuss complex numbers. So here's a five-minute introduction to them.

Equations like x^2+1=0 were long supposed to have no solution -- x^2 will be positive whether x is positive or negative, so how could x^2+1 be zero? But then someone suggested defining a number i as that number which, when squared, would give -1. It turns out that, given this modest addition, we can now solve every polynomial equation. We refer to i and its multiples as imaginary numbers.

A complex number is now a number that has both a real and an imaginary part. For example, z=x+iy is a complex number, where x and y are real numbers. Addition of complex numbers is governed by the rule:

(a+ib) + (c+id) = (a+c) + i(b+d)

and multiplication by:

(a+ib) * (c+id) = (ac-bd) + i(ad+bc)
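Python's built-in complex type obeys exactly these two rules, which gives a quick way to check them on a concrete pair of numbers (chosen arbitrarily here):

    a, b = 2.0, 3.0                                   # the complex number 2 + 3i
    c, d = 1.0, -4.0                                  # the complex number 1 - 4i
    z1, z2 = complex(a, b), complex(c, d)

    # Addition: real parts add and imaginary parts add.
    print(z1 + z2, complex(a + c, b + d))             # both print (3-1j)

    # Multiplication: (ac - bd) + i(ad + bc).
    print(z1 * z2, complex(a*c - b*d, a*d + b*c))     # both print (14-5j)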

Our original equation, x^3-1=0, actually has three roots: one real root, x=1, and two complex ones, x = -0.5 + i sqrt(3)/2 and x = -0.5 - i sqrt(3)/2. But so far the secant method has only found the real root. How could we find the complex ones?
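It is easy to check that these really are roots; cubing each of them gives 1, up to rounding error:

    import math

    roots = [complex(1, 0),
             complex(-0.5,  math.sqrt(3) / 2),
             complex(-0.5, -math.sqrt(3) / 2)]
    for r in roots:
        print(r, r**3 - 1)                            # r**3 - 1 is zero to within rounding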

We can again use the secant method, but we will now allow complex numbers as guesses for x. This presents us with a problem: we need the whole slide to represent the real and imaginary components of a single guess; to represent both guesses, we'd need a four-dimensional slide. However, there are some ways round this; we can dissect the four-dimensional object by taking slices through it at different angles. First, we'll say that if our first guess is (x0, y0), then our second guess will be (x0, -y0). For a second slice, we can say that the second guess will be (0, 0), whatever the first guess is. And for a third slice, we'll put the second point very close to the first point.
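In code, each slice is simply a different rule for turning a pixel's coordinates (x0, y0) into the pair of complex starting guesses; a sketch of the three rules follows, with the size of the small offset in the third slice being my own choice.

    def slice_one(x0, y0):
        """First slice: the second guess is the complex conjugate of the first."""
        return complex(x0, y0), complex(x0, -y0)

    def slice_two(x0, y0):
        """Second slice: the second guess is always 0, whatever the first guess is."""
        return complex(x0, y0), complex(0, 0)

    def slice_three(x0, y0, eps=1e-6):
        """Third slice: the second guess sits very close to the first (eps is arbitrary)."""
        z = complex(x0, y0)
        return z, z + eps

The secant iteration itself is unchanged -- the same formula works for complex numbers -- so only the starting pair differs from slice to slice.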

The first slice gives us a mess -- we've cut at an oblique angle to the interesting structure. The second slice, however, reveals a definite pattern, as does the third slice.

When we take the third slice, we're choosing guesses that are separated by a very small interval -- we're sliding the two end-points of the initial secant closer together. In the limit of this process, the secant becomes a tangent, and that turns the secant method into something else: Newton's method. Like the secant method, Newton's method solves a generic equation f(x)=0 by constructing a series of approximations, using the formula x(i+1) = x(i) - f(x(i))/g(x(i)), where g(x) = df(x)/dx is the derivative of f.
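A corresponding sketch of Newton's method for the same equation, with the derivative g written out explicitly; as before, the names, tolerance and iteration cap are my own choices.

    def f(z):
        return z**3 - 1

    def g(z):
        """Derivative of f: d(z^3 - 1)/dz = 3z^2."""
        return 3 * z**2

    def newton(z, tol=1e-10, max_iter=1000):
        """Iterate z -> z - f(z)/g(z); return (root, iterations) or (None, max_iter)."""
        for i in range(max_iter):
            gz = g(z)
            if gz == 0:                               # tangent is horizontal; give up
                return None, max_iter
            z_next = z - f(z) / gz
            if abs(z_next - z) < tol:
                return z_next, i + 1
            z = z_next
        return None, max_iter

    print(newton(complex(-1.0, 1.0)))                 # lands on one of the three cube roots of 1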

If we apply this method to the same equation, x^3-1=0, we obtain a very similar result.

Before leaving Newton's method, I'd like to apply it to one more equation, f(x)=(x+1)(x-1)(x-a), where a varies according to the value of our initial guess; specifically, if our initial guess is b, then a=3b.

We'll solve this equation for various different values of the complex number b. And again we'll display our results as a bitmap, the horizontal and vertical axes corresponding to the real and imaginary parts of b respectively, and each pixel being coloured according to the root that it leads to. In fact, we'll use two colours for each root -- one if Newton's method takes us to the root in an odd number of iterations, another if it takes an even number. And we'll use a seventh colour for those points that don't converge to a root at all.
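A sketch of that colouring scheme: rather than an actual colour table, each pixel is encoded here as an integer from 0 to 6 -- two values per root, according to whether the iteration count is odd or even, and 6 for the points that never converge. The grid resolution and the range of b are my own choices.

    import numpy as np

    def newton_colour(b, tol=1e-8, max_iter=1000):
        """Apply Newton's method to f(x) = (x+1)(x-1)(x-a) with a = 3b, starting at x = b.

        Returns 2*k + parity, where k indexes the root reached and parity is 1 for an
        odd iteration count and 0 for an even one; returns 6 if there is no convergence.
        """
        a = 3 * b
        roots = [-1, 1, a]
        f = lambda x: (x + 1) * (x - 1) * (x - a)
        g = lambda x: (x - 1) * (x - a) + (x + 1) * (x - a) + (x + 1) * (x - 1)  # product rule
        x = b
        for i in range(1, max_iter + 1):
            gx = g(x)
            if gx == 0:
                return 6
            x_next = x - f(x) / gx
            if abs(x_next - x) < tol:
                k = min(range(3), key=lambda j: abs(x_next - roots[j]))   # nearest root
                return 2 * k + (i % 2)
            x = x_next
        return 6                                      # the seventh colour: no convergence

    n = 200
    grid = np.linspace(-2.0, 2.0, n)
    # Horizontal axis: real part of b; vertical axis: imaginary part of b.
    image = np.array([[newton_colour(complex(re, im)) for re in grid] for im in grid])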

This set of slides shows that the set of points which don't converge has a familiar shape -- the Mandelbrot set, which occurs in many iterative non-linear mappings. The Mandelbrot set is infinitely detailed; if you magnify any portion of its boundary, you get an unending series of novel features. So the one megabyte of information presented on this slide doesn't even scratch the surface. The next step in increasing information content is a movie: the following short sequence examines the same equation at gradually increasing levels of magnification.




