The Impossibility of Artificial Intelligence
This is the best summary I can manage of the argument put forward by Dr Bird to show that artificial intelligence, however far it may go, will never approach human intelligence.
We start by considering an AND gate. It is clear that there is no space for intelligence within the confines of an AND gate: for a given input, there is a single possible output. The gate is wholly defined by the mapping that takes the inputs {(0,0), (0,1), (1,0)} to the output (0), and the input (1,1) to the output (1).
We can characterise this as a many-to-one mapping (for inputs (0,0), (0,1) and (1,0)), and a one-to-one mapping (for input (1,1)). Both these types of mapping have the feature that, once the input is specified, only one output is possible. The behaviour of human beings, by contrast, seems to exhibit a one-to-many mapping: the fact that you've behaved a particular way in a given situation before is not enough to allow me to predict with certainty that you'll behave the same way next time.
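(The mapping is small enough to write out in full. The following sketch, in Python, is my own illustration, not Dr Bird's: the gate is nothing but a lookup table, with exactly one output recorded against each input.)

    # The AND gate as a finite mapping: one dictionary, one output per input.
    AND_GATE = {
        (0, 0): 0,
        (0, 1): 0,
        (1, 0): 0,
        (1, 1): 1,
    }

    for inputs, output in AND_GATE.items():
        print(inputs, "->", output)   # three inputs map to 0, one to 1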
Anything whose behaviour can be characterised using only many-to-one and one-to-one maps is an agent governed by necessity, or gbn, that is to say, its behaviour is wholly determined by its environment -- its `situation space'.
The second idea is that of a class closed under a particular operation. For example, the class of positive integers is closed under addition: if you add any number of positive integers, the result will be another positive integer. The class of positive integers is not closed under subtraction -- I can subtract one positive integer from another and come up with a negative integer. The class of all integers is closed under subtraction, but not under division.
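(Non-closure is easily shown by counterexample; the two-line snippet below is mine, not Dr Bird's.)

    # Non-closure, by counterexample.
    print(3 - 5)    # -2: the positive integers are not closed under subtraction
    print(3 / 5)    # 0.6: the integers are not closed under division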
It is Dr Bird's claim that the class of gbn agents is closed under composition; that is to say, if we wire together two AND gates, or n of them, the result will still be a mechanism describable by a many-to-one mapping, and hence will still be wholly incapable of choice, responsibility, or genuine intelligence. And the same goes for any other mechanisms, such as disk drives, tape decks, or robot manipulators, that we wire into the circuit.
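(The claim can be illustrated, though certainly not proved, in a few lines of Python of my own devising: wiring two AND gates together just yields another, larger lookup table, still with exactly one output per input.)

    # Compose two AND gates: out = AND(AND(a, b), c).  Enumerating every
    # input shows the composite is still a single-valued, many-to-one
    # mapping -- an illustration of the closure claim, not a proof.
    def and_gate(a, b):
        return a & b

    composite = {}
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                composite[(a, b, c)] = and_gate(and_gate(a, b), c)

    print(composite)   # eight inputs, one output each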
At this stage, Dr Bird introduces a distinction between two views of intelligence, which he calls the `Lower view' and the `Higher view'. On the `Lower' view, which, says Dr Bird, is the view most widely held today, intelligence is a continuum, ranging from viruses and AND gates on the extreme left to ENSC students and profs on the extreme right. Somewhere between these two extremes, responsibility and choice mysteriously emerge.
Dr Bird contrasts this with the `Higher' view, which holds that there are two entirely separate continua, one for agents governed by necessity, the other for agents not governed by necessity. The former ranges from the less complex to the more complex, but nowhere on it do we find any intelligence or choice. All computers, he claims, lie on this continuum. The latter continuum ranges from less intelligent to more intelligent.
It is important to note that Dr Bird does not claim to know of even a single example [with the possible exception of himself] of an agent that is definitely not governed by necessity. Thus, he does not purport to show that humans have free will, only that computers certainly do not.
A corollary of this argument is that intelligence cannot be determined from behaviour, since there's no way for an observer to tell whether the behaviour is the end result of a causal chain running from an initial stimulus in the environment, each link of which is governed by necessity. Thus Dr Bird rejects the Turing test as a test for intelligence.
In solving large problems, computer scientists sometimes make use of a method called `the Monte Carlo method'. This may be illustrated by a simple example: suppose we wish to calculate a value for pi. We construct a circle of unit radius. If we could measure its area, that would give us the value. We'll suppose, however, that we cannot directly measure the area, though we can tell whether any given point is inside or outside the circle. (This is obviously unrealistic for something as simple as a circle, but is quite reasonable for the regions in multi-dimensional space where this method would actually be used.)
I have a random-number generator, based on measuring the time between successive disintegrations of nuclei in a lump of a radio-isotope. I normalize a series of these random numbers to lie between -1 and 1, and take successive pairs of numbers as the coordinates of points within the square having opposite vertices at (-1,-1) and (1,1). I place a large number of points, and count the fraction that land within the circle. This fraction is an estimate of pi/4.
Applying this method repeatedly would give different answers every time, though if the number of points were large enough, they would all be fairly close to the right answer.
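(For the curious, here is a minimal sketch of the calculation in Python. I've let the language's pseudo-random generator stand in for the radio-isotope timer; the logic is otherwise as described.)

    import random

    def estimate_pi(n_points):
        inside = 0
        for _ in range(n_points):
            x = random.uniform(-1, 1)    # a point in the square with corners
            y = random.uniform(-1, 1)    # (-1, -1) and (1, 1)
            if x * x + y * y <= 1:       # does it land inside the unit circle?
                inside += 1
        return 4 * inside / n_points     # the fraction inside estimates pi/4

    # Repeated runs give different answers, all close to pi for large n.
    for _ in range(3):
        print(estimate_pi(100000))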
This might be a very small step in a much larger calculation. I might ask a grad student to provide an estimate of pi, and not care about how he got it. So in describing the large calculation, it would appear to me that each step followed by mathematical necessity from the previous step. Yet repeating the whole process might give me a different answer.
Now, Dr Bird has anticipated this objection, and his response is to separate the random-number generator from the rest of the system, and to argue that the rest of the system remains an agent of necessity. But this may not be possible. Suppose, for example, that we are considering a neural-net architecture of 1,000,000 neurons, and each neuron makes decisions by saying to itself, ``If the sum of my inputs is greater than pi, I will fire; otherwise not.'' Its estimate of pi is, of course, provided by its own Monte Carlo routine and associated radio-isotope. Dr Bird's argument requires him to take the net apart and consider the programmable part and the random-number generators separately; but then he's no longer studying the architecture I've designed.
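(To make the architecture concrete, here is a rough sketch of one such neuron; the numbers and names are my own, hypothetical, choices.)

    import random

    def noisy_pi(n_points=1000):
        # the neuron's private Monte Carlo estimate of pi
        inside = sum(random.uniform(-1, 1) ** 2 + random.uniform(-1, 1) ** 2 <= 1
                     for _ in range(n_points))
        return 4 * inside / n_points

    def neuron_fires(inputs):
        # "If the sum of my inputs is greater than pi, I will fire; otherwise not."
        return sum(inputs) > noisy_pi()

    # Identical inputs near the threshold can produce different outputs on
    # different occasions: a one-to-many mapping, so far as any observer can tell.
    print(neuron_fires([1.0, 1.0, 1.14]), neuron_fires([1.0, 1.0, 1.14]))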
So for Searle, and for Dr Bird, we can never really know if a creature is intelligent or not. Suppose, for example, that a spaceship lands in the AQ tomorrow and an alien emerges. How are we to tell if the alien is really intelligent? We could try giving him tests or drawing conclusions from the fact that he's the one who's built a spaceship and reached our planet, not vice-versa; but Searle's Chinese Room argument has already ruled these out. So, can we look inside?
Assuming the alien will humour us for a while, we strap him onto an operating table and prepare our X-rays, CAT scans and electron microscopes. But having got him on the table, it strikes us: we haven't the faintest idea what we're looking for. If we look at the alien's brain, we can expect to see matter in motion. But we already know that matter in motion obeys the laws of physics, and the laws of physics are of only two kinds: there are deterministic laws, governing the behaviour of macroscopic matter, and there are probabilistic laws, governing the collapse of quantum wave-functions under measurement. Science knows of no other way for matter to behave. So it seems that we're putting the alien through a test that none of us could pass: for Dr Bird to call him intelligent, his brain has to behave in a way that contradicts the known laws of physics. But we've never seen any system behave in such a way, so why are we looking for it now?
As soon as we return from the world of philosophy to the real world, all these difficulties disappear. We know very well what intelligence is and how to measure it. Dr Bird himself, when assigning grades for a course, will test the understanding of his students to see if they've really understood the material or are just repeating phrases they've memorised from a textbook. And he does this, not by probing their brains, but by asking questions: ``How will the circuit behave if this resistance is increased? What could cause this circuit to go unstable?'' This is the approach we all use in making judgements of intelligence, something we do every day with no difficulty.
Karl Popper has provided a test to determine whether any given statement can be considered a scientific law. The test is this: is there an experiment which could potentially disprove the law? If there is not, then the law is compatible with any state of affairs whatever, and thus has no empirical content. So we must ask Dr Bird, what experiment would you accept as potentially falsifying your statement?
One way of falsifying it would be to produce an agent of necessity which is intelligent. But to do this, we would need some independent test for the presence of intelligence. And this is just what Dr Bird denies is possible. So it would seem that his statement is unfalsifiable, and thus devoid of scientific content. That is, the truth of his statement is compatible with any behaviour we care to specify on the part of an agent of necessity. It provides no ground for denying that a computer can play chess, write good poetry, perform surgery, or compose symphonies. It just refuses to call this behaviour `intelligent'.
Is it possible, then, that Dr Bird's statement is a definition? Definitions cannot be true or false, but they may be useful in clarifying our thinking. And Dr Bird's paper does in places claim that he is just expanding on the `intensional definition' of intelligence. But I would argue that, considered as a definition, the statement has the disadvantages that: i) it implies that to be intelligent is to be illogical; ii) it implies that to be intelligent is to be unprincipled and inconsistent; and iii) it implies that we can never know whether a given person is intelligent or not. These disadvantages make the statement virtually useless as a definition, and, moreover, suggest that it cannot possibly be a paraphrase of the way we use the word in everyday speech.
To think logically is to have one's ideas follow, one from another, in accordance with the laws of logic. The very argument which Dr Bird is offering us is an example of these laws: if we accept the two premises, we are compelled to accept the conclusion. But we usually consider, contrary to what Dr Bird would have us believe, that the more intelligent a person is, the more logically they think. And a computer is just a system which we have designed so that its physical necessities model the necessities of logic.
Dr Bird also claims that the hallmark of intelligence is the capacity to act in different ways under identical circumstances. This does not seem to accord with common wisdom. For example, suppose every time I offer this course, a student offers me $100 for a higher grade. Am I really acting more intelligently if I sometimes say yes and sometimes no?
There's another way the alien can convince us of his intelligence; all he has to do is to demonstrate a one-to-many mapping from his input space to his output space. Of course, before he can do this, we must present him with the identical situation on at least two occasions, ensuring that he doesn't know the time and that he doesn't remember the first situation when the second one is presented. This isn't an easy experiment to perform; in fact, it's impossible. But even if we could somehow perform it, and observed different behaviour on the two occasions, how could we tell it was prompted by intelligence, and not, say, some internal random number generator? And if the first response seemed appropriate to the situation, why should we consider it a mark of intelligence to act differently the next time round?
There is one final shred of hope: Dr Bird says that to be intelligent is to be accountable. This could get us somewhere, if we had a test for accountability. Do we? Of course not. We have opinions about accountability; for example, a century ago many people believed that women and black people could not be held accountable. That was why they weren't allowed to vote. But if we ask for an objective measure of accountability, something that could give us a reading on an accountability meter, we have no hope of such a thing. So, in the end, Dr Bird has told us that intelligence is something that we can neither recognise nor measure, and defined it in terms of another something, accountability, which we can likewise neither define nor measure. This does not strike me as a promising foundation for scientific analysis of the question.