

Design: The Unteachable Part of Engineering

We have now surveyed the greater part of what's formally taught in the School of Engineering: communications, control, robotics, thermodynamics and information. The courses taught in these areas give you the power to analyse most existing devices, to predict and explain their behaviour in terms of underlying physical law. However, the analysis of an existing device is normally described as `reverse engineering'; the normal task of an engineer runs in the other direction, starting from a description of the desired behaviour and ending with a specification of a device that will produce it. That is, the normal task of an engineer is synthesis, not analysis.

But we don't usually teach synthesis in the engineering curriculum. This isn't because we're no good at it; at least some of our faculty are prolific inventors. But although we can do it, we don't know how we do it. In general, no-one knows how to do it. There is no algorithm for design.

We have tried to deal with this through project courses. Another approach would be to seek out the principles that underlie good design and teach those. This search has intensified in recent years, in the hope of achieving computer-aided design. As you may know, computer salespersons have been talking about CAD/CAM for more than a decade now. But it still doesn't exist. What does exist is computer-aided drafting, as in AutoCAD. This speeds up the designer's work, but it doesn't replace the designer. We can consider a series of improvements in the level of help a design system can provide:

1. Drafting: recording and editing a design the engineer has already worked out.
2. Analysis: predicting the behaviour of a proposed design.
3. Detail design: choosing the numerical values of the parameters of a fixed configuration.
4. Conceptual design: generating the configuration itself from the functional requirements.

Only the last of these improvements involves synthesis, and very little progress has been made in this direction. But research continues, motivated by the fact that most of the cost of a product is committed in the first stages of design, even though it is spent later; a mistake at this stage can make the project impossible, or very expensive, to complete. One example would be the concept that obsessed the first designers of flying machines: that an aeroplane should fly by flapping its wings. In the later stages of design, incremental improvements can increase efficiency or reduce costs by a few percent. Yet almost all the help provided by modern CAD is at the level of detail design. If we could automate conceptual design, we could potentially realise much greater savings.

At the very early stages of design, a rational approach might be to conceive four or five different approaches to the problem, develop them in parallel, compare their strengths and weaknesses, and eventually pick a winner for further development, perhaps combining the features of several candidates. For example, a designer of flying machines might try approaches based on the bird, the flying fox, the bee, and the flying fish. In 1987, a group of researchers at Oregon State University observed expert designers at work on a trial design problem -- to develop a mechanism for dipping the two ends of a rectangular bar into a chemical solution. The designers were asked to `talk their way through' the design process. None of them followed the rational approach. Each of them selected a single alternative at the very beginning and stuck to it single-mindedly. If the initial choice led to problems, they'd look for the minimum adjustment that would fix the problem, rather than re-examine the initial choice.
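Had they followed the rational approach, the comparison stage is at least easy to make concrete: score each candidate concept against weighted criteria and rank the totals. Here is a minimal sketch in Python; the criteria, weights and scores are all invented for illustration:

```python
# A minimal decision-matrix sketch for comparing design concepts.
# Concepts, criteria, weights and scores are all invented.

weights = {"lift": 0.4, "efficiency": 0.3, "controllability": 0.3}

# Each concept is scored 1-10 against each criterion.
concepts = {
    "bird":        {"lift": 7, "efficiency": 6, "controllability": 8},
    "flying fox":  {"lift": 6, "efficiency": 5, "controllability": 7},
    "bee":         {"lift": 4, "efficiency": 3, "controllability": 9},
    "flying fish": {"lift": 3, "efficiency": 8, "controllability": 4},
}

def weighted_score(scores: dict) -> float:
    """Weighted sum of one concept's scores over all criteria."""
    return sum(weights[c] * s for c, s in scores.items())

# Rank the candidates; the winner goes forward to detailed design.
for name in sorted(concepts, key=lambda n: -weighted_score(concepts[n])):
    print(f"{name:12s} {weighted_score(concepts[name]):.2f}")
```

A real comparison would, of course, revisit the scores as the concepts are developed in parallel.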

Why is this problem so hard? We'll examine some theories of what design is, and how it can be done. First, consider some examples: you're doing design when you write a thesis; when you develop a new car; when you choreograph a new dance; when you fix a wobbly table; when you design a building.

Conceptual design plays a part in all these activities, though it's more important in some than in others. Dr Chandrasekaran at Ohio State University has suggested a three-part classification for design problems, according to the relative roles of conceptual and detail design. In a Class 1 design problem the design requirements are ill-defined, no accepted solution method exists, and success will lead to a major new invention. Examples of Class 1 design are rather rare; the Manhattan Project would be one. In Class 2 design, the basic structure of the artefact to be designed is known, but some major components are novel: for example, the RISC-chip-based computer. Lastly, in Class 3 design there are well-defined procedures for working through each part of the problem, and the criteria for success are clear and quantitative.

According to Chandrasekaran, only Class 3 problems can be successfully automated at present. Yet we know that human beings can do Class 2 and Class 1 designs. How do they do it?

Let us first establish some terminology. Design begins with a need. The need can be stated as a number of functional requirements, that is, a list of conditions that the artefact will have to satisfy. In general, such a list will be neither complete nor final, so it may be updated during the design process.

The artefact that satisfies the functional requirements can be described by the numerical values of a list of properties; when each item on this list has a value, the artefact has been fully designed and the list can be passed to manufacturing. We call the items on this list the design parameters. The values of the design parameters are the numbers that appear on the final blueprints. In general, we don't know what's on the list of design parameters until we've finished the conceptual and configuration stages of design.

In addition, a design task may be characterised by a number of constraints. Unlike a design requirement, a constraint may often be stated as an inequality; for example, ``The product must cost less than $5.00.'' Some constraints are imposed by the laws of physics, some by economics, aesthetics, or safety.
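To make the terminology concrete, here is a minimal sketch in Python of how a design task might be recorded; the bar-dipping problem and the $5.00 ceiling come from above, but all other names and values are invented:

```python
from dataclasses import dataclass, field

# Hypothetical record of a design task, using the terminology above.

@dataclass
class DesignTask:
    functional_requirements: list[str]               # what the artefact must do
    design_parameters: dict[str, float] = field(default_factory=dict)
    constraints: list = field(default_factory=list)  # inequality tests

    def feasible(self) -> bool:
        """True if the current parameter values violate no constraint."""
        return all(test(self.design_parameters) for test in self.constraints)

task = DesignTask(
    functional_requirements=["dip both ends of the bar into the solution"],
    design_parameters={"unit_cost_dollars": 4.50},
)
task.constraints.append(lambda p: p["unit_cost_dollars"] < 5.00)
print(task.feasible())   # True: $4.50 is under the $5.00 ceiling
```

Note that the list of design parameters starts out nearly empty; as the passage above says, we only discover what belongs on it as the design proceeds.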

Christopher Alexander, an architect, addressed this problem in Notes on the Synthesis of Form, 1964. His advice to the conceptual designer was, ``Decompose the requirements into a hierarchy of nearly independent subsystems.''

Alexander's approach is based on the observation that human beings aren't good at holding many things in mind at once; according to George Miller, a psychologist who investigated this in the 1950s, we can hold only about seven items in mind at a time. So our only hope of dealing with a complex problem is to decompose it into simpler parts, and solve the parts one at a time. That's why the subsystems have to be nearly independent. Of course, whether two requirements are independent of each other depends partly on how we're planning to satisfy them. Alexander's advice is: always choose a method of satisfying the requirements that keeps them independent of each other.

An illustration of this can be taken from refrigerator design: a refrigerator has the dual requirements of keeping food cold and providing easy access to the food. If we choose a vertically hung door for the fridge, these two requirements will be in conflict, whereas the horizontally hung door of a chest-type freezer allows both to be satisfied independently. (This example is actually taken from a later writer, Dr Nam Suh at M.I.T., who published ``The Principles of Design'' in 1990.) Suh's first principle is essentially a re-statement of Alexander's advice:

``Divide the design parameters into non-overlapping sets; each set should correspond to a single functional requirement.''

Before setting the numerical values of any variable in the design, we can evaluate proposed solutions by seeing how successful they are in allowing us to decompose the functional requirements. Given a proposed configuration, the functional requirements and the design parameters are linked by the laws of physics. Of course, once the functional requirements have been decomposed, we can repeat the decomposition process to create a hierarchy:

[Figure hier.eps: the hierarchy of decomposed functional requirements]

[Figure bucket.eps]
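In Suh's book, this linkage is recorded as a design matrix, in which entry (i, j) is non-zero when functional requirement i is affected by design parameter j. A diagonal matrix is an uncoupled design (each requirement has its own parameter), a triangular one is decoupled (satisfiable if the parameters are fixed in the right order), and anything else is coupled. A minimal sketch in Python, with matrices invented for illustration and loosely modelled on the refrigerator example:

```python
import numpy as np

# Classify a design matrix A: A[i, j] != 0 means functional requirement i
# is affected by design parameter j. (Matrices below are invented.)

def classify(A: np.ndarray) -> str:
    if not (A - np.diag(np.diag(A))).any():
        return "uncoupled"   # each FR depends only on its own DP
    if not np.triu(A, 1).any() or not np.tril(A, -1).any():
        return "decoupled"   # triangular: fix the DPs in the right order
    return "coupled"         # FRs cannot be adjusted independently

# Hypothetical coupled design, loosely the vertical-door fridge:
# both parameters affect both requirements.
fridge = np.array([[1, 1],
                   [1, 1]])
# Hypothetical uncoupled design, loosely the chest freezer:
# one parameter per requirement.
freezer = np.array([[1, 0],
                    [0, 1]])
print(classify(fridge), classify(freezer))   # coupled uncoupled
```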

If this process is successful, the structure of the artefact will have a hierarchical decomposition parallel to the decomposition of the functional requirements; each aspect of the artefact's structure will satisfy exactly one functional requirement. The decomposition process stops when we arrive at components of the artefact which do not themselves have parts, but which can be characterised by the values of a small number of parameters. The values of these parameters can often be found by a process of constrained optimization. Techniques for linear and non-linear optimization can be found in the field of Operations Research (which we will discuss briefly next week).
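As a sketch of that last step, suppose one leaf component is the shell of the bucket in the figure above, characterised by just a radius and a height. The code below picks the parameter values by constrained optimization; it assumes scipy is available, and the 10-litre capacity constraint and sheet-material objective are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical leaf-level parameter fitting: choose the radius r and
# height h (cm) of an open cylindrical bucket shell to minimize the
# sheet material used, subject to holding at least 10 litres.

def material(x):
    r, h = x
    return np.pi * r**2 + 2 * np.pi * r * h      # base + wall area, cm^2

def volume_margin(x):
    r, h = x
    return np.pi * r**2 * h - 10_000             # >= 0 means >= 10 litres

result = minimize(material, x0=[10.0, 30.0], method="SLSQP",
                  bounds=[(1, 100), (1, 100)],
                  constraints=[{"type": "ineq", "fun": volume_margin}])
r, h = result.x
print(f"r = {r:.1f} cm, h = {h:.1f} cm, area = {material(result.x):.0f} cm^2")
```

The optimizer settles near r = h = 14.7 cm, the familiar result that an open cylinder of fixed volume uses the least material when its height equals its radius.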

The best example of this hierarchical decomposition may be writing software. Unless we have unusually stringent limitations on the size of the code, or its speed, we can always make different parts of the code independent of each other. The power of hierarchical decomposition is explicitly recognised in the rise of object-oriented programming, where the individual objects correspond to the lowest levels of the hierarchy. Electronic design is slightly harder, but it is still often possible to ensure that separate requirements are satisfied by separate circuit components, with the interaction between those components minimized. The hardest design problems are those found in mechanical design or architecture, where the artefact has to be visualised in three dimensions.
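As a toy illustration of that correspondence (all names invented, echoing the bucket example), each class below answers to exactly one functional requirement, and the top-level object simply composes them:

```python
# A toy one-requirement-per-object decomposition (all names invented).

class Shell:
    """Functional requirement: contain ten litres of liquid."""
    def __init__(self, radius_cm: float, height_cm: float):
        self.radius_cm = radius_cm
        self.height_cm = height_cm

class Handle:
    """Functional requirement: let the bucket be carried in one hand."""
    def __init__(self, grip_width_cm: float):
        self.grip_width_cm = grip_width_cm

class Bucket:
    """Top of the hierarchy: one component per functional requirement."""
    def __init__(self):
        self.shell = Shell(radius_cm=14.7, height_cm=14.7)
        self.handle = Handle(grip_width_cm=10.0)
```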

Suh has two principles of design. The second principle is rather counter-intuitive: ``Minimize the information content of the design''. This is the same concept of information as we discussed two weeks ago: Suh measures the information content of a dimension L specified to within a tolerance dL by the ratio L/dL, the number of distinguishable values, which on the Shannon view corresponds to log2(L/dL) bits. The general implication of this axiom is that one should avoid specifying unnecessarily precise tolerances. (My own view is that, while this is a reasonable guideline, it is not clear in all cases how it should be applied, or why it should be considered more important than, say, minimizing the cost of manufacture.)
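To see the numbers, here is a small Python sketch; the part and its tolerance table are invented for illustration. Each dimension contributes log2(L/dL) bits, so loosening any tolerance reduces the total:

```python
from math import log2

# Hypothetical tolerance table: nominal dimension L and tolerance dL,
# both in millimetres. Information per dimension: log2(L / dL) bits.

dimensions = {
    "shaft length":  (100.0, 0.1),
    "bore diameter": (20.0, 0.01),
    "flange width":  (50.0, 1.0),
}

total = 0.0
for name, (L, dL) in dimensions.items():
    bits = log2(L / dL)
    total += bits
    print(f"{name:14s} {bits:5.1f} bits")
print(f"{'total':14s} {total:5.1f} bits")

# Loosening the bore tolerance to 0.1 mm would remove about 3.3 bits;
# Suh's second principle says prefer the looser tolerance whenever the
# functional requirements permit it.
```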

The `design decomposition' approach gives us no guide as to how the requirements for a design should be decomposed, or what kind of artefact might satisfy them. Chandrasekaran, and Dr John Dixon at Amherst, suggest that designers usually solve this by recognising a new problem as a variant of something they've tackled before, re-using the same plan of attack and adapting it to the changed requirements. Thus the challenge of creating something original is replaced by the easier task of modifying an existing, not-quite-right solution. Although this activity seems less admirable than pure invention, it is faster, and probably much more common in practice. Admittedly, it may not yield the optimum solution. However, Herbert Simon has suggested that the search for a true optimum is impractical for all but a small class of artificially restricted problems; in the real world, he suggests, we search for solutions that are satisfactory rather than optimal (he calls this `satisficing').

Design decomposition has been criticised by other researchers, notably Boothroyd at the University of Massachusetts, and Ulrich and Seering at MIT. They offer a reductio ad absurdum of decomposition, illustrated by a nail-clipper in which each conceivable functional requirement is realised by an independent part of the device. The resultant clipper weighs 18 kg and costs $80. This leads them to suggest an additional stage in the design process, in which the individual elements of the design are integrated into a simpler whole.