
Gerald M. Edelman.
Bright Air, Brilliant Fire.
Penguin. 1992

rating : 3.5 : worth reading
review : 15 August 1998

Philosophers make a big deal out of the intentionality aspect of consciousness, how it is that:

Beings with minds can refer to other beings or things; things without minds do not refer to beings or other things.

Here, Edelman summarises his own theory of mind, consciousness and self-consciousness, as laid out in more technical detail in his earlier trilogy. He starts from a biological perspective, from the staggering complexity of the brain, and notes that:

Biological organisms (specifically animals) are the beings that seem to have minds. So it is natural to make the assumption that a particular kind of biological organization gives rise to mental processes.

When I first read that passage, I felt it was not a natural assumption at all. After all, up until relatively recently, we could just as easily have said "Biological organisms (specifically birds) are the beings that seem to have wings. So it is natural to make the assumption that a particular kind of biological organization gives rise to flight."

However, by "biology" Edelman means: the sheer staggering complexity of biological systems such as the brain; the consideration of the historical component of individuals' development (both the evolution of the species, and the idiosyncratic growth of each individual); and embodiment, or how the mind evolves and grows in interaction with an equally staggeringly complex, open-ended environment containing other minds. He does not seem to mean "wetware is intrinsically different".

He goes into quite a bit of detail about the biology of the brain as currently understood, and then describes his model of mind and consciousness. In this model, the self-conscious mind has various evolutionarily determined value systems, the categorization of inputs, and language centres, with feedback loops between them. He claims that language is necessary for self-consciousness. (I don't understand why he feels it originally had to be a spoken one.)

All that is needed for consciousness in this model is known physics and biology. By known physics he means no strange 'conscious particles', no Penrosian quantum gravity, no 'spooks'. He states he is not a "carbon chauvinist" either: he admits we might one day be able to build self-conscious artifacts, although not in the near future, because of the difficulty of the task. (And so he argues that we don't need to worry yet if such a thing would be ethical. Personally, I'd rather argue the ethics before the deed...)

So far, so eminently reasonable. But Edelman seems to have a real hang-up about computers, constantly insisting that minds are intrinsically different from computers, in several passages such as:

Decisions in such systems are based on the statistics of signal correlations. Notice the contrast with computers; these changes occur within a selectional system rather than depending on the carriage of coded messages in a process of instruction.

And yet, only two paragraphs after this dismissal, we get:

My colleagues and I have simulated complex automata based on the [theory of neuronal group selection] in supercomputers to demonstrate that perceptual categorization can be carried out on value in a global mapping.

So, minds aren't computers, but we can use computers to simulate minds? Why is this 'simulation' not a mind? (Its external behaviour certainly gives ample appearance of mind-like characteristics.) The reason for this self-contradiction seems to lie in his understanding of simulation:

... no effective procedure simpler than the simulation itself can predict the outcome.

Possibly true, but certainly irrelevant. So what if there is no "effective procedure simpler than the simulation itself"? No-one is saying you have to be able to predict the behaviour of a program without running it, or by running a simpler one.
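
To illustrate that point (a toy sketch of my own, nothing from the book): Rule 110, a one-dimensional cellular automaton, is a standard example of a system believed to have no predictive shortcut simpler than the simulation itself; finding out what it does is simply a matter of running it.

    # Rule 110: each cell's next state depends on its three neighbours.
    # There is thought to be no shortcut simpler than running it, yet
    # running it is exactly what programs are for.
    RULE110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
               (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

    def step(cells):
        # One synchronous update, wrapping around at the edges.
        n = len(cells)
        return [RULE110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                for i in range(n)]

    cells = [0] * 40 + [1]      # start with a single live cell
    for _ in range(20):         # the 'prediction' just is the run
        print(''.join('#' if c else '.' for c in cells))
        cells = step(cells)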

Edelman provides a Postscript with a more detailed 'rebuttal' of the mind-as-computer view. It contains arguments such as:

The facile analogy with digital computers breaks down for several reasons. The tape read by a Turing machine is marked unambiguously with symbols chosen from a finite set; in contrast, the sensory signals available to nervous systems are truly analogue in nature and therefore are neither unambiguous nor finite in number. Turing machines have by definition a finite number of internal states, while there are no apparent limits on the number of states the human nervous system can assume. ... The transitions of Turing machines between states are entirely deterministic, while those in humans give ample appearance of indeterminacy.

This passage is replete with confusions. Choosing symbols from a finite set doesn't mean that there are only finitely many combinations of those symbols (binary notation uses only two digits, but can represent rather more numbers). The output on my computer screen is controlled digitally, but high-resolution pictures displayed on it can be made to appear sufficiently analogue to me. Finite doesn't mean limited or bounded; deterministic doesn't mean predictable. And what about those weasel words: "no apparent limits", "ample appearance of indeterminacy"?
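
Two of those points are easy to demonstrate with toy sketches (mine, not Edelman's): a two-symbol alphabet encodes unboundedly many values, and the logistic map is a one-line deterministic rule whose behaviour nevertheless defeats practical prediction.

    # (a) A finite symbol set does not mean finitely many representable
    #     values: two digits, 0 and 1, encode every natural number.
    for n in (5, 255, 10**30):
        print(n, '->', bin(n))

    # (b) Deterministic does not mean predictable: two almost identical
    #     starting points for the logistic map diverge completely.
    def logistic(x, steps, r=4.0):
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    print(logistic(0.2000000, 50))
    print(logistic(0.2000001, 50))   # a change in the 7th decimal place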

So I think that, although Edelman understands the biology superbly, and would never dream of confusing levels with, say, chemistry ["those carbon atoms, so inflexible, each the same as the other, always wanting precisely four bonds -- there's no way you could ever build a conscious mind out of them"], he needs to take the same care not to confuse levels when it comes to software. But Dennett and Hofstadter are much better at rebutting this sort of position than I am.

Gerald M. Edelman.
Neural Darwinism: the theory of neuronal group selection.
OUP. 1987