Books : reviews

Andy Clark.
Microcognition: philosophy, cognitive science, and parallel distributed processing.
MIT Press. 1989

Andy Clark.
Associative Engines: connectionism, concepts, and representational change.
MIT Press. 1993

Andy Clark.
Being There: putting brain, body and world together again.
OUP. 1997

rating : 2.5 : great stuff
review : 11 October 2004

Clark provides an excellent, thought-provoking, and immaculately argued account of embodiment: how intelligence is intimately linked with our active perception of, and action in, the physical world, forming a closely coupled temporal loop of "continuous reciprocal causation". Thinking is best understood as a temporal process both highly constrained by our interactions with the environment, and very much helped by the way the environment plays a part in orchestrating our behaviours, and by the "stigmergic" way we manipulate and modify that environment (including other people, and the use of language). Clark shows how the extreme symbolic representationalists and the extreme non-representationalists are thus both wrong: the truth (as always) lies somewhere between the extremes.

Clark builds up to the argument that language is our ultimate stigmergic artefact, allowing us to do complex things, and that written language allows us to do even more complex things, by enabling us to think about our thoughts. This leads to the obvious question: what then might be the next level of complexity, and the next, that we invent and use to manipulate our environment? The computer springs to (my!) mind. Maybe virtual reality is the next step? Or maybe not. Given Clark's emphasis on the importance of embodiment in a rich environment, (current) virtual reality may well prove to be too impoverished. Maybe making the rich physical world "smarter" (embedding computers in the world, rather than the world in computers) is therefore the way to go.

One aspect I found particularly intriguing is the idea that sensors deliberately provide a filter on the world, cutting out extraneous information, and gathering only what is needed. So "better" sensors might not be a good thing. The physical body is important, because the system can use the natural physical properties of its body (such as damping) to help control its actuators. And the complexity of the environment and the feedback from it changing in response to the system's actions is a crucial contribution to the complexity of the system's own behaviours.
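
[A toy illustration of this filtering idea (my own sketch, not from the book): a raw sensor passes every tiny perturbation straight to the controller, while a damped sensor, simulated here with simple exponential smoothing standing in for the physical damping Clark describes, soaks up the behaviourally insignificant variation.]

```python
def smooth(readings, alpha=0.2):
    """Exponentially smoothed sensor: each output mixes only a little of
    the new reading into the running estimate, damping small fluctuations
    (a software stand-in for the 'free' filtering of physical components)."""
    out = []
    estimate = readings[0]
    for r in readings:
        estimate = (1 - alpha) * estimate + alpha * r
        out.append(estimate)
    return out

# A steady underlying signal with small jitter: the raw readings
# fluctuate, the filtered ones stay close to the underlying value,
# so a controller driven by them does not overreact.
raw = [10.0, 10.4, 9.7, 10.3, 9.8, 10.2, 9.9, 10.1]
filtered = smooth(raw)
spread = lambda xs: max(xs) - min(xs)
print(spread(raw), spread(filtered))
```

[The point is that the filtered spread is much smaller than the raw spread: the "worse" sensor yields the simpler, more robust interaction.]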

I provide a few key quotations below, but I feel that I want to quote essentially the whole work! The book is a closely-argued whole, and deserves to be read as such.

pp24-25. Von Uexkull introduces the idea of the Umwelt, defined as the set of environmental features to which a given type of animal is sensitized. ... Von Uexkull's vision is thus of different animals inhabiting different effective environments. The effective environment is defined by the parameters that matter to an animal with a specific lifestyle. The overarching gross environment is, of course, the physical world in its full glory and intricacy. ... Biological cognition is highly selective, and it can sensitize an organism to whatever (often simple) parameters reliably specify states of affairs that matter to the specific life form. ... It is a natural and challenging extension of this idea to wonder whether the humanly perceived world is similarly biased and constrained. Our third moral claims that it is, and in even more dramatic ways than daily experience suggests.

p51. The immediate products of much of perception ... are not neutral descriptions of the world so much as activity-bound specifications of potential modes of action and intervention. Nor are these specifications system-neutral. Instead ... they are likely to be tailored in ways that simply assume, as unrepresented backdrop, the intrinsic bodily dynamics of specific agents.

pp60-1. we are generally better at Frisbee than at logic. Nonetheless, we are also able ... to engage in long-term planning and to carry out sequential reasoning. If we are at root associative pattern-recognition devices, how is this possible? One [factor is] ... external scaffolding. ... The combination of basic pattern-completing abilities and complex, well-structured environments may thus enable us to haul ourselves up by our own computational bootstraps.

p73. The biological brain, which parasitizes the external world ... so as to augment its problem-solving capacities, does not draw the line at inorganic extensions. Instead, the collective properties of groups of individual agents determine crucial aspects of our adaptive success.

p96. If robot behaviour depends closely on sensor readings, highly sensitive devices can become overresponsive to small perturbations caused by relatively insignificant environmental changes, or even by the operation of the sensor itself. Increased resolution is thus not always a good thing. By using less accurate components, it is possible to design robots in which properties of the physical device ... act so as to damp down responses and hence avoid undesirable variations and fluctuations. ... it may even be misleading to think of the sensors as measuring devices---rather, we should see them as filters whose role is, in part, to soak up behaviorally insignificant variations so as to yield systems able to maintain simple and robust interactions with their environment. Real physical components ... often provide much of this filtering or sponge-like capacity "for free" .... Simulation-based work is thus in danger of missing cheap solutions to important problems by failing to recognize the stabilizing role of gross physical properties ....
     Another problem with a pure simulation-based approach is the strong tendency to oversimplify the simulated environment .... This furthers the deeply misguided vision of the environment as little more than the stage that sets up a certain problem. ... the environment [is] a rich and active resource---a partner in the production of adaptive behavior. Related worries include the relative poverty of the simulated physics (which usually fails to include crucial real-world parameters, such as friction and weight), the hallucination of perfect information flow between "world" and sensors, and the hallucination of perfectly engineered and uniform components (e.g., the use of identical bodies for all individuals in most evolutionary scenarios). .... Simulation offers at best an impoverished version of the real-world arena, and a version impoverished in some dangerous ways: ways that threaten to distort our image of the operation of the agents by obscuring the contributions of environmental features and of real physical bodies.

p112. a phenomenon is emergent if it is best understood by attention to the changing values of a collective variable. ...
• A collective variable is a variable that tracks a pattern resulting from the interactions among multiple elements in a system ...
• Different degrees of emergence can now be identified according to the complexity of the interactions involved. Multiple, nonlinear, temporally asynchronous interactions yield the strongest forms of emergence; systems that exhibit only simple linear interactions with very limited feedback do not generally require understanding in terms of collective variables and emergent properties at all.
• Phenomena may be emergent even if they are under the control of some simple parameter, just so long as the role of the parameter is merely to lead the system through a sequence of states themselves best described by appeal to a collective variable ...
• Emergence ... is linked to the notion of what variables figure in a good explanation of the system. ... it does not depend on the vagaries of individual expectations about system behaviour.
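
[The notion of a collective variable can be made concrete with a small simulation (my own sketch, not from the book, using the standard Kuramoto model of coupled oscillators): the order parameter below is a collective variable tracking the synchrony that emerges from many nonlinear pairwise interactions, while belonging to no single oscillator.]

```python
import cmath
import math
import random

def step(phases, natural, coupling, dt=0.05):
    """One Euler step of the Kuramoto model: each oscillator runs at its
    own natural frequency, nudged toward the phases of all the others
    (multiple, nonlinear interactions)."""
    n = len(phases)
    return [
        (p + dt * (w + coupling / n * sum(math.sin(q - p) for q in phases)))
        % (2 * math.pi)
        for p, w in zip(phases, natural)
    ]

def order_parameter(phases):
    """Collective variable: magnitude of the mean phase vector.
    Near 0 = incoherent; near 1 = synchronised."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

random.seed(0)
n = 30
phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
natural = [random.gauss(1.0, 0.1) for _ in range(n)]

before = order_parameter(phases)
for _ in range(400):
    phases = step(phases, natural, coupling=2.0)
after = order_parameter(phases)
print(round(before, 2), round(after, 2))
```

[Starting from scattered phases the order parameter climbs towards 1: the synchrony is best described by the changing value of this one collective variable, not by any list of individual phases.]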

p114. As the complexities of interaction between parts increases, the explanatory burden increasingly falls not on the parts but on their organization.

p120. these "pure" [dynamical systems] models do not speak directly to the interests of the engineer. The engineer wants to know how to build systems that would exhibit mind-like properties, and, in particular, how the overall dynamics so nicely displayed by the pure accounts actually arise as a result of the microdynamics of various components and subsystems. ... he or she will not think such [dynamical] stories sufficient to constitute an understanding of how the system works, because they are pitched at such a distance from facts concerning the capacities of familiar and well-understood physical components. ... there will be multiple ways of implementing the dynamics described, some of which may even divide subtasks differently among body, brain, and world. The complaint is ... commanding a good pure dynamical characterization of the system falls too far short of possessing a recipe for building a system that would exhibit the behaviors concerned.

p140. [adaptive autonomous agents] researchers propose modules that interface via very simple messages whose content rarely exceeds signals for activation, suppression, or inhibition. As a result, there is no need for modules to share any representational format---each may encode information in highly proprietary and task-specific ways .... This vision of decentralized control and multiple representational formats is both biologically realistic and computationally attractive. But it is ... fully compatible both with some degree of internal modular decomposition and with the use of information-processing styles of (partial) explanation.

p156. Partial programs would ... share the logical character of most genes: they would fall short of constituting a full blueprint of the final product, and would cede many decisions to local environmental conditions and processes. Nonetheless, they would continue to constitute isolable factors which, in a natural setting, often make a "typical and important difference."

p156. Consider the very idea of a program for doing such and such.... The most basic image here is the image of a recipe---a set of instructions which, if faithfully followed, will solve the problem. What is the difference between a recipe and a force which, if applied, has a certain result? Take, for example, the heat applied to a pan of oil: the heat will, at some critical value, cause the emergence of swirls, eddies, and convection rolls in the oil. Is the heat (at critical value) a program for the creation of these effects? Is it a recipe for swirls, eddies, and convection rolls? Surely not---it is just a force applied to a physical system. The contrast is obvious, yet it is surprisingly hard to give a principled account of the difference. Where should we look to find the differences that make the difference?

p158. it is a program that will yield success only if there is a specific backdrop of bodily dynamics (mass of arm, spring of muscles) and environmental features (force of gravity). It is usefully seen as a program to the extent that it nonetheless specifies reaching motions in a kind of neural vocabulary. The less detailed the specification required (the more work is being done by the intrinsic-long-term or temporary dynamics of the system), the less we need treat it as a program. We thus confront not a dichotomy between programmed and unprogrammed solutions so much as a continuum in which solutions can be more or less programmed according to the degree to which some desired result depends on a series of moves (either logical or physical) that require actual specification rather than mere prompting.

pp159-60. It is, alas, one of the scandals of cognitive science that after all these years the very idea of computation remains poorly understood. ... we would find computation whenever we found a mechanistically governed transition between representations, irrespective of whether those representations participate in a specification scheme that is sufficiently detailed to count as a stored program. In addition, this relatively liberal notion of computation allows easily for a variety of styles of computation spanning both digital computation (defined over discrete states) and analog computation (defined over continuous quantities). On this account, the burden of showing that a system is computational reduces to the task of showing that it is engaged in the automated processing and transformation of information.

p162. it surely remains both natural and informative to depict the oscillator as a device whose adaptive role is to represent the temporal dynamics of some external system or of specific external events. The temporal features of external processes and events are, after all, every bit as real as colors, weights, orientations, and all the more familiar targets of neural encodings. It is, nonetheless, especially clear in this case that the kind of representation involved differs from standard conceptions: the vehicle of representation is a process, with intrinsic temporal properties. It is not an arbitrary vector or symbol structure, and it does not form part of a quasi-linguistic system of encodings.

p164. The question, however, must be whether certain target phenomena are best explained by granting a kind of special status to one component (the brain) and treating the other as merely a source of inputs and a space for outputs. In cases where the target behavior involves continuous reciprocal causation between the components, such a strategy seems ill motivated. In such cases, we do not, I concede, confront a single undifferentiated system. But the target phenomenon is an emergent property of the coupling of the two (perfectly real) components, and should not be "assigned" to either alone.

p186. Much of what goes on in the complex world of humans may thus, somewhat surprisingly, be understood as involving something rather akin to ... "stigmergic algorithms" ... Stigmergy ... involves the use of external structures to control, prompt, and coordinate individual actions. Such external structures can themselves be acted upon and thus mold future behaviors in turn. ... the computational nature of individual cognition is not ideally suited to the negotiation of certain types of complex domains. In these cases, it would seem, we solve the problem ... only indirectly---by creating larger external structures, both physical and social, which can then prompt and coordinate a long sequence of individually tractable episodes of problem solving, preserving and transmitting partial solutions along the way.

p193. Public language is in many ways the ultimate artifact. Not only does it confer on us added powers of communication; it also enables us to reshape a variety of difficult but important tasks into formats better suited to the basic computational capacities of the human brain.

p195. [when performing complex tasks] the role of language is to guide and shape our own behavior---it is a tool for structuring and controlling action, not merely a medium of information transfer between agents.

p210. The emergence of such second-order cognitive dynamics is plausibly seen as one root of the veritable explosion of types and varieties of external scaffolding structures in human cultural evolution. It is because we can think about our own thinking that we can actively structure our world in ways designed to promote, support, and extend our own cognitive achievements. This process also feeds itself, as when the arrival of written text and notation allowed us to begin to fix ever more complex and extended sequences of thought and reason as objects for further scrutiny and attention.

p212. Suppose ... that language is ... an artifact that has in part evolved so as to be easily acquired and used by beings like us. It may, for instance, exhibit types of phonetic or grammatical structure that exploit particular natural biases of the human brain and perceptual system. If that were the case, it would look for all the world as if our brains were especially adapted to acquire natural language, but in fact it would be natural language that was especially adapted so as to be acquired by us, cognitive warts and all.

p217. Thoughts, considered only as snapshots of our conscious mental activity, are fully explained, I am willing to say, by the current state of the brain. But the flow of reason and thoughts, and the temporal evolution of ideas and attitudes, are determined and explained by the intimate, complex, continued interplay of brain, body, and world.

p220. biological systems profit profoundly from local environmental structure. The environment is not best conceived solely as a problem domain to be negotiated. It is equally, and crucially, a resource to be factored into the solutions.

Andy Clark.
Mindware: an introduction to the philosophy of cognitive science.
OUP. 2001

rating : 3.5 : worth reading
review : 6 July 2003

In this introductory text, Clark gives a clear overview and critique of most of the popular computational theories of cognition. We get a progression from good old-fashioned symbolic computation versus neural network connectionist approaches, to the importance of embodiment and use of robots, and then to the importance of the environment and culture. The critiques of most of the "pure" theories (pure symbology, pure connectionism, pure whatever) explain how "it's more complicated than that": maybe we do have a connectionist substrate, but with some symbolic reasoning abilities layered on top; embodiment surely is important, but so is reasoning; and so on. There is also an in-depth "suggested reading" section at the end of each chapter, listing works on all sides of the argument.

As an introduction, most of the book is a summary and critique of the subject area, of others' work. But towards the end, Clark gets into some of his own work (although still built on multiple authors' ideas, of course). He moves a step beyond embodiment (don't consider a "naked brain", but a brain within a body) to emphasise the importance of what he calls wideware (don't stop at just the body, but go on to consider the importance of the environment, particularly language and culture). Stewart and Cohen similarly emphasise the importance of what they call extelligence, but here Clark seems to have it all in a much tighter feedback cycle, as a necessary part of the very process of cognition itself. He provides an example of how artists work using sketches, because the mind's eye does not have the same visual capabilities as does the real eye: artists can see things in their sketches that they cannot imagine. Hence the existence of pencil and paper allows them to "think" things they could not think purely mentally. Similarly, people use notes and drafts (as well as language) as an essential part of organising complex abstract ideas, such as writing a paper. The environment in which this all happens is not fixed, but is an adaptable part of this expanded cognitive space: people carefully and actively organise their environment, all the way from writing shopping lists, to pinning up visual cues to help cope with cognitive deficiencies such as Alzheimer's. And as technology increases and improves, our cognitive abilities can do so, too. [I can appreciate this view: I already feel "disabled" if I am out of contact with Google.] This is a very optimistic view, because, although our wetware is of a fixed and limited size, our environment, our wideware, is much larger, and open.

Human brains ... reap the incalculable benefits of language, culture, and technology. We distribute subtasks, across time and space, preserve intermediate results, and create all manner of tools, props and scaffolding to help us along the way. It is not obvious what ultimately limits the cognitive horizons of such inveterate mind expanders

Andy Clark.
Natural-Born Cyborgs: minds, technologies, and the future of human intelligence.
OUP. 2003

rating : 3.5 : worth reading
review : 30 March 2005

In Being There, Clark argues that the fact of our embodiment, of our being in the world, is an essential part of our intelligence. In this book he explores the question of where that embodiment ends: just where is the boundary between ourselves and the rest of the world? He argues that it is not simply the obvious "skin-bag", but that what our brain considers to be our embodied self can readily include other components and tools:

p105. Our sense of bodily presence is always constructed on the basis of the brain's ongoing registration of correlations. If the correlations are reliable, persistent, and supported by a robust, reliable causal chain, then the body image that is constructed on that basis is well grounded. It is well grounded regardless of whether the intervening circuitry is wholly biological or includes nonbiological components.

He uses this fact to discuss how technology can be used and designed to extend our capabilities.

p109. The idea would be to allow the technologies to provide for the kinds of interactions and interventions for which they are best suited, rather than to force them to (badly) replicate our original forms of action and experience. .... This point is nicely made in a short piece by two Bellcore researchers, Jim Hollan and Scott Stornetta. ... A human with a broken leg may use a crutch, but as soon as she is well, the crutch is abandoned. Shoes, however (running shoes especially), enhance performance even while we are well. Too much telecommunications research, they argue, is geared to building crutches rather than shoes. Both are tools. We may become as accustomed to the crutches as the shoes, but crutches are designed to remedy a perceived defect and shoes to provide new functionality. Maybe new technologies should aspire to the latter.

This discussion covers some of the same ground as Being There, particularly the importance of language and writing. But it also branches out into more recent technologies, and is yet again thought-provoking.

Andy Clark.
Supersizing the Mind: embodiment, action, and cognitive extension.
OUP. 2008

Andy Clark.
Surfing Uncertainty: prediction, action, and the embodied mind.
OUP. 2016

How can a thoroughly physical being think, dream, and feel? How can such a being act in ways that reflect what it knows and that serve its ever-changing needs? An answer is emerging at the busy intersection of neuroscience, psychology, philosophy, artificial intelligence, and robotics.

In this groundbreaking work, philosopher and cognitive scientist Andy Clark explores exciting new theories that depict brains like ours as prediction machines—devices that have evolved to anticipate the incoming sensory barrage. Such brains do not wait passively for sensory stimulations to arrive. Instead, they are constantly buzzing, trying to predict the sensory stream and nudging the body to help harvest the information they need. These are the brains of active agents, able to structure their own worlds, building and re-building them in ways that alter the very things their brains must engage and predict. What emerges is a stunningly unified vision in which predictive brains enable situated agents to make the most of body, world, and action.

Peter J. R. Millican, Andy Clark, eds.
Machines and Thought: the legacy of Alan Turing, volume I.
OUP. 1996

rating : 3.5 : worth reading
review : 24 February 2001

Essays in commemoration of Alan Turing. This volume focuses on the Turing Test (or Imitation Game, as Turing called it); on humans as (or not as) Turing machines. The essays cover a wide range of viewpoints, and introduce many thought-provoking ideas. One or two do get rather technical for a general audience. But there's a lot to be learned here about recent advances, and recent opinions, on computability and intelligence.

Contents

Robert M. French. Subcognition and the limits of the Turing Test. 1996
No machine could fool someone into thinking it was a person without having been deeply immersed in human culture. But that doesn't mean it might not be intelligent.
Donald Michie. Turing's Test and conscious thought. 1996
The importance of sub-articulate and sub-cognitive thought
Blay Whitby. The Turing Test: AI's biggest blind alley?. 1996
The Turing Test is not a good operational definition of intelligence, nor did Turing intend it to be
Ajit Narayanan. The Intentional Stance and the Imitation Game. 1996
Daniel Dennett uses the Intentional Stance (that certain things, computers included, are best described as if they have intentions) to argue that machines can be conscious. Narayanan claims that this concept involves a circular argument, and we ought instead split it into the Representational Stance (that the computer behaves as if certain symbols in it represent certain things in the world) and the Ascriptional Stance (that explicitly treats the system as being conscious).
Herbert A. Simon. Machine as Mind. 1996
A discussion of various AI programs, and the conclusion that computers already think
J. R. Lucas. Minds, Machines, and Godel: a retrospect. 1996
A reappraisal of the original 1961 paper, which argued that humans were not Turing machines, based on their being able to see the truth of a Gödel sentence. Lucas argues further along these lines, and says the only way out of this conclusion is if humans are inconsistent, a position that he does not wish to support.
Robin Gandy. Humans versus mechanical intelligence. 1996
Humans can do meta-reasoning, such as recognising the truth of a Gödel sentence, because they don't just manipulate symbols, they understand what those symbols represent. There is no reason to assume a computer could not reason in the same way.
Anthony Galton. The Church-Turing thesis: its nature and status. 1996
The Church-Turing thesis asserts the equality of the class of effectively computable functions and the class of recursive (or Turing-machine computable) functions. The latter class is precisely defined, whereas the former is a rather vague intuitive notion.
We thus find, in conventional computability theory, on the one hand the requirement that a computation be completed in finitely many steps, and on the other hand the lack of any constraint on the complexity of a computation. The justification for the former requirement is that infinite computations cannot actually be implemented physically; the justification for the latter non-requirement is that our theory of computation is purely abstract and hence should not be constrained by physically motivated considerations.
Galton explores the Church-Turing thesis (how can two notions of such very different degrees of precision actually be equivalent?), investigates "effectively computable", and tries to sharpen the notion, whilst preventing it trivially ending up with being "Turing-computable".
Chris Fields. Measurement and computational description. 1996
A physical description of computation: a series of measurements made on a dynamical physical system are interpreted as the computation.
The feature of computation---that simulating computing is computing---is what sets it apart from uninterpreted dynamic processes such as fluid flow. ... Searle ... fails to understand the 'systems reply' to the Chinese Room example---the claim that simulating understanding is understanding---because he misses this point.
A physical system can be measured in different ways; only some can be interpreted as a meaningful computation.
...a single physical system can often be interpreted as different virtual machines on the basis of different sets of measurements, by interpreting the values of different sets of variables as indicative of the state of the system.
Fields uses examples of measuring only input/output, in contrast to all intermediate states, to give interpretations of "higher level" computations. [This insight applies equally well to "orthogonal" measurements, such as recent work on cracking crypto algorithms using differential power analysis: looking at the power drawn by a device while it is processing.]
Aaron Sloman. Beyond Turing equivalence. 1996
Intelligence isn't so much about computation, as about architecture. We need better concepts, and a higher level view. We should be examining the building blocks, the structures, how they can be put together, and how they can vary and grow.
Iain A. Stewart. The demise of the Turing Machine in complexity theory. 1996
Many computational complexity classes can be characterised as classes of logical constructs. For example, NP corresponds to those problems that can be specified using existential second-order logic, and P to those that can be specified using first-order logic, successor and the least fixed point operator. Such characterisations may be more natural and useful than the Turing machine description.
Peter Mott. A grammar-based approach to common-sense reasoning. 1996
Most research into reasoning has focussed on validity and expressibility. But real world reasoning needs to trade these off against speed. A syllogistic approach based on pattern matching against the grammatical structure of an utterance may be the way minds actually do some of their common-sense reasoning. (The semantics is then captured in the syllogisms and in the allowed pattern matches.) Simple pattern matching works for a large number of cases, but there are some cases where it doesn't. For example, although we can take he came in with a (box of chocolates) and infer he came in with a (box), we cannot take he came in with a (big (box of chocolates)) and infer he came in with a (big (box)). [Clearly not.  Big is not the same kind of adjective as, say, red. It is a relative term: something is big only relative to a set of somethings. The two sets here, boxes of chocolates, and boxes, are different. The kind of pattern matching substitution used here is valid only if the meaning is monotonic. But this whole approach is based on a dislike for non-monotonic logics. Based on introspection, my own "common-sense reasoning" was not grammatical in this case: I found myself visualising a generic big box of chocolates, and a generic big box, "seeing" the difference, and hence realising the substitution was not valid. Maybe the reasoning step itself is based on pattern matching, but not the semantic step of selecting which matches are allowed.]
Joseph Ford. Chaos: Its past, its present, but mostly its future. 1996
[I failed to understand the point the author was making in this essay.]
Clark Glymour. The hierarchies of knowledge and the mathematics of discovery. 1996
Philosophers have lost interest in theories of knowledge and discovery at about the same time that a rich new mathematical basis has been developed for investigations. New work on "knowledge in the limit", and even incorporating relativism, has led to important new results.

Andy Clark, Peter J. R. Millican, eds.
Connectionism, Concepts, and Folk Psychology: the legacy of Alan Turing, volume II.
OUP. 1996

Contents

Paul M. Churchland. Learning and conceptual change: the view from the neurons. 1996
Mario Compiani. Remarks on the paradigms of connectionism. 1996
Joop Schopman, Aziz Shawky. Remarks on the impact of connectionism on our thinking about concepts. 1996
Frank Jackson, Philip Pettit. Causation in the philosophy of mind. 1996
Jon Oberlander, Peter Dayan. Altered states and virtual beliefs. 1996
Christopher Peacocke. The relation between philosophical and psychological theories of concepts. 1996
Michael Morris. How simple is the simple account?. 1996
Beatrice de Gelder. Modularity and logical cognitivism. 1996
Murray Shanahan. Folk psychology and naive physics. 1996
Christopher J. Thornton. Why concept learning is a good idea. 1996
Douglas R. Hofstadter. Analogy-making, fluid concepts, and brain mechanisms. 1996
Ian Pratt. Encoding psychological knowledge. 1996
L. Jonathan Cohen. Does belief exist?. 1996