Matthias Scheutz, ed.
Computationalism: new directions.
MIT Press. 2002

rating : 3.5 : worth reading
review : 11 February 2011

"Computationalism" is the view that the mind is (in some sense) a computer, that the mind can be described programmatically, and that computers can have mental states. This view has been critiqued for a variety of reasons, and here we have several essays examining some of the issues. Some of the problems appear to arise when it is assumed that the real world concept of "computer" is synonymous with one particular and influential abstraction, that of the "Turing machine":

px. Perhaps the problem is not with computing per se, but with our present understanding of computing, in which case the situation can be repaired by developing a successor notion of computation that not only respects the classical (and critical) limiting results about algorithms, grammars, complexity bounds, and so on, but also does justice to real-world concerns of daily computational practice. Such a notion that takes computing to be not abstract, syntactic, disembodied, isolated, or nonintentional, but concrete, semantic, embodied, interactive, and intentional offers a much better chance of serving as a possible foundation for a realistic theory of mind.

Now, I think I understand what those properties mean, except for "intentional". This is a word that crops up all the time in philosophy of cognition. According to the Stanford Encyclopedia of Philosophy (an excellent on-line resource), "Intentionality is the power of minds to be about, to represent, or to stand for, things, properties and states of affairs", which doesn't really help me a lot. Firstly, it is assumed to be a property of minds, something we are trying to understand in the first place. Secondly, once we start getting pernickety, then, well, what does it mean "to be about" or "to stand for", or for that matter, what is a "thing" or a "state of affairs"? And so on, unto circularity (let's not even get into the definition of "mechanism" or "machine"!). Anyhow, some of these essays address this question (although they seem to conflate "semantics" and "intention", so maybe I don't understand the (philosophical) meaning of "semantics", either).

Nevertheless, there's a lot of interesting food for thought here.

Contents

Matthias Scheutz. Computationalism---the next generation. 2002

Scheutz provides an overview chapter, linking the following contributions. (He also provides a helpful summary section at the beginning of each contributed chapter, which makes the more obscure formulations easier to follow.)

Brian Cantwell Smith. The foundations of computing. 2002

Like Smith's On the Origin of Objects (300+ pages on "what is a thing", which I am part way through, and have been for a while), reading this is like wading through treacle: it's hard going, but the substance is delicious.

Rather than assuming we know what we mean by "computation", and applying that to cognition, Smith turns the argument around. He doesn't assume that we know what computation is, and asks, if computationalism is true (that is, if cognition is a form of computation), what would that mean for (a theory of) computation?

p26. Theorizing is undeniably a cognitive endeavor. If the computational theory of mind were correct, therefore, a theory of computation would be reflexive---applying not only (at the object-level) to computing in general, but also (at the metalevel) to the process of theorizing. That is, the theory's claims about the nature of computing would apply to the theory itself.

Because of this uncertainty about what computation is, some of the claims about and criticisms of computationalism potentially disappear:

p27-8. there are two ways in which these writers could be wrong. In claiming that people are formal symbol manipulators, for example, Fodor would naturally be wrong if computers were formal symbol manipulators and people were not. But he would also be wrong, while the computational theory of mind itself might still be true, if computers were not formal symbol manipulators, either.

Additionally, Smith identifies intentionality as the key property of computation that interests cognitive science: semantics is a problem for understanding minds, computation seems to achieve it too, so maybe minds are computers. (Here we have one of the statements that, in the philosophy of mind/cognition at least, "semantical" and "intentional" are synonymous.)

p32. the most profound difficulties have to do with semantics. It is widely (if tacitly) recognized that computation is in one way or another a symbolic or representational or information-based or semantical---that is, as philosophers would say, an intentional---phenomenon. Somehow or other, though in ways we do not yet understand, the states of a computer can model or simulate or represent or stand for or carry information about or signify other states in the world (or at least can be taken by people to do so). ... Furthermore---and this is important to understand---it is the intentionality of the computational that motivates the cognitivist hypothesis. The only compelling reason to suppose that we (or minds or intelligence) might be computers stems from the fact that we, too, deal with representations, symbols, meaning, information, and the like.

Picking through the implications of this leads Smith to state one of the main differences between the abstract notion of computation, that of an isolated "brain in a vat", and the actual, and very different, participatory way computers work in the real world:

p40. Rather than consisting of an internal world of symbols separated from an external realm of referents, … real-world computational processes are participatory ... : they involve complex paths of causal interaction between and among symbols and referents, both internal and external, cross-coupled in complex configurations.

Following through even further leads Smith to the startling conclusion that the "Theory of Computation" is not what we thought it was:

p42-3. the so-called (mathematical) "Theory of Computation" is not a theory of intentional phenomena---in the sense that it is not a theory that deals with its subject matter as an intentional phenomenon.
     …
     … what goes by the name "Theory of Computation" fails not because it makes false claims about computation, but because it is not a theory of computation at all.
     … What has been (indeed, by most people still is) called a "Theory of Computation" is in fact a general theory of the physical world---specifically, a theory of how hard it is, and what is required, for patches of the world in one physical configuration to change into another physical configuration. It applies to all physical entities, not just to computers. It is no more mathematical than the rest of physics, in using (abstract) mathematical structures to model (concrete) physical phenomena. Ultimately, therefore, it should be joined with physics---because in a sense it is physics.
     …
     … the mathematical theory based on recursion theory, Turing machines, complexity analyses, and the like widely known as the "Theory of Computation"---is neither more nor less than a mathematical theory of the flow of causality.

This further leads to the overall conclusion that "Computation is not subject matter", it is not a "distinct ontological category". He takes this as a positive conclusion, however:

p52-3. the fact that neither computing nor computation will sustain the development of a theory is by far the most exciting and triumphal conclusion that the computer and cognitive sciences could possibly hope for.
     … such theory as there is … will not be a theory of computation or computing. … because computers … do not constitute a distinct, delineated subject matter. Rather, what computers are … is neither more nor less than the full-fledged social construction and development of intentional artifacts. That means that the range of experience and skills and theories and results that have been developed within computer science … is best understood as practical, synthetic, raw material for no less than full theories of causation, semantics, and ontology---that is, for metaphysics full bore.
     …
     For sheer ambition, physics does not hold a candle to computer or cognitive---or rather ... ---epistemic or intentional science. Hawking (1988) and Weinberg (1994) are wrong. It is we, not the physicists, who must develop a theory of everything.

Smith's deeply thoughtful picking apart of the foundations of computation is fascinating stuff. In this chapter he refers several times to his forthcoming 7-volume The Age of Significance, based on his 30+ year investigation into these foundations. As of the time of writing this review, 8 years after the publication of the reference, the work is still "forthcoming"; only the 45-page Introduction has so far appeared. I do hope it all eventually gets published (written?) -- Smith's view of computation is deeply important.

B. Jack Copeland. Narrow versus wide mechanism. 2002

Copeland points out that the Turing machine is an abstraction and formulation of what human "computers" (clerks) do, and not (necessarily) a formulation of what any mechanism can do. The limitations of TMs are consequences of the limitations of humans (when acting as computer-clerks), not the other way round.

p66. "Effective" and its synonym "mechanical" are terms of art in mathematical logic. A mathematical method is termed "effective" or "mechanical" if and only if it can be set out in the form of a list of instructions able to be followed by an obedient human clerk---the computer---who works with paper and pencil, reliably but without insight or ingenuity, for as long as necessary. ... Turing showed … that there is no effective method for determining whether or not an arbitrary formula Q of the predicate calculus is a theorem of the calculus.
     Notice that this result does not entail that there can be no machine for determining this (contrary to various writers). The Entscheidungsproblem for the predicate calculus is the problem of finding a humanly executable procedure of a certain sort, and the fact that there is none is entirely consistent with the claim that some machine may nevertheless be able to decide arbitrary formulae of the calculus; all that follows is that such a machine, if it exists, cannot be mimicked by a human computer. Turing's (and Church's) discovery was that there are limits to what a human computer can achieve; for all that, their result is often portrayed as a discovery concerning the limitations of mechanisms in general.

He then goes on to pick apart how writers (mis)interpret the Church-Turing thesis, which is actually about what humans (when acting as computer-clerks) can do, not about what any mechanism can do. Maybe there are mechanisms that can do more. He illustrates this by means of "thesis M": "All functions that can be generated by machines (working on finite input in accordance with a finite program of instructions) are Turing-machine-computable". This admits of two interpretations, neither of which is known to be true:

p68. Thesis M itself admits of two interpretations, according to whether the phrase "can be generated by a machine" is taken in the this-worldly sense of "can be generated by a machine that conforms to the physical laws (if not to the resource constraints) of the actual world," or in a sense that abstracts from whether or not the notional machine in question could exist in the actual world. The former version of thesis M is an empirical proposition whose truth-value is unknown. The latter version of thesis M is known to be false. As I explain in the next section, there are notional machines that generate functions that no Turing machine can generate.

Personally, I don't care for thinking about notional machines that cannot exist in the real world -- these are "magic", and of no real interest (provided the magic is logical impossibility, not mere physical impossibility: after all, the laws of physics are not completely known, and something impossible with today's physics might be possible with tomorrow's). A notional machine where we don't know whether or not it can exist, well, that's part of the first interpretation, and to be decided empirically. Part of the problem (from a pragmatic if not a philosophical viewpoint) is that no-one has implemented such a machine -- they are all still "notional" (even the non-magical ones). But that is just an engineering problem.

Copeland points out that Turing himself invented (notional) machines that could out-compute a TM -- the so called O-machines (O stands for "oracle", or, just possibly, "magic").

p73. Each O-machine carries out some well-defined operation on discrete symbols.

The crucial term here is "well-defined", which is not synonymous with "effective". A process can be well-defined in terms of its operation and outcome without giving an effective procedure for how to achieve that outcome. (Again, "effective" is a technical term.) Copeland gives the classic example of such a well-defined operation: the Halting function.

p73. The Churchlands hold that Turing's "results entail ... that a standard digital computer ... can display any systematic pattern of responses to the environment whatsoever" ... But Turing had no result entailing this. What he did have was a result entailing the exact opposite. The theorem that no Turing machine can generate the Halting function entails that there are possible patterns of responses to the environment, perfectly systematic patterns, which no Turing machine can display. The Halting function is a mathematical characterization of just such a pattern.

This is clear, although often misunderstood. There are many perfectly well-defined mathematical functions that are (Turing) non-computable: there is a whole sub-discipline on determining which functions are non-computable! The crucial question is, are there any mechanisms that can evaluate (by a necessarily non-Turing computation) any of these functions?
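
To make this concrete, here is the standard diagonalization argument behind that result, rendered as a Python sketch (the code and names are my illustration, not Copeland's): if any program halts(p, x) correctly decided the Halting function, we could build a program that contradicts it on its own source.

    # A sketch of the classic diagonalization argument. The Halting function is
    # well-defined: halts(p, x) is True iff program p, run on input x, halts.
    # Now suppose some program computed it correctly for all inputs...

    def halts(prog_source: str, arg: str) -> bool:
        """Hypothetical decider: True iff the program prog_source, run on arg, halts."""
        raise NotImplementedError("no total, always-correct implementation can exist")

    def contrary(prog_source: str) -> None:
        """Do the opposite of whatever halts() predicts about prog_source run on itself."""
        if halts(prog_source, prog_source):
            while True:        # halts() predicted "halts", so loop forever
                pass
        # halts() predicted "loops forever", so halt immediately

    # Running contrary on its own source contradicts halts() either way, so no
    # correct halts() can exist: the Halting function is perfectly well-defined,
    # but there is no effective procedure (and no Turing machine) that computes it.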

p78. Turing … enriched mechanism with an abstract theory of (information-processing) machines, presenting us with an indefinitely ascending hierarchy of possible machines, of which the Turing machines form the lowest level. His work posed a new question: If the mind is a machine, where in the hierarchy does it lie?

So Copeland's argument might be summarised as: stop thinking that TMs are the top of this hierarchy, and that therefore the mind, if it is a computer, must be (at most) a TM. (I would add: don't use "magic" when arguing for the existence of higher-level machines.)

Aaron Sloman. The irrelevance of Turing Machines to Artificial Intelligence. 2002

Sloman argues that all the fuss about minds as Turing machines is irrelevant -- the kinds of computation that minds do should be related to the kinds of computation that real computers in the world do, not to some artificial abstraction. Indeed, he argues that if the concept of TMs had never been articulated, it would make no difference to the field of practical Artificial Intelligence.

p89. there are (at least) two very different concepts of computation: one of which is concerned entirely with properties of certain classes of formal structures that are the subject matter of theoretical computer science (a branch of mathematics), and another that is concerned with a class of information-processing machines that can interact causally with other physical systems and within which complex causal interactions can occur. Only the second is important for AI (and philosophy of mind).

Real computing machines, computation-in-the-wild machines, have two kinds of properties: informational and physical.

p90. the second strand [that of abstract calculating machines], starting with mechanical calculating aids, produced machines performing abstract operations on abstract entities, for example, operations on or involving numbers, including operations on sets of symbols to be counted, sorted, translated, etc. The operation of machines of the second type depended on the possibility of systematically mapping those abstract entities and abstract operations onto entities and processes in physical machines. But always there were two sorts of things going on: the physical processes such as cogs turning or levers moving, and the processes that we would now describe as occurring in a virtual machine, such as addition and multiplication of numbers. As the subtlety and complexity of the mapping from virtual machine to physical machine increased, it allowed the abstract operations to be less and less like physical operations.
p93. When a machine operates, it needs energy to enable it to create, change, or preserve motion, or to produce, change, or preserve other physical states of the objects on which it operates. It also needs information to determine which changes to produce, or which states to maintain.
p95. Both the energy and the information required to drive a machine may be provided by a user in either an online or a ballistic fashion.
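
A toy illustration of the virtual-machine/physical-machine layering in the p90 quote (my own illustration, not Sloman's): the abstract operation "add two numbers" realised by nothing but manipulations of a lower-level representation of bits, which in a physical machine would in turn be realised by voltages, cogs, or levers.

    # Illustrative sketch only: the virtual-machine operation "addition"
    # implemented purely as manipulations of a lower-level representation
    # (lists of bits), the way a physical adder manipulates voltages.

    def add_bits(a_bits, b_bits):
        """Ripple-carry addition of two little-endian bit lists."""
        width = max(len(a_bits), len(b_bits)) + 1
        a_bits = a_bits + [0] * (width - len(a_bits))
        b_bits = b_bits + [0] * (width - len(b_bits))
        result, carry = [], 0
        for a, b in zip(a_bits, b_bits):
            result.append(a ^ b ^ carry)                  # sum bit
            carry = (a & b) | (a & carry) | (b & carry)   # carry bit
        return result

    def to_bits(n):      # map the abstract entity (a number) down to bits
        return [(n >> i) & 1 for i in range(max(n.bit_length(), 1))]

    def from_bits(bits):  # and map the bits back up to the abstract entity
        return sum(bit << i for i, bit in enumerate(bits))

    print(from_bits(add_bits(to_bits(25), to_bits(17))))   # 42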

The TM abstracts away from the physical, and is ballistic, but physical and online aspects are crucial for embodied, interactive AI.

p106. This kind of mathematical universality may have led some people to the false conclusion that any kind of computer is as good as any other provided that it is capable of modeling a universal Turing machine. This is true as a mathematical abstraction, but it is misleading or even false when considering problems of controlling machines embedded in a physical world.

Sloman then carefully picks apart several features that are needed in real computers. These include things like state (and access to it), laws of behaviour (that can self-monitor, self-modify, and self-control), conditional behaviour, coupling to the environment, and multiprocessing. Crucially, none of these make any mention of TMs or universality, and several are outside the Turing model.
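
A minimal sketch of a machine with a few of those features (my own illustration, not Sloman's): it keeps internal state, behaves conditionally, and is coupled online to an environment that it both senses and changes, rather than being run "ballistically" on a fixed input.

    # Illustrative sketch of an online, environment-coupled machine: it has
    # state, conditional behaviour, and a continuing causal loop with its
    # environment -- unlike a TM run once on a pre-written tape.

    import random

    def thermostat(setpoint=21.0, steps=20):
        temp = 19.0            # the environment's state
        heater_on = False      # the machine's internal state
        for step in range(steps):
            # environment: heat leaks out; the machine's action feeds heat back in
            temp += -0.3 + (0.8 if heater_on else 0.0) + random.uniform(-0.1, 0.1)
            # conditional behaviour, driven by online sensing of the environment
            if temp < setpoint - 0.5:
                heater_on = True
            elif temp > setpoint + 0.5:
                heater_on = False
            print(f"t={step:2d}  temp={temp:5.2f}  heater={'on' if heater_on else 'off'}")

    thermostat()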

As an aside, he makes an interesting point about the suitability of continuous dynamical systems as a computational approach:

p111. Systems controlled by such conditional elements can easily be used to approximate continuous dynamical systems, as is done every day in many computer programs simulating physical systems. However, it is hard to make the latter simulate the former---it requires huge numbers of carefully controlled basins of attraction in the phase space. One way to achieve this is to build a machine with lots of local dynamical systems that can be controlled separately---that is, a computer!
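
The first half of that claim is just everyday numerical simulation: a few discrete update rules approximate a continuous system to whatever accuracy the step size allows. A minimal sketch (mine, not Sloman's), integrating a damped pendulum in small discrete steps:

    # Illustrative sketch: a discrete, stepwise program approximating a
    # continuous dynamical system (a damped pendulum), by Euler integration.

    import math

    def simulate_pendulum(theta=1.0, omega=0.0, dt=0.001, steps=10_000,
                          damping=0.1, g_over_l=9.8):
        """Approximate theta'' = -(g/l) sin(theta) - damping * theta'."""
        for _ in range(steps):
            alpha = -g_over_l * math.sin(theta) - damping * omega   # acceleration
            omega += alpha * dt     # small discrete updates stand in
            theta += omega * dt     # for continuous evolution
        return theta, omega

    print(simulate_pendulum())      # angle and angular velocity after 10 seconds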

So, in summary: the TM is an abstraction that is useful for various mathematical analyses. But what it has abstracted away, particularly real-time embodiment in a participatory physical environment, is the very stuff that is crucial to AI. AI shouldn't restrict itself to considering only this irrelevant abstraction.

Philip E. Agre. The practical logic of computer work. 2002

Huh? Well, I suppose one incomprehensible chapter in a philosophy book isn't too bad going, really. This has something to do with various problems of duality in AI --- mind/body, plan/behaviour, concrete/abstract --- but it assumes a lot of knowledge of the area that I don't have, mixed with very technical philosophical jargon. On to the next chapter.

Stevan Harnad. Symbol grounding and the origin of language. 2002

Harnad addresses the symbol grounding problem: how can an internal mental symbol be grounded, given meaning, when it is dislocated from its external referent? Here the discussion has two parts. Firstly, basic symbols can be grounded not through a linkage to the distant referent, but through localised "sensorimotor toil": interactions confined to a sort of bodily "surface" onto which the external referent is actively projected by the body's sensory and motor experiences. Secondly, symbols can then be grounded by "theft": "Darwinian theft", where our ancestors grounded symbols through their own sensorimotor toil, and we inherit that grounding; and "symbolic theft", where others use language to tell us about their own grounded symbols. Robots will need language in order to gain this enormous advantage of grounding by "symbolic theft" over "honest toil".

John Haugeland. Authentic intentionality. 2002

Haugeland addresses the semantic/intentionality issue from a different perspective. Here there is an interesting study on the kinds of self-critique needed for scientific knowledge, mixed up with an enormous assumption that all knowledge is gained in a suitably analogous manner to scientific knowledge ("suitably analogous" to allow the scientific knowledge argument to carry over). I don't think the second part has been demonstrated at all.

Haugeland identifies numerous issues with computationalism, and focusses on the issue of semantics:

p160-1. the idea that maybe semantics is essential to computation as such. … the long-sought assimilation of computation to cognition---for cognition surely presupposes semantics. ... I want to formulate a thesis about what is required for semantics---or, as I prefer to call it, intentionality---especially insofar as it is prerequisite to cognition.

(Here we have an explicit statement that "semantics" and "intentionality" are synonymous.) Haugeland goes on to distinguish two kinds of intentionality: derivative (conferred by something else) and original (not derivative). He then makes an orthogonal classification: authentic, ordinary, and ersatz (that which only looks like intentionality, as in things described from Dennett's "intentional stance", and here also subhuman animals and robots). I haven't defined the difference between authentic and ordinary intentionality, because they are defined in terms of Haugeland's concepts of responsibility. But the difference is not important until the end, and Haugeland refers to them collectively as genuine intentionality (as distinct from ersatz).

p163. The first point I want to make about genuine original intentionality, whether ordinary or authentic, is that it presupposes the capacity for objective knowledge. Indeed, since derivative intentionality presupposes original intentionality, the argument will apply to all genuine intentionality. In the present context, I mean by 'objective knowledge' beliefs or assertions that are true of objects nonaccidentally.

So all we need for intentionality (ignoring the ersatz stuff for now) is non-accidentally true beliefs or assertions. I'll focus on the "non-accidentally true", in some representation or other, because I'm not sure what the definition of "belief" or "assertion" is here. So, for some "cognitive state" to have meaning, what it represents must be "non-accidentally true" -- "non-accidentally" to get round the possibility that it is true by chance, but with no way of knowing that. How do we gain "non-accidental truths", or knowledge? Haugeland focusses on how we gain scientific knowledge.

p164. I will focus on a distinctive species of objective knowledge, namely, scientific knowledge. The reason is not just that scientific knowledge is especially explicit and compelling, but rather that it has been much studied, so its features and factors are fairly well known. But I am convinced that (perhaps with more work and less clarity) basically the same case could be made for common-sense knowledge.

I'm not in the least bit convinced of that: the whole reason we have this structure and framework for gathering scientific knowledge is that it is not typical, or representative, of the way we gain knowledge. Neither am I convinced that scientific knowledge and common-sense knowledge exhaust the possible forms of knowledge: there is also innate and embodied knowledge. More on this later.

Haugeland then has an interesting discussion of the responsibility of human scientists to critique their work, on three levels. Summarising brutally: Firstly, there is self-critique: have I done the experiment correctly? Secondly, there is critique of the scientific norms: is this the right experiment to do to test this hypothesis? do these techniques actually constitute a demonstration? Thirdly, and only when all else fails (because it is such a drastic step), there is critique of the paradigm: is this theory right? All these are essential for objective scientific knowledge.

In summary, Haugeland's argument (as I understand it, at least) runs as follows:

   1. genuine intentionality requires knowledge: non-accidental truths
   2. science gains such truths through a process of experiment and critique
   3. scientists are responsible for performing this critique
   4. therefore, responsibility is a prerequisite for genuine intentionality
   5. no subhuman animal or robot can accept responsibility, so cannot have genuine intentionality, or cognition

Wait ... what? I was with you up to point 4. But there seem to be two problems here. The minor one: just because human scientists accept responsibility to perform this essential critique does not necessarily mean that other kinds of scientist could not be programmed to perform it. But the major one: just because science gains knowledge this way does not necessarily mean that all knowledge (non-accidental truth) is gained this way. Consider an extreme example: the adaptive immune system. Cohen claims this is a cognitive system, in part because it "contains internal images of its environment". Whether or not you agree with Cohen's cognitive claim, these internal images (antibodies, memory cells, whatever) are indeed "non-accidental truths", but were certainly not acquired by any process of scientific investigation: they were acquired by biological processes including amplification and selection. Okay, they are not absolutely guaranteed to be true (but then neither are scientific theories), but they are very much more than "accidental". Similar arguments could be applied to other innate and embodied knowledge (such as that possessed by subhuman animals, too). So we can gain objective knowledge via evolution and other biological processes. Indeed, this includes the process of what Harnad above calls "Darwinian theft".

So, I like the three-level critique model of science, but I am in no way convinced that the ability to accept responsibility to critique knowledge is a prerequisite for intentionality (whatever that actually means).