Books : reviews

David Deutsch.
The Fabric of Reality.
Penguin. 1997

rating : 2 : great stuff
review : 16 May 1999

Science in the 20th century has become focussed on the what, with scant regard for the why. Deutsch wants explanation put back into our way of doing science -- science is our way of understanding the world, not just tersely describing what it does. This book is his attempt to describe what An Explanation of Everything might look like (as contrasted with physicists' quest for a theory of everything, by which they mean one single equation to describe fundamental physics). This explanation is structured around four theories central to modern science: the many-worlds interpretation of quantum physics, Popperian epistemology, the Turing theory of computation, and Darwinian evolution.

He argues these theories should be 'taken seriously'; that is, scientists should be exploring their (possibly extreme) logical consequences, not just applying them in narrow domains.

In all [four] cases the theory that now prevails, though it has definitely displaced its predecessor and other rivals in the sense that it is being applied routinely in pragmatic ways, has nevertheless failed to become the new 'paradigm'. That is, it has not been taken on board as the fundamental explanation of reality by those who work in the field.

Although these theories are not taken seriously, usually because their consequences are disliked, there are no better alternatives. Deutsch argues that this results in some people supporting worse alternatives, leading to a lot of wasted effort.

so long as the proponents of our best theories of the fabric of reality have to expend their intellectual energies in futile refutation and re-refutation of theories long known to be false, the state of our deepest knowledge cannot improve.

I like the rigorously rational view Deutsch takes:

there is no room for magic in a comprehensible reality. Anything that seems incomprehensible is regarded by science merely as evidence that there is something we have not yet understood, be it a conjuring trick, advanced technology or a new law of physics.

He also has little time for those who artificially handicap themselves:

intuitionism is precisely the expression, in mathematics, of solipsism.

He makes it clear that what we like to think of as purely abstract computation is in fact deeply grounded in physics, with the physics affecting the kinds of computers we can build, and therefore the kinds of models of computation we devise. Analogue classical physics gives us a fundamentally different model of computation.

the continuous motion of classical systems would have allowed for 'analogue' computation which did not proceed in steps and which had a substantially different repertoire from the universal Turing machine. Several examples are known of contrived classical laws under which an infinite amount of computation (infinite, that is, by Turing-machine or quantum-computer standards) could be performed by physically finite methods. Of course, classical physics is incompatible with the results of countless experiments, so it is rather artificial to speculate on what the 'actual' classical laws of physics 'would have been'; but what these examples show is that one cannot prove, independently of any knowledge of physics, that a proof must consist of finitely many steps.

[This focus on what we mean by proof in 'finite proof' contrasts rather nicely with Lavine's focus on what we mean by finite.] Similarly, the discrete 'classical' physics of Turing machines also gives us an incorrect model of computation.

Turing hoped that his abstracted-paper-tape model was so simple, so transparent and well defined, that it would not depend on any assumptions about physics that could conceivably be falsified, and therefore that it could become the basis of an abstract theory of computation that was independent of the underlying physics. 'He thought,' as Feynman once put it, 'that he understood paper.' But he was mistaken. Real, quantum-mechanical paper is wildly different from the abstract stuff that the Turing machine uses. The Turing machine is entirely classical, and does not allow for the possibility that the paper might have different symbols written on it in different universes, and that those might interfere with one another. ... That is why the resulting model of computation was incomplete.

Quantum computation is fundamentally different from classical computation. Deutsch states quite clearly that there are quantum programs that cannot be run on a classical Turing machine.

Quantum computation is more than just faster, more miniaturized technology for implementing Turing machines. A quantum computer is a machine that uses uniquely quantum-mechanical effects, especially interference, to perform wholly new types of computation that would be impossible, even in principle, on any Turing machine and hence on any classical computer.

This statement confused me when I first read it: I have listened to quantum computing researchers describe their emulations of quantum computations on classical computers; the emulations just require exponentially increasing resources (either processors, or time). The two statements seem incompatible. But a few pages later I came across a paragraph that clarifies what he means by 'in principle' in the quote above.

... not only are universal [computers] possible, it is possible to build them so that they do not require impracticably large resources to [compute]. From now on, when I refer to universality I shall mean it in this sense...

Now, a universal computer must, by definition, be able to emulate any other computer using only a 'similar' amount of resources (where 'similar' has a technical meaning that excludes 'exponentially more'). But a 'Universal' Turing Machine does need exponentially more resources than a quantum computer in order to emulate one, and so is not truly 'Universal' by this definition.

So Deutsch, being firmly grounded in physical law and the universe we are living in, argues that there are certain computations that cannot be performed classically in a tractable time, because there are insufficient resources in our single universe, but that can be performed quantumly, when the resources of exponentially many parallel universes can be brought to bear. There are computations that require exponential resources on classical computers, and so are intractable, but that are tractable on quantum computers. One such computation is calculating the state of a multi-particle quantum system itself.

... with several interacting particles, such a computation could easily ... become 'intractable'. Yet since we could readily obtain its result just by performing this experiment, it is not really intractable after all.
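The resource argument is easy to make concrete. Here is a minimal sketch of mine (not Deutsch's): the state of an n-qubit register is a vector of 2^n complex amplitudes, so a brute-force classical emulation doubles its memory, and its per-gate time, with every qubit added.

```python
# A sketch of why classical emulation of quantum computation is intractable:
# an n-qubit state vector holds 2^n complex amplitudes.
import numpy as np

def zero_state(n_qubits):
    """State vector |00...0> of an n-qubit register: 2^n amplitudes."""
    state = np.zeros(2 ** n_qubits, dtype=np.complex128)
    state[0] = 1.0
    return state

state = zero_state(20)               # 2^20 amplitudes: a manageable 16 MiB

for n in (20, 30, 40, 50):
    bytes_needed = (2 ** n) * 16     # 16 bytes per complex128 amplitude
    print(f"{n} qubits: 2^{n} amplitudes, {bytes_needed / 2**30:g} GiB")
# 30 qubits needs 16 GiB; 40 qubits, 16 TiB; 50 qubits, 16 PiB. Each added
# qubit doubles the cost: the emulation is possible 'in principle', but not
# with a 'similar' amount of resources, which is Deutsch's point.
```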

Deutsch weaves together his four strands, and comes up with some rather interesting, and sometimes startling, conclusions. In particular, his use of the Turing principle to define a universal virtual reality renderer, and then to infer consequences for the laws of physics, in particular for time travel, is quite ingenious. And one has to admire an author who can conclude that his view

is the conservative view, the one that does not propose any startling change in our best fundamental explanations

a mere 15 pages after describing Tipler's omega-point argument that it is possible to perform an infinite amount of computation in a universe with a particular configuration, inferring that we are in such a universe, and further inferring

just from the Turing principle and some other independently justifiable assumptions, that intelligence will survive, and knowledge will continue to be created, until the end of the universe

This is an excellent book, with some fascinating ideas. It is particularly nice to find a real practicing physicist who is willing to come out and admit that explanation is what it's all about. Most of the book is solid scientific extrapolation, but his description, in the last chapter, of a universe full of beings who must continue to evolve and grow in knowledge for ever, is particularly exhilarating. (I found it made an optimistic contrast to Chaitin's slightly gloomy view of the place of randomness in increasing knowledge.)

There is one area where I remain confused, however, and would have liked more explanation: the Many Worlds interpretation itself. It certainly gives an intuitive explanation of how quantum computers work (or rather, where they do all their work), but I am less convinced that it is the inevitable explanation of quantum interference experiments. (Deutsch is a much better physicist than I am, however.) Cramer's Transactional interpretation, in particular, seems to offer equally plausible explanations of these experiments, without being so profligate with universes. I would also have liked a little more detail on what the Many Worlds interpretation actually is: before reading this I had a vague picture of a universe branching at every decision point; Deutsch talks instead of reams of pre-existing identical universes subsequently evolving in different ways depending on the choice.

Despite my area of doubt and uncertainty, I highly recommend this book. It is very well written (the dialogue between Deutsch and the crypto-inductivist is particularly fun), brings together some important ideas, avoids the excesses of Penrose and Tipler (whilst exploiting their good parts), and gives a view onto a humane and rational explanation of the world.

David Deutsch.
The Beginning of Infinity: explanations that transform the world.
Penguin. 2011

rating : 2.5 : great stuff
review : 4 May 2012

In our search for truth, how far have we advanced? This uniquely human quest for good explanations has driven amazing improvements in everything from scientific understanding and technology to politics, moral values and human welfare. But will progress end, either in catastrophe or completion – or will it continue infinitely?

In this profound and seminal book, David Deutsch explores the furthest reaches of our current understanding, taking in the Infinity Hotel, supernovae and the nature of optimism, to instill in all of us a wonder at what we have achieved – and the fact that this is only the beginning of humanity’s infinite possibility.

14 years after his previous plea for the return of explanation to science, Deutsch makes the plea again. Last time, the focus was on the science and the effect of taking certain theories seriously. This time, the focus is more on the underlying philosophy, and some of the consequences of that.

The philosophy he champions is that of fallibilism, as opposed to any of the dreadful contortions that the philosophy of science has undergone in its abject failure to respond to the challenges of quantum mechanics. There is an "evolution" of knowledge, with many and varied ideas being conjectured, and the better of these selected through a process of criticism and experimentation. Fallibilism applies more widely than science: it is the process of gaining good knowledge in all areas of life. And hence it is imperative that criticism and experiment be available, used, supported, and encouraged in all areas, to whittle away the bad ideas. A "good" idea is hard to vary without making it an inferior explanation. Deutsch has many strong words to say about the "bad philosophies" (a philosophy that is not merely false, but actively prevents the growth of other knowledge [p308]) that currently infest science and other disciplines. These include logical positivism, behaviourism, and, of course, post-modernism:

p314. Creating a successful postmodernist theory is indeed purely a matter of meeting the criteria of the postmodernist community -- which have evolved to be complex, exclusive and authority-based. Nothing like that is true of rational ways of thinking: creating a good explanation is hard not because of what anyone has decided, but because there is an objective reality that does not meet anyone’s prior expectations, including those of authorities. The creators of bad explanations such as myths are indeed just making things up. But the method of seeking good explanations creates an engagement with reality, not only in science, but in good philosophy too -- which is why it works, and why it is the antithesis of concocting stories to meet made-up criteria.

He defines knowledge, whether it be found in human brains or DNA, as information physically embodied in a suitable environment that tends to cause itself to remain so [p78].

pp93-4. the knowledge embodied in genes is knowledge of how to get themselves replicated at the expense of their rivals. Genes often do this by imparting useful functionality to their organism, and in those cases their knowledge incidentally includes knowledge about that functionality. Functionality, in turn, is achieved by encoding, into genes, regularities in the environment and sometimes even rule-of-thumb approximations to laws of nature, in which case the genes are incidentally encoding that knowledge too.

And this knowledge has to come from somewhere: it evolves through the process of variation and selection. Hence explanations such as "spontaneous generation" for life are bad explanations: they do not explain where the knowledge embodied in the complex living organism comes from. Hence also there is no "inductivism": ideas are first conjectured, then tested, not the other way round. He is making a very deep claim here: knowledge cannot be derived, or predicted, or otherwise deduced; it has to be developed through "guess and check". [The second step is crucial of course: pseudoscience is all guess and no check.]

And we can always do better. Not only do explanations get better, and problems get solved, but the solution to a problem opens the way to discovering new, different, better problems. So there are always unsolved problems, and we should not be surprised or concerned by this.

p97. the existence of an unsolved problem in physics is no more evidence for a supernatural explanation than the existence of an unsolved crime is evidence that a ghost committed it.

The creation of scientific and other knowledge can be faster and more efficient than the acquisition of knowledge through biological evolution. And it has another crucial feature: its progress can leap across the ideas landscape without being constrained by the viability of intermediates.

p114. In an evolving species, the adaptations of the organisms in each generation must have enough functionality to keep the organism alive, and to pass all the tests that they encounter in propagating themselves to the next generation. In contrast, the intermediate explanations leading a scientist from one good explanation to the next need not be viable at all. The same is true of creative thought in general. This is the fundamental reason that explanatory ideas are able to escape from parochialism, while biological evolution, and rules of thumb, cannot.

Deutsch’s requirement for good explanations, and good science, does not lead him to reductionism; in fact, it leads him away. He gives an example of a domino computer that calculates primality, where the "output" domino falls only if the input is not prime. The machine is set running (or falling) to test the primality of 641; the output domino does not fall. Why? The argument is about the quality of the explanation: "because 641 is prime". He contrasts his view with that of Hofstadter (who introduces this example in I Am a Strange Loop).

pp117-8. primality must be part of any full explanation of why the dominos did or did not fall. Hence it is a refutation of reductionism in regard to abstractions. For the theory of prime numbers is not part of physics. It refers not to physical objects, but to abstract entities -- such as numbers, of which there is an infinite set.
     …
     … Hofstadter eventually concludes that the ’I’ is an illusion. Minds, he concludes, can’t ’push material stuff around’, because ’physical law alone would suffice to determine [its] behaviour’. Hence his reductionism.
     But, first of all, physical laws can’t push anything either. They only explain and predict. And they are not our only explanations. The theory that the domino stands ’because 641 is a prime (and because the domino network instantiates a primality-testing algorithm)’ is an exceedingly good explanation. What is wrong with it? It does not contradict the laws of physics. It explains more than any explanation purely in terms of those laws. And no known variant of it can do the same job.
     …
     ... There is no inconsistency in having multiple explanations of the same phenomenon, at different levels of emergence. Regarding micro-physical explanations as more fundamental than emergent ones is arbitrary and fallacious. There is no escape from Hofstadter’s 641 argument, and no reason to want one. The world may or may not be as we wish it to be, and to reject good explanations on that account is to imprison oneself in parochial error.
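To make the example concrete: the domino network instantiates something like the trial-division test below (my sketch; the book describes the dominos, not this code). Whatever the physical substrate, the fact that explains the standing output domino is the abstract one the algorithm computes.

```python
def is_prime(n):
    """Trial division: the abstract property the domino network instantiates."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:       # only divisors up to sqrt(n) need checking
        if n % d == 0:
            return False
        d += 1
    return True

# The 'output domino' stays standing because this returns True:
# 641 has no divisor between 2 and 25 (25*25 = 625 <= 641 < 26*26 = 676).
print(is_prime(641))   # True
```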

That this philosophical approach based on "good explanations" does not reduce to "fundamental physical theories" gives it more "reach". This is shown when Deutsch applies the philosophy of good explanations to moral theory:

p120. you can’t derive an ought from an is, but you can’t derive a factual theory from an is either. That is not what science does. The growth of knowledge does not consist of finding ways to justify one’s beliefs. It consists of finding good explanations. And, although factual evidence and moral maxims are logically independent, factual and moral explanations are not. Thus factual knowledge can be useful in criticizing moral explanations.

He also has a stab at aesthetics, to do with the beauty of flowers. Flowers and insects communicate across a wide species gap:

p362. my guess is that the easiest way to signal across such a gap with hard-to-forge patterns designed to be recognized by hard-to-emulate pattern-matching algorithms is to use objective standards of beauty. So flowers have to create objective beauty, and insects have to recognize objective beauty. Consequently the only species that are attracted by flowers are the insect species that co-evolved to do so -- and humans.

I’m doubtful about this guess, not least because so many flowers are cultivated, and have been evolved through artificial selection to meet human standards of beauty. However, fallibilism has the answer: this guess should be checked. Are there other species that have to communicate across a wide species gap, and have they evolved beauty in order to do so? Are there flowers we do not find beautiful, and what are they communicating with? What about things we find beautiful that have not evolved to communicate across a wide species gap (trees, sunsets, starry night skies, ...)? Kevin Kelly instead conjectures that "It may be that any highly evolved form is beautiful"; that covers the trees and maybe (if Smolin is right) the starry skies: what explains sunsets?

Deutsch’s argument that criticism is the essential path to knowledge is enlivened by a particularly fine Socratic dialogue, between Socrates and Hermes, contrasting the rigid static Sparta and the more flexible, critical Athens.

pp232-3. If the Spartan Socrates is right that Athens is trapped in falsehoods but Sparta is not, then Sparta, being unchanging, must already be perfect, and hence right about everything else too. Yet in fact they know almost nothing. One thing that they clearly don’t know is how to persuade other cities that Sparta is perfect, even cities that have a policy of listening to arguments and criticism ...

Whereas if I am right that Athens is not in such a trap, that implies nothing about whether we are right or wrong about any other matter. Indeed, our very idea that improvement is possible implies that there must be errors and inadequacies in our current ideas.
...
... Could it be that the moral imperative not to destroy the means of correcting mistakes is the only moral imperative? That all other moral truths follow from it?

(There are, of course, analogues of the Spartan and Athenian philosophies in existence today.) The dialogue ends with a delicious little scene where Socrates is recounting what he learned from Hermes to his followers, and a puppyishly enthusiastic Plato keeps misunderstanding and misrecording everything he says. (This fits perfectly with Popper’s excoriation of Plato as one of the enemies of the Open Society.)

This imperative not to destroy the means of correcting mistakes leads to an interesting viewpoint on voting systems. "First Past the Post" (FPTP) leads to a government that does not represent a large proportion of the electorate, and other voting systems are suggested. Deutsch argues in favour of FPTP, not because it results in representative government (it doesn’t necessarily), but because it makes it easier to remove a bad government. This is fallibilism at work: what is key is not getting things right, but removing what is wrong. This is an intriguing viewpoint, although I’m not totally convinced, since FPTP can bias towards extreme governments trying to differentiate themselves from other parties (so no "better conjectures" are ever possible). However, it shows how a different viewpoint about what elections are for can lead to a different conclusion.

One feature he notes in explanations is that some are universal: they have unbounded "reach", much greater than the domain they initially described. Achieving universality is thus the beginning of infinity. Mathematics, DNA, and computation are universal in this sense.

p166. The best explanation of anything eventually involves universality, and therefore infinity.

This infinite reach is the underlying result of fallibilism. In fact, the book starts off looking at the consequences of this reach, with a scenario of far-future manipulation of matter in cubic light-years of space that makes some extropians look positively humble. This actually put me off for a while, but the rest of the book settles down after that. (This viewpoint wasn’t described until the end of the previous book.)

Although the book is mainly about the philosophy of fallibilism and good explanations, there are two rather more technical chapters. The first is on infinity; the second (which will come as no surprise to those familiar with Deutsch’s work) is on the many-worlds interpretation of quantum mechanics. And the chapter on infinity is there to support the many-worlds explanation, and also the idea of infinite reach. He starts off the infinity chapter with the familiar story of Hilbert’s Infinity Hotel, with its (countably) infinite number of rooms, and some of the counter-intuitive things that can happen there. (One nice example given is: guests in low room numbers are better off; every room number is unusually close to the beginning, so every guest is better off than almost all other guests!) This story is a preamble to showing the difficulties of defining probabilities over infinite sets, and how you need to define an order in which to traverse an infinite set of universes to make such probabilities well-defined, which is all needed for the many-worlds chapter. It is also needed to unpick arguments about fine-tuning, and that we are "probably" living in a computer simulation. Most of this explanation is in terms of countable infinities, whereas the many-worlds model appears to require an uncountable infinity, from the way it is described. Uncountable infinities are even more counter-intuitive.
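The Infinity Hotel bookkeeping is worth a moment. A small sketch of the standard room-shuffling argument (mine, not the book's text): when an infinite coachload arrives at the already-full hotel, each current guest moves from room n to room 2n, freeing every odd-numbered room.

```python
# Hilbert's Infinity Hotel, in miniature: the full hotel absorbs an
# infinite coachload without evicting anyone.
def reassign(n):
    """New room for the existing guest in room n (frees all odd rooms)."""
    return 2 * n

def coach_room(k):
    """Room for the k-th new arrival, k = 1, 2, 3, ..."""
    return 2 * k - 1

# No collisions, and every room is occupied again; check the first 20 rooms:
rooms = {reassign(n) for n in range(1, 11)} | {coach_room(k) for k in range(1, 11)}
print(sorted(rooms) == list(range(1, 21)))   # True

# The 'everyone is near the front' oddity: a guest in room n has only n-1
# guests ahead and infinitely many behind, whatever n is -- which is why
# naive probabilities over infinite sets need a traversal order defined.
```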

How does fallibilism square with the fact that we can calculate things from physical laws, and hence predict? Deutsch points out that this conflates the mathematical model with the physical reality it models. And he goes further: even mathematical proofs are embodied in physical devices, and so depend on physical laws.

p182. Only the laws of physics determine what is finite in nature.
p183. the mistake is to confuse an abstract attribute with a physical one of the same name. Since it is possible to prove theorems about the mathematical attribute, which have the status of absolutely necessary truths, one is then misled into assuming that one possesses a priori knowledge about what the laws of physics must say about the physical attribute.
p186. there is nothing mathematically special about the undecidable questions, the non-computable functions, the unprovable propositions. They are distinguished by physics only. Different physical laws would make different things infinite, different things computable, different truths -- both mathematical and scientific -- knowable.
p188. Whether a mathematical proposition is true or not is indeed independent of physics. But the proof of such a proposition is a matter of physics only. ... Mathematical truth is absolutely necessary and transcendent, but all knowledge is generated by physical processes, and its scope and limitations are conditioned by the laws of nature. ... A mathematical ’theory of proofs’ has no bearing on which truths can or cannot be proved in reality, or be known in reality; and similarly a theory of abstract ’computation’ has no bearing on what can or cannot be computed in reality.
     ... a computation or a proof is a physical process .... It works because we use such entities only in situations where we have good explanations saying that the relevant physical variables in those objects do indeed instantiate those abstract properties.
     ... the reliability of our knowledge of mathematics remains for ever subsidiary to that of our knowledge of physical reality. Every mathematical proof depends absolutely for its validity on our being right about the rules that govern the behaviour of some physical objects, like computers, or ink and paper, or brains. ... Proof theory is a science: specifically, it is computer science.

So Deutsch thinks computer science is a physical science. I suspect that not all of my colleagues would agree with him, although it is a stance that resonates with me. He also has arguments against the laws of physics being generated by some external universal computation, since the argument from "computational universality" is thereby circular:

pp190-1. ... the misconception that the set of classically computable functions has an a-priori privileged status within mathematics. But it does not. The only thing that privileges that set of operations is that it is instantiated in the laws of physics. The whole point of universality is lost if one conceives of computation as being somehow prior to the physical world, generating its laws. Computational universality is all about computers inside our physical world being related to each other under the universal laws of physics to which we (thereby) have access.

For the many-worlds chapter, Deutsch uses a concept I haven’t seen used before: the idea of fungible entities. The whole approach is illustrated with a science-fictional parable, and gives a clear explanation of the concepts. There is a slight wobble at the crucial point of explaining how two separated worlds can be reunited back into a single one (needed to understand quantum interference): this is done too fast. Everything else is given an intuition, but I felt the intuition here was much weaker. But the whole vision of the multiverse as an infinite set of universes gradually diverging is clearly laid out. Then, at the end, Deutsch brings in knowledge again:

p302. we sentient beings are extremely unusual [information] channels, along which (sometimes) knowledge grows. This can have dramatic effects, not only within a history (where it can, for instance, have effects that do not diminish with distance), but also across the multiverse. Since the growth of knowledge is a process of error-correction, and since there are many more ways of being wrong than right, knowledge-creating entities rapidly become more alike in different histories than other entities. As far as is known, knowledge-creating processes are unique in both these respects: all other effects diminish with distance in space, and become increasingly different across the multiverse, in the long run.

Deutsch emphasises that fallibilism is an essentially optimistic viewpoint, and that many others are inherently pessimistic. Pessimistic philosophies include Utopias, since their perfection does not admit any improvement or progress. Since new knowledge requires a creative step in the "guess and check" approach, it is unknowable before it has been discovered; attempts to "prophesy" through this unknowability can be a route to pessimism, such as prophecies about the "end of physics" (which occurred both at the end of the 19th century, just before quantum mechanics and relativity, and more foolishly at the end of the 20th century, since we know that we don’t know how to unify these two theories). Deutsch brings these arguments together in his Principle of Optimism: All evils are caused by insufficient knowledge [p212]. This is optimistic because it implies that all evils can be overcome by sufficient knowledge. Of course, we have to obtain that knowledge.

This leads to his penultimate chapter, where he talks about "sustainability". All through he has been criticising static societies, because in order to remain static, they must suppress change and criticism. Any such society is unsustainable, because there is always some problem never encountered before [invaders, a new disease or plague, resource collapse, earthquake, climate change (anthropogenic or not), supervolcano eruption, meteor strike, ...], and a static society does not have the resources needed to create a novel solution to the novel problem. So static societies inevitably eventually collapse.

p423. The Easter Islanders may or may not have suffered a forest-management fiasco. But, if they did, the explanation would not be about why they made mistakes -- problems are inevitable -- but why they failed to correct them.

He argues that the only sustainability is indefinite progress in an optimistic, dynamic society. The future is unknowable, because it will have new knowledge that cannot be predicted, and new problems caused by solutions to previous problems, and new potential disasters. Society needs to be structured to cope with such unknowability.

pp436-7. there is no resource-management strategy that can prevent disasters, just as there is no political system that provides only good leaders and good policies, nor a scientific method that provides only true theories. But there are ideas that reliably cause disasters, and one of them is, notoriously, the idea that the future can be scientifically planned. The only rational policy, in all three cases, is to judge institutions, plans and ways of life according to how good they are at correcting mistakes: removing bad policies and leaders, superseding bad explanations, and recovering from disasters.
     ...
     ... we need the capacity to deal with unforeseen, unforeseeable failures. For this we need a large and vibrant research community, interested in explanation and problem-solving. We need the wealth to fund it, and the technological capacity to implement what it discovers.

This is a description of "resilience", rather than of the more static "sustainability". The philosophy of fallibilism, that mistakes are inevitable so we need mechanisms to recover from them, gives a very different perspective on how we should order our societies. The final message is simultaneously upbeat and a warning: not only can we continue to gain new knowledge without bound, but we must, in order to survive.