Modification history

- 4 Jan 2002 -- new embryology section -- expanded emergence section -- reorganised/expanded research programme section
- 1 Jan 2002 -- FPGAs split into separate page
- Oct 2001 -- initial draft

This is a highly compressed whistle-stop tour of non-standard computation -- what it is, and what interesting, valuable and deep research can be done in this area.

Non-standard computation stands in contrast to "standard" or "classical" computation, by which I mean our classical Turing / von Neumann perspective. Over the last decade or so we have begun to understand just how narrow a perspective this is.

Our appreciation of the **physical embodiment of
computation**, our development of **biologically-inspired
computational models**, and our understanding of **emergent
systems** have reached exciting heights. Breakthroughs in all these
areas are occurring, or are on the horizon.

Firstly, these areas have begun to reach sufficient maturity that common themes
can be observed within and between them, and that interesting and deep
research questions can be asked. Yet they are not so well developed that
all their novelty has been mined -- there are a lot of exciting
fundamental new discoveries still to be made. The time is right to commit
to **Research Programmes** in non-standard
computation.

Secondly, there is now sufficient computational power and capacity available to implement or simulate large, interesting models -- getting qualitatively different results impossible to obtain any other way. For example, the UK's development of e-science and the GRID could provide such power. Additionally, non-von Neumann style devices such as Field Programmable Gate Arrays (FPGAs) offer exciting new possibilities for large scale simulations.

There is a categorical difference between a computational *model*
and the computational *device* that might implement it. A model is a
mathematical abstract description, say of an algorithm. A computer is a
physical device existing in the real world, subject to errors, wear and
tear, finiteness, and the actual laws of physics themselves, that
progresses through a series of steps to execute the algorithm.

The model tends to get most of the study in computer science. But we need to make explicit just how the choice of model is driven by the physical reality, and just how the view of what is computable is driven by the choice of model.

It is well-known that analogue computational models describe
computations that cannot be achieved in discrete digital computational
models -- they can achieve what to a digital model looks like an *infinite*
amount of computation in a finite time. Since the physical world is itself
digital, these analogue models cannot be effectively implemented.
(However, the world is digital only on *very* small scales. How much
computation can an analogue implementation actually do?)

Quantum computational models also describe computations that cannot be
achieved effectively with classical digital models. But the physical world
*is* quantum mechanical, so these models potentially can be
realised. **Turing machines are not an adequate description of
computability**. The real world routinely performs quantum-mechanical "computations"
that cannot be effectively implemented on a Turing machine. Since the
Turing machine is the basis of all our theory of computation, algorithms,
programming languages, specification models, refinement calculi, and so
on, a lot of work needs to be done to upgrade these to include quantum
models. We might have to wait a while for commercial quantum computers,
but when they arrive, Moore's law suggests they will grow in power *very
quickly*. Doubling a classical computer's register length (roughly)
doubles its computing speed, but adding just *one bit* to a quantum
computer's register doubles *its* speed. We need to be ready to
exploit them once they appear.
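
To make the scaling claim concrete, here is a rough back-of-the-envelope sketch (in Python, assuming numpy is available): a classical n-bit register holds one of 2^n values, whereas a quantum register of n qubits is described by 2^n complex amplitudes, so each extra qubit doubles the size of the state being manipulated.

```python
# Rough illustration of the register-scaling claim (assumes numpy).
# A classical n-bit register holds one of 2**n values; a quantum register
# of n qubits is described by a state vector of 2**n complex amplitudes,
# so adding a single qubit doubles the state being manipulated.
import numpy as np

for n in range(1, 6):
    state = np.zeros(2 ** n, dtype=complex)  # quantum state vector
    state[0] = 1.0                           # start in |00...0>
    print(f"n = {n}: {2 ** n} classical values, {state.size} quantum amplitudes")
```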

When we perform a real computation, we choose to interpret certain
observations of the executing physical device as corresponding to aspects
of the mathematical model. But **the concrete physical device includes
features not in the abstract computational model**. (Consider protein
folding -- if DNA is modelled abstractly just as a string of bases, then
it is puzzling that greatly separated positions are correlated. The
solution is clear once we consider the physical realisation in the
concrete real world: these correspond to spatially nearby positions when
the corresponding protein is folded in 3D.) We can choose to observe other
features, such as the time taken by a computation, or the power drain,
which might correspond to different computations. Known to the security
world as "side channels", these have recently come to the fore
in *Differential Power Analysis* (DPA), a way of cracking crypto
algorithms. Focussing on the abstract model to the detriment of the
physical implementation leads to oversights.

**Cellular automata** are inspired by certain physical models
(crystals, spin networks, etc.) Ironically for a so-called "non-Von"
paradigm, they were developed by von Neumann. They perform their
computations as an array of simple cells, each a state machine that evolves
through time in a way dependent on its own current state and the states of
neighbouring cells. Conway's "Game of Life" is the most
well-known CA. CAs can act as universal computers, and have emergent
structures (such as the famous "gliders"). Wolfram has studied
1D CAs in detail, and shown their close connections with chaos theory.
Unsurprisingly, due to some of their inspiration, they can be used for
physics simulations, from study of condensed matter phase transitions to
self-organising quantum gravity spin networks.
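
As a concrete illustration of the kind of 1D CA Wolfram studies, here is a minimal sketch in Python (the wrap-around boundary and the particular rule are my own choices for illustration):

```python
# Minimal 1D cellular automaton in the style of Wolfram's elementary CAs.
# Each cell is 0 or 1; its next state depends only on itself and its two
# neighbours, via an 8-entry lookup table (the "rule"). Periodic boundary.
def step(cells, rule):
    n = len(cells)
    rule_bits = [(rule >> i) & 1 for i in range(8)]
    return [rule_bits[(cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n]]
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                     # a single live cell in the middle
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells, 110)      # rule 110, known to be computation-universal
```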

Our computing artefacts are not the only things that can be considered
to perform computations. Computation is in the eye of the beholder, and we
can choose to perceive many "natural" devices as performing
computations. And these do not fall into the classical von Neumann model.
Since much of the natural world can be perceived as doing some form of
**search, optimisation or pattern recognition**, and since many
problems of interest can be cast in these terms, these computations, as
well as being interesting in their own right, hold the promise of
inspiring novel problem-solving techniques.

These approaches tend to be "non-symbolic" computations. They
explore a staggeringly vast phase space, yet can necessarily consider only
a vanishingly small part of it. Obviously they can provide no guarantees
of *optimal* results. However, they are astonishingly good at
providing "satisficing" results. (But we get too hung up on
theoretical best case analyses: "good enough" really *is*
good enough.) They also tend to be robust in the presence of noise, error,
and change -- in other words, in the presence of the real world.

**Neural networks** are inspired by the workings of the brain. A
network of weighted nodes can be used as a robust pattern matcher, and
pattern categoriser. The "learning algorithms" that decide on
the weights are a variety of optimisation algorithms, not as biologically
inspired as the network itself.
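
A minimal sketch of this idea (my own toy example, not a serious network): a single weighted node trained by the classic perceptron rule to recognise a trivially simple pattern.

```python
# A single "neuron": weighted inputs, a threshold, and a simple learning
# rule (the perceptron rule) that adjusts the weights from examples.
def train(samples, epochs=50, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x0
            w[1] += lr * err * x1
            b += lr * err
    return w, b

# toy linearly separable pattern: logical AND
data = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0), ((1, 1), 1)]
weights, bias = train(data)
print(weights, bias)
```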

**Darwinian evolution** provides another inspiration. Genetic
algorithms, genetic programming, and much Artificial Life research uses
concepts such as mating, variation and selection of the fittest from
biological evolution.
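
A minimal genetic-algorithm sketch (my own toy illustration): bit-string individuals, a trivial fitness function, and the mating, variation and selection steps mentioned above.

```python
import random

LENGTH, POP, GENS = 20, 30, 40

def fitness(ind):
    return sum(ind)                            # toy problem: maximise the number of 1s

def mate(a, b):
    cut = random.randrange(1, LENGTH)          # one-point crossover ("mating")
    child = a[:cut] + b[cut:]
    return [bit ^ (random.random() < 0.02) for bit in child]   # mutation ("variation")

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                   # selection of the fittest
    pop = parents + [mate(random.choice(parents), random.choice(parents))
                     for _ in range(POP - len(parents))]

print(max(fitness(ind) for ind in pop), "out of", LENGTH)
```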

Evolution in the wild involves many species. **Ecology** studies
heterogeneous agents coevolving, involved in cooperation and parasitism,
in arms races, and forming dependency webs.

Darwinian evolution is slow. Nothing the agents learn during their lives
is passed on to their offspring, except at second order. **Lamarckian
evolution** is faster -- and can be considered the route taken by
intelligent, learning agents. Economics, trading, and so on can be thought
of in Lamarckian evolutionary terms. **Swarmware** -- flocking or
schooling behaviour -- is inspired by the behaviour of large swarms of
similar agents copying their neighbours' behaviours, and thereby
potentially improving their own.
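
A toy swarm sketch (my own illustration): agents repeatedly nudge their headings towards the average heading of their neighbours, and a common direction emerges with no central control.

```python
# Toy "swarmware": each agent copies (a little of) the average heading of
# its neighbours; a shared direction emerges without central coordination.
import random

N = 20
positions = [random.uniform(0, 10) for _ in range(N)]   # fixed, for simplicity
headings = [random.uniform(-1, 1) for _ in range(N)]

def neighbour_mean(i):
    near = [headings[j] for j in range(N) if abs(positions[i] - positions[j]) < 2.0]
    return sum(near) / len(near)                # always includes agent i itself

for _ in range(50):
    headings = [0.9 * headings[i] + 0.1 * neighbour_mean(i) for i in range(N)]

print(min(headings), max(headings))             # headings converge towards shared values
```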

The **immune system** is another biological pattern recognition and
classification system -- it learns to *distinguish* self from other,
and *destroy* the other. Additionally, the immune system performs
important maintenance and repair functions. Whereas evolutionary paradigms
are interested in improving the performance of the best of a large population
of essentially similar agents, immune systems' behaviour is an emergent
property of the entire population of diverse agents, and improves by
weeding out the weakest players, replacing them with agents as different from
them as possible. The immune system is probably currently one of the
computationally least understood of all the biological paradigms.
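
One much-cited artificial-immune-system technique is negative selection; the toy sketch below (my own illustration, with an arbitrary "r contiguous symbols" matching rule) shows the basic shape: generate random detectors, throw away any that match "self", and use the survivors to flag "non-self".

```python
import random

def matches(detector, string, r=4):
    # detector and string "bind" if they agree on r contiguous positions
    return any(detector[i:i + r] == string[i:i + r]
               for i in range(len(string) - r + 1))

LENGTH = 12
self_set = {"010101010101", "000011110000", "111100001111"}

detectors = []
while len(detectors) < 50:
    d = "".join(random.choice("01") for _ in range(LENGTH))
    if not any(matches(d, s) for s in self_set):    # negative selection step
        detectors.append(d)

probe = "011010011010"
print("non-self!" if any(matches(d, probe) for d in detectors) else "looks like self")
```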

Biologically-inspired models are often used where the problem is too
difficult to solve by more conventional means, and part of that difficulty
lies in determining an appropriate initial state for a large complex
system. Appealing again to biology, one solution is to start from a
relatively simple initial state, and let it "grow" to a suitable
maturity, rather than attempting to "switch on" a large
arbitrary, or hand-crafted, configuration that is almost certainly
non-satisficing. So one area of biology that should have important lessons
for computing is **embryology**, the study of development from a
single cell to an entire organism. For example, nanotechnology assemblers
might have to grow their own numbers before they start to build the
artefact of interest. In a multi-agent network,
the initial population might start small, and decide for itself how to
grow to fulfil its purpose.

Intelligence via embodiment and bottom-up layering of functionality is
promulgated as an alternative to top-down symbolic processing. Brooks'
**subsumption architecture**, inspired partly by insect models, has
made great strides in intelligent robotics. Collision avoidance, path
finding, mapping, and so on can arise naturally from layered
architectures. Embodiment ensures that problems of noisy sensors and
faulty actuators are addressed, and robust solutions found.
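
A toy layered-control sketch (my own simplification, not Brooks' actual architecture): each layer proposes an action from the raw sensor readings, and a more basic layer can override the layers above it.

```python
# Lowest layer: turn away from close obstacles; otherwise defer upwards.
def avoid(sensors):
    if sensors["front_distance"] < 0.3:
        return "turn_left"
    return None

# Higher layer: default exploratory behaviour.
def wander(sensors):
    return "forward"

LAYERS = [avoid, wander]             # ordered from most to least basic

def control(sensors):
    for layer in LAYERS:             # the first layer with an opinion wins
        action = layer(sensors)
        if action is not None:
            return action
    return "stop"

print(control({"front_distance": 0.2}))   # -> turn_left
print(control({"front_distance": 2.0}))   # -> forward
```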

These techniques can be fruitfully combined, too. Evolutionary algorithms have been used to find good neural network topologies. CAs can be used to implement swarm algorithms. Small neural nets can be used for a layer in a subsumption architecture. And so on. This area is in its infancy. There is vast potential for using biologically-inspired algorithms to optimise and classify a vast range of problems -- and of meta-problems (finding not only good values of problem-specific parameters, but second order parameters describing the shape of the problem space itself, then third order parameters, and so on).

We can choose to study these natural computational models for two separate, independently valuable reasons. • We can try to understand how they work in the real world, getting a deeper understanding of biology, or economics. • And we can let them inspire ways to solve our own, different problems -- emphasising the parts that help, de-emphasising the parts that do not.

Chaos theory, complex adaptive self-organising systems, artificial life:
all these areas of study show that systems become *qualitatively*
different when they have many, rather than a few, components. Such systems
have **emergent properties** -- properties of the entire system that
are not properties of the individual components. As our own artefacts grow
in complexity, they too exhibit emergent properties, some desirable, some
rather less so.

Currently the term "emergent" is often used as a synonym for "unpredictable"
or just plain "incomprehensible". However, if we are ever to
advance beyond simple computational artefacts, we need a **science of
emergence**. It may be true that the precise prediction of a complex
system is impossible: it may be **computationally irreducible**. But
that does not mean that no progress can be made (any more than the
existence of the Halting Problem prevents us from proving termination of
interesting programs). We can better understand the process of emergence:
we can classify, we can discover structures and processes, we can develop
laws governing general properties, we can find areas of greater
predictability, and so on.

The role of **contingency** is crucial. Given the vastness of the
phase space explored by complex growing systems, they can explore even in
principle only a vanishingly small portion of it: they are highly **non-ergodic**.
(For example, the number of proteins actually expressed is vanishingly
small compared to the number that could be expressed in the age of the
universe, which itself is even more vanishingly small compared to the
total number combinatorially possible.) In nature, precisely which
portion is explored is controlled as much by chance as by necessity. The
role that chance plays in engineered emergence needs to be explored
further.

Complex systems should be "grown" rather than "switched
on". The form of growth is a kind of emergent property, too. For
example, an entire individual grows, emergently, from a single cell. We
are used to thinking of dynamical systems having *initial transients*
that decay soon after being switched on. When a system instead grows, the
picture is very different. The "transient" never decays: **initial
conditions** crucially affect future development. We are also used to
thinking in *equilibrium* terms: an evolving, growing, in some sense
*living*, system is a **far-from-equilibrium** system; we need
new laws to describe and new intuitions to think about such systems.

**Change** is an important property of emergent systems. As a system
grows, as its scope increases, the way it works needs to change. As a
system's environment changes (possibly in reaction to the system's own
actions, possibly for other external reasons) the system needs to change
the way it interacts. Static structures, processes and rules are not a
good model when change is needed. Not only do we need to engineer the
initially desired emergent properties, we need to engineer ability to
change in desired, if unanticipated, ways.

The behaviour of a complex emergent system is as strongly influenced by
its **environment** as by its intrinsic structure. Evolved systems can
be considered to encode interesting aspects of their environments that
they need to respond to. Move a system into a different environment, and
it ceases to be such an effective solution to the problem (of survival in
the natural world, of its engineered purpose if it is a designed
artefact). The same system may solve different problems in different
environments; different systems may solve the same problem in different
environments. The effect of environment on emergence needs to be studied.

An emergent property is at a higher level than its component parts, for
example, cells emerging from molecules, organisms emerging from cells. As
the example shows, systems can have many layers of emergence, each
providing components for the next. Current experiments in generating
emergent properties seem to "stick" at around two levels of
emergent hierarchy. This may be because the assumed combinators, or *laws*,
are fixed. Real emergent systems have hierarchical **emergent laws**,
too. The emergence of hierarchical combinators needs to be studied.

If Kauffman's arguments
about quantum decoherence are right, that the universe is the way it is
because it *has* to be that way, and if
Rees and
Smolin are right, that the
parameters are *very* finely tuned to allow complexity -- then maybe
we can't build arbitrary physical laws into our simulations and hope to
get realistic complex outputs -- maybe we have to build in the real
physical laws?

This may not be a fundamental problem, however. As opposed to natural
emergent systems, there are also *artificial* ones -- such as global
economics emerging from local trading rules, and other social systems.
They have key differences from natural systems: the system's existence is
not so dependent on precise parameter tuning (as evidenced by the
existence of different kinds of such systems); the components (now agents)
may *break* the "laws"; and the "laws" may *differ*
at different places and times, even to the extent of contradicting other
laws. Such artificial systems are potentially of even more interest than
natural emergence, since we want to engineer emergence in our artefacts in
order to achieve a *purpose*.

The realisation that physical embodiment affects computational models,
and that biological systems can be considered as computational models,
should change the way we look at computer science. Common themes run
through these areas: **massive parallelism**, **emergent properties**,
and **simulation**. These are enormous areas, and several research
projects spring to mind. A few of these are outlined below, and in the
accompanying FPGA page.

We would like to be able to design complex multi-agent systems to have desirable, and not to have undesirable, emergent properties. There is currently little or no theory about how to do so.

We need to understand the current status of emergence research, to build up a language and classification of emergent systems.

- Investigate and categorise classes of emergence, and classes of systems that exhibit these. Include physical, chemical and biological emergence (molecules, cells, creatures, swarms, intelligence, avalanches, ...), artificial emergence (economic and legal systems, ...), and computational emergence (simulations of biological emergence, cellular automata, ...).
- Investigate homogeneous emergence (all agents are similar) and heterogeneous emergence (varying agents)
- Investigate emergent *structures*, emergent *behaviours*, and emergent *laws and combinators*
- Investigate how the complexity of agent behaviour is determined by its complex environment

We need to be able to experiment with emergent systems, to increase understanding, to answer questions that require further empirical evidence, and as a route to eventual design. (Computational resources for investigating emergent properties are likely to be large. An FPGA simulator might be appropriate.)

- design and implement a "meta-emergent" system, where the combinators, as well as the atoms, can evolve hierarchically
- include an emergently, endogenously, and co-evolvingly complex environment too
- echoing Smolin's investigation of how physical law parameters affect the evolution of the real universe, investigate how artificial law parameters affect possible artificial worlds
- determine the parameter space of "interesting" artificial worlds, and how those parameters relate to the real physical world
- determine parameter ranges and kinds that enhance or suppress particular classes of emergence
- investigate the effect of seeding initial systems, and the subsequent "transient" conditions
- investigate the effect of the shape and colour of the noise used to provide (pseudo-)randomness for contingency

We need to be able to design and build systems with desired emergent properties.

- in large systems, the interconnections between the parts become more significant than the parts themselves. Investigate the theoretical basis for designing and describing such systems
- investigate how to use the biologically-inspired computational models to think about, design and implement large numbers of components, in particular embryology (for growing the system from a "seed" to "maturity") and immunology (for system maintenance, protection and change)
- with ubiquitous computing (smart devices, active badges, etc), investigate how the numerous components can effectively communicate and cooperate
- from mobile software agents roaming telephone networks to "fast, cheap and out-of-control" robot devices exploring Mars, investigate how to design the components that exhibit the desired emergent properties
- investigate how to design, program, grow and control nanotechnological devices
- investigate fractal algorithm compression, as a route to encoding large instructions in small programs

I present the two areas I am most interested in, since they fit well with my main interest of Engineering Emergence. The many other areas of artificial biology have been relegated to section 4, and left as a series of interesting questions.

- real immune systems use proteins, with a vast variety of shapes, to bind to (recognise) ligands. Investigate good computational analogues of this vast variety of protein shapes
- investigate the use of artificial immune systems to detect, and destroy, computer viruses, damaged "self"-agents, and so on
- investigate how they can be used to detect, and counter, all sorts of intrusions and anomalies (hacking attacks, denial of service attacks, imminent failure of equipment, ...)
- investigate how to prevent artificial auto-immune diseases such as "artificial arthritis" and the like
- investigate the computational analogue of the immune system's repair and maintenance functions

- investigate how biological embryology can inspire and illuminate the process of computational system growth and change
- investigate how emergent properties relate to the growth and control of multitudes of nanotech "assemblers"
- investigate how much of the "growth program" is best encoded in the (fixed) "DNA", and how much in the (variable) environment
- determine computational analogues of cell death, both in development (unneeded scaffolding) and in maturity (damage, decay, changed circumstances)

Much effort is going into building actual quantum computers: that is not my interest here. Rather, we should be ready to exploit these devices once they appear. Current descriptions of quantum algorithms talk in terms of manipulating unitary matrices, which is equivalent to describing classical algorithms in terms of logic gates. (Much) higher level languages are needed, along with all the support paraphernalia that goes with them, both to give them a solid foundation and to make them usable in software development.
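
To show what that low "unitary matrix" level of description looks like in practice, here is a tiny sketch (in Python, assuming numpy): a two-qubit register held as a state vector, with a Hadamard and a CNOT applied as explicit matrices to produce an entangled state.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1                                     # start in |00>
state = CNOT @ (np.kron(H, I) @ state)           # H on the first qubit, then CNOT
print(np.abs(state) ** 2)                        # -> 0.5 for |00>, 0.5 for |11>
```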

- examine the fundamentals of quantum computability -- the Universal Quantum Turing Machine
- determine the fundamental building blocks of quantum programming -- is there a simple extension of GCL? of classical logic languages? of classical assembly languages? -- is an entirely new paradigm needed?
- design suitable assembly level and high level Q-languages -- analogues of classical imperative, declarative, and functional languages -- new fully quantum languages
- investigate suitable reasoning systems and refinement calculi for these languages.
- given the non-intuitive nature of QM, investigate how ordinary mortals might program and reason in such a language. Does Deutsch's many-worlds description provide the best metaphor, or are there better ones?
- investigate the requirements for "amplifying" the desired answer -- classify algorithms that need (certain kinds of) amplification and those that do not
- investigate the algorithmic complexity of quantum algorithms: time, space, and "parallel universe space"

Computational resources for simulating quantum algorithms are likely to be exponentially large. An FPGA simulator might be appropriate.

- implement a quantum computer simulator that supports the designed Q-language(s).
- use the languages to derive and implement existing Q-algorithms (Min Max, Shor's period finding algorithm used for factorization, and Grover's algorithm for DB searching)
- design and implement (simulate) new Q-algorithms -- investigate how certain well-known classical algorithms may be "quantised"
- real quantum devices can produce genuine random numbers; classical digital simulations can produce only pseudo-random numbers. Investigate the differences this causes, and whether a quantum computer simulator needs to be hooked up to a genuinely random number source.
- given that quantum execution is *in principle* unobservable, investigate debugging and testing techniques for these languages
- design ways of "visualising" Q-algorithm execution

**Neural networks**. The "learning algorithms" that decide
on the weights are a variety of optimisation algorithms, not as
biologically inspired as the network itself. • Can analogues of the
chemical communication in the brain be usefully added? • Can more
biologically-inspired learning algorithms be developed?

**Darwinian evolution**. • Can different biologically-inspired
mating algorithms be developed? Asexual, "symbiotic" (as in
mitochondria in cells), polyploidal, "gene-hopping", ... •
Can more "natural" chromosomal encodings be found? • Can
better cost functions be developed?

**Ecology**. • What is the best way to use coevolution to breed
more robust genetic algorithm solutions?

**Subsumption architecture**. • How can subsumption devices be
designed to perform desired tasks? • How can many small devices be made to
cooperate to perform larger tasks?

**Cellular automata**. • What is the effect of dimensionality,
even fractal dimensions? • When to use synchronous *v*.
asynchronous updates? • What are the trade-offs between size of
neighbourhood, shape of neighbourhood (square, hexagonal, quasi-crystal
Penrose tilings, irregular), and the number of states? • What can
non-deterministic CAs do? • What can quantum CAs do? • What
would be a sensible high level CA programming language?

**Physical devices**. • What other kinds of computational
models can a physical device present? • How many different models can
a single device present? • How can we deduce what models are possible
in advance? • How can we design in, and design out, suitable and
unsuitable observational models from our devices?

I would like to thank John Clark for several stimulating discussions in this area, and for commenting on these drafts.
