As technology has matured, the requirements of computing have evolved. Many state-of-the-art problems now focus on simulating and modelling physical phenomena, such as simulating neural networks, modelling emergent properties, and implementing massive parallelism. The digital computing paradigm, however, is highly abstracted from the physics that makes these systems interesting, which can lead to fragile designs and excessive computing requirements.
In our work, we propose that, instead of forcing matter to conform to digital logic, we can directly exploit information from complex physical interactions to solve modelling and prediction problems. We therefore argue that matter can be trained and perturbed to solve machine learning problems without heavily constraining the base substrate. To achieve this, we take an unconventional material, configure it through evolution to create an (unconventional) virtual machine, and then train that machine to perform computational tasks (unconventional computation).
By implementing a low-level abstraction, we can make sense of all observable information whilst still utilising the full complexity of the substrate. In our (static) materials this is implemented using an adapted Reservoir Computing model. This model allows us to train high-dimensional (open) systems without high-level abstraction. At its centre, the “reservoir” (i.e. the material) projects and separates embedded features within its input and stores them as addressable spatio-temporal states. These states are then extracted and combined (in the digital domain) to form a learned output using a trainable output filter.
The adapted reservoir model is adjustable through three important parameters: the microscopic material function, the macrostate projection, and the output filter. Computation is shared across these parameters and is heavily weighted towards the material's function and projection.
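The division of labour described above, in which the substrate does the heavy lifting and only the output filter is trained, can be illustrated with a minimal Reservoir Computing sketch. Here a random recurrent network stands in for the material (all dimensions, the delay-recall task, and the ridge parameter are illustrative assumptions, not the configuration used in our experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 50-node simulated "material" driven by a 1-D input.
N, T, washout = 50, 300, 50

# Fixed random weights stand in for the untrained microscopic material function.
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

# Illustrative task: reproduce the input signal delayed by two steps.
u = rng.uniform(-1, 1, T)
target = np.roll(u, 2)

# Drive the reservoir and collect its spatio-temporal states.
x = np.zeros(N)
states = []
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states.append(x.copy())
X = np.array(states)[washout:]   # discard initial transient
y = target[washout:]

# Only the output filter is trained, here by ridge regression on the states.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
pred = X @ W_out
nmse = np.mean((pred - y) ** 2) / np.var(y)
```

The reservoir weights are never adjusted; training touches only the linear readout, which is the property that makes physical substrates with unknown internals usable as reservoirs.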
For the model to fit and for training to work, the material needs to form a suitable reservoir. In some cases, we argue, a reservoir may only exist or emerge in a material under stimulation. This hypothesis therefore suggests that a configurable material can be mapped to anywhere within the spectrum of all possible reservoirs, from poor to excellent.
Rather than manually searching for good reservoirs, we suggest applying evolutionary algorithms. Research in evolution-in-materio has demonstrated that matter can be configured via evolution to solve complex problems. Evolution is advantageous here for two reasons: i) the search is performed on a black box, i.e. knowing the exact microstates that achieve a particular macrostate is somewhat redundant, similar to how a reservoir functions; ii) evolution is not restricted by domain knowledge, e.g. it can exploit properties or “defects” unknown to the designer, which could lead to more efficient designs.
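A black-box search of this kind can be as simple as a (1+1) evolutionary strategy: mutate a configuration (e.g. a vector of stimulation voltages), evaluate the resulting reservoir, and keep the child if it scores no worse. The sketch below uses a toy quadratic quality function purely as a stand-in for a real material evaluation; the function, dimensions, and step size are all assumptions for illustration:

```python
import random

def reservoir_quality(config):
    # Toy stand-in for evaluating the configured material as a reservoir.
    # The search only ever sees config -> score, never the microstates.
    return -sum((c - 0.3) ** 2 for c in config)

def one_plus_one_es(dims=8, steps=200, sigma=0.1, seed=1):
    """Minimal (1+1)-ES: Gaussian mutation, keep the child if no worse."""
    random.seed(seed)
    parent = [random.uniform(-1, 1) for _ in range(dims)]
    best = reservoir_quality(parent)
    for _ in range(steps):
        child = [c + random.gauss(0, sigma) for c in parent]
        score = reservoir_quality(child)
        if score >= best:
            parent, best = child, score
    return parent, best

config, quality = one_plus_one_es()
```

Because selection compares only scores, the same loop applies unchanged whether `reservoir_quality` is a simulation or a measurement taken from a physical substrate.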
The concept of evolving unconventional materials and training them as reservoirs has been shown in previous work. In that work, randomly dispersed networks of carbon nanotubes (in polymers) form reservoirs trained on multiple temporal tasks. This first demonstration of the principle has provided results competitive with other methodologies and unconventional systems. It has also highlighted both the future potential and the current challenges of the concept.
Dale, M. et al. 2016. Evolving Carbon Nanotube Reservoir Computers. In: Proceedings of the 15th Conference on Unconventional Computation and Natural Computation (UCNC) (accepted).
Dale, M., Miller, J.F., Stepney, S., Trefzer, M.A. 2016. Modelling and Training Unconventional in-Materio Computers using Bio-Inspired Techniques. In: Late Breaking Abstracts booklet, ALife 2016, Cancun, Mexico, pp. 11–12.