EASy MSc 1998-2000
Simulation of Adaptive Behaviour Essay

Maintaining the continuity of complexity in artificial adaptive systems


This essay presents a fusion of Pattee's [1] discussion of measurement as evidence of emergence in evolution with Keijzer's [2] discussion of proximal and distal environments. The aim is to substantiate Keijzer's 'armchair worries' by means of a perspective derived from Pattee. Evidence from the evolutionary history of sensory-motor complexes will be used to show that the different discernible measurement 'devices' at play in a system (natural or artificial) that is behaving adaptively must be in balance if the system is to produce stable, consistent behaviour. I will endeavour to define terms in a separate section before using them, as this will clarify the text.

Problems for Adaptive Agents

Before introducing the measurement-taking/measurement-storage description of adaptive behaviour, it is necessary to describe the problems that such a description should hope to shed light on.

The Frame Problem

Deciding what to do in a given situation and working out the implications of that action can take too long for an effective solution to be reached. Animals seem to have solved this problem somehow. The 'red in tooth and claw' dynamics of evolution by natural selection do not allow lengthy modelling/planning/decision-making systems to persist in many animals' behavioural outlays. Such systems are replaced with 'quick and dirty' solutions. Strictly, they are not replaced - they do not arise in the first place. The frame problem can be seen in terms of two linked sub-problems:

1) Reduction of the breadth of search. Reducing the many behavioural options to a smaller selection.
2) Reduction of the depth of recursion performed when gaining knowledge of the possible consequences of available actions.
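The cost that these two reductions attack can be made concrete with a toy calculation (my own illustration, not from the cited literature): in a uniform search tree with branching factor b explored to depth d, the number of nodes visited is a geometric sum, so modest trims to either breadth or depth collapse the work dramatically.

```python
def nodes_visited(b, d):
    """Nodes in a uniform tree of branching factor b searched to depth d."""
    return sum(b ** k for k in range(d + 1))

# breadth 10, depth 5: 111,111 nodes to examine
full = nodes_visited(10, 5)
# halving the breadth of options considered:
narrower = nodes_visited(5, 5)     # 3,906 nodes
# cutting the depth of consequence-chasing instead:
shallower = nodes_visited(10, 2)   # 111 nodes
```

In this toy tree, depth reduction is the bigger win, which is one way of reading why 'quick and dirty' reactive solutions that barely look ahead at all are so cheap.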

Where to put the detail in a realisation (see below) of an adaptive system?

When designing an autonomous agent, where should the degrees of freedom be such as to allow the emergence of non-trivial behaviour? Is there any meaning in building a system with a huge number of available internal states that is deprived in terms of sensors and motors?


Pattee [1] distinguishes between simulations, realisations and theories. He describes them something like this:

Simulation is metaphorical, not literal. Features of a simulation 'stand for'; they do not realise. Simulation designers tend to focus attention on what from the real system is in the simulation, with a tacit awareness of what is not. There are also features in the simulation that are not in the real system: 'the frame setting off the painting'. The theory/knowledge of the real system inspires the details of the simulation. A good simulation must at least represent the essential functions of a realisation of the system.

A realisation is a literal, substantive, functional device. Using Pattee's terms, I would say that autonomous agent research is interested more in realisations than simulations as final results. Realisations will be the focus of attention in this writing.

Theories help tie these things together. A theory must be all-conquering - a simulation can be used to support a theory, but a theory must have conceptual coherence, simplicity and elegance. Theory is the best we have at any given time.

It is important to distinguish these three as it provides a background for the discussion of measurements.

Emergent Measurement

Clark [3] talks about emergence as the appearance of novel parameters from a set of interacting subsystems. He cites the example (taken from Resnick [4]) of simulated termites that can order scattered wood chips into piles using just two simple rules: 'If you are not carrying anything and you bump into a wood chip, pick it up' and 'If you are carrying a wood chip and you bump into another wood chip, put your wood chip down'. Over time, several discrete piles are seen to appear. During a run of the system, the removal of the final chip from a one-time pile permanently blocks that locus from further chip dropping, because of the rules. As time goes on, more and more loci become blocked in this way and piles appear. This locus blocking is the emergent 'parameter'. In a sense there is an implicit behaviour in the system. Stein and Meredith [5] hit on a key tenet of adaptive systems here: the behaviour of an adaptive (adapted) system is integral to its structure - it behaves because it has to.
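The two rules are simple enough to sketch directly. The following is a minimal one-dimensional version of my own construction (Resnick's original is a 2D setup): termites random-walk on a ring of cells, and the only chip-handling behaviour is the two quoted rules. No code anywhere decides where piles should go; clustering, if it appears, is a property of the run.

```python
import random

random.seed(0)

SIZE = 100          # cells on a 1-D ring world
N_CHIPS = 30
N_TERMITES = 10
STEPS = 20000

# chips[i] is True when cell i holds a wood chip
chips = [False] * SIZE
for i in random.sample(range(SIZE), N_CHIPS):
    chips[i] = True

termites = [{"pos": random.randrange(SIZE), "carrying": False}
            for _ in range(N_TERMITES)]

def n_piles(chips):
    """Count contiguous runs of chip-holding cells (the 'piles')."""
    return sum(1 for i in range(SIZE) if chips[i] and not chips[i - 1])

for _ in range(STEPS):
    for t in termites:
        prev = t["pos"]
        t["pos"] = (t["pos"] + random.choice((-1, 1))) % SIZE
        if chips[t["pos"]]:
            if not t["carrying"]:
                # rule 1: not carrying and bumped into a chip -> pick it up
                chips[t["pos"]] = False
                t["carrying"] = True
            elif not chips[prev]:
                # rule 2: carrying and bumped into a chip -> put yours down
                chips[prev] = True
                t["carrying"] = False

print("piles remaining:", n_piles(chips))
```

Note the asymmetry that drives the clustering: chips are picked up anywhere, but are only ever dropped adjacent to another chip, so isolated chips migrate into existing runs.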

Pattee describes general emergence as frozen accidents stabilised by their environments, e.g. driving on the left-hand side of the road. He describes three forms of emergence. The third is the relevant one here. It is also what Pattee considers to be 'one of the more fundamental test cases for emergent behaviour in artificial life models'. This is the emergence of devices to measure the environment. The primitive concept of measurement is an ability to recognise, appreciate or classify new aspects of the environment. A newly evolved enzyme allows an organism to appreciate a new element of its environment; for example, the enzyme might facilitate the metabolism of a new food source. In Tom Ray's Tierra [6] the organisms evolved the ability to recognise and use each other's code. The fundamental test for emergence in an artificial system is this: the organism (as he calls it), or the adaptive system, must construct a measuring device that allows it to make a classification of the environment, which it stores somehow. The quality of this measuring device defines the success of the system (as, I think, does the quality of the storage device). The measuring device is optimised by evolution. The problem he has with artificial systems (e.g. simulations) is that they do not generate new measurements. Rather, they work on pre-existing measurements, in the form of data structures in a computer's memory. He would presumably not have this problem with a realisation of an artificial system, such as an autonomous agent acting in the real world. So is the termite example emergent in Pattee's sense? It is a simulation, but nevertheless a novel measurement of the environment is made. The measurement is 'where should the piles of wood chips go in this environment?'. But the simulated termites have not evolved the ability to make this measurement - it emerges from the simple rules as they interact with the properties of the environment. In a sense, the rules are the measuring devices, and these are not evolved. Maybe Pattee would demand that the termites' rules be evolved under the selection pressure of good pile making.

So we have this idea of measurements and storage of measurements. What constitutes a measurement, and what constitutes the storage of a measurement? The measuring device could be a sense organ and the storage could be an impact on the internal state of the organism. A child hears an ice cream van and she becomes excited. A robot senses a wall in front of it and it starts moving backwards. This can be extended to describe the environmental sensory-motor coupling. The motivational state of a robot, as discussed in McFarland [7], is a kind of storage device for measurements of the outside world. The outputs of the measuring devices possessed by the robot (e.g. sense data) are stored in the form of their influence on that robot's motivational state, e.g. 'big black thing in front of robot lowers desire to move forward'. Or, in a less explicit system such as a dynamic recurrent neural net, 'no visual sense data (black shape) -> effect on some activity pattern near the leg motor outputs'. The form in which the measurements are stored is certainly not an explicit representation of the external world. As Harvey [8] might say, who is representing what to whom using what? Rather, the motivational state is an implicit, dynamic storage system for these measurements. So measurement and storage can be used to describe a sensory-motor coupling. It is possible to extend the measurement concept further by introducing another level of measurement. The internal state of the robot has an effect on its behaviour. In this sense the behaviour becomes a dynamic, implicit storage system for the progression of internal states of the robot. Why stop there? A final level of measurement will allow an escape from what could otherwise look rather like a 'tarted up' GOFAI sense input -> behavioural output system. Measurements of the behaviour are stored in another dynamic, implicit storage system: the environment. The environmental change can be objective or subjective. A subjective environmental change could be induced by something as simple as moving forward, as the robot's perception of the environment will change. The moving robot also causes an objective environmental change, since the robot makes up part of the objective environment.

Overview of measurement/ storage device model:

1. Measurements of the environment stored in the form of motivational states.
2. Measurements of the internal state stored in the form of behaviours.
3. Measurements of the behaviours stored in the form of environmental impact.

In a sense, all these levels of measurement are storage media for each other, as change in one causes change in the others. The impact of changes in one part of the system on other parts could be used as a measure of the level of coupling of the different storage systems.

What constitutes the evolution of a new measuring device for an artificial autonomous agent, i.e. how can such a system pass Pattee's test of emergence? This description provides a lot of flexibility as to what constitutes a measurement. In programming terms, a new measurement device could be a new interface provided by a storage system for some sense data. If the neural net controller of an animat evolves such that it responds to a sensory stimulus to which it did not respond before, then it has provided a new interface for the sensors to talk to. In this example the neural net is the storage device, remembering the data in the form of its activation state, and the novel interface is the measurement device. There are many examples of neural network evolution. When the NNs are evolved from a random initial population, the evolution of interfaces is inevitable.
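This last point can be illustrated with an invented toy setup (not drawn from any of the cited work): a single linear 'neuron' with one weight per sensor, where the weight on sensor 2 starts at exactly zero - the controller offers no interface to that sensor. Selection that rewards responding to sensor 2 evolves a nonzero weight, i.e. a new measurement device in the sense above.

```python
import random

random.seed(1)

N_SENSORS = 3

def response(weights, stimulus):
    # one linear 'neuron': output = weights . stimulus
    return sum(w * s for w, s in zip(weights, stimulus))

def fitness(weights):
    # reward controllers whose output tracks sensor 2 alone
    probe = [0.0, 0.0, 1.0]
    return -abs(response(weights, probe) - 1.0)

# initial population: the weight on sensor 2 is exactly zero -- no interface
pop = [[random.gauss(0, 0.1), random.gauss(0, 0.1), 0.0] for _ in range(20)]

for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                       # truncation selection
    children = [[w + random.gauss(0, 0.05) for w in p] for p in parents]
    pop = parents + children                 # elitism plus mutated copies

best = max(pop, key=fitness)
```

After a hundred generations the best controller's third weight has drifted away from zero toward the rewarded value: an interface for sensor 2 has appeared where none was built in.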

The Balance of Information Exchange

Keijzer [2] voices his fears about the imbalance of complexity in present artificial adaptive systems. Using the example of wheeled robots, he illustrates the simplicity of the sensory-motor interaction involved compared to the environment with which it is interacting. From the measuring-system angle, there appears to be an imbalance in the contributions made by the three measuring and storage systems. Limited environmental measurements provide little information for the internal state to store. The measurements made of the internal state, realised or stored in the form of environmental impact, are thus stunted. In other words, the impact the system could have on its environment is constrained by its limited ability to perceive and react to that environment. An objection could be:

'A wheeled robot can move much further in a given time than a walking robot. Therefore its subjective environment is altered a lot more, regardless of limited sensory-motor interactions'

It may move further, but the impact on its subjective environment is far less complex. For example, a wheeled robot tends to move in a 2D plane, unlike a legged robot. Legged walking can be seen as more 3D, as it necessitates movement in a vertical plane as the weight is shifted from leg to leg. The complexity of the subjective environment of a robot capable of this 3D motion is far higher. This highlights the difference between subjective and objective environments. When these two were discussed earlier, they were considered together to clarify the measurement/storage of measurements concept. Keijzer draws a strong line between the two. He discusses the proximal and distal environment, a view he attributes to Heider and Brunswik. The proximal can be seen as neutral - a sound wave here, a light wave there do not have an immediate or great effect on the organism. They do, however, provide clues about the distal environment. This distal environment may include things to eat and things to be eaten by. Once such things become proximal, the problem of what to do about them is solved - in the examples above, the organism can eat or it can be eaten. This leads to the concept of a time frame within which the organism must decide what to do about its proximal environment so as to stabilise the distal environment, i.e. what to do to its proximal environment to ensure that it can eat, or avoid being eaten, at some point in the near future. (This points back to the frame problem.) The subjective and objective environments mentioned above could be seen as equivalent to the proximal and distal environments, respectively. This realisation demands a more explicit description of the three-part measurement and storage system. The sensors provide an interface to which the proximal environment presents itself. Therefore measurements of the proximal environment are stored in the form of sensor activations.
The control system provides an interface for these sensor readings such that they can affect its state. The motor system provides an interface to which the control system presents its measurements of itself (as affected by its storage of the sensors' measurements). The motor system changes its state according to the ongoing presentations from the control system. The proximal environment offers an interface to which the motor system presents its storage of the control system measurements. This interface may be a physical surface, for example. The proximal environment stores its measurement of the motor system by altering its state. Therefore the altered proximal environment makes a different presentation at the sensor interface, and so on. If we take this back to Keijzer's concerns about wheeled robots and look at the complexity of the different interfaces involved, all the interfaces are complex except for the control system to motor system link. Wheels are either on or off. On a level, clean surface this means that the motor section of the system becomes very transparent - the state of the control system is transferred almost directly into an impact on the proximal environment.
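The circular chain of interfaces just described can be sketched as a minimal control loop (all names, constants and update rules are invented for illustration): the proximal environment presents itself at a sensor, the sensor reading is stored as a change in a motivational variable, the motivational state is stored as behaviour, and the behaviour is stored as a change in the environment that feeds the next sensor reading.

```python
class Robot:
    def __init__(self):
        self.motivation = 0.0   # internal state: stores sensor measurements

    def sense(self, distance_to_wall):
        # interface at which the proximal environment presents itself
        return 1.0 if distance_to_wall < 2.0 else 0.0

    def update_motivation(self, sensor):
        # 'big black thing in front of robot lowers desire to move forward'
        self.motivation = 0.8 * self.motivation - 0.5 * sensor + 0.1

    def motor(self):
        # behaviour stores (expresses) the current internal state
        return 1 if self.motivation > 0.0 else -1

position, wall = 0.0, 10.0
robot = Robot()
for _ in range(50):
    s = robot.sense(wall - position)   # environment -> sensor interface
    robot.update_motivation(s)         # sensor -> internal state
    position += 0.5 * robot.motor()    # internal state -> behaviour
    # behaviour -> environment: the moved robot gets a new sensor reading
```

The robot approaches the wall, then hovers and backs off near it; nothing in the code represents 'avoid the wall' explicitly - the avoidance lives in the coupling of the four interfaces.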


Hille [9] provides a concise account of the evolution of simple nervous systems such as that found in the coelenterates. Despite their simple body structure, they have a relatively advanced and very integrated nervous system. The rhythmic swimming of the coelenterate is a direct consequence of a tight nervous-motor system coupling. The impact of this integrated system on the behaviour of these organisms is obviously significant: it dictates how they move, and that places constraints on how they can react to their environment. Land and Fernald [10] describe the evolution of the variety of eyesight systems found in nature. They mention the limited ability of simple sensors to mediate behaviour. A simple 'pit eye' that can resolve a few degrees at most cannot really mediate very complex behaviour. Its measurements are inaccurate and uninformative. As such, their impact on the interface with which they communicate is limited. Using the continuity of the measurement/interface description, can one say that if one part of the system is simple, it will have a knock-on effect on the next part of the system? In the example of pit eyes, the visual sensory measurement is simple and this results in it having a small impact on the proximal environment of the organism via its behaviour. In the example of wheeled robots, it is possible to say that the motor system is simplistically interpreting the internal state so as to produce a limited impact on the proximal environment.

Hille's accounts show that the relatively complex perception systems being used in adaptive agent research would never be coupled with such simple motor systems if they were in natural adaptive systems. The behaviour that such a system would support would be far too unstable. Such badly matched systems must be limited in the complexity of behaviours that they can support stably. A complex sensory activation pattern is presented to the internal state's measurement interface and is processed into a form in which it can be presented to the motor interface. If the motor interface is really simple, such as two motors controlling wheels, much of the detail gleaned from the proximal environment will inevitably be lost. Keijzer says that the proximal environment provides clues about the distal environment and that these clues are often inscrutable. This stands to reason - making sense of a complex environment is a highly non-trivial task, one that has proved quite beyond years of AI research. To filter the information gleaned from a complex environment down into a pattern of two-bit signals does not seem in line with the natural world at all. It lacks the key natural feature of continuity of complexity. In this respect, Keijzer mentions Ashby's law of requisite variety, where 'only variety can destroy variety'.
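Ashby's law can be put in a simple counting form (a standard textbook illustration, not taken from Keijzer's paper): a regulator with R distinct responses facing D equally likely disturbances cannot reduce the variety of outcomes below D/R. A small sketch makes the wheeled-robot mismatch vivid.

```python
import math

def min_outcome_variety(n_disturbances, n_responses):
    # Ashby's counting bound: outcome variety >= disturbances / responses
    return math.ceil(n_disturbances / n_responses)

# suppose a rich sensor array distinguishes 1024 environmental situations,
# but two on/off wheel motors give the robot only 4 possible responses
# (both figures are invented for illustration):
uncontrolled = min_outcome_variety(1024, 4)   # 256 outcome classes the
                                              # motor system cannot separate
```

However finely the sensors and internal state discriminate the environment, a four-response motor interface collapses those distinctions back down - only variety in the motor system could destroy the variety arriving at the sensors.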


Experiments with wheeled robots appear to have solved the frame problem to some extent. I claim that such systems are cheating, in that they cut down the size of the search parameters mentioned above in an unnatural, stopgap way. This involves the filtering of sense patterns from the environment into two-bit motor patterns. There are no systems apparent in nature where such a complex set of sensors is matched with such a simplistic motor system. Systems such as the paramecium [9] match simple motor systems with similarly simple sensor systems. Natural evolutionary history shows us that sensors develop in parallel with motor systems. The complexity and inscrutability of the distal environment demands this. As sensory capabilities increase, motor skills increase so as to make use of the extra 'data' available. Motor skills that are out of balance with sensor skills will lead to input from the environment that is too complex for the system that is expressing the reaction to this input. This will lead to limited potential behavioural outlay and instability.


1. H. H. Pattee, 'Simulations, Realisations and Theories of Life', pp. 379-395 in The Philosophy of Artificial Life, ed. Margaret Boden, Oxford University Press, 1996.
2. F. A. Keijzer, 'Some Armchair Worries about Wheeled Behaviour', pp. 13-21 in Proceedings of the 5th International Conference on Simulation of Adaptive Behaviour, ed. Pfeifer, Blumberg, Meyer and Wilson, 1998.
3. Andy Clark, 'Happy Couplings: Emergence and Explanatory Interlock', pp. 262-282 in The Philosophy of Artificial Life, ed. Margaret Boden, Oxford University Press, 1996.
4. M. Resnick, 'Learning About Life', pp. 229-242 in Artificial Life 1, 1994.
5. Barry E. Stein and M. Alex Meredith, The Merging of the Senses, Chapter 2, MIT Press, 1993.
6. Thomas S. Ray, 'An Approach to the Synthesis of Life', pp. 111-145 in The Philosophy of Artificial Life, ed. Margaret Boden, Oxford University Press, 1996.
7. David J. McFarland, 'Animals as Cost-Based Robots', pp. 179-208 in The Philosophy of Artificial Life, ed. Margaret Boden, Oxford University Press, 1996.
8. Inman Harvey, 'Untimed and Misrepresented: Connectionism and the Computer Metaphor', CSRP 245, 1992.
9. Bertil Hille, Ionic Channels of Excitable Membranes, 2nd edition, Chapter 20, pp. 525-544, 1992.
10. Michael F. Land and Russell D. Fernald, 'The Evolution of Eyes', pp. 1-29 in Annual Review of Neuroscience 15, 1992.