Cybernetic dreams: Beer’s pond brain

“Cybernetic dreams” is my mini-series on ideas from cybernetics research that have yet to fulfill their promise. I think there are many cool ideas in cybernetics research that have been neglected, and I hope this series brings them more attention.

Cybernetics is a somewhat hard-to-describe style of research from roughly the 1940s to the 1970s. It is as much an aesthetic as it is a research field. The main goal of cybernetics is to understand how complex systems (especially living, mechanical, and economic systems) work, and how they can be evolved, constructed, repaired, and changed. Its main sensibilities come from biology, mechanical engineering, and calculus.

Today we discuss Stafford Beer’s pond brain.

Stafford Beer

Stafford Beer was a cybernetician who tried to make economic systems more efficient by cybernetic means. His most famous project is Project Cybersyn, an attempt to build a cybernetic economic system; it will be discussed in a future episode. From Wikipedia:

Stafford Beer was a British consultant in management cybernetics. He also sympathized with the stated ideals of Chilean socialism of maintaining Chile’s democratic system and the autonomy of workers instead of imposing a Soviet-style system of top-down command and control. One of [Project Cybersyn’s] main objectives was to devolve decision-making power within industrial enterprises to their workforce in order to develop self-regulation [homeostasis] of factories.

The cybernetic factory

The ideal factory, according to Beer, should be like an organism attempting to maintain homeostasis. Raw materials come in, products go out, money flows through. The factory would have sensory organs, a brain, and actuators.

From (Pickering, 2004):

The T- and V-machines are what we would now call neural nets: the T-machine collects data on the state of the factory and its environment and translates them into meaningful form; the V- machine reverses the operation, issuing commands for action in the spaces of buying, production and selling. Between them lies the U-Machine, which is the homeostat, the artificial brain, which seeks to find and maintain a balance between the inner and outer conditions of the firm—trying to keep the firm operating in a liveable segment of phase-space.

By 1960, Beer had at least simulated a cybernetic factory at Templeborough Rolling Mills, a subsidiary of his employer, United Steel… [The factory had sensory organs that measured “tons of steel bought”, “tons of steel delivered”, “wages”, etc.] At Templeborough, all of these data were statistically processed, analysed and transformed into 12 variables, six referring to the inner state of the mill, six to its economic environment. Figures were generated at the mill every day—as close to real time as one could get… the job of the U-Machine was to strike a homeostatic balance between [the output from the sensory T-machines and the commands to the actuating V-machines]. But nothing like a functioning U-Machine had yet been devised. The U-Machine at Templeborough was still constituted by the decisions of human managers, though now they were precisely positioned in an information space defined by the simulated T- and V-Machines.

Unconventional computing

[Beer] wanted somehow to enrol a naturally occurring homeostatic system as the brain of the cybernetic factory.

He emphasized that the system must have rich dynamics, because he believed in Ashby’s “Law of Requisite Variety”, which roughly states that a controller can hold a system in homeostasis only if it has at least as much variety (as many distinguishable internal states) as the disturbances it has to absorb.

during the second half of the 1950s, he embarked on ‘an almost unbounded survey of naturally occurring systems in search of materials for the construction of cybernetic machines’ (1959, 162).

In 1962 he wrote a brief report on the state of the art, which makes fairly mindboggling reading (Beer 1962b)… The list includes a successful attempt to use positive and negative feedback to train young children to solve simultaneous equations without teaching them the relevant mathematics—to turn the children into a performative (rather than cognitive) mathematical machine—and it goes on to discuss an extension of the same tactics to mice! This is, I would guess, the origin of the mouse-computer that turns up in both Douglas Adams’ Hitch-Hikers Guide to the Universe and Terry Pratchett’s Discworld series of fantasy novels.

Research like this is still ongoing under the banner of “unconventional computing”. For example, in 2011 scientists induced swarms of crabs to behave in ways that implement logic gates. Other scientists try to harness the intuitive intelligence of untrained people to solve mathematical problems, as in the Quantum Moves game, whose players solve quantum optimization problems.

Pond brain

Beer also reported attempts to induce small organisms, Daphnia collected from a local pond, to ingest iron filings so that input and output couplings to them could be achieved via magnetic fields, and another attempt to use a population of the protozoon Euglena via optical couplings. (The problem was always how to contrive inputs and outputs to these systems.) Beer’s last attempt in this series was to use not specific organisms but an entire pond ecosystem as a homeostatic controller, on which he reported that, ‘Currently there are a few of the usual creatures visible to the naked eye (Hydra, Cyclops, Daphnia, and a leech); microscopically there is the expected multitude of micro-organisms. . . The state of this research at the moment,’ he said in 1962, ‘is that I tinker with this tank from time to time in the middle of the night’ (1962b, 31).

In the end, this wonderful line of research foundered, not on any point of principle, but on Beer’s practical failure to achieve a useful coupling to any biological system of sufficiently high variety.

In other words, Beer couldn’t figure out how to talk to a sufficiently complicated system in its own language (except perhaps to human business managers, who cost rather more than feeding a pond of microorganisms).

Matrix brain

The pond brain is wild enough, but it wasn’t Beer’s end goal for the brain of the cybernetic factory.

the homeostatic system Beer really had in mind was something like the human spinal cord and brain. He never mentioned this in his work on biological computers, but the image that sticks in my mind is that the brain of the cybernetic factory should really have been an unconscious human body, floating in a vat of nutrients and with electronic readouts tapping its higher and lower reflexes—something vaguely reminiscent of the movie The Matrix. This horrible image helps me at least to appreciate the magnitude of the gap between cybernetic information systems and more conventional approaches.

As shown in an illustration in his book Brain of the Firm (The Managerial Cybernetics of Organization):

Reservoir computing

Reservoir computing is somewhat similar to Beer’s idea of using one complex system to control another. The “reservoir” is a complex system that is cheap to run and easy to talk to. For example, a recurrent neural network (a neural network with feedback loops, in contrast to a feedforward network, which has none) of sufficient complexity (echoing the law of requisite variety) can serve as a reservoir. To talk to the reservoir, cast your message as a list of numbers and feed them into some of the neurons. Then wait for the network to “think” before reading off the states of some other neurons. That is the reservoir’s “answer”.

This differs from deep learning in that the network serving as the reservoir is left alone: it is initialized randomly, and its synaptic strengths never change. The only parts of the system that learn are the input and output mappings, in practice usually just a linear readout, which can be trained very cheaply with linear regression or classification. In other words, the reservoir stays the same, and we must learn to speak its language, which turns out to be surprisingly easy (a minimal sketch follows the quotation below). From (Tanaka et al., 2019):

Another advantage is that the reservoir without adaptive updating is amenable to hardware implementation using a variety of physical systems, substrates, and devices. In fact, such physical reservoir computing has attracted increasing attention in diverse fields of research.
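To make this concrete, here is a minimal echo-state-network sketch, the standard software flavour of reservoir computing. The network size, the sine-wave prediction task, the spectral-radius rescaling, and the ridge constant are all illustrative choices of mine, not values taken from the papers cited here.

```python
# Minimal echo state network: a fixed random recurrent "reservoir" plus a
# linear readout trained by ridge regression. Only the readout learns.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 300

# Fixed random input and recurrent weights. The recurrent matrix is rescaled
# so its spectral radius is below 1, a common stability heuristic.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and record its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict a sine wave one step ahead.
t = np.arange(0, 60, 0.1)
signal = np.sin(t)
inputs, targets = signal[:-1], signal[1:]

states = run_reservoir(inputs)
washout = 100  # discard the initial transient before fitting

# Train only the linear readout, in closed form, with ridge regression.
X, Y = states[washout:], targets[washout:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

pred = states @ W_out
print("one-step prediction error:", np.mean((pred[washout:] - targets[washout:]) ** 2))
```

Nothing inside run_reservoir is ever trained; all the learning happens in the one line of ridge regression at the end. That is the sense in which we learn to speak the reservoir’s language rather than change the reservoir.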

Other reservoirs can be used, as long as they are complex and cheap. For example, (Du et al., 2017) built reservoirs out of physical memristors:

… a small hardware system with only 88 memristors can already be used for tasks, such as handwritten digit recognition. The system is also used to experimentally solve a second-order nonlinear task, and can successfully predict the expected output without knowing the form of the original dynamic transfer function.

(Tanaka et al, 2019) reviews many types of physical reservoirs, including biological systems!

researchers have speculated about which part of the brain can be regarded as a reservoir or a readout as well as about how subnetworks of the brain work in the reservoir computing framework. On the other hand, physical reservoir computing based on in vitro biological components has been proposed to investigate the computational capability of biological systems in laboratory experiments.

Chaos computing

“Chaos computing” is one instance of reservoir computing. The reservoir is an electronic circuit with chaotic dynamics, and the trick is to design the reservoir just right, so that it performs logical computations. The only company that seems to do this is ChaoLogix, and what it had back in 2006 was already quite promising.

ChaoLogix has gotten to the stage where it can create any kind of gate from a small circuit of about 30 transistors. This circuit is then repeated across the chip, which can be transformed into different arrangements of logic gates in a single clock cycle, says Ditto.

“In a single clock cycle” is significant: a field-programmable gate array (FPGA), which can also rearrange its logic gates, takes millions of clock cycles to reconfigure itself.
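ChaoLogix’s transistor-level circuits are not public, so the sketch below is only a toy software illustration of threshold-based chaotic logic in the spirit of Sinha and Ditto’s “dynamics-based computing”: the two logic inputs nudge the state of a chaotic map, and after a few iterations a threshold decides the output bit. Everything here, from the choice of the logistic map to the brute-force search for operating points, is my own illustrative choice rather than anything ChaoLogix has published.

```python
# Toy illustration of threshold-based chaotic logic (in the spirit of
# Sinha & Ditto's "dynamics-based computing"), NOT ChaoLogix's actual circuit.
# The two logic inputs nudge the initial state of a chaotic map; after a few
# iterations a threshold decides the output bit. The same dynamics implements
# different gates depending only on the operating point (x_star, delta, n_iter).
from itertools import product

def logistic(x):
    """Logistic map in its fully chaotic regime."""
    return 4.0 * x * (1.0 - x)

def chaotic_gate(a, b, x_star, delta, n_iter):
    """Encode bits a and b as perturbations, iterate the map, then threshold."""
    x = x_star + delta * (a + b)
    for _ in range(n_iter):
        x = logistic(x)
    return 1 if x > x_star else 0

def find_operating_point(truth_table, grid=200):
    """Grid-search an operating point that realizes the given truth table."""
    for n_iter in (1, 2):
        for i, j in product(range(grid), range(1, grid)):
            x_star, delta = i / grid, 0.5 * j / grid
            if all(chaotic_gate(a, b, x_star, delta, n_iter) == out
                   for (a, b), out in truth_table.items()):
                return x_star, delta, n_iter
    return None

if __name__ == "__main__":
    gates = {
        "OR":   {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1},
        "AND":  {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1},
        "XOR":  {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0},
        "NAND": {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0},
        "NOR":  {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 0},
    }
    for name, table in gates.items():
        print(name, "->", find_operating_point(table))
```

Running this prints one (threshold, perturbation, iteration-count) triple per gate; switching gates means changing those three numbers, not the underlying dynamics, which is the kind of re-parameterisation a chaogate chip could in principle perform very quickly.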

ChaoLogix was acquired by ARM in 2017, apparently for security reasons:

One benefit is that chaogates are said to have a power signature that is independent of the inputs which makes it valuable in thwarting differential power analysis (DPA) side channel attacks.


Pickering, Andrew. “The Science of the Unknowable: Stafford Beer’s Cybernetic Informatics.” Kybernetes 33, no. 3/4 (2004): 499–521. https://doi.org/10/dqjsk8.

Tanaka, Gouhei, Toshiyuki Yamane, Jean Benoit Héroux, Ryosho Nakane, Naoki Kanazawa, Seiji Takeda, Hidetoshi Numata, Daiju Nakano, and Akira Hirose. “Recent Advances in Physical Reservoir Computing: A Review.” Neural Networks 115 (July 1, 2019): 100–123. https://doi.org/10/ggc6hf.