Systems Theory Terms

Below are some notes that I took while trying to understand what exactly systems theory is all about.

System

There is no universally agreed-upon definition of ‘system’, but in general systems are seen as at least two elements that are interconnected. It is also common for systems to be talked about as if all of the components in the system work together to achieve some overall purpose or goal; the primary goal is often survival. A commonly accepted definition is below (note that the word ‘element’ is often replaced with ‘component’ for generality):

a system is a set of two or more interrelated elements with the following properties (Ackoff, 1981, pp. 15-16):

  1. Each element has an effect on the functioning of the whole.

  2. Each element is affected by at least one other element in the system.

  3. All possible subgroups of elements also have the first two properties.

Non-systems are generally considered to be single instances or a set of elements that lack interconnections, although these may be part of a system.

Sand scattered on a road by happenstance is not, itself, a system. You can add sand or take away sand and you still have just sand on the road. Arbitrarily add or take away football players, or pieces of your digestive system, and you quickly no longer have the same system. (Meadows, 2009, p. 12)

Environment

A systems environment consists of all variables which can affect its state. External elements which affect irrelevant properties of a system are not part of its environment. [...] A closed system is one that has no environment. An open system is one that does. (Ackoff, 1971, p. 663)

The environment is often referred to as the context in which the system is found, or as its surroundings. Systems are considered closed if they have no interaction with their environment. Systems are often treated as closed for practical reasons even when they are not absolutely closed, but merely have limited interaction with their environment.

Boundary

The boundary is the separation between the system and environment. The actual point at which the system meets its environment is called an ‘interface’. It is often the case that the boundary is not sharply defined and that boundaries are conceptual rather than existing in nature.

As any poet knows, a system is a way of looking at the world. (Weinberg, 1975, p. 52)

It’s a great art to remember that boundaries are of our own making, and that they can and should be reconsidered for each new discussion, problem, or purpose. (Meadows, 2009, p. 99)

The system therefore consists of all the interactive sets of variables that could be controlled by the participating actors. Meanwhile, the environment consists of all those variables that, although affecting the system’s behaviour could not be controlled by it. The system boundary thus becomes an arbitrary subjective construct defined by the interest and the level of the ability and/or authority of the participating actors. (Gharajedaghi, 1999, pp. 30-31)

Interactions (Inputs/Outputs)

Conventional physics and physical chemistry deal with closed systems (Bertalanffy, 1968, p. 32)

Closed systems are those which are considered to be isolated from their environment. This property of ‘closedness’ is often required in scientific analysis, as it makes it possible to calculate future states with accuracy. The problem is that many systems are open. For example, living organisms are open systems that exchange matter with their environment: a living organism requires oxygen, water and food in order to survive, and it gains all of these by interacting with its environment. This interaction has two components: input, that which enters the system from the outside, and output, that which leaves the system for the environment.

Subsystem and supersystem

The environment can itself consist of other systems interacting with their environment. A greater system is referred to as a supersystem or suprasystem. A system that contains subsystems is said to have a hierarchy; that is, different levels in the system may be different sets of systems. An intuitive demonstration of hierarchy, specifically nested hierarchy, is a set of Russian nesting dolls. Other types of hierarchies include the following (a short sketch follows the list):

  • Subsumptive containment (“is a” hierarchy) – for example: a square is a polygon, which is a shape.

  • Compositional containment (“part of” hierarchy) – for example, considering an aircraft by decomposing it into its constituent subsystems, e.g. the propulsion system, the flight-control system, and so on.

(Booch, et al., 2007)
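
To make the two hierarchy types concrete, here is a minimal Python sketch (my own illustration; the class names are hypothetical and not from Booch et al.):

```python
# Subsumptive containment ("is a"): each class is a kind of its parent.
class Shape: pass
class Polygon(Shape): pass
class Square(Polygon): pass

# Compositional containment ("part of"): the whole holds its parts as attributes.
class PropulsionSystem: pass
class FlightControlSystem: pass

class Aircraft:
    def __init__(self):
        self.propulsion = PropulsionSystem()         # part of the aircraft
        self.flight_control = FlightControlSystem()  # part of the aircraft

print(isinstance(Square(), Shape))           # True: "is a" follows the inheritance chain
print(type(Aircraft().propulsion).__name__)  # PropulsionSystem: a "part of" relation
```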

Hard and soft systems

Systems are commonly differentiated based on whether they are hard or soft. Hard systems are precise, well defined and quantifiable, whereas soft systems are not. With soft systems, the system doesn’t really exist and is instead a label or theory about some part of the world and how it operates. The hard and soft difference is really about different approaches to viewing the world systemically. The hard system approach sees the world as systemic, while the soft system approach sees the process of inquiry as systemic:

The use of the word ‘system’ is no longer applied to the world, it is instead applied to the process of our dealing with the world. It is this shift of systemicity (or ‘systemness’) from the world to the process of inquiry into the world which is the crucial intellectual distinction between the two fundamental forms of systems thinking, ‘hard’ and ‘soft’. (Checkland, 2000, p. 17)

Complexity

Complexity is not easy to define. Worse still, it can mean different things to different people. Even among scientists, there is no unique definition of Complexity. Instead, the scientific notion of Complexity – and hence of a Complex System – has traditionally been conveyed using particular examples of real-world systems which scientists believe to be complex. (Johnson, 2009, p. 3)

Some concepts which are related to and sometimes mistaken for complexity are (Edmonds, 1996, pp. 3-6):

  • Size – size can indicate the general difficulty of dealing with a particular system and the potential for that system to be complex, but it is not a sufficient definition of complexity, as the components of the system also need to be interrelated.

  • Ignorance – complexity can be a cause of ignorance, but other causes are also possible, so it is not useful to conflate the two terms. For example, it is not very helpful to describe the internal state of an electron as complex just because we are ignorant about it.

  • Minimum description size, also known as Kolmogorov complexity – by this definition, highly ordered expressions come out as simple and random expressions as maximally complex (see the compression sketch after this list). The problem with this definition is that it is possible to have expressions in which most of the information is unrelated, so that the whole is incompressible and large, but ultimately simple. Relatedly, the more interrelations there are, the more compressible the expression is likely to be, but also the more complex: the opposite of what would be the case if minimum description size defined complexity.

  • Variety – some variety is necessary for complexity, but it is not sufficient for it. For example, a piece of atonal music contains more variety than a tonal piece, but it is not necessarily more complex.

  • Order and disorder – it is true that complex things exist between order and disorder, but it is better to consider this as a characteristic of complexity rather than a defining attribute. It is often hard to measure things uniformly, and what appears as disorder may actually be complex. Edmonds illustrates this with three images, where image 1 is embedded in image 2, which is embedded in image 3, thereby making image 3 the most complex. If you were told that the last image was created by means of a pseudo-random number generator, then you would likely not view it as complex. This means that the language of representation is important in determining complexity, and since such diagrams have no inherent language we have to impose one on them. Based on the (wrong) assumption that the image was generated randomly, it would not be viewed as complex, whereas in reality it is.

  • Chaos – “Chaos is the generation of complicated, aperiodic, seemingly random behaviour from the iteration of a simple rule. This complicatedness is not complex in the sense of complex systems science, but rather it is chaotic in a very precise mathematical sense. Complexity is the generation of rich, collective dynamical behaviour from simple interactions between large numbers of subunits. Chaotic systems are not necessarily complex, and complex systems are not necessarily chaotic” (Rickles, Hawe, & Shiell, 2007). Complex systems in general differ from chaotic systems in that they contain a number of constituent parts (“agents”) that interact with and adapt to each other over time, which can lead to emergent properties. In chaotic systems, uncertainty arises from the practical inability to know the initial conditions of the system. In complex systems, uncertainty is inherent because the system has emergent properties.

  • Stochastic – in stochastic or random dynamics there is indeterminacy in the future evolution of the system, which can be described with probability distributions. This means that even if the initial conditions were known, there are still many possible states that the system could reach, with some states more probable than others. A stochastic system is the opposite of a deterministic system, which has no randomness in the development of its future states; the future dynamics of a deterministic system are fully defined by its initial conditions. A purely stochastic system can be fully described with little information. Therefore, complexity is a characteristic that is independent of the stochastic/deterministic spectrum.
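
As a rough illustration of the minimum-description-size point above, the sketch below (my own, not from Edmonds) uses zlib compression as a crude stand-in for description size; Kolmogorov complexity itself is uncomputable, so this is only suggestive:

```python
import random
import zlib

random.seed(0)

ordered = b"ab" * 500                                      # highly ordered: compresses to almost nothing
noise = bytes(random.randrange(256) for _ in range(1000))  # random: barely compresses at all

for name, data in (("ordered", ordered), ("random", noise)):
    print(f"{name}: {len(data)} bytes -> {len(zlib.compress(data))} compressed")
```

By minimum description size the ordered string counts as simple and the random one as maximally complex, which is exactly the mismatch with intuition that Edmonds points out.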

There are many definitions of complexity. Most of them revolve around the idea that the complexity of a phenomenon is a measure of how difficult it is to describe. One example of a decent definition that avoids the problems described above is:

that property of a language expression which makes it difficult to formulate its overall behaviour even when given almost complete information about its atomic components and their interrelations. (Edmonds, 1996, p. 6)

Another common definition that is used is:

Complexity is the property of a real world system that is manifest in the inability of any one formalism being adequate to capture all its properties. It requires that we find distinctly different ways of interacting with systems. Distinctly different in the sense that when we make successful models, the formal systems needed to describe each distinct aspect are NOT derivable from each other. (Mikulecky, 2005, p. 1)

The second definition highlights the point that complexity often makes it impossible for a single language or single perspective to describe all the properties of a system. This means that multiple languages and different perspectives are required just to understand a complex system. This has a very important consequence: it means that no single perspective is absolutely correct; there are multiple truths and values, although some are more correct than others.

Organisation

Complexity is normally viewed as being either organised or disorganised. Disorganised complexity problems are ones in which the Law of Large Numbers works. This means that even though there may be a multitude of agents all interacting together, their stochastic elements average out and so become predictable (on average) with statistics. Said another way, individual variation tends to reduce potential predictability, but the aggregate behaviour, if the individual variations cancel each other out, can be predicted. An example would be rolling a die: the exact outcome cannot be known, assuming the die is not loaded, but with a large enough sample size you can know that the average result approaches 3.5. Problems of organised complexity, on the other hand, are not problems:

to which statistical methods hold the key. They are all problems which involve dealing simultaneously with a sizable number of factors which are interrelated into an organic whole. (Weaver, 1948, p. 5)
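
Returning to the die example: here is a quick sketch (my own illustration) of the Law of Large Numbers at work. Individual rolls stay unpredictable, but the sample mean converges on 3.5.

```python
import random

random.seed(42)

for n in (10, 1_000, 100_000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    print(f"{n:>7} rolls: mean = {sum(rolls) / n:.3f}")  # approaches 3.5 as n grows
```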

Complex Systems

Although there is no formally accepted definition of complexity or complex systems, there are a number of intuitive features that appear in many definitions (Heylighen, 2008, pp. 4-7; Ladyman, Wiesner, & Lambert, 2013):

  • Complex systems cannot be too rigid, like the “frozen” arrangement of molecules in a crystal, or too random, like molecules in a gas; they must have aspects of both. They are predictable in some aspects and surprising in others. The intermediate position is called the edge of chaos: enough structure or pattern that the system is not random, and at the same time enough fluidity and emergent properties that it is not deterministic.

  • Complex systems have many components that are connected, distinct, autonomous and to some degree mutually dependent. The components are not completely mutually dependent, however, as would be the case in a crystal, where the state of one molecule determines the state of all the others.

  • Complex systems have hierarchy, i.e. they are made up of different levels of systems. It is important to note that hierarchy works differently in complex systems than in simple systems. Complex systems do not have a central control system and are often not neatly nested; instead they have a complex structure with possible interpenetration between the levels. This means that important roles can be played by apparently marginal components. The hierarchy is also not permanent but can be transformed. Transformation does not imply that hierarchies are destroyed; they may just be shifted. (Cilliers, 2001)

  • Complex systems are commonly modelled as agents, i.e. single systems that act upon their environment in response to events that they experience. Two examples of agents are people and cells. With regard to agents, systems have the following features:

    • The number of agents in a system is generally seen to be in a state of flux as agents can multiply or “die”.

    • Agents are often implicitly assumed to be goal-directed with the primary goal being survival.

    • Agents can impact each other either locally or globally through interaction. An example of global impact would be the ripple produced by a pebble that locally disturbs the surface of the water, but then widens to encompass the whole pond.

  • Processes in complex systems are often non-linear. In a mathematical sense, this refers to disproportional relationships among variables in equations and the systems represented by those equations and variables. Feedback and mutual interaction between the variables are often the cause of non-linearity. A linear relationship between two quantities means that the two quantities are proportional to each other; for example, if you double the volume of water you also double its weight. In linear systems the superposition principle applies: the net response caused by two or more stimuli is the sum of the responses which would have been caused by each stimulus individually. Non-linear relationships are ones in which the superposition principle does not apply (see the superposition check after this list).

  • Complex systems, with their context-dependent components, cannot be fragmented into material parts. Simple systems can be.

  • Complex systems are normally open, which means that they exchange matter, energy and/or information with their wider environment.

  • Complex systems often have memory which means that their prior states can influence their current behaviour.
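
Here is the superposition principle from the non-linearity bullet above as a small check (my own illustration): a linear response passes the test, a quadratic one fails it.

```python
def linear(x):
    return 3.0 * x      # proportional response, e.g. weight vs. volume of water

def nonlinear(x):
    return x * x        # disproportional response

a, b = 2.0, 5.0
print(linear(a + b) == linear(a) + linear(b))           # True: superposition holds
print(nonlinear(a + b) == nonlinear(a) + nonlinear(b))  # False: (a+b)^2 != a^2 + b^2
```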

The following features are also found in complex systems and will be described in their own sections below:

  • Complex systems have feedback

  • Complex systems exhibit spontaneous order. That is, they self-organize, which also allows for robustness.

  • Complex systems have emergent properties. This is summed up in the saying ‘the whole is greater than the sum of its parts’.

Feedback

“feedback” exists between two parts when each affects the other. […] The exact definition of “feedback” is nowhere important. The fact is that the concept of “feedback”, so simple and natural in certain elementary cases, becomes artificial and of little use when the interconnections between the parts become more complex. […] Such complex systems cannot be treated as an interlaced set of more or less independent feedback circuits, but only as a whole. (Ashby, 1999, p. 54)

Feedback is a circular causal process in which some portion of a system’s output is returned (fed back) into the system’s input. Feedback is an important mechanism for achieving homeostasis, also known as steady state or dynamic equilibrium. An example of a feedback mechanism in humans is the release of the hormone insulin in response to increased blood sugar levels. Insulin increases the body’s ability to take in and convert glucose, which has the overall effect of restoring blood sugar levels to what they originally were.

Positive feedback is when small perturbations (system deviations) reinforce themselves and have an amplifying effect. An example is emotional contagion: if one person starts laughing, then this is likely to make others start laughing as well. Another example is the spread of a disease, where a single infection can eventually turn into a global pandemic. In positive feedback the effects are said to be larger than the causes. When it is the other way around (the effects are smaller than the causes), you have negative feedback. Negative feedback is when perturbations are gradually suppressed until the system eventually returns to its equilibrium state; it has a dampening effect.
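
A minimal sketch of the two feedback types (my own illustration, with an assumed update rule): each step, a fraction k of the deviation from equilibrium is fed back in, so k > 0 amplifies the perturbation and k < 0 dampens it.

```python
def simulate(k, x=1.1, equilibrium=1.0, steps=6):
    trajectory = [x]
    for _ in range(steps):
        x += k * (x - equilibrium)  # feed a fraction of the deviation back in
        trajectory.append(round(x, 4))
    return trajectory

print("positive feedback:", simulate(+0.5))  # deviation grows: 1.1, 1.15, 1.225, ...
print("negative feedback:", simulate(-0.5))  # deviation shrinks back towards 1.0
```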

Positive feedback can amplify small and random fluctuations into unpredictable and wild swings in the overall system behaviour, which would then be considered chaotic. Negative feedback makes a system more predictable by suppressing the effect of such swings and fluctuations. A consequence of this predictability is a loss of controllability: if negative feedback is present, then a system pushed out of its equilibrium state will undertake some action to return to it. An example in social systems is social protest when leaders or governments try to implement unwanted changes.

Interactions that involve positive feedback are very sensitive to their initial conditions. An extremely small and often undetectable change in the initial conditions can lead to drastically different outcomes. This is known as “the butterfly effect”. The phrase refers to the idea that a change as tiny as the flapping or non-flapping of a butterfly’s wings can have a drastic effect on the weather patterns in another location in the world, even going so far as leading to a tornado. Note that the flapping of the wings does not cause the tornado; it is instead just one part of the initial conditions that produced the tornado. The flapping wing represents a tiny, seemingly insignificant change in the initial conditions that turns out to be extremely significant due to a cascading (domino) effect.

The butterfly effect is actually a concept relating to chaotic systems. It is important to note that if the initial conditions of a chaotic system were unchanged between two simulations to an infinite degree of precision, the outcomes of the two would be the same over any period of time. This means that such systems are still deterministic. A similar but distinct notion in complex systems is the ‘global cascade’ (Watts, 2002). This is basically a network-wide domino effect that occurs in a dynamic network. It has been noted that systems may appear stable for long periods of time and be able to withstand many external shocks, and then suddenly, for no apparent reason, exhibit a global cascade. For this reason, systems are both robust and fragile. They can withstand many shocks, making them robust, but global cascades can be triggered by shocks that are indistinguishable from others which have previously been withstood. Because the original perturbations can be undetectable, the outcomes are in principle unpredictable.
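
Both points are easy to see in the logistic map x -> r*x*(1-x) with r = 4, a standard toy example of chaos (my own sketch): two starts differing by 10^-10 diverge completely, yet identical starts always reproduce the identical outcome, so the system remains deterministic.

```python
def logistic(x, r=4.0, steps=40):
    for _ in range(steps):
        x = r * x * (1 - x)  # deterministic rule, chaotic regime at r = 4
    return x

print(logistic(0.2))                   # one trajectory
print(logistic(0.2 + 1e-10))           # near-identical start, wildly different end
print(logistic(0.2) == logistic(0.2))  # True: same start, same outcome
```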

Complex systems tend to exhibit a combination of both positive and negative feedback. This means that the effects from certain changes are amplified and others dampened. This leads to the overall system behaviour being both unpredictable and uncontrollable.

Self-organization

Self-organization can be defined as the spontaneous emergence of global structure out of local interactions. “Spontaneous” means that no internal or external agent is in control of the process: for a large enough system, any individual agent can be eliminated or replaced without damaging the resulting structure. The process is truly collective, i.e. parallel and distributed over all the agents. This makes the resulting organization intrinsically robust and resistant to damage and perturbations. (Heylighen, 2008, p. 6)

The second law of thermodynamics says that “energy spontaneously tends to flow only from being concentrated in one place to becoming diffused and spread out.” (Lambert, 2015). An illustrative example is the fact that a hot frying pan cools down when it is taken off the kitchen stove: its thermal energy (“heat”) flows out to the cooler room air. The opposite never happens.

The second law of thermodynamics might at first glance appear to imply that all systems must degrade and cannot be sustained, but this is not the case. The second law was formulated for a different class of phenomena (originally steam engines) than living systems. The original class relates to steady-state phenomena close to thermodynamic equilibrium (having the same thermodynamic properties, e.g. heat). Living and more complex systems are steady-state phenomena far from thermodynamic equilibrium. They are not isolated but depend on a steady flux of energy that is dissipated to maintain a local state of organisation.

Metaphorically, the micro level serves as an entropy sink, permitting overall system entropy to increase while sequestering this increase from the interactions where self-organization is desired. (Parunak & Brueckner, 2001, p. 124)

In other words, at the macro level there is an apparent reduction in entropy (a measure of the spontaneous dispersal of energy), but at the micro level random processes greatly increase entropy. The system exports this entropy to its environment; for example, when we breathe we excrete carbon dioxide.

The term waste is not really suitable for the products of excretion, because they may actually be used as input for other systems; plants excrete oxygen, which we humans require to survive. A better term is negentropy, which is the entropy that a living system exports in order to keep its own entropy low. In summary, living systems delay decay into thermodynamic equilibrium, i.e. death, by feeding upon negentropy to compensate for the entropy that they produce while living; or, to put it even more simply, they suck orderliness from their environment.

An organism stays alive in its highly organized state by taking energy from outside itself from a larger encompassing system and processing it to produce within itself a lower entropy more organized state (Schneider & Kay, 1992, p. 26)

Autopoiesis

Regenerative cycling (autopoiesis) is another common feature of self-organizing systems.

To destroy exergy, self-organising systems use the same general strategy: They load high exergy energy into compounds which later will give it away in degraded form. For efficient exergy uptake, a constant supply of compounds with low exergy must be available. These are often provided by an internal organisation supplying the site of exergy loading in the system with degraded material to be “reconstructed”. If degraded material exist in ubiquitous amounts, there is no need for the organisation to provide it, but if the material is limited, a cyclic organisation delivering the material to the site of exergy uptake has survival value for the system. The more a substance is limiting, the higher the survival value for an organisation that keeps it within the system and transports it efficiently to the “re-loading” area of the system. This phenomenon is called the regenerative cycle. (Günther, 1994, p. 7)

The reason why more complex systems tend to be nested may be that nested complex systems have a larger capacity to degrade exergy than non-nested systems, because of the multiple layers of network reinforcement by feedback.

Dissipative structures

The view of self-organization that has been covered so far leads nicely into ‘dissipative structures’.

In Prigoginian terms, all systems contain subsystems, which are continually “fluctuating.” At times, a single fluctuation or a combination of them may become so powerful, as a result of positive feedback, that it shatters the pre-existing organization. At this revolutionary moment – the authors call it a “singular moment” or a “bifurcation point” – it is inherently impossible to determine in advance which direction change will take: whether the system will disintegrate into “chaos” or leap to a new, more differentiated, higher level of “order” or organization, which they call a “dissipative structure.” (Such physical or chemical structures are termed dissipative because, compared with the simpler structures they replace, they require more energy to sustain them.) (Prigogine & Stengers, 1984, p. 17)

A whirlpool is an example of a dissipative structure, and it could have been called ‘doubly dissipative’ because it requires a continuous flow of matter and energy to maintain its form. When the influx of external energy stops or falls below a certain threshold, the whirlpool will degrade. Other examples of dissipative structures include refrigerators, flames and hurricanes.

Attractors

In relation to self-organization, the term attractor comes up frequently. It is a mathematical term which refers to a value or set of values toward which the variables in a dynamical system tend to evolve. A dynamical system is a system whose state evolves with time over a state space according to a fixed rule. A state space is the set of values that a process or system can take.

Attractors emerge, or at least will get stronger, when systems are moved out of equilibrium. Exergy is the energy that is available to be used. After the system and surroundings reach equilibrium, the exergy is zero.

Exergy is a measure of how far a system deviates from thermodynamic equilibrium. […] The existence of an exergy gradient over a system drives it away from equilibrium. […] If a system is moved away from thermodynamic equilibrium by the application of a gradient of exergy, an attractor for the system can emerge for the system to organise in a way that reduces the effect of the applied gradient. […] An increase of the applied gradient will also increase the strength of the attractor. (Günther, 1994, pp. 5-7)

One of the most common ways in which systems reach these attractors is through small, random fluctuations which are then amplified by positive feedback. This process is referred to as “order from noise”, a special case of the principle of selective variety. In summary, “order from noise” means that random perturbations (“noise”) cause the system to explore a variety of states in its state space. This exploration increases the chance that the system will arrive in the basin of a “strong” or “deep” attractor, from which it will then quickly enter the attractor itself.

Multiple equilibria occur when several different local regions of the same phase space are attractors. Minor perturbations can cause the system to shift between different equilibria or attractors, causing abrupt and dramatic changes in the system.
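
The “order from noise” idea and multiple equilibria can be sketched with a double-well system (my own illustration, assuming the potential V(x) = x^4 - 2x^2, which has two attractors at x = -1 and x = +1): noise lets trajectories starting at the unstable point x = 0 wander until they fall into one basin or the other.

```python
import random

random.seed(1)

def settle(x=0.0, steps=2000, eta=0.01, noise=0.05):
    for _ in range(steps):
        grad = 4 * x**3 - 4 * x                    # slope of V(x) = x^4 - 2x^2
        x += -eta * grad + random.gauss(0, noise)  # drift downhill, plus noise
    return x

print([round(settle(), 2) for _ in range(5)])  # each run ends near -1.0 or +1.0
```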

Thresholds

Thresholds mark the borders between different equilibria. This means that crossing a threshold can produce dramatic changes in the system. The term ‘threshold’ is used to broadly define the minimum amount of change required before impacts cause bifurcations or are recognized as important or dangerous. Thresholds can also be conditionally dependent. That is, there may be many interdependent thresholds, or thresholds that become apparent only after other specific conditions have been met. This, along with their dependence on initial conditions, couplings with other system components, and rapid change between multiple equilibria, often makes thresholds hard to predict accurately.

Path dependence/hysteresis

Path dependence and hysteresis are both related to the phenomenon of system memory and mean that the system cannot be explained from its current conditions alone. They tell us that a system’s state depends not only on the system dynamics and input, but also on the previous states of the system, such as its initial conditions.

Path dependence is the idea that the current state of a system depends on the path that it has taken. Hysteresis occurs when the removal of a stimulus does not result in the system returning to its initial conditions; that is, the system behaves irreversibly. An example of path dependence in the climate system is vegetation cover. There are parts of the world where both dry grassland and wet rainforest are possible, despite having the same climate boundary conditions. The state in which the system stabilizes depends on the system’s past. It is possible that a fire or deforestation by humans could cause the rainforest to irreversibly become grassland even though the climate boundary conditions remain the same, because each vegetation type modifies its local climate and creates stable local conditions for its own existence. Another example of this is Arctic sea ice. Once it is lost, sea ice is very hard to regrow sufficiently to subsist through the summer melt, even though thick sea ice could persist stably in the same climate conditions.
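
A relay with a dead band is perhaps the simplest sketch of hysteresis (my own illustration, with made-up thresholds): for inputs between the two thresholds, the output depends on the path by which the input arrived, not on its current value.

```python
def relay(inputs, state=False, on_at=22, off_at=18):
    history = []
    for u in inputs:
        if u >= on_at:
            state = True
        elif u <= off_at:
            state = False
        # between the thresholds the previous state simply persists
        history.append(state)
    return history

rising = [16, 18, 20, 22, 20]   # last crossed the ON threshold before ending at 20
falling = [24, 22, 20, 18, 20]  # last crossed the OFF threshold before ending at 20
print(relay(rising)[-1], relay(falling)[-1])  # True False: same input, different state
```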

Emergence

An intuitive understanding of emergence can be gained by looking at a painting made with the technique of pointillism. When you look at it up close, all you can see is dots, but as you move further back the overall image begins to resolve. Unfortunately, emergence, although it can be understood intuitively, is not a well-clarified concept (Corning, 2002, pp. 6-8).

The concept of emergence is generally seen in contexts where two metaphysical claims are discussed (Christen & Franklin, pp. 1-2):

  1. Ontological monism is the claim that there is only one type of stuff in the world. The opposing view is the ‘vitalist’ position, which was promoted by Henri Bergson, for example. The vitalist position posits the existence of a life-substance which is inherently different from the inanimate stuff found in rocks and clouds. This life-substance, or what Henri Bergson called ‘élan vital’, is the postulated reason for life’s unique properties. Reductionists and emergentists alike rule out vitalism in favour of ontological monism because they see vitalism as unparsimonious and unscientific.

  2. Hierarchical realism is the claim that any system under investigation can be broken into hierarchical levels. In every system there are at least two levels: the ‘lower level’, which consists of the parts, and the ‘upper level’, which consists of the whole system. The two levels can be connected through:

    • Microdeterminism, where the parts in the lower level and their interactions fully determine the behaviour of the whole system

    • Macrodeterminism, also called downward causation, where the upper level acts causally on the lower level. “downward causation is basically the result of the structural or functional organisation of the parts on the lower level (e.g. a feedback mechanism)” (Christen & Franklin, p. 12)

One well-known argument for why the entities of the world that evolved under disruptive conditions are likely to be organised hierarchically is the watchmaker parable (Simon, 1962, p. 470):

There once were two watchmakers, named Hora and Tempus, who manufactured very fine watches. Both of them were highly regarded, and the phones in their workshops rang frequently—new customers were constantly calling them. However, Hora prospered, while Tempus became poorer and poorer and finally lost his shop. What was the reason?

The watches the men made consisted of about 1,000 parts each. Tempus had so constructed his that if he had one partly assembled and had to put it down—to answer the phone, say—it immediately fell to pieces and had to be reassembled from the elements. The better the customers liked his watches, the more they phoned him and the more difficult it became for him to find enough uninterrupted time to finish a watch.

The watches that Hora made were no less complex than those of Tempus. But he had designed them so that he could put together subassemblies of about ten elements each. Ten of these subassemblies, again, could be put together into a larger subassembly; and a system of ten of the latter subassemblies constituted the whole watch. Hence, when Hora had to put down a partly assembled watch in order to answer the phone, he lost only a small part of his work, and he assembled his watches in only a fraction of the manhours it took Tempus.
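
The arithmetic behind the parable is easy to sketch (my own numbers: assume, hypothetically, a probability p = 0.01 of an interruption as each part is added, and that an interruption destroys only the assembly in progress):

```python
p = 0.01  # assumed chance of interruption per part added

# Tempus must fit all 1000 parts without a single interruption.
tempus = (1 - p) ** 1000   # about 4e-5 chance per attempt

# Hora only ever risks a ten-part job (100 subassemblies, 10 assemblies of
# those, and 1 final assembly: 111 ten-part jobs per watch).
hora = (1 - p) ** 10       # about 0.90 chance per job

print(f"Tempus completes a watch in one go: {tempus:.1e}")
print(f"Hora completes any ten-part job:    {hora:.2f}")
```

Under these assumptions Tempus needs on the order of twenty thousand attempts per completed watch, while Hora wastes only about one ten-part job in ten, which is the point of the parable.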

Emergence can be categorized into a few different types (Christen & Franklin, pp. 6-7):

  • Pure phenomenological emergence – sees emergent properties as products of our ignorance or limitations. They are properties or behaviours that are at first sight surprising to the observer, but after a closer look at the lower level are explainable and no longer surprising. Examples of this can be found in chaotic attractors. Also, planetary motion prior to Kepler would have been considered ‘emergent’, but it turned out to be something rather simple (an ellipse). This sense of emergence sees it not as a claim about the universe, but about our understanding of it.

  • Epistemic emergence – consists mainly in properties or behaviours that appear on the higher level but are reducible in the sense of Nagel-reductionism. Ernest Nagel proposed that theoretical reduction requires “bridge laws” that allow for translation between the vocabularies of the different levels. For example, he would claim that there exist “bridge laws” that truly state a law-like relation between any claim from chemistry, say, and a claim in the “reduction base” (physics). The reason for the existence of a theory for the upper level is basically an instrumental one, as the description of the phenomena is more compressed using the upper-level theory. One example of this is the use of agent-based modelling (ABM).

  • Emergence of macroproperties – is the emergence of macroproperties, of structural or functional organisation, in a self-organised process. An example is the Belousov-Zhabotinsky (BZ) reaction, which gains new properties when far from equilibrium.

  • Theoretical emergence – concerns primary laws. In evolutionary theory, for example, laws appear on an upper level because their application needs a certain minimal degree of structure/organisation, that is, physically represented information which can mutate. The question in this case is whether such laws have the same status as the laws on the basic level.

  • Weak causal emergence – tells us that an upper level phenomenon is weakly emergent with respect to the lower level domain when it arises from the lower level domain, but its truths are unexpected given the principles governing the lower level domain. Weakly emergent properties are seen as being capable of being determined through observing or simulating the system, but not through any prior analysis.

  • Strong causal emergence – tells us that the whole is something more than the sum of its parts. That is, strongly emergent properties are higher level phenomena that directly cause qualities in the lower level components and are irreducible to these constituent components.

  • Mystic emergence – posits the existence of laws or macroproperties that appear at a certain level and are impossible to truly understand. They need to be accepted as a primitive component of nature. Examples of this are vitalism and creationism.

Adaptation

Adaptation is a relationship between a system and its environment. Systems are often classified as adaptable (able to be modified by an external agent) and/or adaptive (able to change themselves).

An example problem (Ashby, 1960, p. 11) demonstrating the concept of adaptive behaviour is that of the cat and the fire. The cat’s behaviour in response to the fire is at first likely to be unpredictable and inappropriate. It may paw at it, stalk it like a mouse, or walk unconcernedly onto it. It is unlikely to use the fire as a means of achieving homeostasis in body temperature; that is, it may sit far from the fire even when cold. Later, when the cat has had enough relevant experience with the fire, it will approach it and seat itself in a place where the heat is moderate. If the fire burns low, it will move nearer. If a hot coal falls out, it will jump away. Its behaviour towards the fire is now considered ‘adaptive’.

A form of behaviour is adaptive if it maintains the essential variables within physiological limits. (Ashby, 1960, p. 57)

Resilience

Resilience is the capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks. (Walker, Holling, Carpenter, & Kinzig, 2004)

A nice way of thinking of resilience is as follows:

I think of resilience as a plateau upon which the system can play, performing its normal functions in safety. A resilient system has a big plateau, a lot of space over which it can wander, with gentle, elastic walls that will bounce it back, if it comes near a dangerous edge. As a system loses its resilience, its plateau shrinks. (Meadows, 2009, p. 77)

Resilience arises from a rich structure of many feedback loops that can work in different ways to restore a system even after a large perturbation. A single balancing loop brings a system stock back to its desired state. Resilience is provided by several such loops, operating through different mechanisms, at different time scales, and with redundancy—one kicking in if another one fails. A set of feedback loops that can restore or rebuild feedback loops is resilience at a still higher level—meta-resilience, if you will. Even higher meta-meta-resilience comes from feedback loops that can learn, create, design, and evolve ever more complex restorative structures. Systems that can do this are self-organizing. (Meadows, 2009, p. 76)

It is important to note that resilience does not mean that the system is static or constant. Resilient systems can be, and often are, very dynamic. Short-term oscillations, fluctuations and long cycles of climax and collapse may be the norm. Conversely, systems that are constant over time can be un-resilient. This presents a problem, because people often desire that systems be measurable and that variations over time be minimised. Most people are unaware of what actually makes a system resilient, as it is often hard to see.

Static stability is something you can see; it’s measured by variation in the condition of a system week by week or year by year. Resilience is something that may be very hard to see, unless you exceed its limits, overwhelm and damage the balancing loops, and the system structure breaks down. Because resilience may not be obvious without a whole-system view, people often sacrifice resilience for stability, or for productivity, or for some other more immediately recognizable system property. (Meadows, 2009, p. 77)

Complex Adaptive Systems

Many natural systems, e.g. brains, immune systems and societies, are complex adaptive systems. Complex adaptive systems display the complexity of complex systems, but they are also able to adapt and evolve with a changing environment. This is often referred to as co-evolution rather than adaptation to a single distinct environment, because the environment itself consists of other adapting systems.

References

  • Ackoff, R. (1981). Creating the corporate future. New York: John Wiley & Sons.

  • Ackoff, R. (1971). Towards a System of Systems Concepts. Management Science, 661-671.

  • Ashby, W. (1999). An Introduction To Cybernetics. London: Chapman & Hall.

  • Ashby, W. (1960). Design for a Brain: The Origin of Adaptive Behavior. New York: Wiley.

  • Bertalanffy, L. von (1968). General System Theory. New York: George Braziller.

  • Booch, G., Maksimchuk, R., Engle, M., Young, B., Conallen, J., & Houston, K. (2007). Object-Oriented Analysis and Design with Applications. Boston: Addison-Wesley.

  • Checkland, P. (2000). Soft Systems Methodology: A Thirty Year Retrospective. Systems Research and Behavioral Science, 11-58.

  • Christen, M., & Franklin, R. The Concept of Emergence in Complexity Science: Finding Coherence between Theory and Practice.

  • Cilliers, P. (2001). Boundaries, Hierarchies and Networks in Complex Systems. International Journal of Innovation Management, 6-7.

  • Corning, P. (2002). The Re-Emergence of “Emergence”: A Venerable Concept in Search of a Theory. Complexity.

  • Edmonds, B. (1996). What is Complexity? - The philosophy of complexity per se with application to some examples in evolution. Manchester: Centre for Policy Modelling.

  • Gharajedaghi, J. (1999). Systems Thinking: Managing Chaos and Complexity. London: Elsevier.

  • Günther, F. (1994). Self-organisation in systems far from thermodynamic equilibrium. Sweden.

  • Heylighen, F. (2008). Complexity and Self-organization. In: Encyclopedia of Library and Information Sciences, eds. M. J. Bates & M. N. Maack.

  • Johnson, N. (2009). Two’s Company, Three is Complexity. In Simply Complexity: A Clear Guide to Complexity Theory (p. 3). Oneworld Publications.

  • Ladyman, J., Wiesner, K., & Lambert, J. (2013). What is a Complex System? European Journal for Philosophy of Science, 4-10.

  • Lambert, F. (2015, July 2). two. Retrieved from The Second Law of Thermodynamics!: http://secondlaw.oxy.edu/two.html

  • Meadows, D. (2009). Thinking in Systems. London: Earthscan.

  • Mikulecky, D. (2005). Complexity science as an aspect of the complexity of science. In U. o. Liverpool, Worldviews, Science and Us (p. 1). Liverpool.

  • Parunak, H., & Brueckner, S. (2001). Entropy and Self-Organization in Multi-Agent Systems. International Conference on Autonomous Agents, 124-130.

  • Prigogine, I., & Stengers, I. (1984). Order out of Chaos: Man’s New Dialogue with Nature. New York: Bantam Books.

  • Rickles, D., Hawe, P., & Shiell, A. (2007). A Simple Guide to Chaos and Complexity. J Epidemiol Community Health.

  • Schneider, E., & Kay, J. (1992). Life as a Manifestation of the Second Law of Thermodynamics. Mathematical and Computer Modelling, 25-48.

  • Simon, H. (1962). The Architecture of Complexity. Proceedings of the American Philosophical Society, 106(6), 467-482.

  • Walker, B., Holling, C., Carpenter, S., & Kinzig, A. (2004). Resilience, Adaptability and Transformability in Social–ecological Systems. Ecology and Society, http://www.ecologyandsociety.org/vol9/iss2/art5/.

  • Watts, D. (2002). A Simple Model of Global Cascades on Random Networks. Proceedings of the National Academy of Sciences of the United States of America, (pp. 5766-5771). National Academy of Sciences.

  • Weaver, W. (1948). Science and Complexity. American Scientist.

  • Weinberg, G. (1975). An Introduction to General Systems Thinking. New York: Wiley.