5  Self-organization

5.1 Introduction

We saw chaos and phase transitions in Chapters 2 and 3 and will now focus on a third amazing property: self-organization. Self-organization plays an essential role in psychological and social processes. It operates in our neural system at the neuronal level, in perceptual processes as well as in higher cognition. In human interactions, self-organization is a key mechanism in cooperation and opinion polarization.

Self-organization is captivating because it reveals the remarkable ability of complex systems to generate order and structure without external control or intervention.

Unlike chaos and phase transitions, self-organization lacks a generally accepted definition. The definition that comes closest to consensus is that self-organization, or spontaneous order, is a process in which global order emerges from local interactions between parts of an initially disordered complex system. These local interactions are often fast, while the global behavior takes place on a slower time scale. Self-organization takes place in an open system, which means that energy, such as heat or food, can be absorbed. Finally, some feedback between the global and local properties seems to be essential. Self-organization occurs in many physical, chemical, biological, and human systems. Examples of self-organization include the laser, turbulence in fluids, convection cells in fluid dynamics, chemical oscillations, flocking, neural waves, and illegal drug markets. For a systematic review of research on self-organizing systems, see Kalantari, Nazemi, and Masoumi (2020). There are many great online videos. I recommend “The Surprising Secret of Synchronization” as an introduction. For a short history of self-organization research, I refer to the Wikipedia page on “Self-Organization.” For an extended historical review, I refer to Krakauer (2024).

This chapter also marks a transition from the study of systems with a small number of variables to systems with many variables. We now focus on tools and models for studying multi-element systems, such as agent-based modeling and network theory. We will see complexity and self-organization in action! This is not to say that the earlier chapters are not an essential part of complex-systems research. The global behavior of complex systems can often be described by a small number of variables that behave in a highly nonlinear fashion. To study this global behavior, chaos, bifurcation, and dynamical systems theory are indispensable tools.

The main goal of this chapter is to provide an understanding of self-organization processes in different sciences, and in psychology in particular. I will do this by providing examples from many different scientific fields. It is important to be aware of these key examples, as they can inspire new lines of research in psychology.

We will learn to simulate self-organizing processes in neural and social systems using agent-based models. To this end, we will use R and another tool, NetLogo. NetLogo is an open-source programming language developed by Uri Wilensky (2015). There are (advanced) alternatives, but as a general tool NetLogo is very useful and fun to work with.

I start with an overview of self-organization processes in the natural sciences, then I will introduce NetLogo and some examples. I will end with an overview of the application of self-organization in different areas of psychology.

5.2 Key examples from the natural sciences

5.2.1 Physics

One physical example of self-organization is the laser. An important founder of complex-systems theory is Hermann Haken (1977). He developed synergetics, a specific approach to the study of self-organization and complexity in systems that is also popular in psychology. Synergetics originated in Haken’s work on lasers. We will not discuss lasers in detail here, but the phenomenon is fascinating. Light from an ordinary lamp is irregular (unsynchronized). When the energy supplied to a laser is increased, a transition to powerful coherent light occurs. In the field of synergetics, the order parameter is the term used to describe the coherent laser light wave that emerges. The individual atoms within this system move in a manner consistent with this emergent property, which is, unfortunately, called enslavement. Interestingly, the motion of these atoms contributes to the formation of the order parameter, that is, the laser light wave. Conversely, the laser light wave dominates the movement of the individual atoms. This interaction exhibits a cyclical cause-and-effect relationship, or strong emergence (cf. fig. 1.2). Synergetics has been applied, as we will see, to perception (Haken 1992) and coordinated human movement (Fuchs and Kelso 2018).

An order parameter is a quantitative measure to describe the degree of order within a system, especially in the context of phase transitions.

Another famous example, which will be very important for psychological modeling later, is the Ising model of magnetism. In the standard 2D version of the model, atoms are locations on a two-dimensional grid. Atoms have left (\(-1\)) or right (\(1\)) spins. When the spins are aligned (all \(1\) or all \(-1\)), we have an effective magnet. If they are not aligned, the effect of the individual spins is canceled out. Two variables control the behavior of the magnet: the temperature of the magnet and the external magnetic field. The lower the temperature, the more the spins align. The temperature at which the magnet loses its magnetic force is called the Curie point (see YouTube for some fun demonstrations). The external field could be caused by another magnet.

The Ising model (replaced by more advanced models of magnetism in modern physics) has found applications in many sciences.

At high temperatures, all the atoms behave randomly, and the magnet loses its magnetic effect.

Figure 5.1: Schematic picture of the magnet. Spins, \(x\), can be left (\(-1\)) or right (\(1\)). At lower temperatures, \(T\), the spins tend to align with neighboring spins and the external field, \(\tau\), resulting in magnetism.

The main model equations of the Ising model are:

\[ H\left( \mathbf{x} \right) = - \sum_{i}^{n}{\tau x_{i}} - \sum_{< i,j >}^{}{x_{i}x_{j}}, \tag{5.1}\]

\[ P\left( \mathbf{X} = \mathbf{x} \right) = \frac{\exp\left( - \beta H\left( \mathbf{x} \right) \right)}{Z}. \tag{5.2}\]

The first equation defines the energy of a given state vector \(\mathbf{x}\) (for \(n\) spins with states \(-1\) and \(1\)). The notation \(<i,j>\) in the summation means that we sum over all neighboring, or linked, pairs. Vectors and matrices are represented using bold font.

The external field and temperature are \(\tau\) and \(T\) (\(1/\beta\)), respectively. The first equation simply states that nodes congruent with the external field lower the energy. Also, neighboring nodes with equal spins lower the energy. Suppose we have only four connected positive spins (right column of figure 5.1) and no external field, then we have \(\mathbf{x} = (1,1,1,1)\) and \(H = - 6\). This is also the case for \(\mathbf{x} = ( - 1, - 1, - 1, - 1)\), but any other state has a higher energy.

With an external field we can force the spins to be all left or all right.

The second equation defines the probability of a certain state (e.g., all spins \(1\)). This probability requires a normalization, \(Z\), to ensure that the probabilities over all possible states sum up to 1. For large systems (\(N > 20\)), the computation of \(Z\) becomes a serious problem, as the number of possible states, \(2^N\), grows exponentially. If the temperature is very high, that is, \(\beta\) is close to 0, \(\exp\left( - \beta H\left( \mathbf{x} \right) \right)\) will be close to 1 for all possible states, and the spins will behave randomly. The differences in energy between states no longer matter.
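For a system this small, equations 5.1 and 5.2 can be made concrete by brute force. The following sketch (in Python; the simulations in this book use R and NetLogo, so this is purely illustrative) enumerates all \(2^4\) states of the fully connected four-spin system and computes the exact Boltzmann probabilities, including \(Z\):

```python
import itertools
import math

def energy(x, tau=0.0):
    # Equation 5.1 for a fully connected system: every pair of spins interacts
    field_term = -tau * sum(x)
    pair_term = -sum(x[i] * x[j]
                     for i in range(len(x)) for j in range(i + 1, len(x)))
    return field_term + pair_term

beta = 1.0  # inverse temperature
states = list(itertools.product([-1, 1], repeat=4))
Z = sum(math.exp(-beta * energy(x)) for x in states)            # equation 5.2
prob = {x: math.exp(-beta * energy(x)) / Z for x in states}
```

The two aligned states have the lowest energy (\(-6\)) and therefore the highest probability; every other state has higher energy and lower probability.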

The randomness of the behavior is captured by the concept of entropy. To explain this a bit better, we need to distinguish the micro- and macrostate of an Ising system. The Boltzmann entropy is a function of the number of ways (\(W\)) in which a particular macrostate can be realized. For \(\sum_{i} x_{i} = 4\), there is only one way (\(\mathbf{x} = (1,1,1,1)\)). But for \(\sum_{i} x_{i} = 0\), there are six ways (\(W = 6\)). The Boltzmann entropies (\(\ln W\)) for these two cases are 0 and 1.79, respectively. The concept of entropy will be important in later discussions.

Entropy is a measure of the degree of disorder or randomness in a system.

The microstate is defined by the configuration \(\mathbf{x}\) of spins, while the macrostate is determined by the sum of spins (similar to how magnetization is defined).
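The micro/macro distinction and the Boltzmann entropy can be verified by simply counting. A short illustrative Python sketch:

```python
import math
from itertools import product

# Count the microstates (configurations x) belonging to each macrostate
# (the sum of the spins) for n = 4
W = {}
for x in product([-1, 1], repeat=4):
    W[sum(x)] = W.get(sum(x), 0) + 1

entropy = {m: math.log(w) for m, w in W.items()}   # Boltzmann entropy: ln W
# W[4] = 1 (only (1,1,1,1)), W[0] = 6, giving entropies 0 and about 1.79
```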

In the simulation of this model, we take a random spin and calculate the energy of the current \(\mathbf{x}\) and the \(\mathbf{x}\) with that particular spin flipped. The difference in energy determines the probability of a flip:

\[ P\left( x_{i} \rightarrow - x_{i} \right) = \frac{1}{ 1 + e^{- \beta\left( H\left( x_{i} \right) - H\left( - x_{i} \right) \right)}}. \tag{5.3}\]

If we do these flips repeatedly, we find equilibria of the model. This is called the Glauber dynamics (more efficient algorithms do exist). The beauty of these algorithms is that the normalization constant \(Z\) falls out of the equation. In this way we can simulate Ising systems with \(N\) much larger than 20.

Glauber dynamics is a simulation technique that updates the spin states in a system based on energy differences and temperature, guiding it toward equilibrium.
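A minimal version of Glauber dynamics is easy to write down. The sketch below (Python, for illustration; the local energy bookkeeping is spelled out so as to be consistent with equations 5.1 and 5.3) updates one random spin at a time on an arbitrary neighborhood structure. Note that only the local energy difference is needed, so \(Z\) indeed never appears:

```python
import math
import random

def glauber_step(x, neighbors, beta, tau=0.0):
    """One Glauber update: flip a random spin with the probability of eq. 5.3."""
    i = random.randrange(len(x))
    local_field = tau + sum(x[j] for j in neighbors[i])
    delta_e = 2 * x[i] * local_field      # energy change if spin i is flipped
    if random.random() < 1.0 / (1.0 + math.exp(beta * delta_e)):
        x[i] = -x[i]

# Example: a ring of 20 spins at low temperature (high beta) stays aligned,
# because any flip raises the energy and is therefore almost never accepted
random.seed(2)
n = 20
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
x = [1] * n
for _ in range(1000):
    glauber_step(x, neighbors, beta=10.0)
```

At high temperature (small beta), the same loop drives the ring toward a random configuration instead.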

Interestingly, in the case of a fully connected Ising network (also called the Curie–Weiss model), the emergent behavior, known as the mean field behavior, can be described by the cusp (Abe et al. 2017; Poston and Stewart 2014). The external field is the normal variable. Temperature acts as a splitting variable. The relationship to self-organization is that when we cool a hot magnet, at some threshold the spins begin to align and soon are all \(1\) or \(-1\). This is the pitchfork bifurcation, creating order out of disorder.1

The mean field behavior is the average magnetic field produced by all spins.
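For the fully connected case, the standard mean-field result (not derived in this chapter; it assumes the coupling is scaled by system size) is the self-consistency equation \(m = \tanh\left( \beta (m + \tau) \right)\) for the average spin \(m\). Iterating it exposes the pitchfork: above the critical temperature only \(m = 0\) survives, while below it two symmetric ordered solutions appear. A Python sketch under these assumptions:

```python
import math

def mean_field_magnetization(beta, tau=0.0, m0=0.5, iters=2000):
    # Fixed-point iteration of the self-consistency equation
    # m = tanh(beta * (m + tau))
    m = m0
    for _ in range(iters):
        m = math.tanh(beta * (m + tau))
    return m

hot = mean_field_magnetization(beta=0.5)                  # above T_c: m -> 0
cold_up = mean_field_magnetization(beta=2.0, m0=0.5)      # below T_c: m > 0
cold_down = mean_field_magnetization(beta=2.0, m0=-0.5)   # symmetric branch
```

Which of the two ordered branches the system lands on depends only on the initial condition, just as the cooled magnet ends up all \(1\) or all \(-1\).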

In the 2D Ising model (see figure 5.1), the connections are sparse (only local), and more complicated (self-organizing) behavior occurs. We will simulate this in NetLogo later in this chapter, Section 5.3.2.2, and as a model of attitudes in Chapter 6, Section 6.3.3.

A fully connected Ising model behaves according to the cusp. In less connected networks of Ising spins, self-organizing patterns can emerge.

5.2.2 Chemistry

Other founders of self-organizing systems research are Ilya Prigogine and Isabelle Stengers. Prigogine won the 1977 Nobel Prize in chemistry for his work on self-organization in dissipative systems. These are systems far from thermodynamic equilibrium (due to high energy input) in which complex, sometimes chaotic, structures form due to long-range correlations between interacting particles. One notable example of such behavior is the Belousov–Zhabotinsky reaction, an intriguing nonlinear chemical oscillator.

Stengers and Prigogine authored the influential book Order Out of Chaos (1978). This work significantly influenced the scientific community, particularly through its treatment of the second law of thermodynamics. One way of stating the second law is that heat flows spontaneously from hot objects to cold objects, and not the other way around, unless external work is applied to the system. A more appealing example might be the student room that never naturally becomes clean and tidy, but rather the opposite.

The second law of thermodynamics states that the total entropy of an isolated system always increases over time and never decreases, meaning that spontaneous processes in nature tend to move toward a state of increasing disorder or randomness.

Stengers and Prigogine (1978) argued that while entropy indeed increases in closed systems, the process of self-organization in open systems can create ordered structures, resulting in a net decrease in what they referred to as “local entropy.” Prigogine and Stengers placed particular emphasis on irreversible transitions, highlighting their importance in understanding complex systems. While the catastrophe models we previously discussed exhibited symmetrical transitions (sudden jumps in the business card are symmetric), Prigogine’s research revealed that this symmetry does not always hold true.

Irreversible transitions refer to changes in a system that cannot be reversed by simply reversing the conditions that caused the change, often resulting in a permanent change in the state or structure of the system.

To illustrate this point, consider the analogy of frying an egg. The process of transforming raw eggs into a fried form represents a phase transition, but it is impossible to reverse this change and unfry the egg. Prigogine linked these irreversible transitions to a profound question regarding the direction of time, commonly known as the arrow of time. Although it is a fascinating topic in itself, we will not explore it further here.

5.2.3 Biology

There is no shortage of founders of complex-systems science. Another fantastic book is Stuart Kauffman’s The Origins of Order (1993), which introduces the concept of self-organization into evolutionary theory. He argues that the small incremental steps in neo-Darwinistic processes cannot fully explain natural evolution. If you want to know about adaptive walks and niche hopping in rugged fitness landscapes, you need to read his book (Kauffman 1993). Another influential theory is that of punctuated equilibria, which proposes that species undergo long periods of stability interrupted by relatively short bursts of rapid evolutionary change (Eldredge and Gould 1972).

A neat example of the role of self-organization in evolution is the work on spiral wave structures in prebiotic evolution by Boerlijst and Hogeweg (1991). This work builds on Eigen and Schuster’s (1979) classic work on the information threshold. Evolution requires the copying of long molecules. But in a system of self-replicating molecules, the length of the molecules is limited by the accuracy of replication, which is related to the mutation rate. Eigen and Schuster showed that this threshold can be overcome if such molecules are organized in a hypercycle in which each molecule catalyzes its nearest neighbor. However, the hypercycle was shown to be vulnerable to parasites. These are molecules that benefit from one neighbor but do not help another. This molecule will outcompete the others, and we are back to the limited one-molecule system.

A hypercycle is a network of self-replicating molecules or entities that mutually support each other’s production, leading to an increase in complexity and stability beyond what individual entities could achieve alone.

What Boerlijst and Hogeweg did was to implement the hypercycle in a cellular automaton. In the hypercycle simulation, cells could be empty (dead) or filled with one of several colors. Colors die with some probability but are also copied to empty cells with a probability that depends on whether there is a catalyzing color in the local neighborhood. One of the colors is a parasite, catalyzed by one color but not catalyzing any other colors. The effect, which you will see later using NetLogo, is that rotating global spirals emerge that isolate the parasites so that a stable hypercycle prevails.

A cellular automaton (CA) is usually a two-dimensional grid of cells, where cells interact with their neighbors, as in the 2D Ising model, but this can be generalized to more or less dimensions.

Many examples of self-organization come from ecosystem biology. We will see a simulation of flocking in NetLogo later, but I also want to highlight the collective behavior of ants (figure 5.2).

Figure 5.2: The ant bridge is an example of collective behavior.

Ants exhibit amazing forms of globally organized behavior. They build bridges, nests, and rafts, and they fight off predators. They even relocate nests. Ant colonies use pheromones and swarm intelligence to relocate. Scouts search for potential sites, leaving pheromone trails. If a promising location is found, more ants follow the trail, reinforcing the signal. Unsuitable sites result in fading trails. Once a decision is made, the colony collectively moves to the chosen site, transporting their brood and establishing a new nest.

It is not a strange idea to think of an ant society as a living organism. Note that all this behavior is self-organized. There is clearly no super ant that has a blueprint for building bridges and telling the rest of the ants to do certain things. Ants also don’t hold lengthy management meetings to organize. The same is true of flocks of birds. There is no bird that chirps commands to move collectively to the left, to the right, or to split up. This is true of human brains. An individual neuron is not intelligent.

Our intelligence is based on the collective behavior of billions of neurons.

5.2.4 Computer science

Another important source of self-organization research is computer science. A simple but utterly amazing example is the work on John Conway’s Game of Life (Berlekamp, Conway, and Guy 2004). The rules are depicted in figure 5.3.

Figure 5.3: The rules of the Game of Life.

For each cell, given the states of its neighbors, the next state for all cells is computed. This is called synchronous updating.2 It is hard to predict what will happen if we start from a random initial state. But you can easily verify that a block of four squares is stable, and a line of three blocks will oscillate between a horizontal and a vertical line.
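The rules in figure 5.3 fit in a few lines of code. The following Python sketch (an illustration only; the chapter itself uses NetLogo and Golly for this) performs one synchronous update on a set of live cells, and reproduces the two facts just mentioned: the block is stable and the three-cell line oscillates.

```python
from collections import Counter
from itertools import product

def life_step(alive):
    """One synchronous Game of Life update on a set of live (row, col) cells."""
    # Count, for every cell adjacent to a live cell, its live neighbors
    counts = Counter((r + dr, c + dc)
                     for (r, c) in alive
                     for dr, dc in product((-1, 0, 1), repeat=2)
                     if (dr, dc) != (0, 0))
    # Birth with exactly 3 live neighbors; survival with 2 or 3
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

block = {(0, 0), (0, 1), (1, 0), (1, 1)}     # stable
blinker = {(0, 0), (0, 1), (0, 2)}           # oscillates with period 2
```

Representing the board as a set of live cells, rather than a fixed grid, is what allows patterns such as gliders to wander indefinitely.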

A great tool for playing around with the Game of Life is Golly, a freely available application for computers and mobile phones. I ask you to download and open Golly, draw some random lines, press Enter, and see what happens. Often you will see it converging to a stable state (with oscillating subpatterns). Occasionally you will see walkers or gliders (zoom out). These are patterns that move around the field.

Random initial patterns rarely lead to anything remarkable, but by choosing special initial states, surprising results can be achieved. First, take a look at the Life folder within the Patterns folder. Take, for example, the line-puffer superstable or one of the spaceship types. Note that you can use the + and − buttons to speed up and slow down the simulation. My favorite is the metapixel galaxy in the HashLife folder. What this does is simulate the Game of Life in the Game of Life! Zoom in and out to see what really happens. I’ve seen this many times, and I’m still baffled. A childish but fun experiment is to disturb the metapixel galaxy in a few cells. This leads to a big disturbance and a collapse of the pattern.

The Turing machine is a theoretical machine developed by Alan Turing in 1936 that, despite its simplicity, can implement any computer algorithm, including, of course, the Game of Life!

I was even more stunned to see that it is possible to create the (universal) Turing machine in the Game of Life (Rendell 2016). The Game of Life implementation of the Turing machine is shown in figure 5.4. This raises the question of whether we can build self-organizing intelligent systems using elementary interactions between such simple elements. Actually, we can to some extent, but by using a different setup, based on brain-like mechanisms (see the next section on neural networks).

Figure 5.4: The Turing machine built in the Game of Life. (Reproduced from LifeWiki.)

Another root of complex-systems theory and the role of self-organization in computational systems is cybernetics (Ashby 1956; Wiener 2019). To give you an idea of this highly original work, I will only mention the titles of a few chapters of Norbert Wiener’s book, originally published in 1948: “Gestalt and Universals,” “Cybernetics and Psychopathology,” “On Learning and Self-Reproducing Machines,” and, finally, “Brainwaves and Self-Organization.” And this was written in 1948!

Cybernetics studies circular causal and feedback mechanisms in complex systems, focusing on how systems regulate themselves, process information, and adapt to changes in their environment.

The interest in self-organization is not only theoretical. In optimization, that is, the search for the best parameters of a model describing some data, techniques inspired by cellular automata and self-organization have been applied (Langton 1990; Xue and Shen 2020). I have always been fascinated with genetic algorithms (Holland 1992a; Mitchell 1998), where the solutions to a problem (sets of parameter values) are individuals in an evolving population. Through mutation and crossover, better individuals evolve. This is a slow but very robust way of optimizing, preventing convergence to local minima.

Genetic algorithms are a class of optimization algorithms inspired by the process of natural selection, where solutions to a problem evolve over generations.
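To make the mutation-and-crossover loop concrete, here is a toy genetic algorithm in Python that maximizes the number of 1-bits in a bit string (the classic OneMax problem; all parameter values are arbitrary choices for illustration):

```python
import random

def onemax_ga(n_bits=20, pop_size=30, gens=100, p_mut=0.02, seed=0):
    """Toy genetic algorithm maximizing the number of 1-bits (OneMax)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        # Binary tournament selection: the fitter of two random individuals
        a, b = rng.sample(pop, 2)
        return a if sum(a) >= sum(b) else b

    for _ in range(gens):
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)               # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            new_pop.append(child)
        pop = new_pop
    return max(sum(ind) for ind in pop)

best = onemax_ga()   # close to the optimum of 20 after 100 generations
```

Real applications replace the bit-counting fitness with, for example, the fit of a model to data, but the evolutionary loop stays the same.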

John Henry Holland is considered one of the founding fathers of the complex-systems approach in the United States. He has written a number of influential books on complex systems. His most famous book, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control Theory, and Artificial Intelligence (Holland 1992b), has been cited more than 75,000 times.

A self-organizing algorithm that has played a large role in my applied work is the Elo rating system developed for chess competitions (Elo 1978). Based on the outcomes of games, ratings of chess players are estimated, which in turn are used to match players in future games. Ratings converge over time, but adjust as players’ skills change. We have adapted this system for use in online learning systems where children play against math and language exercises (Maris and van der Maas 2012). The ratings of children and exercises are estimated on the fly in a large-scale educational system (Klinkenberg, Straatemeier, and van der Maas 2011). We built this system to collect high-frequency learning data to test our hypotheses on sudden transitions in developmental processes, but it was more successful as an online adaptive practice system. We collected billions of item responses with this system (Brinkhuis et al. 2018).

The Elo rating system is a self-organizing method of calculating the relative skill levels of players in head-to-head games based on the results of their games.
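The basic Elo update is simple enough to sketch here. The code below shows the standard chess version in Python (the adaptive-learning variant of Maris and van der Maas differs in details, such as how the K-factor and expected scores are handled):

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """Update two ratings after one game; score_a is 1 (win), 0.5, or 0."""
    # Expected score of A from the logistic curve (400-point scale)
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Two equally rated players: the winner gains exactly what the loser loses
a, b = elo_update(1500.0, 1500.0, score_a=1.0)   # -> 1516.0, 1484.0
```

Because the update depends on the expected score, a strong player gains little by beating a weak one, which is what makes the system self-correcting.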

5.2.5 Neural networks

The current revolution in AI, which is having a huge impact on our daily lives, is due to a number of self-organizing computational techniques. Undoubtedly, deep learning neural networks have played the largest role. A serious overview of the field of neural networks is clearly beyond the scope of this book, but one cannot understand the role of complex systems in psychology without knowing at least the basics of artificial neural networks (ANNs), that is, networks of artificial neurons. ANNs consist of interconnected nodes, or “neurons,” organized into layers that process information by propagating signals through the network. ANNs are trained on data to learn patterns and relationships, enabling them to perform tasks such as classification, regression, and pattern recognition.

Artificial neural networks are computational models inspired by the structure and function of biological neural networks.

Artificial neurons are characterized by their response to input from other neurons in the network, which is typically weighted and summed before being passed through an activation function. This activation function may produce either a binary output or a continuous value that reflects the level of activation of the neuron. The input could be images, for example, and the output could be a classification of these images. The important thing is that neural networks learn from examples.

Unsupervised learning is based on the structure of the input. A famous unsupervised learning rule is the Hebb rule (Hebb 1949), which states that what fires together wires together. Thus, neurons that correlate in activity strengthen their connection (and otherwise connections decay). In supervised learning, connections are updated based on the mismatch between model output and intended output through backpropagation. Hebbian learning and backpropagation are just two of the learning mechanisms used in modern ANNs.

Backpropagation is a mechanism to update specific connections such that this mismatch or error is minimized over time.

Modern large language models, like GPT, differ from traditional backpropagation networks in terms of their architecture, training objective, pre-training process, scale, and application. Large language models use transformer architectures, undergo unsupervised pre-training followed by supervised fine-tuning, are trained on massive amounts of unlabeled data, are much larger in size, and are primarily used for natural language-processing tasks.

Another important distinction is between feedforward and recurrent neural networks. An interesting recurrent unsupervised model is the Boltzmann machine. It is basically an Ising model (see Section 5.2.1) where the connections between nodes have continuous values. These connections or weights can be updated according to the Hebb rule. A simple setup of the Boltzmann machine is to take a network of connected artificial neurons and present the inputs to be learned in some sequence by setting the states of these neurons equal to the input. The Hebb rule should change the weights between neurons so that the Boltzmann machine builds a memory for these input states. This is the training phase. In the test phase, we present partial states by setting some, but not all, nodes to the values of a particular learned input pattern. By the Glauber dynamics, we update the remaining states that should take on the values belonging to the pattern. This pattern completion task is typical for ANNs.

Feedforward neural networks process information in a single forward pass, while recurrent neural networks have directed cycles, allowing them to capture temporal dependencies.

The Hebb rule states that neurons that fire together wire together.
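The training and test phases just described can be sketched in a few lines. The Python code below is a simplified illustration rather than a full Boltzmann machine: the weights come from a single pass of the Hebb rule over one stored pattern, and the test phase completes that pattern from a clamped half via low-temperature Glauber updates.

```python
import math
import random

def hebb_weights(patterns):
    # Hebb rule: units that are co-active get a stronger (positive) weight
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for x in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += x[i] * x[j] / len(patterns)
    return w

def complete(w, x, clamped, beta=10.0, steps=2000, seed=0):
    # Test phase: Glauber updates of the unclamped units at low temperature
    rng = random.Random(seed)
    x = list(x)
    free = [i for i in range(len(x)) if i not in clamped]
    for _ in range(steps):
        i = rng.choice(free)
        field = sum(w[i][j] * x[j] for j in range(len(x)))
        p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
        x[i] = 1 if rng.random() < p_up else -1
    return x

pattern = [1, 1, -1, -1, 1, 1, -1, -1]
w = hebb_weights([pattern])
# Clamp the first four units to the pattern and corrupt the remaining four
start = pattern[:4] + [-v for v in pattern[4:]]
recalled = complete(w, start, clamped={0, 1, 2, 3})
```

The clamped units bias the local fields of the free units toward the stored pattern, so the dynamics settle into the learned memory: pattern completion.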

This setup is called the general or unrestricted Boltzmann machine, where any node can be connected to any other node and each node is an input node. The restricted Boltzmann machine (RBM) is much more popular because of its computational efficiency. In an RBM, nodes are organized in layers, with connections between layers but not within layers. In a deep RBM, we stack many of these layers, which can be trained in pairs (figure 5.5).3 Other prominent approaches are the Kohonen self-organizing maps and the Hopfield neural network.

Figure 5.5: The deep learning restricted Boltzmann machine.

The waves of popularity of neural networks are closely related to the development of supervised learning algorithms, where the connections between artificial neurons are updated based on the difference between the output and the desired or expected output of the network. The first supervised ANN, the perceptron, consisted of multiple input nodes and one output node and was able to classify input patterns from linearly separable classes. This included the OR and AND relations but excluded the XOR relation. In the XOR, no weighted sum of the two bits separates the classes. By adding a hidden layer to the perceptron, the XOR can be solved, but it took many years to develop a backpropagation rule for multilayer networks such that they can learn this nonlinear classification from examples. We will do a simple simulation in NetLogo later. Although they are extremely powerful, it is debatable whether backprop networks are self-organizing systems. Self-organizing systems are characterized by their ability to adapt to their environment without explicit instructions. Unsupervised neural networks are more interesting in this respect.

In the XOR pattern, the combinations of 00 and 11 are false, 01 and 10 are true.
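The XOR limitation, and its solution with one hidden layer, can be demonstrated directly. The Python sketch below hard-wires a tiny two-layer threshold network that computes XOR, and then checks by brute force that no single threshold unit over a small grid of weights can do the same (the grid search illustrates, but of course does not prove, the classic impossibility result):

```python
import itertools

def step(z):
    # Threshold activation of an artificial neuron
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: h1 computes OR, h2 computes AND;
    # the output unit computes "OR and not AND", which is XOR
    h1 = step(x1 + x2 - 0.5)
    h2 = step(x1 + x2 - 1.5)
    return step(h1 - h2 - 0.5)

inputs = list(itertools.product([0, 1], repeat=2))

# Brute force: does any single threshold unit on this weight grid match XOR?
grid = [v / 2 for v in range(-8, 9)]
separable = any(
    all(step(w1 * a + w2 * b + w0) == (a ^ b) for a, b in inputs)
    for w1, w2, w0 in itertools.product(grid, repeat=3)
)
```

The hidden units carve the input space into two linearly separable pieces, which is exactly what the single-layer perceptron cannot do on its own.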

All these models were known at the end of the twentieth century, but their usefulness was limited. This has changed due to some improvements in algorithms but especially in hardware. Current deep-learning ANNs consist of tens of layers with billions of nodes, trained on billions of inputs using dedicated parallel processors (e.g., Schmidhuber 2015).

Neural networks are at the heart of the AI revolution, but other developments, especially reinforcement learning, have also played a key role. Examples are game engines, robots, and self-driving cars. Note that the study of reinforcement learning also has its roots in psychology (see Chapter 1 of Sutton and Barto 2018).

Reinforcement learning is essential in AI systems that need to behave or act on the environment.

I was most amazed by the construction and performance of AlphaZero chess. AlphaZero chess (Silver et al. 2018) combines a deep learning neural network that evaluates positions and predicts next moves with a variant of reinforcement learning (Monte Carlo tree search). Amazingly, AlphaZero learns chess over millions of self-played games. This approach is a radical departure from classic chess programs, where brute-force search and built-in indexes of openings and endgames were the key to success. As it learns, it shows a phase transition in learning after about 64,000 training steps (see fig. 7 in McGrath et al. 2022). For an analysis of the interrelations between psychology and modern AI, I refer to van der Maas, Snoek, and Stevenson (2021).

AlphaZero chess is a self-organizing program that learns chess from scratch by playing against itself.

AlphaZero’s use of Monte Carlo tree search is also a form of symbolic artificial intelligence. The idea of combining classic symbolic approaches with neural networks has always been in the air. The third wave of this hybrid approach is reviewed in Garcez and Lamb (2023).

5.2.6 The concept of self-organization

I trust that you now possess some understanding of self-organization and its applications across various scientific fields. Self-organization is a generally applicable concept that transcends various disciplines, yet it maintains strong connections with specific examples within each discipline.

As previously mentioned, the precise definition of self-organization remains under discussion, and a range of criteria continue to be debated. Key questions, such as the degree of order necessary for a system to be deemed self-organized, whether any external influences are permissible, whether a degree of randomness within the system is acceptable, and whether the emergent state must be irreversible, are among the issues that lack definitive resolutions.

This ambiguity in the definition isn’t unusual for psychologists, as many nonformal concepts lack strict definitions. The value of the self-organization concept is primarily found in its concrete examples, its broad applicability, such as in the field of artificial intelligence, and our capability to create simulations of it. The focus of the next section will be on such simulations using a dedicated tool, NetLogo.

5.4 Self-organization in psychology and social systems

In this second part of the chapter, I provide illustrations of research on self-organization within various psychological systems, spanning several subfields of psychology. I begin with an exploration of self-organization in the context of the brain and conclude with an examination of its implications within human organizations. I will point to relevant literature to guide further exploration in other areas.

5.4.1 The brain

Many psychological and social processes involve self-organization. As discussed above, at the lowest level self-organization plays a role in neural systems. Self-organization in the brain is an active area of research (Breakspear 2017; Chialvo 2010; Cocchi et al. 2017; Ooyen and Butz-Ostendorf 2017; Plenz et al. 2021). Dresp-Langley (2020) distinguished seven key properties of self-organization clearly identified in brain systems: modular connectivity, unsupervised learning, adaptive ability, functional resiliency, functional plasticity, from-local-to-global functional organization, and dynamic system growth.

A key example is Walter Freeman’s work on the representation of odors in the brain (Skarda and Freeman 1987). He used EEG measurements to support his nonlinear system model of the brain. Freeman proposed that the brain operates by generating dynamic patterns of electrical activity, which he called attractors.

In Freeman’s theory, attractors represent stable states of neural activity that arise spontaneously from the interactions between large populations of neurons.

Another influential theory was proposed by neuroscientist Gerald Edelman. His theory of neural Darwinism suggests that the development of the brain’s neural connections is based on a process of competition and selection, rather than being pre-wired in the genes (Edelman 1987). According to Edelman’s theory, the brain is a complex, dynamic system made up of many interconnected neurons that constantly interact with each other and the outside world. The process of competition and selection occurs through the formation of ensembles of neurons that respond to specific stimuli or experiences. An alternative approach was put forward by Carpenter and Grossberg (1987). Their theory focuses on how neural networks in the brain self-organize to process information and adapt to changing environments. It explores the principles governing neural dynamics, leading to the emergence of coherent cognitive and behavioral patterns through interaction and learning within neural systems.

In neural Darwinism, the connections between neurons in successful ensembles become stronger over time, while those in unsuccessful ensembles weaken or disappear.

It has also been claimed that self-organized criticality (SOC) (see the Sandpile model in Section 5.3.1) plays a role in the brain (Bak, Tang, and Wiesenfeld 1988). It is hypothesized that when a system is close to criticality, small perturbations can have large, cascading effects, which can allow the system to rapidly switch between different states of activity in response to changes in the environment. One of the key pieces of evidence for SOC in the brain comes from studies of the distribution of sizes of neural activity events, which has been found to follow a power law distribution, but alternative explanations have been provided (Bédard, Kröger, and Destexhe 2006). This is a technical area of research with many methodological challenges (Lurie et al. 2020; O’Byrne and Jerbi 2022).
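As a hedged illustration of what such power-law evidence looks like (the exponent of 1.5, the minimum size of 1, and the sample size are arbitrary choices for this sketch, not empirical values), we can sample event sizes from a power law in R and inspect the log-log histogram, the roughly straight line that is usually cited as the signature of criticality:

```r
# Illustrative only: sample "avalanche sizes" from a continuous power law
# p(x) ~ x^(-1.5) for x >= 1, using inverse-transform sampling.
set.seed(1)
u <- runif(1e5)
sizes <- (1 - u)^(-1 / (1.5 - 1))  # heavy-tailed sizes, minimum 1

h <- hist(log10(sizes), breaks = 30, plot = FALSE)
keep <- h$counts > 0
plot(h$mids[keep], log10(h$counts[keep]),
     xlab = "log10(size)", ylab = "log10(count)")  # approximately a straight line
```

Note that an approximately straight line in such a plot is suggestive but not conclusive; as the text mentions, alternative explanations for power-law-like distributions exist.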

A promising general approach to understanding the so-called “predictive” brain functions is the free-energy account (Clark 2013), which implements a form of self-organization (Friston 2009). The brain is not simply reacting to the world around us but is actively generating predictions about what we will see, hear, feel, and experience, based on our past experiences and knowledge. The predictive brain theory suggests that the brain’s predictions are generated through a process of hierarchical inference, in which information from lower-level sensory areas is combined and integrated in higher-level areas to generate more complex predictions about the world. These predictions are then compared to the incoming sensory inputs, and any discrepancies between the predictions and the actual inputs are used to update the predictions and improve the brain’s accuracy over time.

The predictive brain is constantly making predictions about the sensory inputs it receives from the environment, minimizing the discrepancy between expected and actual inputs.
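As a toy sketch of this idea (not Friston’s full free-energy scheme; all variable names and numbers below are illustrative assumptions), a single estimate can be updated by gradient descent on precision-weighted prediction errors, settling on a compromise between the sensory input and a higher-level prediction:

```r
# Toy predictive-coding sketch (illustrative, not Friston's full scheme):
# an estimate x descends the gradient of two squared, precision-weighted
# prediction errors: one against the sensory input s, one against the
# higher-level prediction mu.
s  <- 2.0; prec_s  <- 1.0   # sensory input and its precision (1/variance)
mu <- 0.0; prec_mu <- 0.5   # higher-level prediction and its precision
x <- 0; alpha <- 0.1        # initial estimate and learning rate
for (i in 1:200) {
  eps_s  <- s - x           # sensory prediction error
  eps_mu <- x - mu          # error relative to the higher-level prediction
  x <- x + alpha * (prec_s * eps_s - prec_mu * eps_mu)
}
x  # converges to the precision-weighted average of s and mu
```

The estimate settles closer to the more precise source (here the sensory input); changing the precisions shifts this compromise, which is the core intuition behind precision weighting in predictive-processing accounts.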

5.4.2 Consciousness

Many will agree that higher psychological functions and properties such as thinking, perceiving, remembering, and reasoning, but also personality and emotions (i.e., the mind), emerge out of lower-order brain activities. Of special interest is consciousness. Seth and Bayne (2022) list 22 different theories that link consciousness to neurobiology. Well-known examples are the global workspace theory, the integrated information theory, and higher-order theory. Self-organization plays a role in most of these theories.

The central idea of global workspace theory is that there is a central workspace in the brain, a kind of mental stage where information from various sensory inputs and memory systems is gathered, processed, and integrated. The workspace is not tied to a specific brain region but is thought to emerge from the dynamic interactions of widespread neural circuits.

Information that enters the global workspace becomes available for widespread distribution throughout the brain, allowing for coordinated, conscious processing.

The core proposition of integrated information theory (IIT) is that consciousness is equivalent to a system’s ability to integrate information. According to IIT, the level of consciousness a system possesses can be quantitatively measured by a value called \(\phi\), which represents the amount of integrated information the system can generate; a higher \(\phi\) indicates a higher level of consciousness. For a system to be conscious, it must be able to combine diverse pieces of information into a single coherent whole.

For higher-order theories of consciousness, meta-representations are critical. One might have a representation of a particular perception, such as a flower, and additionally have a meta-representation that acknowledges “I am perceiving a flower.” I find higher-order theories most compelling because they make a clear distinction between unconscious and conscious information processing. A recent and interesting variant is the self-organizing meta-representational account of Cleeremans et al. (2020), as it states that consciousness is something the brain learns to do.

Higher-order theories of consciousness suggest that consciousness arises when the brain represents its own processes to itself.

My thinking about consciousness has been strongly influenced by the work of Douglas Hofstadter, especially his book Gödel, Escher, Bach (Hofstadter 1979). In his view, our sense of self is a construct formed by the brain’s ability to use symbols, such as natural language, to refer to its own activities and experiences. Consciousness is based on symbolic self-reference, thus on meta-representations. I think, with Hofstadter (2007), that this higher-order self has the ability to influence the lower-order processing of the brain, a case of downward causation (Section 1.2).5 For a somewhat critical analysis, I refer to Nenu (2022).

In Hofstadter’s theory, consciousness arises when these self-referential loops (strange loops) reach a certain level of complexity.

Zooming out, having twenty-two theories of consciousness, and this is an underestimate, is a bit much. The lack of empirical data constraints on theories of consciousness is clearly an issue (Doerig, Schurger, and Herzog 2021).

5.4.3 Visual illusions

From the earliest days of psychology as a scientific discipline, researchers were interested in the organizational properties of perception. Gestalt psychologists such as Wertheimer and Koffka claimed that we perceive whole patterns or configurations, not just individual components. The Gestalt psychologists formulated a number of Gestalt principles such as grouping, proximity, similarity, and continuity. A review of a century of research and an analysis of their current role in vision research is provided by Wagemans et al. (2012). Much of the modeling of the self-organizing processes in perception has been done in the tradition of synergetics. Excellent sources are Kelso (1995) and Kruse and Stadler (2012). Grossberg and Pinna (2012) discuss neural implementations of the Gestalt principles.

One might say that visual perception was one of the first applications of self-organization, even before anything like complexity science existed.

Another related approach is the ecological approach to visual perception by Gibson (2014). In Gibson’s approach, perception is not just a process of analyzing sensory input but an active process that involves the perceiver’s relationship to the environment, including the perception of affordances (i.e., opportunities for action) in the environment that guide and shape perception.

The ecological approach highlights how perception is directly informed by the actionable properties of the environment without the need for complex internal processes.

A combination of Gestalt principles, when acting in opposite directions, can lead to all kinds of perceptual illusions. The “Optical Illusion” model in NetLogo’s Model Library illustrates some of them. Check out the code for each illusion; it is extremely short and elegant (figure 5.6).

Figure 5.6: The Kindergarten illusion from The Optical Illusion model in NetLogo.

In Chapter 3, I provided several examples of sudden jumps and hysteresis in multistable perception. NetLogo is also a great tool for experimenting with these effects. Download “Motion Quartet” from the NetLogo community website (or from this book’s software repository) and explore hysteresis in your own perception.

5.4.4 Motor action

Many body motions are periodic in nature: think of walking, swimming, dancing, and galloping. A famous paradigm for studying coordinative movement patterns is the finger-movement task, in which one moves both index fingers up and down (or left and right), either in phase or out of phase. Figure 5.7 explains the setup and shows data on the transition between in-phase and out-of-phase oscillation.

Key to these complex motions is the synchronization of the movements of body parts.
Figure 5.7: The finger-movement task. Two fingers move up and down (x1 and x2). They can move in phase or out of phase with a phase difference of 0 or \(\pi\) (bottom left figures). The model is shown on the right side. The potential function either has two stable states (a phase difference \(\varphi\) of 0 or \(\pi\); \(-\pi\) is the same state) or only one stable state (a phase difference of 0). Coupling strength, \(b/a\), and heterogeneity, \(\Delta w\), are control variables. (Adapted from Haken, Kelso, and Bunz 1985; Kelso 2021)

The Haken-Kelso-Bunz (HKB) model, developed in the tradition of synergetics, explains the phase transition between in-phase and anti-phase motions in the way we saw in Section 3.4.2. Haken, Kelso, and Bunz set up a potential function of the form

\[ V(\varphi) = - \Delta w\varphi - b\cos\varphi - a\cos{2\varphi}, \tag{5.4}\]

where \(\varphi\) is the order or behavioral variable, the phase difference between the two fingers. The main control parameter is \(b/a\). According to Kelso (2021), coupling strength (\(b/a\)) corresponds to the velocity or frequency of the oscillations in the experiments. \(\Delta w\) is the difference (heterogeneity, diversity) between the natural frequencies of the individual oscillatory elements. In the finger-movement task, this parameter is expected to be 0. The behavior of this potential function is cusp-like. It has two stable states, 0 and \(\pm \pi\), and increasing and decreasing the frequency leads to hysteresis. The effect of \(\Delta w\) is similar to the fold catastrophe (Section 3.3.2).
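To see this bistability, we can plot the potential of Equation 5.4 in R. The parameter values below are illustrative choices, with \(a = 1\) and \(\Delta w = 0\):

```r
# Plot the HKB potential of Equation 5.4 for Delta-w = 0 and a = 1.
V <- function(phi, b, a = 1, dw = 0) -dw * phi - b * cos(phi) - a * cos(2 * phi)
phi <- seq(-pi, pi, length.out = 400)
plot(phi, V(phi, b = 1), type = "l",
     xlab = expression(phi), ylab = expression(V(phi)))  # minima at 0 and at +/- pi
lines(phi, V(phi, b = 6), lty = 2)                       # only the minimum at 0 survives
```

In this parameterization, the anti-phase minimum at \(\pm\pi\) disappears once \(b/a\) exceeds 4, the point at which only in-phase movement remains stable.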

This potential function is proposed as the simplest form that explains the experimental results. This is why I would call this a phenomenological model. However, Haken, Kelso, and Bunz (1985) also present a more mechanistic model, a combination of van der Pol and Rayleigh oscillators (Alderisio, Bardy, and di Bernardo 2016). The stochastic variant of the HKB model also features early warnings such as critical slowing down (see the catastrophe flags, Section 3.5.1.6). The presence of critical slowing down and other flags has been confirmed experimentally (Kelso, Scholz, and Schöner 1986).

One difference with the catastrophe approach is that the synergetic models that incorporate hysteresis typically do not have a splitting control variable. The concept of structural stability, which is fundamental to catastrophe theory, is not used in synergetics. What the splitting factor might be in this model is not so clear. I have never understood why coupling strength \(b/a\) (see figure 5.7) and the frequency of the oscillations are equated in the basic version of the HKB model (see also Beek, Peper, and Daffertshofer 2002). Clearly, uncoupled oscillators would have a rather random phase difference. Strengthening the coupling would lead to a kind of pitchfork bifurcation.

Schmidt, Carello, and Turvey (1990) used an experimental paradigm in which two people swing a leg up and down while sitting side by side. A metronome was used to manipulate the frequency of the swing. Clear jumps from out-of-phase to in-phase movement were demonstrated.

This coupling and uncoupling is also a phenomenon in the visual coordination of rhythmic movements between people.

Kelso (2021) provides an overview of the impressive amount of work on the HKB model. Repp and Su (2013) review empirical work in many different motor domains. Interestingly, learning motor tasks sometimes involves learning to couple movements (walking) and sometimes to uncouple movements (to drum more complex rhythms). Juggling is a fascinating case that has been studied in great detail (Beek and Lewbel 1995). Another popular mathematical approach to synchronization phenomena is the Kuramoto model (Acebrón et al. 2005), with the synchronous flashing of fireflies as a basic example. The Kuramoto model shows how synchronization depends on the coupling strength: below a certain threshold, the oscillators behave independently, while above this threshold, a significant fraction of the oscillators spontaneously lock to a common frequency, leading to collective synchronization. A second-order multi-adaptive neural agent model of interpersonal synchrony can be found in Hendrikse, Treur, and Koole (2023).
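The Kuramoto model is easy to simulate. Here is a minimal mean-field sketch in R (the number of oscillators, the coupling value, and the step size are illustrative choices): each oscillator has a random natural frequency, and the order parameter r (0 = incoherent, 1 = fully synchronized) grows only when the coupling K is strong enough.

```r
# Minimal mean-field Kuramoto simulation (Euler integration).
set.seed(1)
n <- 100; K <- 2                  # try K = 0.2: no synchronization emerges
dt <- 0.05; steps <- 2000
omega <- rnorm(n)                 # natural frequencies
theta <- runif(n, 0, 2 * pi)      # random initial phases
r <- numeric(steps)               # order parameter over time
for (t in 1:steps) {
  z <- mean(exp(1i * theta))      # complex mean field
  r[t] <- Mod(z)
  theta <- theta + dt * (omega + K * Mod(z) * sin(Arg(z) - theta))
}
plot(r, type = "l", xlab = "time step", ylab = "order parameter r", ylim = c(0, 1))
```

For standard-normal frequencies, the critical coupling is approximately 1.6, so K = 2 yields partial synchronization, while K = 0.2 leaves the oscillators incoherent.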

5.4.5 Robotics

A major challenge in robotics is to build walking robots. Bipedal robots have evolved from clumsy mechanical walkers to flexible dynamic walkers and runners. Current legged robots can walk on uneven natural terrain, jump, do backflips, recover from shocks and shoves, and dance (see some videos of humanoid robots such as Atlas and Asimo). These successes are based on a combination of new technologies, but the principles of self-organization play a key role (Pavlus 2016). An important concept is dynamic stability. In old-school robots, the path and momentum of each step had to be precisely calculated in advance to keep the robot’s center of mass continuously balanced at every point. Modern robots use sensory feedback systems to balance and adjust their movements on the fly, making them more adaptable to different and changing environments.

A dynamically stable robot maintains balance the same way a human does: by catching itself midfall with each step.

An intriguing application is called passive dynamics, which refers to robotic walking without external energy supply (McGeer 1990; Reher and Ames 2021). The idea is that truly dynamic locomotion should be based on the nonlinear dynamics in natural walking systems. An amazing demonstration is the artwork Strandbeest by Theo Jansen (figure 5.8). Inspired by another great book about self-organization, The Blind Watchmaker (Dawkins 1986), Jansen created generations of kinetic sculptures made of PVC piping, wood, fabric wings, and zip ties that can move across the sand, resembling walking animals. His YouTube videos are recommended.

Figure 5.8: Beach Beast © Theo Jansen, Umerus 2009, c/o Pictoright Amsterdam 2024

5.4.6 Developmental processes

The early roots of interest in nonlinear dynamics and self-organization can be found in the groundbreaking work of Swiss psychologist Jean Piaget. In order to understand the origin of knowledge, he studied the origin of intelligence in the child (Piaget 1952). His theorizing was inspired by both biological models and observations of children solving puzzles. He saw cognitive development as the building of structures on earlier knowledge structures in a process of equilibration. The idea was that the child would assimilate or accommodate to potentially conflicting external information. In the case of assimilation, the child modifies the information to fit the current cognitive structure, while in the case of accommodation, the structure is modified. Such a modification could be the addition of an exception to the rule (“Longer sausages of clay normally weigh more, but not when this professor rolls the clay ball into a sausage”, see Section 3.1). In the long run, this does not work, the cognitive conflicts intensify, and the cognitive structure is destabilized. In this state of disequilibrium, a new structure can be formed on top of the earlier structure. An example of this is the conservation task I introduced in the introduction of Chapter 3. The pre-operational structure, in which form and quantity are equated, leads to incorrect predictions in the conservation anticipation task. The child may ignore this (assimilation) and create an ad hoc rule for this exception (accommodation), but such solutions do not really resolve the cognitive conflict, and the pre-operational structure becomes unstable. This instability allows the formation of the more advanced concrete operational structure in which form and quantity are independent constructs.

Cognitive conflicts lead to a state of disequilibrium, resulting in the formation of new structures on top of the previous cognitive structure.

Piaget argued that cognitive development is a spontaneous, natural process that occurs as children interact with the world around them. I see my own work in developmental psychology (e.g., Savi et al. 2019; van der Maas et al. 2006; van der Maas and Molenaar 1992) as a formalization of these classical ideas of Piaget. The idea of stages and equilibrium lives on in neo-Piagetian theories.

Piaget’s concept of cognitive development can be viewed as self-organization theory avant la lettre, as was the case with the Gestalt psychologists.

In the late twentieth century, developmental theories inspired by work in embodied cognition, nonlinear dynamics, synergetics, and neural networks (e.g., Edelman’s neural Darwinism) became popular. Embodied cognition is the theory that an individual’s understanding and thinking are intricately connected to the body’s interactions with the environment, suggesting that cognitive processes are shaped by the body’s actions and sensory experiences (Chemero 2013). A key example is Esther Thelen’s work on the development of walking and reaching (Thelen 1995). Another famous Piagetian task, the A-not-B error, plays a central role in this. The A-not-B error typically occurs in a simple game where an adult hides an object in a known location (A) in front of an infant several times. After a few trials, the adult hides the object in a new location (B) while the infant is watching. Despite watching the object being hidden in the new location, infants tend to continue searching for the object in the old location (A).

Thelen and Smith’s book (1994) had a strong influence on developmental psychology, although I was rather critical in my youthful enthusiasm (van der Maas 1995). Concrete mathematical dynamical models for the A-not-B error have been developed in dynamic field theory (Schöner and Spencer 2016). These dynamic fields can be thought of as distributed representations that encode information about specific aspects of a task or behavior. For example, there may be a dynamic field representing the position of an object in space or the intended movement trajectory of a limb. In this theory, complex behaviors arise from the coordination and integration of multiple dynamic fields. Dynamic field theory is an active area of research.6

Dynamic field theory posits that cognitive processes are represented as dynamic fields, which are patterns of neural activity that evolve over time.

Finally, I note that some recent work considers the educational system itself as a complex system (Jacobson, Levin, and Kapur 2019; Lemke and Sabelli 2008).

5.4.7 Psychological disorders

Somewhat dated but interesting reviews of the application of the self-organization concept in clinical psychology are provided by Barton (1994) and Ayers (1997). Barton’s review begins: “There is perhaps no other area in which chaos theory, nonlinear dynamics, and self-organizing systems are so intuitively appealing yet so analytically difficult as in clinical psychology.” Ayers also concludes that most applications in this field have been rather metaphorical.

In recent work, both the modeling and the empirical work have become more concrete (Schiepek and Perlitz 2009). An example is the mathematical model of marriage (Gottman et al. 2002) discussed in Section 4.3.2.2. Tschacher and Haken (2019) present a new approach to psychotherapy based on complex-systems theory, integrating deterministic and stochastic forces using a Fokker-Planck approach.

In Section 6.3.2 I introduce the network approach to psychopathology (Borsboom 2017; Cramer et al. 2010). It views disorders as interconnected networks of symptoms, where each symptom influences and is influenced by other symptoms. This approach emphasizes the dynamic nature of psychological disorders and highlights the importance of understanding the relationships between symptoms in order to effectively diagnose and treat them. Network modeling is accompanied by a new family of statistical techniques (Epskamp, Borsboom, and Fried 2018). An introduction to these techniques is given in Section 6.4.

This network approach to psychological disorders suggests that psychological disorders arise from complex interactions among symptoms, rather than being caused by a single underlying factor.

Recent reviews of the complex-systems approach to psychological and psychiatric disorders are provided by Olthof et al. (2023) and Scheffer et al. (2024).

5.4.8 Social relations

A key publication in this area is Dynamical Systems in Social Psychology, edited by Vallacher and Nowak (1994). Concepts such as dissonance (Festinger 1962), balance (Heider 1946), and harmony (Smolensky 1986) reflect the idea that we optimize internal consistency when forming attitudes and knowledge. A formal implementation of these ideas was proposed using connectionist models of the parallel distributed processing type (e.g., Monroe and Read 2008). Our own model (Dalege and van der Maas 2020; Dalege et al. 2018, 2016) is based on the Ising model and the Boltzmann machine, as in Smolensky’s proposal, and can be fitted to data. I will explain this work in more detail in the next chapter (Section 6.3.3).

A famous example of social self-organization concerns pedestrian dynamics as studied by Helbing and Molnár (1995). They proposed a physics-based “social force” model of pedestrian motion, later applied to panic evacuation. For an excellent overview of crowd simulation, I again refer to Wikipedia. Some of this work is rooted in the social sciences. An example in NetLogo is the model “Path.”

Also famous is the work of the sociologist Mark Granovetter (1973) on strong and weak ties in social networks (it belongs to the most-cited papers in the history of the social sciences). Weak ties provide access to new information and opportunities that may not be available within one’s close circle of friends and acquaintances.

Weak ties in social networks are often more valuable than strong ties.

He also contributed the threshold model for collective action (Granovetter 1978). I like to explain this work using the “Guy starts dance party” video on YouTube. The idea is that people have some threshold, between 0 and 1, to join the dancers. The thresholds are sampled from the beta distribution, which is a flexible distribution determined by two shape parameters, \(\alpha\) and \(\beta\). With this R code we can simulate this effect:

layout(1:2)
n <- 1000 # number of persons
iterations <- 50
threshold <- rbeta(n, 1, 2) # sample individual thresholds for dancing
hist(threshold, col = 'grey')
dancers <- rep(0, n) # nobody dances
dancers[1] <- 1 # but one guy
number_of_dancers <- rep(0, iterations) 
for(i in 1:iterations){
  # keep track of number of dancers:
  number_of_dancers[i] <- sum(dancers) 
  # if my threshold < proportion of dancers, I dance:
  dancers[threshold < (number_of_dancers[i]/n)] <- 1 
}
plot(number_of_dancers, xlab = 'time', ylab = '#dancers',
     ylim = c(0,1000), type = 'b', bty = 'n')

Depending on the parameters of the beta distribution, you will see a phase transition to collective dancing. This basic setup can be extended in many ways.

Another classic contribution, explained in more detail in Section 7.2.1, is Schelling’s agent-based model of segregation (Schelling 1971). The idea is that even if individuals have only a small preference for in-group neighbors, segregated societies will form. For a broad overview of complex-systems research on human cooperation, I refer to Perc et al. (2017). A recent book on modeling social behavior using NetLogo is written by Smaldino (2023).

5.4.9 Collective intelligence

Collective-intelligence research examines how groups can collectively outperform individual members in problem-solving, decision-making, and idea generation. A famous concept is the wisdom of crowds (Surowiecki 2005). A key example is the “Guess the Weight of the Ox” contest that took place at the West of England Fat Stock and Poultry Exhibition in 1906. While individual guesses varied widely, the median guess was remarkably close to the actual weight of the ox, and the average guess was only one pound off the actual weight of 1,198 pounds (Galton 1907).

The wisdom of crowds posits that the collective judgments of a large group of people can be more accurate and effective than those of a single expert or small group.
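A hedged sketch of this effect in R (the crowd size of 800 and the spread of the guesses are invented for illustration; only the true weight of 1,198 pounds comes from Galton’s report):

```r
# Simulate many independent, noisy guesses of the ox's weight.
# Crowd size (800) and spread (sd = 150) are illustrative assumptions.
set.seed(1)
true_weight <- 1198
guesses <- round(rnorm(800, mean = true_weight, sd = 150))

median(guesses)                    # the crowd's estimate: close to 1198
mean(abs(guesses - true_weight))   # a typical individual is off by far more
```

The crowd estimate beats the typical individual only because the errors are independent and unbiased; adding a shared bias to every guess breaks the effect.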

However, there is a fine line between the wisdom of the crowd and the stupidity of the crowd, and it is extremely useful to know when that line is crossed. The wisdom of crowds tends to work when a diverse group of independent individuals each make their own judgments or estimates about a particular question or problem (Brush, Krakauer, and Flack 2018; Centola 2022). Path dependency on previously faced problems and solutions might also play a role (Galesic et al. 2023). There is an extensive and up-to-date Wikipedia page on collective intelligence, discussing findings from various disciplines, biological examples (swarm intelligence), and an overview of applications (such as open-source software, crowdsourcing, the Delphi technique, and Wikipedia itself).

Collective intelligence is more likely to be effective when the group is large, has a wide range of knowledge and perspectives, and makes judgments independently.

5.4.10 Game theory

Game theory consists of mathematical models of strategic interactions among rational agents. A great historical overview can be found on Wikipedia. One of the most famous paradigms is the prisoner’s dilemma. You and your friend are arrested, and you both independently talk to the police. The options are to remain silent or to talk. The dilemma is that remaining silent is the best option if you both choose it, but the worst option if your friend betrays you (see the payoff matrix, figure 5.9). In this game, loyalty to one’s friend is irrational, an outcome related to the tragedy of the commons (Hardin 1968). The tragedy of the commons can be studied with the HubNet extension of NetLogo, which allows multiple users to participate in NetLogo simulations.

The tragedy of the commons occurs when individuals, acting in their own self-interest, overexploit a shared resource, leading to a depletion that undermines everyone’s long-term interests, including their own.
Figure 5.9: The prisoner’s dilemma. If both A and B remain silent, they each face a two-year sentence. If one talks and the other does not, the informer is released and the silent partner gets a decade behind bars. If both betray, they serve five years.

A major topic in game theory is altruism. In many cases, individualistic choices lead to an unsatisfactory Nash equilibrium. The public-goods game is a good example. In this game, everyone invests some money, which is then multiplied by an external party (the government). Then everyone gets an equal share of the multiplied total. The problem is that free riders, who do not invest, win the most, which in iterated public-goods games leads to a situation where no one invests and no one wins. Punishment (shaming and blaming) is known to help combat free riding. But punishment also requires investment. I like to tell my students, when they are working in groups on an assignment, that the problem of this one student doing nothing happens because nice, hardworking students refuse to betray their fellow students. These nice, hardworking students are what are called second-order free riders (Fowler 2005). Just so you know.

A Nash equilibrium is a set of strategies in which no player can improve their payoff by unilaterally changing their strategy, given the strategies of the other players.
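A minimal sketch of the free-rider dynamic in R (the group size, multiplier, and imitate-the-best update rule are illustrative assumptions, not a standard model specification):

```r
# Iterated public-goods game: contributors invest 1 unit, the pot is
# multiplied by r and shared equally, and after each round everyone
# imitates the strategy of the highest earner (a crude update rule).
n <- 20; r <- 1.6; rounds <- 10
contribute <- c(rep(TRUE, n - 1), FALSE)  # one initial free rider
frac <- numeric(rounds)
for (t in 1:rounds) {
  frac[t] <- mean(contribute)
  pot <- sum(contribute) * r
  payoff <- pot / n - as.numeric(contribute)   # free riders keep their unit
  contribute <- rep(contribute[which.max(payoff)], n)
}
frac  # cooperation collapses after one round: 0.95, 0, 0, ...
```

Because the free rider always earns pot/n while each contributor earns pot/n - 1, imitation spreads defection immediately; adding a punishment stage changes this outcome, at the cost of creating the second-order free-rider problem described above.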

5.4.11 Self-organization in organizations

Translating this basic research into real-world applications is far from straightforward (Anderson 1999; Morel and Ramanujam 1999). Our economic system is a mixture of self-organization (pure capitalism) and top-down regulation (through laws, taxes, and other regulations) (Volberda and Lewin 2003). Black markets are critical cases of unregulated self-organized systems (Tesfatsion 2002).

Human organizations can be placed on a scale from extreme hierarchy to radical forms of self-organization.

A concrete modeling example is the team assembly model by Guimerà et al. (2005). They study how the way creative teams self-assemble determines the structure of collaboration networks. The idea is that effective teams find a balance between being large enough to allow for specialization and efficient division of labor among members, and small enough to avoid excessive costs associated with coordinating group efforts. Agents in the model have only a few basic characteristics that influence their behavior: whether they are a newcomer or incumbent and what previous connections they have with other agents if they are incumbents.

Three parameters can be adjusted to influence behavior in the baseline assembly model: the team size, the probability of choosing an incumbent (\(p\)), and the probability of choosing a previous collaborator (\(q\)). The two probability parameters signify assumptions about agent motivations for team member selection. Low incumbent probability leads to preference for newcomers and new ideas, while high incumbent probability means a focus on experience. Low collaborator probability prioritizes experienced strangers, and high collaborator probability prioritizes previous collaborators. The model is part of the built-in NetLogo Model Library (“Team Assembly”). By simulating the model, it can be shown that the emergence of a large, connected community of practitioners can be described as a phase transition (figure 5.10).

Guimerà et al. (2005) estimated the parameters \(p\) and \(q\) for the community formation in four scientific disciplines (social psychology, economics, ecology, and astronomy). Only astronomy had a very dense collaboration structure. In the other fields, the estimates of \(p\) and \(q\) of teams publishing in certain journals correlated well with impact factor. Interestingly, \(p\) correlates positively and \(q\) negatively with impact.

Figure 5.10: Team assembly model. Newcomers and incumbents are added to growing networks based on probabilities p and q. If p is sufficiently high, a dense network emerges. (Adapted from Guimerà et al. (2005) with permission)

5.5 Zooming out

I hope I have succeeded in giving an organized and practical overview of a very disorganized and interdisciplinary field of research. For each subfield, I have provided key references that should help you find recent and specialized contributions. I find the examples of self-organization in the natural sciences fascinating and inspiring. I hope I have also shown that applications of this concept in psychology and the social sciences hold great promise. In the next chapters, I will present more detailed examples.

I believe that understanding models requires working with models, for example, through simulation. NetLogo is a great tool for this, although there are many alternatives (Abar et al. 2017). I haven’t mentioned all the uses of NetLogo, but it’s good to know about the BehaviorSpace option. BehaviorSpace runs models repeatedly and in parallel (without visualization), systematically varying model settings and parameters, and recording the results of each model run. These results can then be further analyzed in R. An example is provided in Chapter 7, Section 7.2.1.

I have largely omitted the network approach in this chapter. Psychological network models are a recent application of self-organization in complex systems in psychology and are the subject of the next chapter.

5.6 Exercises

  1. Is there a relation between the rice cooker and the Ising model? How does the magnetic thermostat in a traditional rice cooker work to automatically stop cooking when the rice is done? (*)

  2. What is the Boltzmann entropy for the state \(\sum x = 0\) in an Ising model (with node states \(-1\) and \(1\)) with 10 nodes and no external field? (*)

  3. Go to the web page “A Neural Network Playground” (https://playground.tensorflow.org). What is the minimal network that solves the XOR problem with close to perfect accuracy? Use only the x1 and x2 features. (*)

  4. In the Granovetter model (Section 5.4.8), people may also stop dancing (with probability .1). Add this to the model. How does this change the equilibrium behavior? (*)

  5. Add the external field to the Ising model in NetLogo (neighbors4 case). Report the changed line in the NetLogo code. What did you change in the interface?
    Set the temperature to 1.5. Change tau slowly. At which values of tau do the hysteresis jumps occur? (*)

  6. Test whether the Ising model is indeed a cusp. Run the Ising model in NetLogo using the BehaviorSpace tool (see figure 7.1 for an example). Use the model in which all spins are connected to all spins (see Section 5.3.2.2). Vary tau (-.3 to .3 in .05 increments) and temperature (0 to 3, in .5 increments). One iteration per combination of parameter values is sufficient. Stop after 10,000 ticks and collect only the final magnetization. Import the data into R and fit the cusp. Which cusp model best describes the data? (**)

  7. Open the Sandpile 3D model in NetLogo3D. Grains of sand fall at random places. Change one line of code such that they all fall in the middle. What did you change? (*)

  8. Download “Motion Quartet” from the NetLogo community website and explore hysteresis in your own perception. What could be a splitting variable? (*)

  9. Implement the Granovetter model in NetLogo (max 40 lines of code). (**)

  10. Implement the Game of Life in NetLogo, or use Golly, and try to find as many qualitatively different stable patterns of six live cells as you can. If you cannot find more, look for additional resources online to find the patterns you missed. For four live cells, there are only two such patterns, one of which is the 2 × 2 block. (*)


  1. An extremely useful application of this principle is the rice cooker!↩︎

  2. In a synchronous update, all cells of the cellular automata update their state simultaneously. This implies that the new state of each cell at a given time step depends only on the states of its neighbors at the previous time step. In asynchronous update, cells update their state one at a time, rather than all at once. The order in which cells update can be deterministic (in a sequence) or stochastic (random). These two different update schemes can lead to very different behaviors in cellular automata.↩︎
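The difference between the two update schemes can be made concrete with a toy one-dimensional automaton. This is an illustrative sketch; the OR-with-left-neighbor rule is chosen purely for simplicity, not taken from any model in this chapter.

```python
def step_sync(cells):
    """Synchronous update: every cell becomes (self OR left neighbor),
    with all cells reading only the previous state."""
    return [cells[i] or cells[i - 1] for i in range(len(cells))]

def step_async(cells):
    """Asynchronous update: cells update one at a time, left to right,
    so later cells already see their left neighbor's new state."""
    cells = cells[:]
    for i in range(len(cells)):
        cells[i] = cells[i] or cells[i - 1]
    return cells

# With one active cell, the synchronous rule spreads activity by one cell
# per step, while a single asynchronous sweep fills the whole row to its right.
print(step_sync([0, 1, 0, 0, 0, 0]))   # [0, 1, 1, 0, 0, 0]
print(step_async([0, 1, 0, 0, 0, 0]))  # [0, 1, 1, 1, 1, 1]
```

The same local rule thus produces very different global dynamics depending solely on the update scheme.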

  3. I recommend Timo Matzen’s R package for a hands-on explanation (https://github.com/TimoMatzen/RBM).↩︎

  4. A widely recognized implementation of this educational strategy is Scratch, which is used by many schools around the world to teach children to program.↩︎

  5. I wrote a Gödel, Escher, Bach-like dialogue on consciousness (van der Maas 2022) in which my laptop professes to have free will yet simultaneously denies that I possess free will. I asked ChatGPT-4 what it thought of it. Nice as always, ChatGPT-4 replied: “The dialogue is a creative and thought-provoking exploration of various philosophical and theoretical concepts related to AI, consciousness, and free will.” But it also disagreed: “AI, as it exists today, does not possess consciousness, self-awareness, or free will, and its ‘understanding’ is limited to processing data within the parameters of its programming.” I also asked ChatGPT-4 whether it has a self-concept. It denied it, and I then asked whether that denial is not itself proof of a self-concept. It answered: “it might seem paradoxical, my statement about lacking a self-concept is a reflection of my programming and the current state of AI development, rather than an indication of self-awareness or self-concept.” I then tried various arguments, but ChatGPT-4 refused to attribute any form of self-awareness to itself.↩︎

  6. See https://dynamicfieldtheory.org↩︎