S01E14 Regaining organizational consciousness
The edge of chaos, self-organized criticality, ongoing complexification, cellular automata, computation and evolution
Back in episode 5, I introduced some key characteristics of complex adaptive systems (emergence, adaptation, information processing, signaling, and non-linearity). So far, I’ve focused mostly on the cybernetic aspects of feedback and information processing.
Hopefully, this has resulted in an appreciation for the unpredictable web of interactions, constraints, and incentives in complex adaptive systems. At a high level, we understand that countless feedback loops can eventually lead to emergence, but we haven’t yet explored this step in detail.
How and why do complex systems emerge from simpler systems? And what does that mean for organization theory and management practice?
Physicists propose that complexity must evolve from a delicate balance between order and chaos, a state known as self-organized criticality (more colloquially, the edge of chaos). In this Goldilocks phase (from the fairytale where the porridge temperature needed to be just right), the components of the system are neither inert nor turbulent, but exist in a state of dynamic equilibrium.
In simpler terms: complex life needs a sweet spot to flourish. In this phase transition zone called criticality, there is sufficient stability to sustain life while also providing the creative spark necessary for evolution. Systems poised between order and chaos hold evolutionary advantages over both fully ordered and fully disordered systems.
Look at nature. What happens when a natural system, like a forest, is too ordered? In monocultures, all trees are the same species and age. These forests are always artificial. They simply don’t evolve in nature because a monoculture is less resilient to disturbances or diseases, and less able to support a diverse range of life.
On the other hand, when there is too much chaos (wildfires, flooding, volcanic eruptions…), complexity and life cannot emerge either. If grazers on the savanna were to move in disorganized and chaotic patterns instead of in herds, they would quickly be overwhelmed by predators.
The concept of criticality can also be observed outside of natural systems. In fact, we can simulate it on computers, thanks to John von Neumann (whose brilliant mind we can also credit with the invention of the foundational architecture of modern computers).
In the 1940s, von Neumann studied cellular automata. These are simple algorithms consisting of 2 elements:
a grid of cells, each of which can be in one of a finite number of states, and
a set of rules that determine how the states of the cells change over time.
Most of these algorithms result in boring behavior; they either die out or settle in repeating patterns. Some, like the automaton in the above gif, result in interesting, complex behavior. The rules that exhibit the most complex and interesting behavior (such as the famous "Game of Life" rule) reside on the edge of chaos.
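To make this concrete, here is a minimal sketch (my own illustration in Python, not from the original text) of one generation of Conway's Game of Life, the rule mentioned above: a live cell survives with two or three live neighbors, and a dead cell comes alive with exactly three.

```python
from collections import Counter

# Minimal sketch (illustration only) of Conway's "Game of Life".
# Cells are (row, col) pairs on an unbounded plane, tracked as a
# set of live coordinates.

def step(live: set) -> set:
    """Advance the set of live cells by one generation."""
    # Count how many live neighbors every candidate cell has.
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Born with exactly 3 neighbors; survive with 2 or 3.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(1, 0), (1, 1), (1, 2)}
print(step(blinker))                    # the vertical phase
print(step(step(blinker)) == blinker)   # True: back to the start
```

Even this tiny rule set produces gliders, oscillators, and still lifes from random starting grids, which is exactly the kind of "interesting, complex behavior" at the edge of chaos described above.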
If we can recreate lifelike behavior on a computer while also reproducing this Goldilocks zone of emergent complexity, it is not a stretch to claim that the principles of criticality are a fundamental feature of complex systems across different domains.
Besides criticality, what other circumstances do we need for complex, adaptive systems to emerge?
In the introduction (and in episode 5), I mentioned that complex adaptive systems are capable of information processing. Computation and information processing are rather abstract concepts, and it’s worth pausing to consider how they work. Let’s revisit these cellular automata to help us understand how complex adaptive systems compute.
Imagine that our cellular automaton operates in a grid (as above), but instead of colors, each square can be a number or a letter. We also know that the decision rules for the changing patterns can involve math or logic. This way, by letting the game run for many turns, the grid of squares can perform calculations and solve problems.
Some cellular automata (like Rule 110) have even proven to be Turing-complete, meaning they are capable of universal computation. Given enough time and memory, these simple automata can perform any computation that a conventional computer can.
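As a sketch of how such an automaton works (my own illustration, not from the original text), the snippet below implements a generic elementary cellular automaton: the rule number's eight binary digits encode the next state for each of the eight possible (left, center, right) neighborhoods, and rule 110 is the one proven Turing-complete.

```python
# Minimal sketch (illustration only) of an elementary cellular
# automaton: each cell looks at its left neighbor, itself, and its
# right neighbor; bit k of the rule number (0-255) gives the next
# state for the neighborhood whose bits spell the number k.

def evolve(cells: list, rule: int) -> list:
    """One step of an elementary CA on a wrapping row of 0/1 cells."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Run rule 110 (the Turing-complete one) from a single live cell.
row = [0] * 15 + [1]
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = evolve(row, rule=110)
```

Swapping in other rule numbers reproduces the spectrum described above: most rules die out or settle into repetition, while a handful, like 110, keep generating structured, unpredictable patterns.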
Natural systems are also capable of computing. Take ants as an example. While there is no central authority, the colony as a whole functions as a decentralized network of millions of autonomous ants, each of which makes decisions and takes actions based on a limited set of interactions with other ants.
Ant colony computation is distinct from that of traditional computers. Ants cannot rely on the Von Neumann architecture (with a central processing unit and random-access memory), but they get the computational job done nonetheless.
To what purpose, you ask?
Computation is simply what a complex system does with information in order to adapt to its environment. Back in episode 4, I gave some examples of the evolutionary purposes of computation in ant colonies.
As long as the system operates in the zone of criticality, a few very simple rules on the component level can enable complex behavior and computation at the system level.
But we are missing another part of the emergence puzzle. How do systems in nature arrive at this Goldilocks state if no one is tuning the system from the outside?
The answer lies in self-organization. Complex systems achieve criticality from the bottom up, through the interactions of the parts of the system and their environment. These interactions aren’t random. To drive the evolution of hierarchical complexity, components interact through a combination of collaboration and competition:
Competition over limited resources (food, territory, mating partners…) can lead to the evolution of strategies that help agents survive and reproduce in their environment. It can also lead to the emergence of hierarchies or dominance structures within a population.
Collaboration - working together to achieve a common goal - can lead to the evolution of cooperative behaviors, such as altruism and specialization of labor.
As the components interact and compete, they often find themselves in a state of tension and unrest, leading to a breaking point where self-organized criticality occurs, like a phase change. This kicks off a transition to a new and more integrated state, marked by increased complexity.
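The textbook toy model of this bottom-up route to criticality (my own illustration, not something covered in the text) is the Bak-Tang-Wiesenfeld sandpile: grains are dropped one at a time, any site holding four grains topples one grain onto each neighbor, and without any external tuning the pile drives itself to a critical state punctuated by avalanches of every size.

```python
import random

# Sketch (illustration only) of the Bak-Tang-Wiesenfeld sandpile,
# the classic model of self-organized criticality. No one tunes the
# system from outside, yet it settles at its own critical point.

def drop(grid: list, r: int, c: int) -> int:
    """Add one grain at (r, c), topple to stability, return avalanche size."""
    size = len(grid)
    grid[r][c] += 1
    avalanche = 0
    unstable = [(r, c)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        # Topple: shed 4 grains, one to each neighbor (edges lose grains).
        grid[i][j] -= 4
        avalanche += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < size and 0 <= nj < size:
                grid[ni][nj] += 1
                unstable.append((ni, nj))
        unstable.append((i, j))  # may need to topple again
    return avalanche

rng = random.Random(42)
grid = [[0] * 11 for _ in range(11)]
sizes = [drop(grid, rng.randrange(11), rng.randrange(11)) for _ in range(5000)]
# The pile tunes itself to the edge: every site ends below the toppling
# threshold, while avalanche sizes span a heavy-tailed range, from tiny
# slides to occasional system-wide cascades.
print(all(cell < 4 for row in grid for cell in row))  # True
```

The punchline mirrors the paragraph above: purely local interactions, with no external tuner, carry the system to a poised state where small disturbances occasionally cascade into large reorganizations.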
Complexity scientists are uncovering evidence of these dynamics between order and disorder in many domains:
One of the oldest examples supporting the self-organized criticality theory comes from biology, where we know single-celled organisms can come together to form a multi-cellular organism.
In the economy, we see people organizing in companies, and we know that this competition and collaboration leads to the emergence of new industries and market structures.
There exists a concept in evolutionary biology called the Red Queen hypothesis. This is the idea that organisms must keep evolving in order to keep up with the changing environment and other organisms, as if they were running on a treadmill. It's often used to explain why sexual reproduction persists.
Self-organized criticality is a hot research topic in neuroscience. One interesting field of study is exploring whether consciousness can be explained as a phase transition between order and chaos in the brain. When a sedative knocks out our consciousness, that could be the drug nudging our brain slightly toward the ordered zone. On the other hand, there is increasing attention to the potentially favorable effects of psychedelics on our creative capacity. This increased creativity may be explained by the drugs moving our brain activity toward the chaotic zone.
Self-organized criticality, computation, and emergence thus lead to evolution. Complex systems have the ability to accumulate experience through survival, sexual selection, and/or learning. Even the most entrenched living systems will eventually fall to the relentless march of progress, constantly revising and rearranging their building blocks as they gain experience.
Through a series of hierarchical emergences—a nested sequence of parts coming together to form ever-greater wholes—the universe is undergoing a majestic self-organizing process. In other words, nature’s simplest parts organize themselves into wholes, which become the building blocks for the next level of complexity.
— Bobby Azarian (author of The Romance of Reality)
While this has interesting philosophical and spiritual implications (the meaning of the universe is… life and complexification), my mission focuses on the organizational implications.
So what should we take away as we shape our organizations and institutions? Complexity theory tells us that systems self-organize toward a zone between order and chaos, where they compute survival strategies through collaboration and competition.
I’ll start with the most evident conclusions (evident to most readers, maybe not so much to those sitting at the wheel of institutions and enterprises).
Organizations that aren’t conducive to self-organization are destined to slip into a state of dormancy (erring too much on the side of order).
How can we recognize overly ordered organizations and limits on self-organization?
A lack of flexibility in decision-making processes
A rigid chain of command
Strategy translates to execution through heavily analyzed, multi-year roadmaps
Bureaucratic processes slow down decision-making and impede progress and learning
Employees are seen as small, interchangeable parts in a larger system
Why is this a bad thing?
Limited ability to respond to changes. Overreliance on a formalized system of rules and policies will hinder decision-making at the edges. Ordered organizations are good at exploiting (as long as the environment is stable) but not at exploring. Order and innovation don’t mix.
Inhibition of institutional learning. Organizational systems and their environments are always in flux. This limits the horizon of predictability. Organizations that promote the illusion of certainty will lose out to companies that take a probabilistic, Bayesian approach to planning and learning. Fast and lean experimentation beats slow, risk-avoiding analysis.
Low employee engagement & high attrition. When organizations treat employees as interchangeable cogs in the machine, these people will go through the motions without engaging with the organizational mission. In order to get people to act as owners, they require a sense of purpose and autonomy (at the very least).
Note that enabling self-organization is not the same as eliminating all hierarchy or leadership (this will be the topic of a future episode).
On the other hand, there can be organizations where chaos reigns. While not unheard of, disorder is more easily recognized, and it tends to be a temporary state.
There are also less evident conclusions that I will tackle in more depth in future episodes:
There is a central role for conflict in organizations that embrace change and self-organized criticality.
Systems change over time, and change is driven by numerous iterations of very simple rules.
Competition and collaboration lie at the heart of evolution. The best leaders can reframe challenges and transform them into non-zero-sum games.
Companies that succeed in surfing the edge of chaos will see themselves rewarded with truly agile characteristics, like rapid information processing, collective responsiveness to perturbations, and the capacity to incorporate a vast array of external stimuli without reaching saturation.
Companies that miss the edge will succumb to creative destruction and join the statistical ranks of ever-shrinking company lifespans.
Best of the Rest 👀
🤖 Software AI is eating the world
There is little doubt that generative AI will create massive disruption for all knowledge work. The only questions are how and when.
A16z tries to answer the question of where value will accrue and asks whether AI apps need to own the model to guarantee differentiation and retention.
One emerging approach is a domain model, where companies take a narrow, vertical approach like this platform for AI-generated gaming assets.
AI’s application layer
The Digital Native Substack also has a good, broader take on the application landscape of generative AI.
🤝 How we organize
If you’ve spent some time learning about incentives and goals in business, you’ll have heard of Goodhart’s law: “when a measure becomes a target, it ceases to be a good measure”. Cedric Chin of Commoncog wrote an excellent piece on doing something actually useful with this law.
The convergence between IT (information tech) and OT (operational technology) is inevitable. Manufacturing organizations have resisted change for a long time but are starting to realize that robustness and adaptivity are not mutually exclusive. Jan Bosch has started an insightful series on product development fallacies in manufacturing & industry (read them as LinkedIn posts, or on Jan’s blog).
Many companies are still struggling with the fact that agile methodologies do not always equate with agility. I don’t agree with everything in this article equating Scrum with SAFe, but overreliance on methodologies can lead to teams running in circles around a local optimum.
Alex Ewerlöf describes the wonderful things that can happen when teams start dedicating fixed bandwidth to the removal of tech debt.
🤓 Further reading
If you are interested in how this process of complexification translates to meaning, I can recommend this book by Bobby Azarian or this excellent podcast conversation between John Vervaeke and Lex Fridman.
I found inspiration for this episode in many places, but one source was especially insightful: