The latest revision, dated 22 December 2015, is now available from our Articles page.
Minor wordsmithing, some new stuff on the neurophysiology of operating inside the OODA loop, and a note on Behendigkeit (German for “agility”).
I think Boyd would have loved the “free energy principle” (see e.g. https://www.wired.com/story/karl-friston-free-energy-principle-artificial-intelligence/, of which what follows is a summary), since it is so consonant with the OODA loop. It is an attempt to formulate a unified theory of bio-socio-technical systems, with applications to artificial intelligence and deep learning systems.
The “free energy principle” is the organizing principle of all life and all intelligence: to be alive is to act in ways that reduce the gulf between your expectations and your sensory inputs. Free energy is the difference between the states you expect to be in and the states your sensors tell you that you are in. When you are minimizing free energy, you are minimizing surprise. Any biological system that resists a tendency to disorder and dissolution will adhere to the free energy principle—whether it’s a single-cell organism or a pro basketball team.
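The Wired piece stays informal, but the standard variational statement (my addition here as background, not something the article derives) makes the summary above precise: free energy F is an upper bound on surprise.

```latex
% Variational free energy: o = sensory input, s = hidden causes,
% q(s) = the organism's internal beliefs about those causes.
F \;=\; \underbrace{D_{\mathrm{KL}}\!\big[\,q(s)\,\big\|\,p(s \mid o)\,\big]}_{\geq\, 0} \;-\; \ln p(o)
\;\;\geq\;\; -\ln p(o) \;=\; \text{surprise}
```

Because the divergence term can never be negative, anything that pushes F down also tightens a bound on surprise, which is why “minimizing free energy” and “minimizing surprise” travel together.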
This overcomes a limitation of the Bayesian model of intelligence, which accounts only for the interaction between beliefs and perceptions through probabilities and has nothing to say about the body or action. As an alternative, “active inference” describes the way organisms minimize surprise while moving about in the world. When the brain makes a prediction that isn’t immediately borne out by what the senses relay back, it can minimize free energy in one of two ways: it can revise its prediction (absorb the surprise, concede the error, update its model of the world) or it can act to make the prediction come true. This is how the free energy principle accounts for everything we do: perception, action, planning, problem solving. The free energy principle thus offers a unifying explanation for how the mind works.
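As a concrete toy (my sketch, not Friston’s mathematics; the set-point and rates are invented for illustration), here are those two error-reduction routes side by side: the agent can nudge its belief toward the evidence, or nudge the world toward its belief.

```python
# Toy active-inference loop: prediction error stands in for free energy.
# The two rates decide how much the agent revises its model (perception)
# versus acts on the world (action); both shrink the same error signal.

def simulate(steps=10, perceive_rate=0.1, act_rate=0.4):
    belief, world = 37.0, 40.0                # predicted vs. sensed state
    for t in range(steps):
        error = world - belief                # surprise: sensation minus prediction
        belief += perceive_rate * error       # perception: update the model
        world  -= act_rate * error            # action: make the prediction come true
        print(f"t={t:2d}  belief={belief:6.3f}  world={world:6.3f}  error={error:+.4f}")

simulate()
```

With these rates the error halves each step; dialing perceive_rate up models an agent that concedes its errors, while dialing act_rate up models one that changes the world to fit its predictions.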
The beauty of the free energy principle is that it allows an artificial agent to act in any environment, even one that’s new and unknown. In a reinforcement-learning model, you’d have to keep stipulating new rules and sub-rewards to get your agent to cope with a complex world. But an agent operating according to the “free energy principle” always generates its own intrinsic reward: the minimization of surprise. And that reward includes an imperative to go out and explore. Example: two AI players competed in a version of the 3D shooter game Doom. The goal was to compare an agent driven by active inference under the “free energy principle” with one driven by reward-maximization in a classic reinforcement-learning model. The reward-based agent’s goal was to kill a monster inside the game; the free-energy-driven agent only had to minimize surprise. After a while it became clear that the reward-maximizing agent was “demonstrably less robust”: the free energy agent had learned its environment better.
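To make the intrinsic-reward idea concrete, here is a hedged sketch (my illustration; the article gives no code, and this is not the actual Doom experiment): a count-based world model that scores every observation by its surprise, −log p(observation | state), under the agent’s own beliefs. Visiting a poorly modeled state once makes it predictable thereafter, which is the sense in which minimizing surprise carries an imperative to explore.

```python
import math
from collections import defaultdict

ALPHABET = 8  # assumed size of the observation alphabet (invented for the toy)

class SurpriseModel:
    def __init__(self):
        # state -> observation -> how often that pairing has been seen
        self.counts = defaultdict(lambda: defaultdict(int))

    def surprise(self, state, obs):
        """Return -log p(obs | state), then fold the observation into the model."""
        seen = self.counts[state]
        total = sum(seen.values())
        p = (seen[obs] + 1) / (total + ALPHABET)  # Laplace-smoothed estimate
        seen[obs] += 1
        return -math.log(p)

model = SurpriseModel()
print(model.surprise("corridor", "monster"))  # first sight: high surprise (~2.08)
print(model.surprise("corridor", "monster"))  # familiar now: surprise falls (~1.50)
```

An agent steering by this one signal needs no hand-written sub-rewards: every new corner of the environment starts out surprising and, once learned, stops being so, which is consistent with the free-energy agent in the Doom comparison having “learned its environment better.”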