Why don’t more people talk about ecological psychology?

An intermediate step in the abstraction staircase

I can’t claim deep knowledge or understanding of the topic, but ecological dynamics seems like a really interesting and underrated approach, and nobody else around here seems to have written it up, so I’ll do my best.

If you’ve never heard of ecological psychology before: it is mostly applied to coaching, sports science, and driving, what one might call more “system 1” stuff. I personally discovered it through Rob Gray’s excellent Perception Action podcast and website, which focus on these topics.

The term “ecological” comes from the idea that the behavior of an agent can be explained entirely through their environment (broadly speaking). More explicitly, ecological psychologists do their absolute best to avoid invoking concepts such as beliefs, mental models of the world, computation, memories, and so on. This may seem completely incompatible with the Bayesian mindset, with its credence-sprinkled map of the territory, and indeed, partisans of the ecological approach often get involved in heated debates against predictive processing and similar approaches. They also use a lot of special terms and definitions (“field of affordances”, “action manifolds”, “prospection not prediction”) that may seem pedantic and willfully obscure, and it took me a while to get the way they think. But in my view, their ideas are more compatible with Bayesianism than they might seem, and they mesh really well with lesswrongian rationalism.


For instance, I think the argument against using concepts about internal states such as credences or memories can be seen as a game of rationalist taboo, meant to avoid explaining things away without understanding them. Let’s say we want to know how an athlete can move in the right way to catch a ball. A common lay answer might be “well, the athlete elaborates a model of where the ball will be in the future, and then they run and move their hand to the right spot according to that model”. This may seem both satisfactory and obvious, but:

  • If you think about it, what question about behavior *can’t* you answer by “the agent did this action because they modeled the world and according to their models this action was the best”? If you can explain everything, you explain nothing.

  • If you analyze behavior in that way, you’re going to miss some very nice insights. In the example of running to catch a ball (the “outfielder problem”), researchers found that instead of predicting the future position of the ball, players couple their movements to those of the ball (although we’re not yet sure exactly how). How can you find out about this kind of mechanism if you stay stuck in ideas of mental models?

In the end, not only is the actual control process interesting and surprising, it doesn’t seem to really rely on the agent “modeling” the world: the displacement of the player is governed not by an internal simulation of the ball’s trajectory, nor by a cloud of probabilistic estimates of where the ball could be, but by its mere image on the retina. Of course, the idea of an agent “having a model of something” is not cut and dried (does a centrifugal governor have a “model” of the engine it is regulating?), but you gotta admit this is less modelish than we might have thought.
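The geometry behind that coupling strategy is usually credited to Chapman: for a fly ball (ignoring air resistance), the tangent of the ball’s elevation angle rises linearly over time exactly when you are standing at the landing spot, so a player can intercept it just by running so as to cancel any “optical acceleration”, with no trajectory simulation anywhere. A toy check of that claim (the launch parameters are invented):

```python
G = 9.81  # gravity; air resistance is ignored in this toy model

def tan_elevation(t, vx, vz, player_x):
    """Tangent of the ball's elevation angle, as seen by a stationary
    observer at player_x, for a ball launched from the origin at t=0."""
    bx = vx * t                      # ball's horizontal position
    bz = vz * t - 0.5 * G * t * t    # ball's height
    return bz / (player_x - bx)

def optical_acceleration(vx, vz, player_x, t, dt=1e-3):
    """Second time-derivative of tan(elevation), via finite differences."""
    f = lambda s: tan_elevation(s, vx, vz, player_x)
    return (f(t + dt) - 2 * f(t) + f(t - dt)) / dt ** 2

vx, vz = 12.0, 15.0                  # made-up launch velocity
flight_time = 2 * vz / G
landing_x = vx * flight_time         # ~36.7 m

# Standing at the landing spot: tan(elevation) rises linearly,
# so its second derivative stays ~0 for the whole flight.
print(optical_acceleration(vx, vz, landing_x, t=1.0))        # ~0
# Standing 5 m short: the image accelerates upward, the cue to run back.
print(optical_acceleration(vx, vz, landing_x - 5.0, t=1.0))  # positive
```

The point is that the control variable lives entirely in the optics: nulling one retinal quantity is enough, and nothing in the loop ever computes where the ball will land.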

A centrifugal governor regulates the speed of a steam engine by opening or closing a valve depending on its speed
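The governor makes a nice minimal sketch of “regulation without a model”: the valve opening is driven directly by the current speed, and the system settles on its target through pure coupling. Everything below (the engine dynamics, the gains, the target of 100) is invented for illustration:

```python
def governor_step(speed, valve, target=100.0, gain=0.02):
    """One tick of a governor-style controller: the valve opening is
    nudged directly by the current speed error. There is no internal
    model of the engine anywhere, just a coupling between speed and valve."""
    valve += gain * (target - speed)   # spinning too fast -> close the valve
    return min(max(valve, 0.0), 1.0)   # a valve opening is bounded

# Toy engine: speed relaxes toward whatever the valve opening supports.
speed, valve = 60.0, 0.5
for _ in range(500):
    valve = governor_step(speed, valve)
    speed += 0.1 * (200.0 * valve - speed)  # made-up engine dynamics
print(round(speed))  # settles at the 100.0 target
```

The loop never represents the engine; it just closes a feedback path, which is roughly the sense in which ecological psychologists deny that the outfielder “models” the ball.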

And from what I gather, there are plenty of other simple agent/​environment interactions (*information-control laws* in ecological parlance) that people have discovered, such as the determination of “time to contact” with an object moving toward you at constant speed, or more complicated relationships that I could probably understand if I weren’t so math-lazy.
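The time-to-contact law (“tau”, usually credited to David Lee) is the easiest of these to state: for an object approaching at constant speed, the time remaining before contact is the visual angle the object subtends divided by that angle’s rate of expansion, both of which are available on the retina. You never need to know the object’s actual size, distance, or speed. A quick numerical check, with invented numbers:

```python
import math

def optical_angle(size, distance):
    """Visual angle subtended by an object of a given size at a distance."""
    return 2 * math.atan(size / (2 * distance))

def tau(size, distance, speed, dt=1e-4):
    """Lee's tau: optical angle divided by its rate of expansion.
    For constant approach speed this recovers time-to-contact from
    the retinal image alone (exact in the small-angle limit)."""
    theta_now = optical_angle(size, distance)
    theta_next = optical_angle(size, distance - speed * dt)
    theta_dot = (theta_next - theta_now) / dt
    return theta_now / theta_dot

# A 0.3 m object, 20 m away, closing at 5 m/s: true time to contact is 4 s.
print(tau(size=0.3, distance=20.0, speed=5.0))  # ~4.0
```

Note that `size` and `distance` appear only because the simulation has to generate the retinal image somehow; the agent itself only ever “sees” the angle and its rate of change.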


As I have said, in the eyes of an ecological psychologist, there are no models, no predictions, no memories. There is only matter in the form of an agent, perceiving their environment, and acting upon it. What actions do they perceive they might take? Here is where we come to affordances. Affordances are defined as “possibilities of action offered to the agent by its environment”. It’s important to note that:

  • We mean “environment” in an extremely broad sense, including other agents, tools, and the past environment[1] (through what is ecologically understood as “hysteresis”, like in shape-memory alloys, except we don’t talk about memory, dang it!). So a tossed baseball generates the affordance of catching it, a fluffy cat affords petting, and a street crossing affords crossing the street, of all things.

  • We mean “action” in an even broader sense (perhaps too broad for my taste). For example, a baseball violently thrown at you generates not only a ducking affordance but a “getting hit in the head and collapsing” affordance, a cat affords getting scratched, and a street crossing affords getting hit by a car if you snub the “looking both ways” affordance. In this way, affordances can have a negative valence (I’m not sure whether it’s ecologically kosher to talk about “expected value”).

Affordances can be long-term (to a chess player, a chess board might afford a complex strategy) or even conditional (what do two boxes given to you by a playful superintelligence afford for you?). Overall, the agent is seen as evolving through a “field of affordances” that depends both on characteristics of the environment and characteristics of the agent (even when placed in the same environment, I don’t perceive as many dunking affordances as professional basketball players do). This allows ecological psychologists to analyze decision making in team sports (is the affordance of making a pass more or less salient than the scoring affordance?), pain and injury (man, that car accident earlier really boosted the limping region of your affordance field), or social science (through something called “shared affordances”, apparently; I haven’t really grasped that stuff yet). Pretty neat stuff in my opinion, with wide-ranging implications for coaching, robotics, and philosophy.

But what about the Bayes?? I hear you screaming in affront (at least, that’s what I would do if I were you). Hasn’t the idea of updating credences according to relative likelihoods cut through the fog of a thousand biases, given rise (for better or worse) to a revolution in artificial intelligence, and ushered in a new era of science (or at the very least, a new era of noticing how bad we are at science)? Speaking of science, aren’t scientific theories models of the world? How can you say that ecological models are a better map of the territory, if you can’t even have maps?

Well, here is where I diverge from Rob Gray’s and Andrew Wilson’s ideas as I understand them, so if for some reason you were trusting me until now, you’re welcome to stop.


Credences are ecologically irrelevant because they are not part of the environment, and they are not actions either. However, you know which actions are almost perfectly correlated with “credences”? Expressing credences!
And so I propose that credences can be reconciled with ecological dynamics by interpreting them as “betting affordances”. Continuing the trend of broad definitions, we can borrow Julia Galef’s from The Scout Mindset: a bet is “any decision in which you stand to gain or lose something of value, based on the outcome. That could include money, health, time—or reputation”. So: putting money on something, but also staking your reputation on a bold prediction, or preserving your reputation by declining to answer a question.
So if the affordance of paying money to buy shares in a prediction market stops being enticing to you when the market estimate reaches 72%, we might as well extract that number and call it “credence”; and if in various situations you feel compelled to explain how a land value tax would be uniquely helpful, let’s go ahead and call it an “opinion”.


So where does that leave us? Does the ecological approach boil down to a weird strategy to signal competency through annoying nitpicks, definitional sleight of hand and mantras about “embodiment”? Well, some of it, sure. But remember, most ecological psychologists study sports and motor learning. Let’s come back to the outfielder example: does it make sense to say that the movement of the person running to catch the ball is determined by their credences about the ball? Not really!

If you were to ask the player, while the ball is still in the air, where it will land, making a better prediction wouldn’t necessarily help them run at the right speed. Also, by the time they had figured it out, the ball would have been deflected by a gust of wind, or it might have touched the ground already. When I step on a ledge to jump across a gap, my feet move to the right position to absorb the landing impact (roughly; still not a great athlete). But you would have to stop the flow of time (including my movement) to ask me to make predictions about whether I’m about to overshoot the jump, and by how much. And stopping the flow of time for the environment, my muscles and bones and nervous system to ask something of my still-running mind seems kind of dualistic to me (not to mention impractical; it’s hard to find a good time-freezer these days). That’s how I interpret a lot of the insistence on “embodied cognition”.

Also, you may have heard the point that Jason Collins made on Rationally Speaking in 2018: behavioral economics is due for a paradigm change, because when you have to postulate the existence of 200 biases independently pushing us out of the Way of Bayes, it might be a good time to look for a theory with fewer epicycles. More recently, he mentioned the outfielder problem research as an interesting way to think about heuristics.

That’s not to say that Bayes’ rule is not a good test for coherent predictions! It just means that we rarely use probability estimates to make decisions. And surely we should do that more often, and the myriad-biases model is still useful; but good luck catching a ball with credences.
The best criterion is not necessarily the best algorithm, and it certainly doesn’t have to be the only algorithm possible. In this post, Scott talked about “complicated messy thought patterns which, when they perform well, approximate the beautiful mathematical formalism [of world-modeling]”. He was referring to predictive pattern-matching, not information-control laws, but I believe the same can be said for the latter.

Coming back to coaching implications, if you want to change someone’s movement pattern or decision-making process, changing the relevant affordances seems to yield better results than just telling them what you want them to do. For example, if you want to “improve someone’s form”, whatever that means, you may be better off implementing either random or deliberate variations in their environment than explaining to them how you think they could get better.

From perceptionaction.com

There is a lot in psychology that the ecological approach can’t explain, and we shouldn’t pretend otherwise. But neither can thermodynamics: by now we should be used to the idea that more rigorous and parsimonious models can have value, even when they fail to describe complex phenomena. I’m not knowledgeable enough to be certain that ecological dynamics is not empty talk, but it seems to me that it can be a helpful level of abstraction when trying to bridge the gap between nerve talk and mind talk.

Not sure how to end this except by asking if anyone around here has heard of this stuff, and what you think about it. I probably can’t follow most debates between experts (to be honest I haven’t even read every link I posted) but I’d still love to hear your thoughts.

  1. ^

    Here I had to make a heroic effort to repress my desire to go on a hand-wavy tangent about evolutionary psychology. You’re welcome.