it feels like [the OP] implicitly and automatically rejects that something like a coffee habit can be the correct move even if you look several levels up.
Ah. Got it.
That’s not what I mean whatsoever.
I don’t think it’s a mistake to incur adaptive entropy. When it happens, it’s because that’s literally the best move the system in question (person, culture, whatever) can make, given its constraints.
Like, incurring technical debt isn’t a mistake. It’s literally the best move available at the time given the constraints. There’s no blame in my saying that whatsoever. It’s just true.
And, it’s also true that technical debt incurs an ongoing cost. Again, no blame. It’s just true.
In the same way (and really, as a generalization), incurring adaptive entropy always incurs a cost. That doesn’t make it wrong to do. It’s just true.
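(A toy sketch, if it helps make the debt analogy concrete. Everything here, the scenario, the names, the numbers, is invented purely for illustration:)

```python
# A deliberately crude example: under a ship-tomorrow deadline, hard-coding
# the exchange rate is the best move available (no rates service exists yet).
def price_in_eur(usd_amount: float) -> float:
    USD_TO_EUR = 0.92  # hard-coded: this is the debt, and it drifts out of date on its own
    return usd_amount * USD_TO_EUR

# The ongoing cost (the "interest"): every time the real rate moves, someone
# has to notice and edit the code, or the output is silently wrong. No one
# blundered; the cost is just a fact about the shortcut.
print(price_in_eur(100.0))  # 92.0, correct only for as long as the rate holds
```

The point isn’t the code; it’s that the shortcut was correct to take and still charges rent.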
I do not, on priors, expect every “problem” to be a failure of adaptation.
I think this is a challenge of different definitions. To me, what “adaptation” and “problem” mean requires that every problem be a failure of adaptation. Otherwise it wouldn’t be a problem!
I’m getting the impression that questions of blame or screw-up or making mistakes are crawling into several discussion points in these comments. Those questions are so far removed from my frame of thinking here that I just flat-out forgot to orient to them. They just don’t have anything to do with what I’m talking about.
So when I say something like “a failure of adaptation”, I’m talking about a fact. No blame, no “should”. The way bacteria fail to adapt to bleach. Just a fact.
Everything we’re inclined to call a “problem” is an encounter with a need to adapt that we haven’t yet adapted to. That’s what a problem is, to me.
So any persistent problem is literally the same thing as an encounter with limitations in our ability to adapt.
I think… that I glimpse the dynamic you’re talking about, and that I’m generally aware of its simplest version and try to employ conditions/consequences reasoning, but I do not consistently see it more generally.
Cool, good to know. Thank you.
Sleeping on it, I also see connections to patterns of “refactored agency” (specifically pervasiveness) and “out to get you”. The difference is that while you’re describing something like a physical principle, “out to get you” describes more of a social principle, and “refactored agency” describes a useful thinking perspective.
I don’t follow this, sorry. I think I’d have to read those articles. I might later. For now, I’m just acknowledging that you’ve said… something here, but I’m not sure what you’ve said, so I don’t have much to say in response just yet.
I think this is a challenge of different definitions. To me, what “adaptation” and “problem” mean requires that every problem be a failure of adaptation. Otherwise it wouldn’t be a problem!
This was poor wording on my part; I think there’s both a narrow sense of “adaptation” and a broader sense in play, and I mistakenly invoked the narrow sense to disagree. Like, continuing with the convenient fictional example of an at-birth dopamine set-point: the body cannot adapt to increase the set-point, and this is qualitatively different from a set-point that’s controllable through diet. The latter has the potential to adapt, while the former cannot, so the former isn’t a “failure” in some sense.
I feel like there’s another relevant bit, though: whenever we talk of systems, a lot depends on where we draw the boundaries, and it’s inherently somewhat arbitrary. The “need” for caffeine may be a failure of adaptation in the subsystem (my body), but a habit of caffeine intake is an example of adaptation in the supersystem (my body + my agency + the modern supply chain).
I think I’d have to read those articles. I might later.
I think I can summarize the connection I made.
In “out to get you”, Zvi points out an adversarial dynamic in interacting with almost all human-created systems: they are designed to extract something from you, often without limit (the article also suggests that there are broadly four strategies for dealing with this). The idea of something intelligent being after your resources reminds me of your description of adaptive entropy.
In “refactored agency”, Venkatesh Rao describes a cognitive reframing that I find particularly useful, in which you ascribe agency to different parts of a system. It’s not descriptive of a phenomenon (unlike, say, an egregore, or autopoiesis) but of a lens through which to view a system. This is particularly useful for seeking novel insights or solutions; for example, how would problems and solutions differ if you view yourself as a “cog in the machine” vs. “the hero”, or your coworkers as “Moloch’s pawns” rather than as “player characters, making choices”? (These specific examples are my own extrapolations, not direct from the text.) Again, ascribing agency/intelligence to the adaptive entropy reminds me of this.
Everything we’re inclined to call a “problem” is an encounter with a need to adapt that we haven’t yet adapted to. That’s what a problem is, to me.
This is tangential, but it strongly reminds me of the TRIZ framing of a problem (or “contradiction” as they call it): it’s defined by the desire for two (apparently) opposing things (e.g. faster and slower).