I think this comes down to differing definitions. To me, what “adaptation” and “problem” mean entails that every problem is a failure of adaptation. Otherwise it wouldn’t be a problem!
This was poor wording on my part; I think there’s both a narrow sense of “adaptation” and a broader sense in play, and I mistakenly invoked the narrow sense to disagree. Like, continuing with the convenient fictional example of an at-birth dopamine set-point, the body cannot adapt to increase the set-point, but this is qualitatively different than a set-point that’s controllable through diet; the latter has the potential to adapt, while the former cannot, so it’s not a “failure” in some sense.
I feel like there’s another relevant bit, though: whenever we talk of systems, a lot depends on where we draw the boundaries, and that’s inherently somewhat arbitrary. The “need” for caffeine may be a failure of adaptation in the subsystem (my body), but a habit of caffeine intake is an example of adaptation in the supersystem (my body + my agency + the modern supply chain).
I think I’d have to read those articles. I might later.
I think I can summarize the connection I made.
In “Out to Get You”, Zvi points out an adversarial dynamic in interacting with almost all human-created systems: they are designed to extract something from you, often without limit (the article also suggests that there are broadly four strategies for dealing with this). The idea of something intelligent being after your resources reminds me of your description of adaptive entropy.
In “refactored agency”, Venkatesh Rao describes a cognitive reframing that I find particularly useful, in which you ascribe agency to different parts of a system. It’s not descriptive of a phenomenon (unlike, say, an egregore, or autopoiesis) but of a lens through which to view a system. This is particularly useful for seeking novel insights or solutions; for example, how would problems and solutions differ if you view yourself as a “cog in the machine” vs. “the hero”, or your coworkers as “Moloch’s pawns” rather than as “player characters, making choices”? (These specific examples are my own extrapolations, not direct from the text.) Again, ascribing agency/intelligence to the adaptive entropy reminds me of this.
Everything we’re inclined to call a “problem” is an encounter with a need to adapt that we haven’t yet adapted to. That’s what a problem is, to me.
This is tangential, but it strongly reminds me of the TRIZ framing of a problem (or “contradiction” as they call it): it’s defined by the desire for two (apparently) opposing things (e.g. faster and slower).