You’ve reminded me of a perspective I was meaning to include but then forgot to, actually. From the perspective of an equilibrium in which everyone’s implicitly expected to bring certain resources/capabilities as table stakes, making a personal decision that makes your life better but reduces your contribution to the pool can be seen as defection—and on a short time horizon or where you’re otherwise forced to take the equilibrium for granted, it seems hard to refute! (ObXkcd: “valuing unit standardization over being helpful possibly makes me a bad friend” if we take the protagonist as seeing “US customary units” as an awkward equilibrium.) Some offshoots of this which I’m not sure what to make of:
1. If the decision would lead to a better society if everyone did it, and to an improvement for you if only you do it, but requires the rest of a more localized group to spend more energy compensating for you if you do it and they don’t, we have a sort of “incentive misalignment sandwich” going on. In practice I think there’s usually enough disagreement about the first point that this isn’t clear-cut, but it’s interesting to notice.
2. In the face of technological advances, what continues to count as table stakes tends to get set by Moloch and mimetic feedback loops rather than intentionally. In a way, people complaining vociferously about having to adopt new things are arguably acting in a counter-Moloch role here, but in the places I’ve seen that happen, it’s either been ineffective or led to a stressful and oppressive atmosphere of its own (or, most commonly and unfortunately, both).
3. I think intuitive recognition of (2) is a big motivator behind attacking adopters of new technology that might fall into this pattern, in a way that often gets poorly expressed as “tech companies ruin everything”. Personally taking up smartphones, or cars, or (nowadays the big one I see in my other circles) generative AI, even if you don’t yourself look down on or otherwise directly harm non-users, can be seen as playing into a new potential equilibrium: if you can, you ‘must’, or else you’re not putting in as much as everyone else, so non-users gradually find themselves boxed in, and any negative secondary effects on them get treated as irrelevant compared to the phase transition energy. A comparison that comes to mind is labor unions; that’s another case where restraining individually expressed capabilities in order to retain a better collective bargaining position comes into play, isn’t it?
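The payoff structure in (1) can be made concrete with a toy numerical sketch. Everything here is invented for illustration (the function name and all the payoff values are mine, not from anywhere); the numbers only encode the three claimed orderings, nothing more:

```python
# Toy model of the "incentive misalignment sandwich".
# "Doing it" = the personal decision in question. Payoffs are arbitrary
# units chosen only to reproduce the three claims:
#   1. society is better off if everyone does it,
#   2. you are better off if only you do it,
#   3. the local group pays a compensation cost if you do it and they don't.

def payoffs(you: bool, group: bool):
    """Return (your payoff, average payoff of the rest of the local group)."""
    you_p = 3 if you else 2        # doing it helps you directly
    group_p = 3 if group else 2
    if you and not group:
        group_p -= 2               # they spend energy compensating for you
    if you and group:
        you_p += 1                 # society-wide benefit once everyone's in
        group_p += 1
    return you_p, group_p

# If only you do it, you gain but the group loses:
assert payoffs(True, False)[0] > payoffs(False, False)[0]
assert payoffs(True, False)[1] < payoffs(False, False)[1]
# If everyone does it, everyone ends up better off than the status quo:
assert payoffs(True, True) > payoffs(False, False)
```

The sandwich is visible in the asymmetry: your individually best move (`payoffs(True, False)`) and the collectively best outcome (`payoffs(True, True)`) point the same way for you, but the transition path routes the cost through the local group.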
(Now much more tangentially:)

… hmm, come to think of it, maybe part of conformity-pressure in general can be seen as a special case of this where the pool resource is more purely “cognition and attention spent dealing with non-default things”, and nonconformity by default has a purely negative impact on that axis; whereas conformity-pressure over technology with specific capabilities pulls the nature of the pool resource in the direction of what the technology provides, so there’s an active positive thing that becomes the baseline… I wonder if anything useful can be derived from thinking of those two cases as denoting an axis of variation.
And when the conformity is to a new norm that may be more difficult to understand but produces relative positive externalities in some way, is that similar to treating the new norm as a required table-stakes cognitive technology?
yeah a friend of mine gave in because she was getting so much attitude about needing people to give her directions.