I noticed (while reading your great modeling exercise about an important topic) a sort of gestalt presumption of “one big compartment (which is society itself)” in the write-up and this wasn’t challenged by the end.
Maybe this is totally valid? The Internet is a series of tubes, but most of the tubes connect to each other eventually, so it is kinda like all one big place maybe? Perhaps we shall all be assimilated one day.
But most of my thoughts about modeling how to cope with differences in preference and behavior focus a lot on the importance of spatial or topological or social separations to minimize conflicts and handle variations in context.
My general attitude is roughly: things in general are not “well mixed” and (considering how broken various things can be in some compartments) thank goodness for that!
This is a figure from this research where every cell basically represents a spatially embedded agent, and agents play iterated games with their neighbors and then react somehow.
In many similar bits of research (which vary in subtle ways, and partly reveal what the simulation maker wanted to see) a thing that often falls out is regions where most agents are being (cooperatively?) gullible, or (defensively?) cheating, or doing tit-for-tat… basically you get regions of tragic conflict, and regions of simple goodness, with tit-for-tat often at the boundaries (sometimes converting cheaters through incentives… sometimes becoming complacent because T4T neighbors cooperate so much that it is easy to relax into gullibility… and so on).
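To make the kind of setup I’m gesturing at concrete, here is a minimal sketch of a spatial iterated prisoner’s dilemma. The strategy set (always-cooperate, always-defect, tit-for-tat), the payoff numbers, and the “copy your best-scoring neighbor” update rule are my own illustrative assumptions, not taken from the actual paper behind the figure:

```python
import random

# Toy spatial iterated prisoner's dilemma on a torus grid.
# Payoffs are the conventional T=5, R=3, P=1, S=0 ordering.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
STRATS = ['ALLC', 'ALLD', 'TFT']

def move(strat, last_opponent_move):
    if strat == 'ALLC':
        return 'C'
    if strat == 'ALLD':
        return 'D'
    return last_opponent_move or 'C'  # TFT: cooperate first, then copy

def play(s1, s2, rounds=10):
    """Iterated PD between two strategies; returns (score1, score2)."""
    a_last = b_last = None
    a_score = b_score = 0
    for _ in range(rounds):
        ma, mb = move(s1, b_last), move(s2, a_last)
        pa, pb = PAYOFF[(ma, mb)]
        a_score += pa
        b_score += pb
        a_last, b_last = ma, mb
    return a_score, b_score

def step(grid):
    """One generation: everyone plays its 4 neighbors, then copies the
    highest-scoring strategy in its neighborhood (including itself)."""
    n = len(grid)
    score = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for di, dj in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
                k, l = (i + di) % n, (j + dj) % n
                s, _ = play(grid[i][j], grid[k][l])
                score[i][j] += s
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            best = (score[i][j], grid[i][j])
            for di, dj in [(0, 1), (1, 0), (0, -1), (-1, 0)]:
                k, l = (i + di) % n, (j + dj) % n
                if score[k][l] > best[0]:
                    best = (score[k][l], grid[k][l])
            new[i][j] = best[1]
    return new

random.seed(0)
n = 12
grid = [[random.choice(STRATS) for _ in range(n)] for _ in range(n)]
for _ in range(20):
    grid = step(grid)
```

Printing the grid between steps lets you watch whether contiguous regions of each strategy form under these particular assumptions, which is the qualitative pattern I’m describing.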
A lot depends on the details, but the practical upshot for me is that it is helpful to remember that the right thing in one placetime is not always the right thing everywhere or forever.
With a very simple little prisoner’s dilemma setup, utility is utility, and it is clear what “the actual right thing” is: lots of bilateral cooperate/cooperate interactions are Simply Good.
However in real life there is substantial variation in cultures and preferences and logistical challenges and coordinating details and so on.
It is pretty common, in my experience, for people to have coping strategies for local problems that they project onto others far away from them, imagining those strategies to be morally universal rules. However, when particular local coping strategies are transported to new contexts, they often fail to translate into actual practical local benefits, because the world is big and details matter.
Putting on a sort of “engineering hat”, my general preference then is to focus on small specific situations, and just reason about “what ought to be done here and now” directly, based on local details and the direct perception of objective goodness.
The REASON I would care about “copying others” is generally either (1) they figured out objectively good behavior that I can cheaply add to my repertoire, or (2) they are dangerous monsters who will try to hurt me if they see me acting differently. (There are of course many other possibilities, and subtleties, and figuring out why people are copying each other can be tricky sometimes.)
Your models here seem to be mostly about social contagion, and information cascades, and these mechanisms read to me as central causes of “why ‘we’ often can’t have nice things in practice” …because cascading contagion is usually anti-epistemic and often outright anti-social.
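Since I’m leaning on the claim that information cascades are anti-epistemic, here is a tiny sketch of why, loosely in the spirit of the classic Bikhchandani/Hirshleifer/Welch urn model. The exact decision rule below (“follow a public majority of two or more, otherwise trust your own signal”) is my simplification:

```python
import random

# Minimal information-cascade sketch. The true state is 1; each agent
# gets a private binary signal that matches the truth with probability p,
# and sees only the public guesses of everyone before them.
def run_cascade(n_agents=50, p=0.7, rng=random):
    true_state = 1
    public_guesses = []
    for _ in range(n_agents):
        signal = true_state if rng.random() < p else 1 - true_state
        ones = sum(public_guesses)
        zeros = len(public_guesses) - ones
        # Follow a clear public majority even against your own signal;
        # otherwise trust the private signal. This is what makes cascades
        # anti-epistemic: once a majority of 2 forms, later private
        # signals stop affecting public behavior entirely.
        if ones - zeros >= 2:
            guess = 1
        elif zeros - ones >= 2:
            guess = 0
        else:
            guess = signal
        public_guesses.append(guess)
    return public_guesses

random.seed(3)
runs = [run_cascade() for _ in range(1000)]
# A "wrong cascade": late guesses are mostly 0 even though the truth is 1.
wrong_cascades = sum(1 for g in runs if sum(g[-10:]) <= 2)
```

Even though each individual signal is 70% accurate, some fraction of runs lock into the wrong answer forever, because two early wrong signals are enough to make everyone afterward ignore their own evidence.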
You’re having dinner with a party of 10 at a Chinese restaurant. Everyone else is using chopsticks. You know how to use chopsticks but prefer a fork. Do you ask for a fork? What if two other people are using a fork?
I struggled with this one because I tend to use chopsticks at Chinese restaurants for fun, and sometimes I’m the only one using them, and several times I’ve had the opportunity to teach someone how to use them. The alternative preference in this story would be COUNTERFACTUAL to my normal life in numerous ways.
Trying not to fight the hypothetical too much, I could perhaps “prefer a fork” (as per the example) in two different ways:
(1) Maybe I “prefer a fork” as a brute fact of what makes me happy for no reason. In this case, you’re asking me about “a story person’s meta-social preferences whose object-level preferences are like mine but modified for the story situation” and I’m a bit confused by how to imagine that person answering the rest of the question. After making an imaginary person be like me but “prefer a fork as a brute emotional fact”… maybe the new mind would also be different in other ways as well? I couldn’t even figure out an answer to the question, basically. If this was my only way to play along, I would simply have directly “fought the hypothetical” forthrightly.
(2) However, another way to “prefer a fork” would be if the food wasn’t made properly for eating with chopsticks. Maybe there’s only rice, and the rice is all non-sticky separated grains, and with chopsticks I can only eat one grain at a time. This is a way that I could hypothetically “still have my actual dietary theories intact” and naturally “prefer a fork”… and in this external situation I would probably ask for a fork no matter how unfun or “not in the spirit of the experience” it seems? Plausibly, I would be miffed, and explain things to people close to me who had the same kind of rice, and I would predict that they would realize I was right, nod at my good sense, and probably ask the waiter to give them a fork as well.
But in that second attempt to generate an answer, it might LOOK like the people I predicted would copy me were changing because “I was +1 to fork users and this mapped through a well-defined social behavior curve in them”, but in my mental model the beginning of the cascade was actually caused by “I verbalized a real fact and explained an actually good method of coping with the objective problem”, and the idea was objectively convincing.
I’m not saying that peer pressure should always be resisted. It would probably be inefficient for everyone to think from first principles all the time about everything. Also there are various “package deal” reasons to play along with group insanity, especially when you are relatively weak or ignorant or trying to make a customer happy or whatever. But… maybe don’t fall asleep while doing so, if you can help it? Elsewise you might get an objectively bad result before you wake up from sleepwalking :-(
A lot depends on the details, but the practical upshot for me is that it is helpful to remember that the right thing in one placetime is not always the right thing everywhere or forever.
[...]
However in real life there is substantial variation in cultures and preferences and logistical challenges and coordinating details and so on.
Something this response triggered in me—and maybe it’s similar to part of what you were saying later: sometimes preferences aren’t affected much by the social context, within a given space of social contexts. People may just want to use chopsticks because they are fun, rather than caring about what other people think about them.
Also, the incentive to adopt a given thing might actually decrease as more and more people become interested in it. For example, demand for a thing might cause its price to rise. With orchestras: if lots of people are already playing violin, that increases the relative incentive for others to learn viola.
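The violin/viola point can be sketched as a toy congestion model, where an orchestra seats a fixed section ratio, so the chance of winning a seat falls as more people pick the same instrument. All the numbers here are invented for illustration:

```python
# Hypothetical section sizes for a single orchestra.
SEATS = {'violin': 30, 'viola': 12}

def seat_odds(instrument, players):
    """Chance of winning a seat = seats available / people competing."""
    return min(1.0, SEATS[instrument] / players[instrument])

# Hypothetical counts of people competing for each section.
players = {'violin': 90, 'viola': 15}
odds_violin = seat_odds('violin', players)  # 30/90, about 0.33
odds_viola = seat_odds('viola', players)    # 12/15 = 0.80
```

Adding more violinists lowers the violin odds while leaving the viola odds untouched, so the relative incentive shifts toward viola—the opposite of a contagion dynamic, where adoption makes further adoption more attractive.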
Arguably, “reminding people that the right thing depends on context” is itself just a useful bravery debate position local to my own context? ;-)
Martin Sustrik’s “Anti-Social Punishment” post is a great real-life example of this kind of context-dependent anti-social dynamic.