just another emergent phenomenon of cellular automata
it occurs to me that ‘rudeness’ in this framework is a sort of protective charm; by casting the person as rude, you discount their credibility and therefore don’t have to update your beliefs.
that can end up feeling like the information itself makes your cooking worse, because you update your belief about your cooking after receiving it.
i’m not sure the simulacrum model is quite necessary to understand people’s responses to information. particularly in the first 3 examples, i think the responses can be explained by cognitive dissonance. in 1 & 3, the hearer holds the belief “i offer a good product” and is confronted with the information “someone is not satisfied with my product.” in the gym example, the alternatives (skipping entirely, a 10-minute self-warmup) are easily explained by “this person is busy.” in 2, you perhaps hold the belief “i am a good person who does not destroy library materials” and are confronted with the information “i might be destroying library materials.” in these examples, the dissonance could be resolved by more nuanced third beliefs, such as “this person has special needs and has adapted my quality product to suit them” or “i am a good person who makes mistakes sometimes.” [this is all pretty straight out of psych theory on cognitive dissonance, applied to these examples]
with that in mind, your recommendation #3 is not necessary: it is possible to be honest and congenial; it just takes some work. first, work to understand a person’s propositional belief system; then figure out how to present information in a way that can be rendered consistent with it. of course this will not always be possible. sometimes beliefs will have to change, but by being aware of what effect the information will have, you can try to cushion the blow. that might not be worth the effort for the hot sauce situation; it could be worth it for the gym situation.
(Ref: Gawronski 2012, “Back to the future of dissonance theory”)
Perhaps it is possible in practice to disentangle value-alignment issues from factual disagreements. Double-crux seems well suited to reaching consensus on factual questions (e.g., which widget will have a lower error rate?), and, if everybody participates in good faith, it would at least *uncover* Carl’s crux, making it possible to discover the factual truth nonetheless. Then the non-objective part of the dispute could be punted to a different process, like the incentive alignment you discuss.
I absolutely love this, and it leaves me wondering about the role of the social in the sabbath. This post mentions early on, “Most want more social events, but coordination is hard and events are work. Now there’s always Friday night,” but the subject does not come up again. And yet, with regard to the historical referent, sociality is baked deeply into the sabbath: in the orthodox version, the minyan rule (plus the no-driving rule) requires that people live close together and see each other at least once a week.
On the one hand, community has the potential to enhance the sabbath from the perspective of this article, which treats sabbath as slack and relaxation. A social element in the weekly ritual naturally enforces regular adherence, since people must explain their absences. Other people also tend to flag deviance and reinforce norms, reducing the cognitive burden of self-enforcing, say, a no-social-media rule.
From Ben’s perspective, i.e. sabbath as an alarm system, the social element has added benefits. Conversation with trusted and familiar people can help with identifying and diagnosing challenges of all kinds; e.g., other people can sometimes notice the depth of our stress before we see it ourselves. Though I don’t know them all, there are countless other benefits of social networks for well-being. So if the sabbath is about well-being, it should be about the social.
On the other hand, there are also unique benefits to isolation. If the sabbath is strictly about slack and relaxation, then the social may play no role for some or even many people. This conundrum, though, also highlights an interesting design feature of the orthodox sabbath: Friday night may be a personal or family affair, and socialization is only enforced on Saturday. My modern take is a quiet Saturday night and a shared Sunday brunch.
but that just kicks the can down the road, leaving the question: “Could I have wanted X?”
reading all this has led me to think a lot about using MMOs as a testing ground for sociology
i think you are on the right track—a google scholar search reveals an enormous amount of social science conducted on virtual worlds, including topics like teamwork, economics, and religion. i don’t know about governance systems, though.
I find myself wondering about disagreements (or subcomponents of disagreements) where appealing to objective reality may not be possible.
It seems like this is a special case of a broader type of process, fitting into the more general category of collaborative decision-making. (Here I’m thinking of the 5 approaches to conflict: compete, collaborate, avoid, accommodate, and compromise).
In the explicit product-as-widget case, there may always be an appeal to some objectively frameable question: what will make us more money? But even this can ignite debate: which is more important, short-term revenue or long-term revenue? I can imagine two people (perhaps one very young, and one very old) in dispute over a product design who realize the root of the disagreement is this difference in personal timelines.
This example may be (or seem) intractable, but it’s a toy example to illustrate the possibility that disagreements can arise over matters which are not purely objective. In such cases, I would imagine that doublecrux would pair extremely well with other established methods for collaborative problem-solving (e.g. interest-based negotiation). I even suspect that this method could enhance the resolution of strictly value-based disagreements, since values can be converted back into objectively measurable outcomes and thereby become the subject of doublecrux inquiry.
I think the method would be basically the same, replacing “why do you believe X” with “what is important to you about X” in the process of inquiry.
This is interesting because mediators (who are essentially facilitating interest-based negotiation) are generally trained not to seek factual truth; but those are usually facts about the past, whereas doublecrux deals with facts about the future.
I read “At Home In The Universe: The Search for the Laws of Self-Organization and Complexity,” which is a very accessible and fun read—I am not a physicist, mathematician, biologist, etc., and it all made sense to me. The book talks about evolution, both biological and technological.
And the model described in that book has been quite commonly adapted by social scientists to study problem solving, so it has been socially validated as a good framework for thinking about scientific research.
Sure, though “why is science slowing down?” and “what should we do now?” are two different questions. If the answer to the first is simply that it’s getting harder, then there may be absolutely nothing wrong with our coordination, and no action is required.
I’m not saying we can’t do even better, but crisis-response is distinct from self-improvement.