FWIW, we spend loads of time on belief-communication. This does mean (as Ruby says) that many of our beliefs are the same. But some are not, and sometimes the nuances matter.
In this world, the process of making decisions is about gathering information from team members about the relevant considerations, and then a consensus emerges about what the right thing to do is, driven by consensus beliefs about the likely outcomes.
This doesn’t seem very different from what we do; we just skip the step where everyone’s models necessarily converge. We still converge on a course of action. (habryka is the main decision-maker, so in the event that consensus about the relevant details doesn’t emerge, we tend to default to his judgment, or [empirically] to delaying action.)
Even if they do converge (which they do quite frequently in simpler cases), I think the correct model of the situation is to say “I believe X, as does everyone else on my team”, which is a much better statement than “we believe X”. The phrase “we believe” is usually not straightforwardly interpreted as “everyone on the team believes that X is true”; instead it usually means “via a complicated exchange of political capital we have agreed to act as if we all believe X is true”.
I second Ray’s claim that we spend loads of time on belief communication. Something like the Aumann convergence to common models might be “theoretically” doable, but I think it’d require more than 100% of our time to get there. This is indeed a bit sad and worrying for human-human communication.
Is it newly sad and worrying, though?
By contrast, I find it reassuring when someone explicitly notes the goal, and the gap between here and that goal, because it means we have rediscovered the motivation for the community. 10 years deep, and still on track.
To clarify, I didn’t think otherwise (and also, right now, I’m not confident that you thought I did think otherwise).
Sure—I now think that my comment overrated how much convergence was necessary for decision-making.
Suck it, value drift!