[E]ven if there are collective decisions, there are no collective models. Not real models.
When the team agrees to do something, it is only because enough of the individual team members individually have models which indicate it is the right thing to do.
There’s something kind of worrying/sad about this. One would hope that with a small enough group, you’d be able to have discussion and Aumann-magic convergence lead to common models (and perhaps values?) being held by everybody. In this world, the process of making decisions is about gathering information from team members about the relevant considerations, and then a consensus emerges about what the right thing to do is, driven by consensus beliefs about the likely outcomes. When you can’t do this, you end up in voting theory land, where even if each individual is rational, methods to aggregate group preferences about plans can lead to self-contradictory results.
I don’t particularly have advice for you here—presumably you’ve already thought about the cost-benefit analysis of spending marginal time on belief communication—but the downside here felt worth pointing out.
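To make the voting-theory point concrete, the “self-contradictory results” are the classic Condorcet cycle. Here is a minimal sketch in Python (the voters, plans, and rankings are hypothetical, chosen only for illustration): three voters each hold a perfectly transitive ranking over three plans, yet pairwise majority vote aggregates them into a cycle.

```python
# Minimal Condorcet-cycle sketch (hypothetical voters and plans, for illustration).
voters = [
    ["A", "B", "C"],  # voter 1 prefers plan A to B to C
    ["B", "C", "A"],  # voter 2 prefers plan B to C to A
    ["C", "A", "B"],  # voter 3 prefers plan C to A to B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank plan x above plan y."""
    return sum(r.index(x) < r.index(y) for r in voters) > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")

# All three lines print True: the group prefers A to B, B to C, and C to A.
# The aggregate preference is cyclic even though each individual ranking is transitive.
```

Arrow’s impossibility theorem generalizes the problem: with three or more options, no method of aggregating rankings can simultaneously satisfy unanimity, independence of irrelevant alternatives, and non-dictatorship.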
I took the line written to mean that there are no “opinion leaders”. In a system where people could vote but actually trust someone else’s judgement, the number of votes doesn’t reflect the number of judgement processes employed.
I also think that in a system that requires consensus, it becomes tempting to produce a false consensus. This effect is strong enough that, in every context where people bother with the concept of consensus at all, there is reason to suspect it hasn’t genuinely formed, and hence a significant chance that any particular consensus is false. If a system is allowed to tolerate non-consensus, it becomes practical to be the first one to break a consensus, and the value of this is enough to make requiring consensus look harmful.
All the while, it remains true that where opinions diverge there is real debate to be had.
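This comment actually made our own policy clearer to me, thanks!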
FWIW, we spend loads of time on belief-communication. This does mean (as Ruby says) that many of our beliefs are the same. But some are not, and sometimes the nuances matter.
In this world, the process of making decisions is about gathering information from team members about the relevant considerations, and then a consensus emerges about what the right thing to do is, driven by consensus beliefs about the likely outcomes.
This doesn’t seem very different from what we do; we just skip the step where everyone’s models necessarily converge. We still converge on a course of action. (habryka is the main decision maker, so in the event that consensus-about-the-relevant-details doesn’t emerge, we tend to default to his judgment, or [empirically] to delaying action.)
Even if they do converge (which they do quite frequently in simpler cases), I think the correct model of the situation is to say “I believe X, as does everyone else on my team”, which is a much better statement than “we believe X”, because the phrase “we believe” is usually not straightforwardly interpreted as “everyone on the team believes that X is true”; instead it usually means “via a complicated exchange of political capital we have agreed to act as if we all believe X is true”.
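To clarify, I didn’t think otherwise (and also, right now, I’m not confident that you thought I did think otherwise).

Sure—I now think that my comment overrated how much convergence was necessary for decision-making.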
I second Ray’s claim that we spend loads of time on belief communication. Something like the Aumann convergence to common models might be “theoretically” doable, but I think it’d require more than 100% of our time to get there. This is indeed a bit sad and worrying for human-human communication.
This is indeed a bit sad and worrying for human-human communication.
Is it newly sad and worrying, though?
By contrast, I find it reassuring when someone explicitly notes the goal, and the gap between here and that goal, because we have rediscovered the motivation for the community. 10 years deep, and still on track.
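Suck it, value drift!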
Hmm, I think you must have misunderstood the above sentence/we failed to get the correct point across. This is a statement about epistemology that I think is pretty fundamental, and is not something that one can choose not to do.
In a system of mutual understanding, I have a model of your model, and you have a model of my model, but nevertheless any prediction about the world is a result of one of our two models (which might have converged, or at the very least include parts of one another). You can have systems that generate predictions and policies and actions that are not understood by any individual (as is common in many large organizations), but that is the exact state you want to avoid in a small team where you can invest the cost to have everything be driven by things at least one person on the team understands.
The thing described above is something you get to do if you can invest a lot of resources into communication, not something you have to do if you don’t invest enough resources.
I get the sense that you don’t understand me here.
In a system of mutual understanding, I have a model of your model, and you have a model of my model, but nevertheless any prediction about the world is a result of one of our two models (which might have converged, or at the very least include parts of one another).
We can choose to live in a world where the model in my head is the same as the model in your head, and that this is common knowledge. In this world, you could think about a prediction being made by either the model in my head or the model in your head, but it makes more sense to think about it as being made by our model, the one that results from all the information we both have (just like the integer 3 in my head is the same number as the integer 3 in your head, not two numbers that happen to coincide). If I believed that this was possible, I wouldn’t talk about how official group models are going to be impoverished ‘common denominator’ models, or conclude a paragraph with a sentence like “Organizations don’t have models, people do.”
In this world, you could think about a prediction being made by either the model in my head or the model in your head, but it makes more sense to think about it as being made by our model …
I don’t think this actually makes sense. Models only make predictions when they’re instantiated, just as algorithms only generate output when run. And models can only be instantiated in someone’s head[1].
… the integer 3 in my head is the same number as the integer 3 in your head, not two numbers that happen to coincide …
This is a statement about philosophy of mathematics, and not exactly an uncontroversial one! As such, I hardly think it can support the sort of rhetorical weight you’re putting on it…
[1] Or, if the model is sufficiently formal, in a computer—but that is, of course, not the sort of model we’re discussing.
I think models can be run on computers, and I think people passing papers around can work as a computer. I do think it’s possible to have an organization that does informational work that none of its human participants do. I do appreciate that such work is often very secondary to the work that actual individuals do. But I think that if someone aggressively tried to make a system that would survive a “bad faith” human actor, it might be possible and even feasible.
I would phrase it as: the number 3 in my head and the number 3 in your head both correspond to the number 3 “out there”, or to the “common social” number 3.
For example, my number 3 might serve as an input to my cached multiplication-table results, while I don’t expect everyone else’s number 3 to do the same.
The old philosophical problem of whether the red I see is the same red that you see highlights how the reds could plausibly be incomparable, while the practical reality that color talk is possible is not in question.