… on reflection, I do not want to endorse this as an all-the-time heuristic, but I do want to endorse it whenever good epistemic discussion is an objective. Asking “who should we blame?” is always engaging in a status fight. Status fights are generally mindkillers, and should be kept strictly separate from modelling and epistemics.
Now, this does not mean that we shouldn’t model status fights. Rather, it means that we should strive to avoid engaging in status fights when modelling them. Concretely: rather than ask “who should we blame?”, ask “what incentives do we create by blaming <actor>?”. This puts the question in an analytical frame, rather than a “we’re having a status fight right now” frame.
This was a pretty important couple of points. I’m not sure I agree with them as worded, but they point towards something that I think is close to a Pareto improvement, at least for LessWrong and maybe for the whole world.
I do not want to endorse this as an all-the-time heuristic, but I do want to endorse it whenever good epistemic discussion is an objective
The key problem is… sometimes you actually just do need to have status fights, and you still want to have as-good-epistemics-as-possible given that you’re in a status fight. So a binary distinction of “trying to have good epistemics” vs “not” isn’t the right frame.
I think this might actually be a pretty good distinction for LessWrong’s frontpage – “status fight or no?” is close to the question that our Frontpage ‘politics’ distinction is aiming at. I do think it is probably reasonable that if you’re trying to write a frontpage post, you follow the “what incentives do we create by blaming?” rule, and if you want to more directly say “no, actually, we should blame Bob for X”, then you write a personal blogpost.
The key problem is… sometimes you actually just do need to have status fights, and you still want to have as-good-epistemics-as-possible given that you’re in a status fight. So a binary distinction of “trying to have good epistemics” vs “not” isn’t the right frame.
Part of my model here is that moral/status judgements (like “we should blame X for Y”) like to sneak into epistemic models and masquerade as weight-bearing components of predictions. The “virtue theory of metabolism”, which Yudkowsky jokes about a few times in the sequences, is an excellent example of this sort of thing, though I think it happens much more often and usually much more subtly than that.
My answer to that problem on a personal level is to rip out the weeds wherever I notice them, and build a dome around the garden to keep the spores out. In other words: keep morality/status fights strictly out of epistemics in my own head. In principle, there is zero reason why status-laden value judgements should ever be directly involved in predictive matters. (Even when we’re trying to model our own value judgements, the analysis/engagement distinction still applies.)
Epistemics will still be involved in status fights, but the goal is to make that a one-way street as much as possible. Epistemics should influence status, not the other way around.
In practice it’s never that precise even when it works, largely because value connotations in everyday language can compactly convey epistemically-useful information – e.g. the weeds analogy above. But it’s still useful to regularly check that the value connotations can be taboo’d without the whole model ceasing to make sense, and to perform that sort of check automatically whenever value judgements play a large role.
John and I had a fantastic offline discussion and I’m currently revising this in light of that. We’re also working on a postmortem on the whole thing that I expect to be very informative. I keep mission creeping on my edits and response and it’s going to take a while so I’m writing the bare minimum comment to register that this is happening.