The key problem is… sometimes you actually do just need to have status fights, and you still want to have as-good-epistemics-as-possible given that constraint. So a binary distinction between “trying to have good epistemics” and “not” isn’t the right frame.
Part of my model here is that moral/status judgements (like “we should blame X for Y”) like to sneak into epistemic models and masquerade as weight-bearing components of predictions. The “virtue theory of metabolism”, which Yudkowsky jokes about a few times in the Sequences, is an excellent example of this sort of thing, though I think it happens much more often and usually much more subtly than that.
My answer to that problem on a personal level is to rip out the weeds wherever I notice them, and build a dome around the garden to keep the spores out. In other words: keep morality/status fights strictly out of epistemics in my own head. In principle, there is zero reason why status-laden value judgements should ever be directly involved in predictive matters. (Even when we’re trying to model our own value judgements, the analysis/engagement distinction still applies.)
Epistemics will still be involved in status fights, but the goal is to make that a one-way street as much as possible. Epistemics should influence status, not the other way around.
In practice it’s never that precise even when it works, largely because value connotations in everyday language can compactly convey epistemically-useful information (e.g. the weeds analogy above). But it’s still useful to regularly check that the value connotations can be taboo’d without the whole model ceasing to make sense, and to perform that sort of check automatically whenever value judgements play a large role.
John and I had a fantastic offline discussion and I’m currently revising this in light of that. We’re also working on a postmortem on the whole thing that I expect to be very informative. My edits and response keep mission-creeping and it’s going to take a while, so I’m writing the bare minimum comment to register that this is happening.