FWIW I think that confrontation-worthy empathy and use of the phrase “everyone will die” to describe AI risk are approximately mutually exclusive, because communication using the latter phrase results from a failure to understand communication norms.
(Separately I also think that “if we build AGI, everyone will die” is epistemically unjustifiable given current knowledge. But the point above still stands even if you disagree with that bit.)
What I mean by confrontation-worthy empathy is about that sort of phrase being usable. I mean, I’m not saying it’s the best phrase, or a good phrase to start with, or whatever. I don’t think inserting Knightian uncertainty is that helpful; the object-level stuff is usually the most important thing to be communicating.
This maybe isn’t so related to what you’re saying here, but I’d follow the policy of first making it common knowledge that you’re reporting your inside views (which implies that you’re not assuming that the other person shares those views); and then you state your inside views. In some scenarios you describe, I get the sense that Person 2 doesn’t actually want Person 1 to state more modest models; they want common knowledge that they don’t already share those views / don’t already have the evidence that should make them share those views.
“I don’t think inserting Knightian uncertainty is that helpful; the object-level stuff is usually the most important thing to be communicating.”
The main point of my post is that accounting for disagreements about Knightian uncertainty is the best way to actually communicate object-level things, since otherwise people get sidetracked by epistemological disagreements.
“I’d follow the policy of first making it common knowledge that you’re reporting your inside views”
This is a good step, but one part of the epistemological disagreements I mention above is that most people consider inside views to be a much less coherent category, and much less separable from other views, than most rationalists do. So I expect that more such steps are typically necessary.
“they’re wanting common knowledge that they won’t already share those views”
I think this is plausibly true for laypeople/non-ML-researchers, but for ML researchers it’s much more jarring when someone makes very confident claims about the researchers’ own field of expertise, claims that the researchers themselves strongly disagree with.