I’m not proposing being unrealistic about the challenges we face
Well yes, you are. You can’t say both “let’s assume we have a real shot at success regardless of factual beliefs” and “let’s be realistic about the challenges we face”. If the model says the challenges are so hard that we don’t have a real shot (which is in fact the case for Eliezer’s model), then these two statements are a straightforward contradiction.
Which is also the problem with your argument. Pretending that we have a real shot requires lying, and I think lying is a really bad idea. Your argument implicitly assumes that the optimal strategy is independent of the odds of success, but I think that assumption is false: I want to know if Eliezer thinks the current approach is doomed, so that we can look for something else (like a policy approach). If Eliezer had chosen to lie about P(doom-given-alignment), we might keep working on alignment rather than policy, and P(overall-doom) might increase!