Thanks for the post, I agree with a lot of it. A few quick comments on your dialogue with imaginary me/Rohin, which highlight the main points of disagreement:
> And even if not that-exact-thing, then there are all sorts of ways that some other thing could come out of left field and just render the problem easy. So I don’t see why you’re worried.
More accurate to say “I don’t see why you’re so confident”. I think I do see why you’re worried, and I’m worried too for the same reasons. Indeed, I recently wrote a similar post listing research directions and the reasons I don’t expect them to solve the problem if it turns out to be hard. So you should probably put me down as placing a reasonable amount of credence (20%?) in your view, while also considering many other possibilities plausible.
> Nate: I have considered an array of clever ideas that look to me like they would predictably-to-me fail to solve the problems, and I admit that my guess is that you’re putting most of your hope on small clever ideas that I can already see would fail.
The ideas that come out of left field are generally the ones you haven’t considered yet; that’s what it means for them to come out of left field. I expect this is frustrating to hear, since it makes my position seem unfalsifiable, but I don’t think it makes much pragmatic difference: I’m not saying we should relax because ideas will come out of left field. I think we should do a better job of looking for them, which involves more people aiming directly at worlds where the problem is hard, and posts like this one help with that. I just also think there’s probably more leeway than you expect, because I’m pretty uncertain how far past human level a sharp left turn would happen by default.