My understanding is that a crucial aspect of Eliezer’s worldview is that we’d be fucked even if we had a 10-year pause where we had access to AGI that we could use to work on developing and aligning superintelligence. I disagree. But this means that he thinks that some truly crazy stuff has to happen in order for ASI to be aligned, which naturally leads to lots of disagreements. (I am curious whether you agree with him on this point.)
I don’t feel competent to have that strong an opinion on this, but I’m like 60% on “you need to do some major ‘solve difficult technical philosophy’ that you can only partially outsource to AI, that still requires significant serial time.”
And, while it’s hard for someone with my (lack of) background to have a strong opinion, it feels intuitively crazy to me to put that at <15% likely, which feels sufficient to motivate “indefinite pause is basically necessary, or, humanity has clearly fucked up if we don’t do it, even if it turned out to be on the easier side.”
indefinite pause is basically necessary, or, humanity has clearly fucked up if we don’t do it
I think it’s really important to not equivocate between “necessary” and “humanity has clearly fucked up if we don’t do it.”
“Necessary” means “we need this in order to succeed; there’s no chance of success without this”. Because humanity is going to massively underestimate the risk of AI takeover, there is going to be lots of stuff that doesn’t happen that would have passed cost-benefit analysis for humanity.
If you think it’s 15% likely that we need really large amounts of serial time to prevent AI takeover, then it’s very easy to imagine situations where the best strategy on the margin is to work on the other 85% of worlds. I have no idea why you’re describing this as “basically necessary”.