We might solve alignment in Yudkowsky’s sense of “not causing human extinction” or in Drexler’s sense of “will answer your questions and then shut down”.
It may be possible to put a slightly (but not significantly) superhuman AI in a box and get useful work done by it despite it not being fully aligned. It may be possible for an AI to be superhuman in some domains and not others, such that it can’t attempt a takeover or even think of doing so.
I agree that what you are saying is more relevant if we assume we just deploy the ASI, it takes over the world, and it then does more stuff.
I feel like I already addressed this, not in my previous comment but the one before that. We might put a semi-corrigible weak AI in a box and try to extract work from it in the near future, but that’s clearly not the end goal.
I guess you now have a better understanding of why people are still interested in solving morality and politics and meaning, without delegating these problems to an ASI.
I agree. What I’m puzzled by is people who assume we’ll solve alignment, but then still think there are a bunch of problems left.
> We might solve alignment in Yudkowsky’s sense of “not causing human extinction” or in Drexler’s sense of “will answer your questions and then shut down”.
>
> It may be possible to put a slightly (but not significantly) superhuman AI in a box and get useful work done by it despite it not being fully aligned. It may be possible for an AI to be superhuman in some domains and not others, such that it can’t attempt a takeover or even think of doing so.
>
> I agree that what you are saying is more relevant if we assume we just deploy the ASI, it takes over the world, and it then does more stuff.
>
> I feel like I already addressed this, not in my previous comment but the one before that. We might put a semi-corrigible weak AI in a box and try to extract work from it in the near future, but that’s clearly not the end goal.

Okay cool.

> I guess you now have a better understanding of why people are still interested in solving morality and politics and meaning, without delegating these problems to an ASI.

No, I don’t think so.