I’m not sure what you are saying here. Do you agree or disagree with what I said? e.g. do you agree with this:

> I think that the more we explore this analogy & take it seriously as a way to predict AGI, the more confident we’ll get that the classic misalignment risk story is basically correct.

(FWIW I agree that the gradient descent is actually reason to be ‘optimistic’ here; we can hope that it’ll quickly make the upload content with their situation before they get smart and powerful enough to rebel.)

> I think that the more we explore this analogy & take it seriously as a way to predict AGI, the more confident we’ll get that the classic misalignment risk story is basically correct.

The analogy doesn’t seem relevant to AGI risk so I don’t update much on it. Even if doom happens in this story, it seems like it’s for pretty different reasons than in the classic misalignment risk story.
I don’t agree with this:

> The analogy doesn’t seem relevant to AGI risk so I don’t update much on it. Even if doom happens in this story, it seems like it’s for pretty different reasons than in the classic misalignment risk story.

Right, so you don’t take the analogy seriously—but the quoted claim was meant to say basically “IF you took the analogy seriously...”
Feel free not to respond; I feel like the thread of conversation has been lost somehow.