I made a tweet and someone told me it's exactly the same idea as in your comment. Do you think so?
my tweet - “One assumption in the Yudkowskian AI risk model is that misalignment and a capability jump happen simultaneously. If misalignment happens without a capability jump, we get at worst an AI virus, slow and lagging. If a capability jump happens without misalignment, the AI will just inform humans about it. Obviously, a capability jump can trigger misalignment, though that goes against the orthogonality thesis. But a more advanced AI can have a bigger world picture and can predict its own turn-off or other bad outcomes.”