Well, to flesh that out: we could have an ASI that seems value-aligned and controllable...until it isn’t.
I think that scenario falls under the “worlds where iterative approaches fail” bucket, at least if prior to that we had a bunch of examples of AGIs that seemed and were value aligned and controllable, and the misalignment only showed up in the superhuman domain.
There is a different failure mode, which is “we see a bunch of cases of deceptive alignment in sub-human-capability AIs causing minor to moderate disasters, and we keep scaling up despite those disasters”. But that’s not so much “iterative approaches cannot work” as “iterative approaches do not work if you don’t learn from your mistakes”.