I don’t think anybody is hung up on “the AI can one-shot predict a successful plan that doesn’t require any experimentation or course correction” as a prerequisite for doom, or even treats it as a substantial chunk of their doom %.
I would say that anyone stating...
If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.
(EY, of course)
...is assuming exactly that. Particularly given the “shortly”.
No, Eliezer’s explicitly clarified that isn’t a required component of his model.
Has he? A lot of his arguments hinge on us dying shortly after it appears.