A possibility the post touches on is getting a warning-shot regime by default: a sufficiently slow takeoff makes serious AI x-risk concerns mainstream and makes meaningful second chances at getting alignment right available. In particular, alignment techniques debugged on human-level AGIs might continue to scale as those AGIs eventually become more capable, unlike alignment techniques developed only for AIs less capable than humans.
This possibility seems at least conceivable, though most of the other points in the post sound to me like arguments for the plausibility of some stay of execution (eating away at the edges of AI x-risk). I still don't expect this regime, because I expect that (some) individual humans, given the infrastructural advantages of AIs, would already be capable of world domination: the ability to think (at least) dozens of times faster and without rest, to learn in parallel and then apply that learning across many instances running in parallel, and to convert wealth into a population of researchers. So I don't consider humans an example of an AGI that wouldn't immediately overturn the world order.