There are basically two kinds of plans: 1) Stay in control of AI as it becomes increasingly super-human and increasingly powerful, 2) Stop AI from getting too powerful in the first place. At the moment, there are no good plans of type (1), for staying in control.
But these two options cover only a subclass of the space of possible plans. What (1) and (2) have in common is that both assume humans stay in control indefinitely.
There are, however, all kinds of plans which don't assume that. For example, Eliezer's original plan of aligning AI to the Coherent Extrapolated Volition of humanity does not assume humans remain in control indefinitely. Many other plans likewise drop that assumption and instead try to ensure human flourishing through different mechanisms.