I think that if AI “does what the developer wants and approves, resolving conflicts between their wants in a way which is not egregiously bad” then I am much less worried about GD than about power-seeking AI.
If the AI is that well-aligned, then presumably power-seeking AI is also not much of a problem, and you shouldn’t be that concerned about either?
Maybe you mean “if I assume that I don’t need to be worried about GD outside of the cases where AI “does what the developer wants and approves, resolving conflicts between their wants in a way which is not egregiously bad”, then I am overall much less worried about GD than about power-seeking AI”?
Thanks, yep, that's what I meant!