Controlling optimisation power requires controlling both access to resources and optimisation power per unit of resource (which we might call “intelligence”).
It is the “access to resources” part that is key here. You’re looking at two categories of AI: seed AIs, which are deliberately designed by humanity not to be self-improving (or even self-modifying) past a certain point, but which have high access to resources; and ‘free citizen’ AIs, which are fully self-modifying but may initially have restricted access to resources.
When you (Alex) talk about “the first AI”, what you’re talking about is the first ‘free citizen’ AI. But there will already be seed AIs out there which (initially) will have greater optimisation power, and which will have the ability to choke off the new ‘free citizen’ AI’s access to resources if it doesn’t play nicely.