Maybe the problem is with the idea that something like that should have owners to begin with?
I would argue the problem is that it gets created at all. Suppose a new group called SocialAI builds an AGI that it intends to make entirely autonomous and independent once bootstrapped. The AGI then FOOMs and turns out to be aligned. That is a vastly better future than many of the alternatives, but does that make it ethically okay to create an intelligence, imbue it with your values, your choices, and your ideas, and then send it off to rule the world in a way that makes those values, choices, and ideas live forever, more important than anything else?
It’s like going back in time to write the Bible, if the Bible were also able to actively force people to abide by its tenets.