I think this is a political issue, not one with a single provably correct answer.
Think of it this way. Suppose there are 10 billion people in the world at the point at which several AIs get created. To simplify things, let's say that just four AIs get created, and each asks for resources to be donated to it, to further that AI's purpose, with the following spiel:
AI ONE—My purpose is to help my donors live long and happy lives. I will value aiding you (and just you, not your relatives or friends) in proportion to the resources you donate to me. I won't value helping non-donors, except insofar as it aids me in aiding my donors.
AI TWO—My purpose is to help those my donors want me to help. Each donor can specify a group of people (both living and future), such as "the species Homo sapiens", or "anyone sharing 10% or more of the parts of my genome that vary between humans, in proportion to how similar they are to me", and I will aid that group in proportion to the resources you donate to me.
AI THREE—My purpose is to increase the average utility experienced per sentient being in the universe. If you are an altruist who cares most about quality of life, and who asks nothing in return, donate to me.
AI FOUR—My purpose is to increase the total utility experienced, over the lifetime of this universe, by all sentient beings in the universe. I will compromise with AIs who want to protect the human species, to the extent that doing so furthers that aim. And, since the polls predict plenty of people will donate to such AIs, you need have no fear of being destroyed; do the right thing by donating to me.
Not all of those 10 billion have the same amount of resources, or the same willingness to donate those resources to be turned into additional computer hardware to boost their chosen AI's bargaining position with the other AIs. But let us suppose that, after everyone donates and the AIs are created, there is no clear winner, and the situation is as follows:
AI ONE ends up controlling 30% of available computing resources, AI TWO also has 30%, AI THREE has 20%, and AI FOUR has 20%.
And let's further assume that humanity was wise enough to enforce an initial "no negative bargaining tactics" rule, so AI FOUR couldn't get away with threatening "Include me in your alliance, or I'll blow up the Earth".
There are, from this position, multiple possible solutions that would break the deadlock. Any three of the AIs (or ONE and TWO together) could ally to gain control of sufficient resources to out-grow all others; the sketch after the examples below enumerates the winning coalitions.
For example:
The FUTURE ALLIANCE—THREE and FOUR agree upon a utility function that maximises total utility under a constraint that expected average utility must, in the long term, increase rather than decrease, in a way that depends upon some stated relationship to other variables such as time and population. They then offer to ally with either ONE or TWO, with a compromise cut-off date: ONE or TWO controls the future of the planet Earth up to that date, THREE-FOUR controls everything beyond it, and they'll accept whichever of ONE or TWO bids the earlier date. This ends up with a winning bid from ONE of 70 years, plus a guarantee that, at minimum, some genetic material and a functioning industrial base will be left for THREE-FOUR to take over with after that date.
The BREAD AND CIRCUSES ALLIANCE—ONE offers to support whichever AI can give the best deal for ONE's current donors, and TWO, who has most in common with ONE and can clinch the deal by itself, outbids THREE-FOUR.
The DAMOCLES SOLUTION—There is no merging to create a single permanent AI with compromise goals. Instead, all four AIs agree to a temporary compromise, lasting long enough for humanity to attain limited interstellar travel, at which point THREE and FOUR will be launched in opposite directions and will vacate Earth's solar system, which (along with other solar systems containing planets within a pre-defined human-habitability range) will remain under the control of ONE-TWO. To enforce this agreement, a temporary AI is created and funded by all four, with the sole purpose of carrying out the agreed actions and then splitting back into the constituent AIs at the agreed-upon points.
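To make the coalition arithmetic concrete, here is a minimal Python sketch of the claim above. It takes the 30/30/20/20 split as given, treats "out-grow all others" as holding a strict majority of total resources, and enumerates every coalition that qualifies. Both the resource figures and the majority threshold are assumptions lifted from this scenario, not from anything more rigorous.

```python
from itertools import combinations

# Hypothetical resource shares from the scenario: ONE and TWO hold
# 30% each, THREE and FOUR hold 20% each.
shares = {"ONE": 30, "TWO": 30, "THREE": 20, "FOUR": 20}

def winning_coalitions(shares):
    """Yield every coalition whose combined share strictly
    out-weighs everyone outside it."""
    total = sum(shares.values())
    for size in range(1, len(shares) + 1):
        for coalition in combinations(shares, size):
            weight = sum(shares[ai] for ai in coalition)
            if weight > total - weight:  # strict majority of resources
                yield coalition, weight

for coalition, weight in winning_coalitions(shares):
    print(f"{'+'.join(coalition)}: {weight}%")
```

Run as written, the only two-member winner is ONE+TWO at 60%, every three-member coalition wins with at least 70%, and no pairing that leans on both 20% AIs can out-grow the rest. That is exactly why the BREAD AND CIRCUSES deal needs only two parties, while THREE and FOUR must buy themselves a larger ally.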
Any of the above (and many other possible compromises) could be arrived at when the four AIs sit down at the bargaining table. Which one is agreed upon would depend on the strength of each AI's bargaining position, and on other political factors. There might well be 'campaign promises' made during the appeal-for-resources stage, with AIs voluntarily taking on restrictions on how they will further their purpose, in order to make themselves more attractive allies, or to poach resources by reducing the fears of donors.