general category of “Competing superintelligent AI systems could do bad things, even if they are aligned”
This general category could potentially be solved by AIs being very good at cooperating with other AIs. For example, maybe AIs can merge together in a secure, verifiable way. (How to ensure this seems to be another unduly neglected topic.) However, the terms of any merger will likely reflect the pre-merger balance of power, which in this particular competitive arena seems (by default) to disfavor people who have an appropriate amount of value complexity and moral uncertainty (as I suggested in the OP).
Good question. :)