If we’re being honest, the compensation would probably have to be capped. If the AIs gave up an 80% chance at world takeover for our benefit, it would probably not be within an AI company’s power to give away 80% of all future resources in compensation (or anything close to that).
It seems pretty hard to predict whether an AI company would have such power in conditions that unusual. After all, there would be a pretty powerful AI trying to enforce the agreement.
I don’t see the benefit to setting a cap. Let’s just inform the AI as best we can about the uncertainties involved, and promise to do the best we can to uphold agreements.
Yeah, that’s fair. Seems fine to give a nuanced description of what you think you’ll be able to do, and what you think you’ll be willing to do. No need for a hard cap.