If the humans understood their world, and were still load-bearing participants in its ebbs of power, then perhaps the bending would be greater.
Nice scenario!
I’m confused about the ending. In particular:
I don’t get why it’s important for humans to understand the world, if they can align AIs to be fully helpful to them. Is it that:
When you refer to “the technology to control the AIs’ goals [which] arrived in time”, you’re only referring to the ability to give simple / easily measurable goals, and not more complex ones? (Such as “help me understand the pros and cons of different ways to ask ‘what would I prefer if I understood the situation better?’, and then do that” or even “please optimize for getting me lots of option-value, that I can then exercise once I understand what I want”.)
...or that humans for some reason choose to abstain from (or are prevented from) using AIs with those types of goals?
...or that this isn’t actually about the limitations of humans, but instead a fact about the complexity of the world relative to the smartest agents in it? I.e., even if you replaced all the humans with the most superintelligent AIs that exist at the time — those AIs would still be stuck in this multipolar dilemma, not understand the world well enough to escape it, and have just as little bending power as humans.