Yeah, I also think humans-as-housecats is a pretty good scenario. But I'm not sure it's an optimum (even a local one). Consider this: the question "how can humans have true agency and other things they value, when ASIs are around" is itself a question that intelligence can answer. As one extreme point, consider an ASI that precommits to not interfering in the affairs of humans, except to stop other ASIs. That's clearly not optimal on other dimensions; okay, so turn the dial until you get a pivotal act that's optimal on the mix of dimensions we care about.