Another ambiguity is the distinction between misaligned and aligned AIs: the outcome I consider most likely involves AIs aligned to humans in roughly the sense that humans are aligned to each other (different in detail, weird, and in many ways quantitatively unusual). So the kind of permanent disempowerment I think is most likely involves AIs that could be said to be “aligned”, if only “weakly”. Duvenaud also frames gradual disempowerment as (in part) what still plausibly happens if we manage to solve “alignment” in some sense of the word.
So the real test has to be whether individual humans end up with at least stars (or corresponding compute, for much longer than trillions of years). Any considerations about the process of getting there (deciding to appoint successors; trade agreements; no “takeovers” or property rights violations; humans not starting out with legal ownership of stars in the first place) are too gameable by the likely overwhelmingly more capable AI economy and culture to be made part of the definition of ending up permanently disempowered vs. not. In this way, you basically didn’t clarify the issue.
I currently don’t expect human disempowerment in favor of AIs (that aren’t appointed successors) conditional on no misaligned AI takeover, but I agree this is possible; it doesn’t carry enough probability to substantially alter my communication.