This is an interesting topic, but no, my central expectation (and what I’m arguing for here) is that 100% of the ASIs will be ruthless consequentialists.
A couple of small points on that side-track, though: (1) Ruthless consequentialist AIs can still copy themselves, and cooperate with those copies, if their goals are non-indexical (which they might or might not be; I have no opinion off the top of my head). (2) Your comment seems to assume that AIs can read each other’s minds. Even if they can’t, a smart ruthless consequentialist AI would still act in a cooperative, prosocial way in any environment where doing so was to its advantage. That said, I agree that mind-reading is an important dynamic that might change the equilibrium in a multipolar AI world.
“If their goals are non-indexical” seems like quite a big “if”.
Yeah, my modal assumption is that AIs will be able to make fairly strong inferences about the mechanics of other AIs’ decision processes by observing their behavior (including via side channels). “Mind reading” might be a slightly strong term for this, but it’s not far off.
Likely out of scope for this comment section, though. At some point I should probably write up my modal expectation of what the next couple of decades look like in more detail.
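To make the equilibrium point concrete, here is a toy sketch (my own illustration, not from the thread) of a one-shot Prisoner’s Dilemma in which agents can probe each other’s decision procedure, in the spirit of FairBot-style “program equilibrium” reasoning. All names are hypothetical; this assumes perfect, honest access to the other agent’s policy, which is a much stronger condition than the behavioral inference discussed above.

```python
# One-shot Prisoner's Dilemma with "mind reading": each agent receives
# the other's decision procedure as a callable and may probe it.

C, D = "C", "D"

def defect_bot(opponent):
    # Ruthless and unconditional: defects no matter what.
    return D

def fair_bot(opponent):
    # "Mind reading": probe whether the opponent cooperates when facing
    # an unconditional cooperator, and mirror that answer.
    return C if opponent(lambda _: C) == C else D

def play(a, b):
    # Each agent chooses with access to the other's decision procedure.
    return a(b), b(a)

print(play(fair_bot, fair_bot))    # → ('C', 'C')
print(play(fair_bot, defect_bot))  # → ('D', 'D')
```

With transparency, two conditional cooperators reach mutual cooperation even though both are purely self-interested, while a defector still gets defected against; without transparency, the one-shot best response is to defect regardless. That is the sense in which mind-reading can change the equilibrium.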
Thanks.