And as far as we can tell, there don’t appear to be any sharp discontinuities here, such that above a certain skill level it’s beneficial to take things by force rather than through negotiation and trade. It’s plausible that very smart power-seeking AIs would just become extremely rich, rather than trying to kill everyone.
I think this would depend quite a bit on the agent’s utility function. Humans tend more toward satisficing than optimizing, especially as they grow older—someone who has established a nice business empire and feels like they’re getting all their wealth-related needs met likely doesn’t want to rock the boat and risk losing everything for what they perceive as limited gain.
As a result, even if discontinuities do exist (and it seems pretty clear to me that being able to permanently rid yourself of all your competitors should be a discontinuity), the kinds of humans who could potentially make use of them are unlikely to.
In contrast, an agent that is a true optimizer with an unbounded utility function might be willing to gamble all of its gains on just a 0.1% chance of success if the reward were big enough.
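To make that comparison concrete, here is a minimal expected-value sketch (the specific payoff numbers are my own illustrative assumptions, not anything from the discussion above): for a risk-neutral optimizer, any all-or-nothing bet whose payoff exceeds 1000x its current holdings beats simply keeping those holdings, even at a 0.1% success probability.

```python
# Minimal expected-value sketch; the payoff numbers are illustrative assumptions.
current_gains = 1.0      # normalize the agent's current holdings to 1
p_success = 0.001        # the 0.1% chance of success mentioned above
reward = 2000.0          # any payoff above 1 / p_success = 1000x current holdings

value_of_keeping = current_gains
value_of_gambling = p_success * reward + (1 - p_success) * 0.0  # all-or-nothing bet

print(value_of_gambling > value_of_keeping)  # True: the risk-neutral optimizer gambles
```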
Risk-neutral agents also have a tendency to go bankrupt quickly, as they keep taking the equivalent of double-or-nothing gambles with 50% + epsilon probability of success until eventually landing on “nothing”. This makes such agents less important in the median world, since their chance of becoming extremely powerful is very small.
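A rough Monte Carlo sketch of that dynamic (all parameter values here are my own assumptions, chosen only for illustration): an agent that keeps taking double-or-nothing bets at 50% + epsilon only avoids ruin by winning every single one, so its survival probability shrinks geometrically even though each survivor's winnings explode.

```python
import random

def survives(rounds: int, p_win: float) -> bool:
    """True iff the agent wins every one of `rounds` consecutive double-or-nothing bets."""
    return all(random.random() < p_win for _ in range(rounds))

p_win, rounds, trials = 0.51, 10, 100_000   # 50% + epsilon, with epsilon = 1 percentage point
survivors = sum(survives(rounds, p_win) for _ in range(trials))

# Analytically the survival probability is p_win ** rounds (about 0.12% here),
# while each survivor multiplies its stake by 2 ** rounds: huge gains, almost never realized.
print(f"{survivors} of {trials} agents avoided ruin (expected ~{p_win ** rounds:.2%})")
```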