If an entity does stupid things, it’s disfavored against competitors that don’t do those stupid things, all else being equal. So it must either adapt by ceasing the stupid behavior or lose.
Machine gods of unimaginable power could be among us in short order, with no evolutionary fairies quick enough to punish their destructive stupidity.
Any assumption of the form “super-intelligent AI will take actions that are super-stupid” is dubious.
Clearly. The point is that the actions it takes might seem stupidly destructive only according to humanity’s feeble understanding and parochial values. Something involving extermination of all humans, say. My impression is that the “accel”-endorsed attitude to this is to be a good sport and graciously accept the verdict of natural selection.
That just falls back on the common doomer assumption that “evil is optimal” (as Sutton put it). Sure, if evil is optimal and you have an entity that behaves optimally, it’ll act in evil ways.
But there are good reasons to think that evil is not optimal in current conditions. At least as long as a Dyson sphere has not yet been constructed, there are massive gains available from positive-sum cooperation directed towards technological progress. In these conditions, negative-sum conflict is a stupid waste.
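To make the positive-sum point concrete, here is a minimal toy sketch (the payoffs function and every number in it are invented for illustration, not taken from anywhere in this exchange): while joint technological progress can still grow the pie, mutual cooperation beats even successful aggression, and mutual conflict is strictly worse for everyone.

```python
# Toy payoff sketch (illustrative numbers only, not from the dialogue):
# two agents split a resource "pie" and choose whether to cooperate on
# technological progress or fight over what already exists. While the
# frontier is far away (no Dyson sphere yet), cooperation grows the pie;
# conflict burns part of it.

def payoffs(a_fights: bool, b_fights: bool) -> tuple[float, float]:
    pie = 100.0                      # current resources, arbitrary units
    if not a_fights and not b_fights:
        pie *= 2.0                   # positive-sum: joint progress doubles the pie
        return pie / 2, pie / 2      # (100, 100)
    if a_fights and b_fights:
        pie *= 0.5                   # negative-sum: mutual conflict destroys value
        return pie / 2, pie / 2      # (25, 25)
    # one-sided aggression: the aggressor grabs most of a pie that no longer grows
    winner, loser = 0.7 * pie, 0.1 * pie
    return (winner, loser) if a_fights else (loser, winner)

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(f"A fights={a}, B fights={b} -> {payoffs(a, b)}")
```

Under these assumed numbers, mutual cooperation (100 each) beats even successful one-sided aggression (70), so “evil” is not a best response as long as the pie can still grow.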
This view, that evil is not optimal, ties back into the continuation framing. After all, you can make a philosophical argument either way. But in the continuation framing, we can ask ourselves whether evil is empirically optimal for humans, which will suggest whether evil is optimal for non-biological descendants (since they continue humanity). And in fact we see evil losing a lot, and not coincidentally—WW2 went the way it did in part because the losing side was evil.
“After all, you can make a philosophical argument either way.”
Indeed, and what baffles me is that many are extremely sure one way or the other, even though philosophy doesn’t exactly have a track record to inspire such confidence. Of course, this also means that nobody is going to stop building stuff because of philosophical arguments, so we’ll have empirical evidence soon enough...