I’m afraid that I’m not following the point of the first line of argument. Yes, people sometimes do pointless destructive things for stupid reasons. Such behavior is penalized in the long term by selective pressures. More-intelligent descendants would be less likely to engage in such behavior, precisely because they are smarter.
Sure, but obviously this isn’t an all-or-nothing proposition between purely biological and purely artificial descendants, and it’s clear to me that most people aren’t indifferent about where on that spectrum those descendants will end up. Do you disagree with that, or think that only “accels” are indifferent (and in some metaphysical sense “correct”)?
I doubt that most people think about long-term descendants at all, honestly.
> Such behavior is penalized in the long term by selective pressures.

Which ones? Recursive self-improvement is no longer something that only weird contrarians on obscure blogs talk about; it’s the explicit theory of change of leading multibillion-dollar AI corporations. They might all be deluded, of course, but if they happen to be even slightly correct, machine gods of unimaginable power could be among us in short order, with no evolutionary fairies quick enough to punish their destructive stupidity (even assuming that it actually would be long-term maladaptive, which is far from obvious).
> I doubt that most people think about long-term descendants at all, honestly.
You only get to long-term descendants through short-term ones.
If an entity does stupid things, it’s disfavored against competitors that don’t do those stupid things, all else being equal. So it needs to adapt by ceasing the stupid behavior, or else it loses.
> machine gods of unimaginable power could be among us in short order, with no evolutionary fairies quick enough to punish their destructive stupidity
Any assumption of the form “super-intelligent AI will take actions that are super-stupid” is dubious.
> Any assumption of the form “super-intelligent AI will take actions that are super-stupid” is dubious.
Clearly. The point is that the actions it takes might seem stupidly destructive only according to humanity’s feeble understanding and parochial values. Something involving extermination of all humans, say. My impression is that the “accel”-endorsed attitude to this is to be a good sport and graciously accept the verdict of natural selection.
That just falls back on the common doomer assumption that “evil is optimal” (as Sutton put it). Sure, if evil is optimal and you have an entity that behaves optimally, it’ll act in evil ways.
But there are good reasons to think that evil is not optimal in current conditions. At least as long as a Dyson sphere has not yet been constructed, there are massive gains available from positive-sum cooperation directed towards technological progress. In these conditions, negative-sum conflict is a stupid waste.
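To make that intuition concrete, here is a minimal toy sketch of the positive-sum/negative-sum distinction. Everything in it is made up for illustration (the `total_output` function, the surplus and destruction rates, the baseline); it’s not a model of anything, just the arithmetic behind the claim.

```python
# Toy comparison: two agents either cooperate on growth or fight over a fixed pie.
# All numbers are illustrative assumptions, nothing more.

def total_output(mode: str, agents: int = 2, baseline: float = 100.0) -> float:
    """Total resources across all agents after one round of interaction."""
    if mode == "cooperate":
        surplus = 0.5       # assumed gains from pooling effort on technological progress
        return agents * baseline * (1 + surplus)
    if mode == "conflict":
        destruction = 0.3   # assumed share of resources burned in the fight
        return agents * baseline * (1 - destruction)
    raise ValueError(f"unknown mode: {mode}")

print(total_output("cooperate"))  # 300.0 -- the pie grows
print(total_output("conflict"))   # 140.0 -- the pie shrinks
```

The only point is that, far from any resource ceiling, fighting over the existing pie forgoes the much larger pie that cooperation could grow.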
This view, that evil is not optimal, ties back into the continuation framing. After all, you can make a philosophical argument either way. But in the continuation framing, we can ask ourselves whether evil is empirically optimal for humans, which will suggest whether evil is optimal for non-biological descendants (since they continue humanity). And in fact we see evil losing a lot, and not coincidentally—WW2 went the way it did in part because the losing side was evil.
> After all, you can make a philosophical argument either way.
Indeed, and what baffles me is that many are extremely sure one way or the other, even though philosophy doesn’t exactly have a track record to inspire such confidence. Of course, this also means that nobody is going to stop building stuff because of philosophical arguments, so we’ll have empirical evidence soon enough...