I agree. For what it’s worth, in most of these cases, it seems to me that for catastrophe to occur, some AI eventually has to take over on purpose. But I agree that a lot of the action (and many of the points for intervention) might have occurred earlier with AIs that didn’t intentionally take over and instead were not as helpful as they could be, either because of misalignment or other problems.
Humans might go extinct very soon after AI because AI accelerates technological progress and there was an extinction-causing technology ahead of us on the tech tree. That is, we would have gone extinct from the same technology without AI in, say, 2500, but 2500's level of technology is reached a decade after AI, so it looks like the AI was the cause.
(This point is from Paul Christiano iirc.)
Note that AI needn’t take over (voluntarily or forcefully) to accelerate technological progress like this.