Responding to the disagree reaction: I agree that the non-reaction isn't well explained by selfishness plus a focus on near-term over long-run utility, since someone with those values would probably ask to shut the AI down, or potentially even to speed it up. But I do think that hypothesis predicts the AI arms race dynamic fairly well. If you believe in anything close to the level of AI power that LW expects, you no longer need an astronomically low probability of extinction to justify developing AI all the way to ASI, and it becomes even more important that your side win. Selfishness, meanwhile, means that generally increasing AI risk doesn't actually matter to you until it becomes likely that you personally die.
Indeed, the extinction probability you're willing to accept can easily exceed 50%, depending both on your level of selfishness and on how focused you are on the long term.