I found the linked post very interesting, and seemingly useful. Thanks for cross-posting it! And shame that the author didn’t get the time to pursue the project further.
One quibble:
For long-termists, I see three plausible attitudes:
1. They prioritise AI because of arguments that rely on a discontinuity, and they think a discontinuous scenario is probable. The likelihood of a discontinuity is a genuine crux of their decision to prioritise AI.
2. They prioritise AI for reasons that do not rely on a discontinuity.
3. They prioritise AI because of the possibility of a discontinuity, but its likelihood is not a genuine crux, because they see no plausible other ways of affecting the long-term future.
The author does provide hedges, such as that “these are three stylised attitudes. It’s likely that many people have an intermediate view that attaches some credence to each of these stories.” But one thing that struck me as notably missing was the following variant of the first attitude:
They prioritise AI because of arguments that rely on a discontinuity, and they think a discontinuous scenario has a sufficiently high probability to be worth serious attention. The likelihood of a discontinuity is a genuine crux of their decision to prioritise AI.
Indeed, my impression is that a large portion of people motivated by the discontinuity-based arguments actually see a discontinuity as less than 50% likely, perhaps even very unlikely, but not extremely unlikely. And they thus see it as a risk worth preparing for. (I don’t have enough knowledge of the community to say how large that “large portion” is.)
And this isn’t really the same as the third attitude, because these people might shift their priorities to something else if they came to see a discontinuity as even less likely. AI might not be the only lever they see as potentially worth pulling to affect the long-term future, and they might not be in properly Pascalian territory, just expected-value territory.
That said, this is sort of like a blend between the first and third attitudes shown. And perhaps by “probable” the author actually meant something like “plausible”, rather than “more likely than not”. But this point still seemed to me worth mentioning, particularly as I think it’s related to the general pattern of people outside of the existential risk community assuming that those within it see x-risks as likely, whereas most seem to see them as unlikely but still a really, really big deal.