P(Doom) is best understood as a difference in probabilities: the probability of x-risk with AI minus the probability of x-risk without AI. For example, if you think the probability of doom is 90% without AI but only 5% with AI, then your P(Doom) is −85% (negative 85%).
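One way to write this proposal in symbols (borrowing the conditional notation P(Doom|AI) and P(Doom|~AI) that appears later in the thread; the thread itself never writes the formula out):

$$
P(\mathrm{Doom}) \;:=\; P(\mathrm{Doom}\mid\mathrm{AI}) \;-\; P(\mathrm{Doom}\mid\lnot\mathrm{AI}) \;\in\; [-1,\,1]
$$

In the example above, that works out to $0.05 - 0.90 = -0.85$, i.e. −85%; the sign is negative exactly when AI lowers the probability of doom.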
Words should have meanings, and when a different meaning is much more useful and appropriate, a different word must be used. P(Doom) is literally naming the probability of something, even if it’s quite unclear what. So it is not best understood as anything other than the probability of something.
Now, some difference in probabilities could be much more useful than the probability of “Doom” for talking about the impact of AI. But that more useful difference in probabilities is nonetheless not the probability of anything (especially when it is negative), and is therefore not P(Doom), regardless of what “Doom” is and regardless of whether discussing the probability of Doom is useful for any purpose. Perhaps that difference in probabilities is so valuable a concept that it deserves its own short name, but that name still shouldn’t be “P(Doom)”.
Yes, this is the other problem with P(Doom).
Nobody knows what event in the state space “Doom” actually refers to. It’s more of a rhetorical device anyway, so we may as well make it a fair rhetorical device by allowing it to range between −1 and 1.
“P(Doom) is literally naming the probability of something”
Literally naming the probability of something is not the optimal thing for P(Doom) to mean. It would be better for it to be a number between −1 and 1 representing the overall badness of AI, because that is what people actually want and, in practice, how they use it.
So I have developed the meme of negative P(Doom): if you think the probability of doom is 90% without AI but 5% with AI, then your P(Doom) is −85% (negative 85%).
If you limit P(Doom) to being positive, you make it literally impossible to express the view that AI is actually good within the framework people are trying to popularize by asking everyone for their P(Doom) without also asking for their P(Doom|~AI).
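To make that impossibility explicit (a sketch of the two ranges, not something the thread states as a formula): the conventional quantity and the signed quantity live on different intervals,

$$
P(\mathrm{Doom}\mid\mathrm{AI}) \in [0,\,1], \qquad P(\mathrm{Doom}\mid\mathrm{AI}) - P(\mathrm{Doom}\mid\lnot\mathrm{AI}) \in [-1,\,1],
$$

and only the signed difference can take negative values, so only it can encode “AI makes doom less likely.”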
This just seems confusing to the average person. P(Doom|AI) and P(Doom|~AI) are both greater than zero in this case, and they seem easier to discuss.
The problem is that when normal people hear “P(Doom)”, they implicitly assume that P(Doom|~AI) is zero, and that assumption is very hard to undo.
So allowing P(Doom) to be negative conveys more truth, because it more closely tracks what people actually care about.