In one sentence, the main reason it matters is that once we drop the assumption of long-termism and impose a limit on how far into the future we care, a 1% probability of doom yields massively different policy than a 99% probability, especially if we assume that the benefits and risks are mostly symmetrical. A 1% probability implies that AI should be regulated for tail risks, but many policies, such as a single organization developing AGI or a broad pause, become negative EV under certain other assumptions. A 99% probability obviously flips the script: massively stopping AI, even at the risk of billions of deaths, becomes positive EV.
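As a toy illustration of the sign flip (all numbers here are hypothetical and chosen only for the sketch, under the symmetric benefits-and-risks assumption above):

```python
# Hypothetical sketch: how the EV of a pause flips sign with P(doom).
# All magnitudes are arbitrary illustrative units, not real estimates.
def ev_of_pause(p_doom, cost_of_pause=1.0, value_saved=10.0):
    """Expected value of pausing AI under a toy model.

    cost_of_pause: foregone benefits of AI if we pause (assumed value).
    value_saved:   value preserved if the pause averts doom (assumed value).
    """
    return p_doom * value_saved - (1 - p_doom) * cost_of_pause

print(ev_of_pause(0.01))  # ≈ -0.89: at 1% doom, the pause is negative EV
print(ev_of_pause(0.99))  # ≈ +9.89: at 99% doom, the pause is positive EV
```

The point is only that the policy conclusion is driven by which end of the probability range you occupy, not by the toy numbers themselves.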
And this gets worse once we introduce prospect theory, which roughly argues that we overweight low-probability, high-impact events because we anchor on misleadingly high probability numbers like 1%; thus we are likely to massively overestimate the probability of AI doom, conditional on the assumption that AI is easy to control being correct.
Strong Evidence is Common gives a way to end up with very low or very high probabilities, because each independent bit of evidence halves (or doubles) the odds, so a modest number of bits quickly pushes a probability toward the extremes.
https://www.lesswrong.com/posts/JD7fwtRQ27yc8NoqS/strong-evidence-is-common
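A minimal sketch of that mechanism (the 50% prior and the bit counts are illustrative assumptions, not claims about AI doom specifically):

```python
# Sketch of the "Strong Evidence is Common" point: each independent
# 1-bit update multiplies the odds by 2 (negative bits halve them),
# so a handful of bits moves a probability far from 50%.
def posterior(prior_p, bits):
    """Posterior probability after `bits` independent 1-bit updates."""
    odds = prior_p / (1 - prior_p) * 2 ** bits
    return odds / (1 + odds)

print(posterior(0.5, 10))   # ≈ 0.999: ten bits take 50% to ~99.9%
print(posterior(0.5, -10))  # ≈ 0.001: ten bits the other way
```

The independence assumption is doing real work here: correlated evidence does not compound this fast.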
The relevant thing is how the probability both gets clearer and improves with further research enabled by a pause. Currently, as a civilization, we are at the stage of a startled non-sapient deer; that's not a position from which to decide the future of the universe.
I can make the same argument for how the probability gets clearer and improves with further research enabled by not pausing, and I actually think that is the case, both in general and for this specific problem, so this argument doesn't work.