I think a non-trivial part of the issue is that the people who originally worked on x-risk have unusually low discount rates. Because x-risk work wasn’t popular until ChatGPT, there was never pressure to actually explicate the definition; once the field exploded, people with higher discount rates came in and started using p(doom) very differently than the original group.
I don’t think all of the disagreement between LW/EA and AI companies is a value disagreement, but I suspect a non-trivial amount of it is fundamentally due to AI companies (and the general public) having way higher discount rates than the median LWer/EA.
This especially applies to x-risk work that aims to reduce x-risk without driving it to near zero, that is, work that leaves a non-trivial probability of x-risk in place.
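To make the discount-rate point concrete, here is a minimal sketch (my own illustration, with made-up numbers and a hypothetical 100-year horizon) of how the present value of a far-future benefit collapses as the annual discount rate rises:

```python
# Illustrative only: compare how much a fixed far-future benefit is "worth today"
# under different annual discount rates. All numbers are hypothetical.

def discounted_value(value: float, years: float, annual_rate: float) -> float:
    """Present value of `value` received `years` from now, discounted at `annual_rate`."""
    return value / ((1 + annual_rate) ** years)

future_benefit = 1.0   # normalized benefit of avoiding a long-term catastrophe
horizon_years = 100    # hypothetical horizon for long-term x-risk payoffs

for rate in (0.00, 0.01, 0.05, 0.10):
    pv = discounted_value(future_benefit, horizon_years, rate)
    print(f"discount rate {rate:>4.0%}: present value = {pv:.4f}")

# 0%  -> 1.0000 (the future counts fully)
# 1%  -> ~0.37
# 5%  -> ~0.0076
# 10% -> ~0.0001
# At a high enough discount rate, long-horizon x-risk reduction looks nearly
# worthless, which is the source of the disagreement described above.
```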
The post discusses terminology and the ways it leaves unclear what people mean when they claim some P(extinction), P(doom), or P(x-risk), and what they might implicitly believe behind the motte-veil of claims that only clarify the less relevant thing (according to their beliefs or hopes). So I don’t understand the relevance of what you are saying.
I’m basically explaining why this divergence of definitions happened in the first place, and why p(doom) is unclear.
The relevance is that I’m explaining why the phenomenon described in the post happened.