Earlier this week I attended a presentation on AI use in only-somewhat-techie corporate contexts, and found it fascinating how LW terminology has gone mainstream but the meanings haven’t: the presenter talked a lot about ‘existential risk’ (which I slowly inferred meant ‘AI-using competitors might put us out of business’), and ‘alignment’ (which he helpfully defined as ‘getting various AI modalities—coding, search, image gen etc—to work together harmoniously’).
“Existential risk” here doesn’t necessarily come from LessWrong. Using the phrase “existential risk” to refer to your company going out of business makes perfect sense, as it’s literally a risk to your company’s existence. Alignment is a trickier one, but even there the phrasing makes enough sense that it could plausibly not be LessWrong-inspired.
Anecdotally, I agree with the OP – I basically never heard companies use those phrases from ~2008–2023, and then around 2024 “alignment” and “existential risk” became a lot more commonly used.
I also think this is a fairly common pattern: someone invents jargon with a very specific meaning (e.g. “emotional labor”), the phrase gets used in a wider context, and people interpret it based on the most direct reading of the literal words involved, which is sometimes pretty different from the original meaning.
Interesting. “Existential threat” could make sense because AI is clearly a threat to the existence of SaaS companies and whatnot. Alignment is trickier to square, and LW influence could definitely be the best explanation.
I just looked at Google Trends, and it appears the term “existential threat” was very rare up until about 2009 and has steadily increased since, which does track well with the OP’s theory.
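If anyone wants to reproduce this programmatically rather than via the web UI, here is a rough, untested sketch using the unofficial pytrends library (which wraps Google Trends’ private endpoints, so the exact call signatures may drift over time):

```python
# pip install pytrends
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
# timeframe="all" requests the full history back to 2004,
# the window the ~2009 inflection observation depends on.
pytrends.build_payload(["existential threat"], timeframe="all")
df = pytrends.interest_over_time()  # pandas DataFrame, one row per month

print(df["existential threat"].head())    # early values (should be near zero pre-2009)
print(df["existential threat"].idxmax())  # month of peak relative interest
```

Note that Trends reports relative interest (0–100, normalized to the peak), not absolute search volume, so a “steady increase” here means increasing share relative to the series maximum.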
I think we need to call it “human extinction risk” to make it clear, or even “abrupt extermination risk”.