Consider the sociology of violence in the AI risk/doom memeplex.
The memeplex seeks to leverage the state's power to accomplish its objectives (e.g. a ban on further capabilities research), relying on the state's threat of violence. Beyond that state-sanctioned coercion, violence is explicitly rejected.
This contrasts with other memeplexes that resorted to violence not legitimized by the states they operated in, including the American and Bolshevik revolutions, pro-democracy and independence movements, and religious or race riots. Moreover, all of these examples were ostensibly fighting for lower stakes than "doom" as construed by people discussing AI risk, which appears paradoxical: the higher the stakes, the more violence one would naively expect.
Why is that?
Claude’s ideas on this include:
the demographic composition of the AI doom memeplex being the opposite of the kind that typically produces violence, i.e. affluent nerds with comfortable lives who implicitly code violence as low-status or generally immoral
the lack of concrete suffering or oppression in the here-and-now to point to
epistemic uncertainty introduced by the probabilistic framing of the issue
the belief that violence would be counterproductive to the current strategy of securing buy-in from the existing power structure.
I would also add that, in the historical cases, the "concrete suffering or oppression" actually benefited the oppressors, so the state had an incentive to perpetuate it rather than end it. A misaligned ASI is different: were a state to create one, it would slay even the AI's creators and heads of state, so the state has no reason not to try to prevent the AI's creation. This asymmetry makes persuading the state, rather than fighting it, the rational strategy.