There’s some risk that either the CCP or half the voters in the US will develop LLM psychosis. My prediction is that this risk is low enough that it shouldn’t dominate our ASI strategy, though I don’t think I have a strong enough argument here to persuade skeptics.
I’ve been putting some thought into this, because my strong intuition is that something like this is an under-appreciated scenario. My basic argument is that mass brainwashing, for lack of a better word, is cheaper and less risky than other forms of ASI control. The idea is that we (humans) are extremely programmable (there are plenty of historical examples); it just requires a more sophisticated “multi-level” messaging scheme. So it won’t look like an AI cult so much as an AI “movement” with a fanatical base.
Here is one pathway worked out in detail (I’ll be generalizing soon): https://www.lesswrong.com/posts/zvkjQen773DyqExJ8/the-memetic-cocoon-threat-model-soft-ai-takeover-in-an