LessWrong core members' unwillingness to engage in conflict is directly leading to the end of the world.
By conflict I mean publicly humiliating developers at these companies (including Anthropic), cutting them out of your social circle, organising protests outside their offices, and running for election on an anti-AI-company message.
I am willing to go further by supporting whistleblowers and cyberattackers against AI companies. But the above is the minimum to become my ally.
Do you have a specific plan, or is this just a call to signal virtue by doing costly unhelpful actions?
I listed the plan above in short. Friendships are the biggest conflict of interest. If you are not willing to distance yourself from people building ASI, you are unlikely to pursue actually effective plans to stop ASI development.
If you want a longer version, here it is: Support the movement