Your specific action places most of its hope for human survival on the entities that have done the most to increase extinction risk.
That's not a valid criticism if we are simply choosing one action to reduce X-risk. Consider, for example, the Cold War: the parties with nukes did the most to endanger humanity, yet it was most important that they cooperated to reduce that danger.
Good reply. The big difference is that in the Cold War, there was no entity with the ability to stop the two parties engaged in the nuclear arms race. In the current situation, instead of hoping for the leading labs to come to an agreement, we can lobby the governments of the US and the UK to shut the leading labs down or at least nationalize them. Yes, that still leaves an AI race between the developed nations, but the US and the UK are democracies, and most voters don't like AI. The main concern of the leaders of China and Russia, in contrast, is to avoid rebellions by their respective populations, and they correctly understand that AI is a revolutionary technology with unpredictable societal effects that might easily empower rebels. So as long as Beijing and Moscow have access to the AI technology useful for effective surveillance of their people, they might stop racing if the US and the UK stop racing. (And AI-extinction-risk activists should IMHO probably be helping Beijing and Moscow obtain the AI tech they need to effectively surveil their populations, to reduce the incentive for Beijing and Moscow to support native AI research efforts.)
To clarify: there is only a tiny shred of hope in the plan I outlined, but it is, IMHO, a bigger shred than hoping for the AI labs to suddenly start acting responsibly.