I think that China and the US would definitely agree to pause if and only if each can confirm that the other is also committing to a pause. Unfortunately, this is a really hard thing to confirm, much harder than with nuclear weapons.
Thus, I propose that we are trapped in this suicide race unless we can come up with better coordination mechanisms: lower counterfactual cost of participation, lower entry cost, higher reliability, less reliance on a centralized authority… Decentralized, AI-powered, privacy-preserving safety inspections and realtime monitoring.
Components include: if you opt in to the monitoring of a particular risk a, then you get to view the reports of my monitor on myself for risk a. Worried about a? Me too. Let’s both monitor and report to each other. Not worried about b? Fine, then I’ll only share the reports from my b monitors with my fellow b-monitoring participants.
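A minimal sketch of that opt-in reciprocity rule, assuming a trusted registry for illustration (the class and method names here are hypothetical, not part of any proposal): a self-monitoring report for risk category a is visible only to other participants who have also opted in to monitoring a.

```python
# Hypothetical sketch: reports for a risk category are shared only among
# participants who have opted in to monitoring that same category.
from collections import defaultdict


class MonitoringNetwork:
    def __init__(self):
        # risk category -> set of participants opted in to monitoring it
        self.opted_in = defaultdict(set)
        # (recipient, risk) -> reports visible to that recipient
        self.inbox = defaultdict(list)

    def opt_in(self, participant, risk):
        """Commit to monitoring `risk` on yourself and sharing the reports."""
        self.opted_in[risk].add(participant)

    def publish_report(self, author, risk, report):
        """Share a self-monitoring report only with fellow opt-ins for `risk`."""
        if author not in self.opted_in[risk]:
            raise ValueError("must opt in before publishing")
        for peer in self.opted_in[risk]:
            if peer != author:
                self.inbox[(peer, risk)].append((author, report))

    def reports_visible_to(self, participant, risk):
        return self.inbox[(participant, risk)]


net = MonitoringNetwork()
net.opt_in("US", "a")
net.opt_in("China", "a")
net.opt_in("US", "b")  # the other party declines to monitor risk b

net.publish_report("US", "a", "no frontier training runs this quarter")
net.publish_report("US", "b", "compute audit passed")

# Mutual opt-in on risk a: the report is visible.
print(net.reports_visible_to("China", "a"))
# No opt-in on risk b: nothing is shared.
print(net.reports_visible_to("China", "b"))
```

This captures only the disclosure rule; the harder parts of the proposal (privacy-preserving verification of the reports themselves, and removing the trusted central registry this toy version relies on) are exactly what the decentralized, AI-powered machinery above would need to supply.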
I think that China and the US would definitely agree to pause if and only if each can confirm that the other is also committing to a pause. Unfortunately, this is a really hard thing to confirm, much harder than with nuclear weapons.
This seems false to me. E.g., Trump seems likely to do whatever the person who pays him the most and is most loyal to him tells him to do, and AI-risk worriers have neither the money nor the political loyalty to compete on those criteria with, for example, Elon Musk.
Ah, I meant they would agree to pause once things came to a head. I’m pretty sure these political leaders are selfish enough that if they saw clear evidence of their imminent demise, and had a safer option, they’d take the out.
If that’s the situation, then why the “if and only if”? If we magically made them all believe they would die if they built ASI, then each would individually be incentivized to stop it from happening, independent of China’s actions.