FWIW I broadcast the former rather than the latter because, from the 25% perspective, there are many possible worlds that the “stop” coalition ends up making much worse, and therefore I can’t honestly broadcast “this is ridiculous and should stop” without being more specific about what I’d want from the stop coalition.
A (loose) analogy: leftists in Iran who confidently argued “the Shah’s regime is ridiculous and should stop”. It turned out that there was so much variance in how it stopped that this argument wasn’t actually a good one to confidently broadcast, despite in some sense being correct.
Maybe it’s hard to communicate nuance, but it seems like there’s a crazy thing going on where many people in the AI x-risk community think something like “Well obviously I wish it would stop, and the current situation does seem crazy and unacceptable by any normal standards of risk management. But there’s a lot of nuance in what I actually think we should do, and I don’t want to advocate for a harmful stop.”
And these people end up communicating to external people something like “Stopping is a naive strategy, and continuing (maybe with some safeguards etc) is my preferred strategy for now.”
This seems to miss out the really important part: that they would actually want to stop if we could, but that doing so seems difficult and nuanced to get right.
Yeah, I agree that it’s easy to err in that direction, and I’ve sometimes done so. Going forward I’m trying to more consistently say the “obviously I wish people just wouldn’t do this” part.
Though note that even claims like “unacceptable by any normal standards of risk management” feel off to me. We’re talking about the future of humanity, there is no normal standard of risk management. This should feel as silly as the US or UK invoking “normal standards of risk management” in debates over whether to join WW2.