Yeah, strategic planning under massive uncertainty is mostly guesswork.
My preferred policy is a halt. (And not a short one, because I figure ending the halt means we have an excellent chance of dying.) Anthropic’s preferred policy appears to be “try to build a better-than-replacement superintelligence before someone builds an awful one.” (Assuming I understand their actions and writing correctly.) Other people are all in on trying to find some way to improve how much models like happy, thriving humans. Who’s right? None of us know all the details about how this will play out.
Banning data centers would be more promising if it actually affected enough countries to make a difference. Ideally, I would like to see a worldwide frontier training ban with teeth, enforced by at least the US and China. I think this might buy us decades with humans in control of what happens to us, if we’re lucky.
But my model is very much “How much time can we buy?”