No, I am not. I can imagine such a lab, and would even support a viable plan of action for its formation, but it doesn’t exist, so it doesn’t factor into our hypothetical on how humanity would fare against an omnicidal superintelligence.
Don’t we have the resources and people to set up such a lab? If you think we don’t have the compute (and couldn’t get access to enough cloud compute or wouldn’t want to), that’s something we could invest in now, since there’s still time. Also, if there are still AI safety teams at any of the existing big labs, can’t they start their own projects there?
At present, not by a long shot. And doing so would probably make the problem worse: if we didn’t solve the underlying problem, DeepMind would do whatever it was going to do anyway, just faster.
I find it incredible that people can conceive of AGI-designed nanofactories built in months but cannot imagine a big lab spending some spare time or money looking into this, especially when there are probably people at those companies who are frequent LW readers.
I might have missed it, but this seems to be the first time you’ve mentioned “months” in your scenario. Wasn’t it “days” before? It matters, because I don’t think it would take months for an AGI to build a nanotech factory.
Son, I wrote an entire longform explaining why we need to attempt this. It’s just hard. The ML researchers and Google executives who are relevant to these decisions have a financial stake in speeding capabilities research along as fast as possible, and often hold very dead-set views that AGI risk advocates are cultists, bikeshedders, or alarmists. There is an entire community stigma against even talking about these issues in the neck of the woods you speak of. I agree that redirecting money from capabilities research to anything called alignment research would be good on net, but the problem is finding clear ways of doing that.
I don’t think it’s impossible! If you want to help, I can give you some tasks to start with. But we’re already trying.
Too busy at the moment, but if you remind me of this in a few months’ time, I may. Thanks!