I find it incredible that people can conceive of AGI-designed nanofactories built in months but cannot imagine that a big lab could spend some spare time or money looking into this, especially when there are probably people at those companies who are frequent LW readers.
I might have missed it, but this seems to be the first time you mention “months” in your scenario. Wasn’t it “days” before? It matters because I don’t think it would take an AGI months to build a nanotech factory.
Son, I wrote an entire longform explaining why we need to attempt this. It’s just hard. The ML researchers and Google executives who are relevant to these decisions have a financial stake in speeding capabilities research along as fast as possible, and often hold very dead-set views that AGI risk advocates are cultists, bikeshedders, or alarmists. There is an entire community stigma against even talking about these issues in the neck of the woods you speak of. I agree that redirecting money from capabilities research to anything called alignment research would be good on net, but the problem is finding clear ways of doing that.
I don’t think it’s impossible! If you want to help, I can give you some tasks to start with. But we’re already trying.
Too busy at the moment, but if you remind me of this in a few months’ time, I may. Thanks!