If something along these lines is the fastest path to AGI, I think it needs to be in the right hands. My goal would be, some months or years from now, to get research results that make it clear we're on the right track to building AGI. I'd then go to folks I trust, such as Eliezer Yudkowsky/MIRI/OpenAI, and basically say, "I think we're on track to build an AGI. Can we do this together and make sure it's safe?" This would come with the understanding that we may need to completely pause further capabilities research at some point if our safety team does not give us the OK to proceed.
If you “completely pause further capabilities research”, what will stop other AI labs from pursuing that research direction further? (And possibly hiring your now frustrated researchers who by this point have a realistic hope for getting immense fame, a Turing Award, etc.).
Valid concern. I would say: (1) keep our research results very secret, and (2) hire people who are fairly aligned? But I agree that's not a surefire solution at all.