I think a good move for the world would be to consolidate AI researchers into larger, better-monitored, more bureaucratic systems that move more slowly and carefully, with mandatory oversight. I don’t see a way to bring that about. Expecting every independent researcher or small research group to voluntarily switch to operating in a sufficiently safe manner is a just-not-going-to-happen sort of situation. As it is, I think a final breakthrough to AGI is 4-5x more likely to be developed by a big lab than by a small group or individual, but those are still not great odds. And I worry that, after developing it, the inventor will run around shouting ‘Look at this cool advance I made!’ and the beans will be fully spilled before anyone has the chance to decide to hush them, and then foolish actors around the world will start consequence-avalanches they cannot stop. For now, I’m left hoping that somewhere at least as responsible as DeepMind or OpenAI wins the race.