If the argument is that 1e9 very smart humans running at 100x speed yield safe superintelligent outcomes “soon”, how is that very different from “pause everything now and let N very smart humans figure out safe, aligned superintelligent outcomes over an extended timeframe, on the order of 1e11/N times whatever timeframe the fast collective would have needed (days, years)”? It’s just time-shifting safe human work.
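To spell out the scaling arithmetic behind that claim (a rough sketch, assuming the work parallelizes cleanly and “very smart human” labor is interchangeable):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% 1e9 humans running at 100x speed deliver 1e11 human-equivalents
% of work per unit of wall-clock time.
\[
10^{9} \times 100 = 10^{11}
\]
% So N ordinary-speed humans need roughly 1e11/N times as long as the
% fast collective to accumulate the same total amount of safe human work.
\[
T_{N\ \text{humans}} \;\approx\; \frac{10^{11}}{N} \times T_{\text{fast collective}}
\]
\end{document}
```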
I also worry that billions of very smart, super-fast humans might decide to try building superintelligence directly, as fast as they can, so that we get doom in months instead of years.