I strongly agree, and as I’ve argued before, long timelines to ASI are possible even if we have proto-AGI soon, and aligning AGI doesn’t necessarily help solve ASI risks. It seems like people are being myopic: they assume their modal outcome is effectively certain, and/or fail to hold multiple hypotheses about trajectories in mind at once, so they undervalue conditionally high-value research directions.