Well, there’s the post An Optimistic 2027 Timeline, published just one day before your question here :)
But in that scenario we don’t really win; we just don’t lose by 2027, because progress is a bit slower due to practical difficulties like a global recession from tariffs and an invasion of Taiwan. So I’m not totally counting it. It does provide more time for alignment. But how does alignment actually get accomplished, other than “make the AI do our alignment homework”?
So I’d say the best detailed scenario where we win is the version of AGI-2027 where we win. Other winning variations on that scenario will differ in the details of the political/economic/personal path to building it, and in the details of how alignment gets solved adequately.
There’s an alignment-focused scenario sitting in my drafts folder. It does focus on how the technical problem gets solved, but it’s also about the sociopolitical challenges we’ll have to face to make that realistic, and about how we prevent the proliferation of AGI. Right now this is spread out across my posts. This comment, in response to “what’s the plan for short timelines?”, is the closest I’ve come to putting it all in one place so far.
This isn’t detailed yet, though. My model matches Daniel K’s quite closely, although with slightly longer timelines due to practical difficulties like those discussed in the optimistic 2027 scenario I linked above.