Of course, I agree that in some worlds AI progress has substantially slowed down, and we have received evidence that things will take longer, but “are we alive and are things still OK in 2028?” is a terrible way to operationalize that. Most people do not expect anything particularly terrible to have happened by 2028!
Sure, but to the extent that we put probability mass on AGI arriving as early as 2027, we should correspondingly update on not having seen it by then, and especially on not having seen the precursors we expect to see by then.
If by 2027 I haven’t seen an AI produce a groundbreaking STEM paper, my probability that LLMs + RL will scale to superintelligence drops from about 80% to about 70%.
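For concreteness, here is a minimal Bayes-rule sketch of that kind of update, with illustrative likelihoods assumed purely for the example (nothing in this exchange pins them down): say a ~42% chance of seeing such a paper by 2027 if LLMs + RL really do scale, and essentially no chance if they don't. Under those assumptions, an 80% prior lands at exactly 70% after the non-observation.

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H | E) from a prior on H and the likelihoods of E under H and not-H."""
    joint_h = p_e_given_h * prior
    return joint_h / (joint_h + p_e_given_not_h * (1 - prior))

# H: "LLMs + RL scale to superintelligence"
# E: "no groundbreaking AI-written STEM paper observed by 2027"
prior = 0.80           # stated prior credence in H
p_e_given_h = 7 / 12   # assumption: ~58% chance of still no such paper even if H is true
p_e_given_not_h = 1.0  # assumption: if H is false, the paper essentially never appears

print(round(posterior(prior, p_e_given_h, p_e_given_not_h), 3))  # -> 0.7
```

The size of the update is driven entirely by how strongly the precursor is expected under each hypothesis: if the paper were near-certain given scaling, the same non-observation would drag the posterior far below 70%.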