If so, by default the existence of AGI will be a closely guarded secret for some months. Only a few teams within an internal silo, plus leadership and security, will know about the capabilities of the latest systems.
Are they really going to be that secret? At this point, progress is, if not linear, at least fairly predictable, and we are well aware of the specific issues to be solved next for AGI: longer task horizons, memory, fewer hallucinations, and so on. If you tell me someone is 3-9 months ahead and nearing AGI, I'd simply guess those are the things they are ahead on.
>Even worse, a similarly tiny group of people — specifically, corporate leadership + some select people, from the executive branch of the US government — will be the only people reading the reports and making high-stakes judgment calls
That does sound pretty bad, yes. My last hope in this scenario is that at the final step (even if only for the last week or two), when it's clear they'll win, they at least withhold it from the US executive branch and make some of the final decisions on their own. Not ideal, but it gives a few percentage points more chance that the final decisions aren't godawful.
For example, imagine Ilya's lab ends up ahead. I can at least imagine him doing some last-minute fine-tuning to make the AGI work for humanity first, ignoring what the US executive branch has ordered, and I can imagine some chance that once that's done, it is mostly too late to change it.