[Question] What would make you confident that AGI has been achieved?

Consensus seems to be that there is no fire alarm for artificial general intelligence. We may see the smoke and think nothing of it. But at what point do we acknowledge that the fire has entered the room? To drop the metaphor: what would have to happen to convince you that a human-level AGI has been created?

I ask because it isn’t obvious to me that any level of evidence would convince many people, at least not until the AGI is well beyond human level. Even then, it may not be clear to many that superintelligence has actually been achieved. For instance, I can easily imagine the following hypothetical scenario:


OpenAI announces a future GPT-N that scores a perfect 50% on a digital Turing Test, meaning that judges can do no better than chance at telling whether a given sample was written by a human or by GPT-N (a toy illustration of this scoring criterion follows the scenario). Let’s imagine they do the responsible thing and don’t publicly release the API. My intuition is that most people will not enter panic mode at that point, but will instead:

  1. Assume that this is merely a publicity stunt, with the test rigged in some way.

  2. Say something like “yes, it passed the Turing test, but that doesn’t really count because [insert reason x], and even if it did, that doesn’t mean it will generalize to domains outside of [domain y that GPT-N is believed to be confined to].”

  3. Claim that being a good conversationalist does not fully capture what it means to be intelligent, and thereby dismiss the news as yet another step on the long road towards “true” AGI.

The next week, OpenAI announces that the same model has solved a major open problem in mathematics, one that a number of human mathematicians had previously claimed wouldn’t be solved this century. I predict that a large majority of people (though probably few in the rationalist community) would not view this as indicative of AGI either.

The next week, GPT-N+1 escapes, and takes over the world. Nobody has an opinion on this, because they’re all dead.
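To make the scoring criterion concrete, here is a minimal sketch of what “a perfect 50%” would mean under a simple forced-choice protocol: judges see one sample at a time and guess its author, and a model that is truly indistinguishable drives their accuracy down to chance. The protocol, the judge, and the sample set here are all hypothetical illustrations, not OpenAI’s actual evaluation:

```python
import random

def run_turing_test(samples, judge):
    """samples: list of (text, true_label) pairs, with true_label in {"human", "model"}.
    judge: callable mapping text -> guessed label.
    Returns the judge's detection accuracy."""
    correct = sum(judge(text) == label for text, label in samples)
    return correct / len(samples)

def random_judge(text):
    # A judge who cannot tell the difference is reduced to guessing.
    return random.choice(["human", "model"])

# With a balanced sample set, a guessing judge converges on 0.5 accuracy:
# the "perfect 50%" score described in the scenario above.
samples = [(f"sample {i}", random.choice(["human", "model"])) for i in range(10_000)]
print(f"detection accuracy: {run_turing_test(samples, random_judge):.3f}")  # ~0.500
```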


This thought experiment leads me to ask: at what point would you be convinced that human-level AGI has been achieved? What about superhuman AGI? Additionally, at what point would you expect the average (non-rationalist) AI researcher to accept that they’ve created an AGI?