Sorry if it’s unclear (I’m open to rewording), but my intention was that the link in the first sentence was my (loose) definition of AGI, and the following sentences were not a definition but rather an example of something that AI cannot do yet.
I deliberately chose an example where it’s just super duper obvious that we’re not even remotely close to AI succeeding at the task, because I find there are lots of LLM-focused people who have a giant blind spot: They read the questions on Humanity’s Last Exam or whatever, and scratch their head and say “C’mon, when future LLMs saturate the HLE benchmark, what else is there? Look how hard those questions are! They’re PhD level in everything! If that’s not superintelligence, what is?” …And then my example (autonomously founding a company and growing it to $1B/year revenue over the course of years) is supposed to jolt those people into saying “ohhh, right, there’s still a TON of headroom above current AI”.
I fully agree with you that AGI should be able to figure out things it doesn’t know, and that this is a major blind spot in benchmarks. (I often give novel problem-solving as a requirement, which is very similar.) My issue is that there is a wide range of human ability in this regard. Most/all humans can figure things out to some extent, but most aren’t that good at it. If you give a genius an explanation of basic calculus and a differential equation to figure out how to solve, it won’t be that difficult. If you give the same task to an average human, it isn’t happening. Describing AGI as being able to build a $1B/yr company or develop innovative science at a John von Neumann level is describing a faculty that most/all humans have, but at a level vastly above where most humans are.
Most of my concern about AI (and why I am, unlike you, most worried about improved LRMs) stems from the fact that current SOTA systems have an ability to figure things out that is within the human range and is fairly rapidly climbing across it. (Current systems do have limitations that few humans have in other faculties, like time horizons and perception, but those issues are decreasing with time.) Also, even if we never reach ASI, AI with problem-solving on par with normal smart humans, especially when coupled with other faculties, could have massively bad consequences.