I fully agree with you that AGI should be able to figure out things it doesn’t know, and that this is a major blind spot in benchmarks. (I often give novel problem-solving as a requirement, which is very similar.) My issue is that there is a wide range of human ability in this regard. Most/all humans can figure things out to some extent, but most aren’t that good at it. If you give a genius an explanation of basic calculus and a differential equation to figure out how to solve, it won’t be that difficult. If you give the same task to an average human, it isn’t happening. Describing AGI as being able to build a $1b/yr company or develop innovative science at a John von Neumann level is describing a faculty that most/all humans have, but at a level vastly above where most humans sit.
Most of my concern about AI (and why, unlike you, I am most worried about improved LRMs) stems from the fact that current SOTA systems’ ability to figure things out is already within the human range and is climbing fairly rapidly across it. (Current systems do have limitations that few humans share in other faculties, like time horizons and perception, but those issues are diminishing over time.) Also, even if we never reach ASI, AI with problem-solving on par with normally smart humans, especially when coupled with other faculties, could have massively bad consequences.