I work at ARC Evals, and mostly the answer is that this was sort of a trial run.
For some context, ARC Evals spun up as a project last fall, and collaborations with labs are very much in their infancy. We tried to be very open about limitations in the system card, and I think the takeaway from our evaluations so far should mostly be: “It seems urgent to figure out the techniques and processes for evaluating dangerous capabilities of models, and several leading scaling labs (including OpenAI and Anthropic) have been working with ARC on early attempts to do this.”
I would not consider the evaluation we did of GPT-4 to be nearly good enough for future models, where the prior of existential danger is higher, but I hope we can make quick progress, and I have certainly learned a lot from doing this initial evaluation. I think you should hold us and the labs to a much higher standard going forward, though, in light of the capabilities of GPT-4.
I agree the term AGI is rough and might be more misleading than it’s worth in some cases. But I do quite strongly disagree that current models are ‘AGI’ in the sense most people intend.
Examples of very important areas where ‘average humans’ plausibly do way better than current transformers:
- Most humans succeed in making money autonomously. Even if they might not come up with a great idea to quickly 10x $100 through entrepreneurship, they are able to find and execute jobs that people are willing to pay a lot of money for. And many of these jobs are digital and could in theory be done just as well by AIs. Certainly there is a ton of infrastructure built up around humans that helps them accomplish this which doesn’t really exist for AI systems yet, but if this situation were somehow equalized I would very strongly bet on the average human doing better than the average GPT-4-based agent. It seems clear to me that humans are just way more resourceful, agentic, and able to learn and adapt than current transformers are in key ways.
- Many humans currently do drastically better on the METR task suite (https://github.com/METR/public-tasks) than any AI agents, and I think this captures some important missing capabilities that I would expect an ‘AGI’ system to possess. This is complicated somewhat by the human subjects not being ‘average’ in many ways: e.g. we’ve mostly tried this with US tech professionals, and the tasks include a lot of SWE, so most people would likely fail due to lack of coding experience.
- Take enough randomly sampled humans and set them up with the right incentives, and they will form societies, invent incredible technologies, build productive companies, etc., whereas I don’t think you’ll get anything close to this from a bunch of GPT-4 copies at the moment.
I think AGI for most people evokes something that would do as well as humans on real-world things like the above, not just something that does as well as humans on standardized tests.