Given that, as you admit, we basically got AGI (without the creativity of the best humans) in the form of Karnofsky's Tool AI, very unexpectedly, can you look back and see which assumptions were wrong in expecting tools to agentize on their own, and quickly? Or is everything in that post of Eliezer's still correct, or at least reasonable, and we are simply not yet at the level where "foom" happens?
Come to think of it, I wonder if that post has been revisited somewhere at some point, by Eliezer or others, in light of the current SOTA. Feels like it could be instructive.
That is definitely my observation as well: "general world understanding but not agency", and yes, limited usefulness, but also… much more useful than gwern or Eliezer expected, no? (I could not find a link.)
I guess whether it counts as AGI depends on what one means by "general intelligence". To me it means having a fairly general world model and being able to reason about it. What is your definition? Does "general world understanding" count? Or do you include the agency part in the definition of AGI? Or maybe something else?
Hmm, maybe this is a General Tool, as opposed to a General Intelligence?