MIRI’s early work (for example, modal combat and work on Löb’s theorem) assumed that UAO would be instantiated through hand-written AI programs just good enough to improve themselves slightly, leading to an intelligence explosion (along with a bunch of other assumptions).
Agent foundations work makes/needs no assumptions about how the first AGIs are written, or about an intelligence explosion; it’s not about that. It’s about deconfusion: noticing and formulating concepts that help with thinking about agents-in-a-very-loose-sense.
You probably know better than me, but I still have the intuition that seed AI and FOOM oriented the framing of the problem and the sorts of questions asked. I think people who came to agent foundations via different routes ended up asking slightly different questions.
I could totally be wrong though, thanks for making this weakness of my description explicit!