I think once an AI is extremely good at AI R&D, lots of these skills will transfer to other domains, so it won’t have to be that much more capable to generalize to all domains, especially if trained in environments designed for teaching general skills.
This step, especially, really struck me as under-argued relative to how important it seems to be for the conclusion. This isn't to pick on the authors of AI 2027 in particular. I'm generally confused as to why arguments for an (imminent) intelligence explosion don't say more on this point, at least in what I've read. (I'm reminded of this comic.) But I might well have missed something!
The basic arguments are that (a) becoming fully superhuman at something that involves long-horizon agency across a diverse range of situations seems to require agency skills that will transfer fairly well to other domains, and (b) once AIs have superhuman data efficiency, they can pick up whatever domain knowledge they need for new tasks very quickly.
I agree we didn't justify it thoroughly in our supplement; the reason it isn't justified more fully is simply that we didn't get around to it.