Can you expand your argument for why LLMs will not reach AGI?
I’m generally not very enthusiastic about arguing with people about whether LLMs will reach AGI.
If I’m talking to someone unconcerned about x-risk, just trying to make ASI as fast as possible, then I sure don’t want to dissuade them from working on the wrong thing (see §1.6.1 and §1.8.4).
If I’m talking to someone concerned about LLM x-risk, and thus contingency planning for LLMs reaching AGI, then that seems like a very reasonable thing to do, and I would feel bad about dissuading them too. After all, I’m not that confident—I don’t feel good about building a bigger community of non-LLM-x-risk mitigators by proselytizing people away from the already-pathetically-small community of LLM-x-risk mitigators.
…But fine, see this comment for part of my thinking.
To create ASI you don’t need a billion Sam Altmans, you need a billion Ilya Sutskevers.
I’m curious what you imagine the billion Ilya Sutskevers are going to do. If you think they’re going to invent a new, better AI paradigm, then we have less disagreement than it might seem—see §1.4.4. Alternatively, if you think they’re going to build a zillion datasets and RL environments to push the LLM paradigm ever farther, then what do you make of the fact that human skill acquisition seems very different from that process (see §1.3.2 and this comment)?