Belief propagation seems like too core an AI capability to me. I’d rather place my hope on GPT-7 not yet being all that good at accelerating AI research, and on us having significantly more time.
This just seems doomed to me. The training runs will be even more expensive, the difficulty of doing anything significant as an outsider ever-higher. If the eventual plan is to get big labs to listen to your research, then isn’t it better to start early? (If you have anything significant to say, of course.)
I’d imagine it’s not too hard to get a >1 OOM efficiency improvement that one can demonstrate in smaller AIs, and one might use this to get a lab to listen. If the labs are sufficiently uninterested in alignment, it’s pretty doomy anyway, even if they adopted a better paradigm.
Also, government interventions might still happen (perhaps more likely because of AI-caused unemployment than x-risk, and they won’t buy all that much time, but still).
Also, the strategy of “maybe if AIs are more rational they will solve alignment, or at least realize that they cannot” seems very unlikely to me to work within the current DL paradigm, though it’s still slightly helpful.
(Also maybe some supergenius or my future self or some other group can figure something out.)