I think this is wrong, but a useful argument to make.
I disagree even though I generally agree with each of your sub-points. The key problem is that the points can all be correct, yet they don’t add up to the conclusion that this is safe. For example, perhaps an interpretable model is only 99.998% likely to be a misaligned AI system, instead of 99.999% for a less interpretable one. I also think that the current paradigm is shortening timelines, and regardless of how we do safety, less time makes it less likely that we will find effective approaches in time to preempt disaster.
(I would endorse the weaker claim that LLMs are more plausibly amenable to current approaches to safety than alternative architectures are, but it’s less clear that we wouldn’t have other, even more promising angles to consider if a different paradigm were dominant.)
Thank you for this comment. I’m curious to understand the source of disagreement between us, given that you generally agree with each of the sub-points. Do you really think that the chance of misalignment with LM-based AI systems is above 90%? What exactly do you mean by misalignment in this context, and why do you think it’s the most likely outcome with such AI? Do you think it will happen even if humanity sticks with the paradigm I described (chaining pure language models while avoiding training models on open-ended tasks)?
I also want to note that my argument is less about “developing language models was counterfactually a good thing” and more “given that language models have been developed (which is now a historical fact), the safest path towards human-level AGI might be to stick with pure language models”.