Extracted from a Facebook comment:
I don’t think the experts are expert on this question at all. Eliezer’s train of thought essentially started with “Supposing you had a really effective AI, what would follow from that?” His thinking wasn’t at all predicated on any particular way you might build a really effective AI, and knowing a lot about how to build AI isn’t expertise on what the results are when it’s as effective as Eliezer posits. It’s like thinking you shouldn’t have an opinion on whether there will be a nuclear conflict over Kashmir unless you’re a nuclear physicist.
(Replying without the context I imagine to be present here)
I agree with a version of this which goes “just knowing how to make SGD go brrr does not at all mean you have expertise for predicting what happens with effective AI.”
I disagree with a version of this comment which claims, “Having a lot of ML expertise doesn’t mean you have expertise for thinking about effective AIs.” Eliezer could have started his train of thought by imagining systems which are not the kind that get trained by SGD. There’s no guarantee that thought experiments nominally about “effective AIs” are at all relevant to real-world effective AIs, and ML expertise is exactly what lets you check whether those thought experiments bear on the systems we actually build. (For example, specific critique A of claims about minds-in-general, or specific critique B of attempts to use AIXI as a model of effective intelligence.)
Perhaps the response by experts is something like: “the only AIs we have are LLMs, and people who work with LLMs know they cannot be really effective, therefore Eliezer’s premises are not realistic”?
Okay, it sounds stupid when I write it out like this, so it’s likely a strawman. But maybe it points in the right direction...