There appears to be a motte-and-bailey worth unpacking. The weaker, easily defensible claim is that advanced AI could be risky or dangerous. This modest assertion requires little evidence, similar to claims that extraterrestrial aliens, advanced genetic engineering of humans, or large-scale human cloning might be dangerous. I do not dispute this modest claim.
The stronger claim about AI doom is that doom is likely rather than merely possible. This substantial claim demands much stronger evidence than the weaker claim. The tension I previously raised addresses this stronger claim of probable AI doom (“AI doomerism”), not the weaker claim that advanced AI might be risky.
Many advocates of the strong claim of AI doom explicitly assert that their belief is backed by technical arguments, such as the counting argument for scheming behavior in SGD, among others. However, if the case for AI doom does not, in fact, rely on such technical arguments, then it is a mistake to argue about these ideas as if they were the key cruxes generating disagreement about AI doom.
I think the word “technical” is a red herring here. If someone tells me a flood is coming, I don’t much care how much they know about hydrodynamics, even if in principle that knowledge might allow me to model the threat with more confidence. Rather, I care about things like how sure they are about the direction the flood is coming from, the topography of our surroundings, and so on. Personally, I expect I’d be much more inclined to make large, confident updates on the basis of information at levels of abstraction like these than at the level of, say, hydrodynamics or particle physics, however much more “technical,” or related-in-principle in some abstract reductionist sense, the latter may be.
I do think there are many arguments beyond this simple one which clearly justify additional (and more confident) concern. But I try to assess such arguments based on how compelling they are, where “technical precision” is one factor, but hardly the only one; another, for example, is whether the argument even involves the relevant level of abstraction, or bears on the question at hand.