True, although I wish more people would engage with the common anti-AI-x-risk argument that "tech CEOs are exaggerating existential risk because they think it'll make their products seem more important and potentially world-changing, and so artificially boost hype". I'm not saying I agree with this, but there's at least some truth to it, and I think this community often fails to appropriately engage with and combat this argument.
In general, this is why "appeal to authority" arguments should be avoided when we're talking about people who are widely seen as untrustworthy or as having ulterior motives. At most, I think people like Geoffrey Hinton are seen as reputable and not morally compromised, and so serve as better subjects for an appeal to authority; but mostly, rather than appealing to authority at all, we should just try to bring things back to the object-level arguments.
I think this community often fails to appropriately engage with and combat this argument.
What do you think that looks like? To me, that looks like “give object-level arguments for AI x-risk that don’t depend on what AI company CEOs say.” And I think the community already does quite a lot of that, although giving really persuasive arguments is hard (I hope the MIRI book succeeds).
Here are some of my attempts at it, which I think stand out as unusual compared to how most people respond; there are subverbal insights I haven't yet nailed down in how I approached this, hence the link instead of an explanation.
I'd currently summarize the view not as "CEOs scare people" but as "any publicity seems to be good publicity, even when warning of extinction, as if the warnings of extinction are interpreted by most to be cynical lies even when backed up by argumentation". I suspect that at least part of what's going on is that when someone doesn't comprehend the details of an argument, there's some chance they interpret it as a human-intentional lie (or some other type of falsehood, perhaps one that is accidental on the author's behalf and yet valuable to the egregore).