I think this community often fails to appropriately engage with and combat this argument.
What do you think that looks like? To me, that looks like “give object-level arguments for AI x-risk that don’t depend on what AI company CEOs say.” And I think the community already does quite a lot of that, although giving really persuasive arguments is hard (I hope the MIRI book succeeds).
Here are some of my attempts at it, which I think stand out as unusual compared to how most respond; there are subverbal insights I haven’t yet nailed down in how I approached this, hence the link instead of an explanation.
I’d currently summarize the view not as “CEOs scare people” but as “any publicity seems to be good publicity, even when warning of extinction, as if most people interpret the warnings of extinction as cynical lies even when backed up by argumentation.” I suspect that at least part of what’s going on is that when someone doesn’t comprehend the details of an argument, there’s some chance they interpret it as an intentional lie (or another kind of falsehood, perhaps one accidental on the author’s part and yet valuable to the egregore).