I disagree strongly. To me it seems that AI safety has long punched below its weight because its proponents are unwilling to be confrontational, and are too reluctant to put even moderate social pressure on people doing the activities which AI safety proponents hold to be extremely bad. It is not a coincidence that among AI safety proponents, Eliezer is both unusually confrontational and unusually successful.
This isn’t specific to AI safety. A lot of people in this community generally believe that arguments which make people feel bad are counterproductive because people will be “turned off”.
This is false. There are tons of examples of disparaging arguments against bad (or “bad”) behavior that succeed wildly. Such arguments very frequently succeed in instilling individual values like e.g. conscientiousness or honesty. Prominent political movements which use this rhetoric abound. When this website was young, Eliezer and many others participated in an aggressive campaign of discourse against religious ideas, and this campaign accomplished many of its goals. I could name many, many more large and small examples. I bet you can too.
Obviously this isn’t to say that confrontational and insulting argument is always the best style. Sometimes it’s truth-tracking and sometimes it isn’t. Sometimes it’s persuasive and sometimes it isn’t. Which cases are which is a difficult topic that I won’t get into here (except to briefly mention that it matters a lot whether the reasons given are actually good). Nor is this to say that the “turning people off” effect is completely absent; what I object to is the casual assumption that it outweighs any other effects. (Personally I’m turned off by the soft-gloved style of the parent comment, but I would not claim this necessarily means it’s inappropriate or ineffective—it’s not directed at me!) The point is that this very frequent claim does not match the evidence. Indeed, strong counterevidence is so easy to find that I suspect this is often not people’s real objection.
I think there’s an important distinction between:

1. Deliberately phrasing things in confrontational or aggressive ways, in the hope that this makes your conversation partner “wake up” or something.
2. Choosing not to hide real, potentially-important beliefs you have about the world, even though those beliefs are liable to offend people, liable to be disagreed with, etc.
Either might be justifiable, but I’m a lot more wary of heuristics like “it’s never OK to talk about individuals’ relative proficiency at things, even if it feels very cruxy and important, because people just find the topic too triggering” than of heuristics like “it’s never OK to say things in ways that sound shouty or aggressive”. I think cognitive engines can much more easily get by self-censoring their tone than self-censoring what topics are permissible to think or talk about.
How is “success” measured among AI safety proponents?