I’m inclined to think that Eliezer’s confidence in his own importance (actually I’d prefer “expected importance”, measured as an expected quantity the usual way) is not really unwarranted (or not too unwarranted), but I hope he does take away from this a greater sense of the importance of a “the customer is always right” attitude in managing his image as a public-ish figure. Obviously the customer is not always right, but sometimes you have to act as if they are if you want to get or keep them as your customer. Justified or not, there seems to be something about this whole endeavour (including but not limited to Eliezer’s writings) that makes people think !!!CRAZY!!!, and even if it is really they who are the crazy ones, they are nevertheless the people who populate this crazy world we’re trying to fix, and the solution can’t always just be “read the sequences until you’re rational enough to see why this makes sense”.
I realize it’s a balance; maybe this tone is good for attracting people who are already rational enough to see why this isn’t crazy and why the tone has no bearing on the validity of the underlying arguments, like Eliezer’s example of lecturing on rationality in a clown suit. Maybe the people who have a problem with it, or are scared off by it, are not the sort of people who would be willing or able to help much anyway. Maybe if someone is overly wary of associating with a low-status yet extremely important project, they do not have a strong enough grasp of its importance, or a strong enough inclination toward real altruism, anyway. But reputation will still probably count for a lot toward what SIAI will eventually be able to accomplish. Maybe at the point of hearing and evaluating the arguments, seeming weird only screens off people who would not have made important contributions anyway, but it does affect who will get far enough to hear the arguments in the first place. In a world full of physics, math, and AI cranks promising imminent world-changing discoveries, reasonably smart people do tend to build up intuitive nonsense-detectors, an intuitive sense for who’s not even worth listening to or engaging with; if we want more IQ 150+ people to get involved in existential risk reduction, then perhaps SIAI should make a greater point of seeming non-weird long enough for smart outsiders to switch from “save time by evaluating surface weirdness” mode to “take seriously and evaluate arguments directly” mode.
(Meanwhile, I’m glad Eliezer says “I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me”, and I hope he takes that very seriously.)