One of Eliezer’s favorite writing tools is framing things as a two-sided conflict: atheism vs religion, MWI vs Copenhagen, Bayes vs frequentism, and even when presenting his views about AI he was always riffing off the absurdity of opposing views. That’s fun to read and makes the reader care about the thing. I think it worked on us for basically the same reason that criticism-as-entertainment works.

Note that Eliezer has regrets about that:
My fifth huge mistake was that I—as I saw it—tried to speak plainly about the stupidity of what appeared to me to be stupid ideas. I did try to avoid the fallacy known as Bulverism, which is where you open your discussion by talking about how stupid people are for believing something; I would always discuss the issue first, and only afterwards say, “And so this is stupid.” But in 2009 it was an open question in my mind whether it might be important to have some people around who expressed contempt for homeopathy. I thought, and still do think, that there is an unfortunate problem wherein treating ideas courteously is processed by many people on some level as “Nothing bad will happen to me if I say I believe this; I won’t lose status if I say I believe in homeopathy,” and that derisive laughter by comedians can help people wake up from the dream.
Today I would write more courteously, I think. The discourtesy did serve a function, and I think there were people who were helped by reading it; but I now take more seriously the risk of building communities where the normal and expected reaction to low-status outsider views is open mockery and contempt.
I wonder if it negatively impacts the cohesiveness and teamwork ability of the resulting AI safety community by disproportionately attracting a certain type of person? It seems unlikely that everyone would enjoy this style.