The eventual success of rationalists will result in the death of the label “rationalists.” Take “liberalism” as an example. In the eighteenth century, liberals were a specific political group that opposed absolute monarchy (I’m oversimplifying a bit). Now nearly everyone opposes absolute monarchy, and the “liberal” label has come to refer to the left wing instead. Similarly, if rationalist thought becomes mainstream in 200 years (assuming we haven’t all died), it won’t be called rationalism anymore. The stuff in the Sequences will just be obvious thinking habits.
So if we want to make fewer people hate rationalism and effective altruism, we might have to ditch the labels. There’s a lot of mainstream animosity toward rationalists and EAs because news outlets like to portray them as crazy San Francisco doomer polycule vegans, not as a group of people with new but reasonable tenets of thinking. By “tenets of thinking” I don’t mean AI doomerism—I just mean things like “you should donate to effective charities” and “you should make beliefs pay rent.” It should be easier for people to accept these tenets without subscribing to stereotypical rationalist/EA beliefs on AI, animal welfare, etc. LessWrong isn’t optimal from this perspective, since it’s something of an echo chamber and may overwhelm new users.
It will be a good day when some famous politician, perhaps from New York, publicly discusses how they form beliefs without mentioning the word “rationalism” once.
There’s a lot of mainstream animosity toward rationalists and EAs because news outlets like to portray them as crazy San Francisco doomer polycule vegans
Maybe I’m just a yeoman farmer, but I knew nothing about rationalists a year ago and I’m pretty sure 99% of my friend group knows nothing about rationalists. This is unfortunate because I think a lot of them would enjoy the writing in this movement. I read the news avidly and listened to economics/philosophy podcasts prior to finding LessWrong.
If the moniker has soured in the circles you care about, it can definitely make sense to find a new one. On the other hand, if you like the moniker and hope it catches on with the wider public, I think we still have a good shot at making it stick around in a positive way. I can’t prove how much the public knows about “rationalists,” but I’d say it’s easy to over-update on the few public articles about it.
The stuff in the Sequences will just be obvious thinking habits.
I don’t understand how we are to achieve this. How plausible is it that some of this stuff was repeatedly discovered, succeeded in infecting a group of people, and died out once it was no longer novel or once a broader tradition was lost? I remember a Soviet book, “Axioms of Biology,” that popularised science and took a jab at mysterious answers to the origin of life and to opium’s property of putting its users to sleep.
without subscribing to stereotypical rationalist/EA beliefs on AI
I am sure that the beliefs related to AI will either be forgotten (if alignment is solved) or inscribed into a global ban (if the doomers succeed in convincing enough others). How plausible is it that popularising donations to effective charities ends up requiring new generations to keep getting smarter, which they fail to do, becoming stupider instead?
UPD: I encountered the term “Scout Mindset” in use by a right-wing author, who cited Julia Galef’s book on the scout mindset(!)