I don’t think this really tracks. I don’t think I’ve seen many people want to “become part of the political right”, and it’s not even the case that many people voted for Republicans in recent elections (indeed, my guess is fewer rationalists voted for Republicans in the last three elections than previous ones).
I do think it’s the case that on a decade scale people have become more anti-left. I think some of that is explained by background shift. Wokeness is on the decline, and anti-wokeness is more popular, so base rates are shifting. Additionally, people tend to be embedded in coastal left-leaning communities, so they develop antibodies against wokeness.
Maybe this is what you were saying, but “out of sight, out of mind” implies a miscalibration about attitudes on the right here, where my sense is people are mostly reasonably calibrated about anti-intellectualism on the right, but approximately no one was considering joining that part of the right, or was that threatened by it on a personal level, and so it doesn’t come up very much.
Hmm. I have no doubt you are more personally familiar with and knowledgeable of the rationality community than I am, especially when it comes to the in-person community, so I think it’s appropriate for me to defer here a fair bit.
Nevertheless, I think I still disagree to some extent, or at least remain confused about a few aspects of the “miscalibration about attitudes on the right” claim. I linked a Wei Dai post upthread titled “Have epistemic conditions always been this bad?” which begins (emphasis mine):
In the last few months, I’ve gotten increasingly alarmed by leftist politics in the US, and the epistemic conditions that it operates under and is imposing wherever it gains power. (Quite possibly the conditions are just as dire on the right, but they are not as visible or salient to me, because most of the places I can easily see, either directly or through news stories, i.e., local politics in my area, academia, journalism, large corporations, seem to have been taken over by the left.)
I have not seen corresponding posts or comments on LW worrying about cancellations from the political right (or about targeted harassment of orgs that collaborated with the Biden administration or other opponents of Trump, etc., as we are currently seeing in practice).
I also recall seeing several “the EA case for Trump” posts, the most popular of which was written by prominent LW user Richard Ngo, who predicted the Trump administration would listen to right-wing tech elites like Musk, Thiel, (especially!) Vivek etc. (“over the next 5–10 years Silicon Valley will become the core of the Republicans”) and reinvigorate institutions in Washington, cleansing them of the draconian censorship regimes, bureaucracies that strangle economies, and catastrophic monocultures. This… does not seem to have panned out, in any of the areas I’ve just mentioned. Others are analyzed here; my personal contribution is that I know several rats who are Hanania fans (and voted for Trump) who were very surprised that Trump 2.0 was not a mere continuation of Trump 1.0 and instead turned very hostile to free trade and free markets.
(I did not see any corresponding “Rats for Harris” or “EAs for Harris” posts; maybe that’s a selection effect problem on my end?)
Moreover, many of the plans written last year on this very site for how the AI safety community should reach out to the executive branch, either to communicate issues about AI risk or to try to get it to implement governance strategies, etc., seemed… not to engage with the reality of what having actual Donald Trump in power would mean in this respect? For example, they did not engage with the possibility of having David Sacks be the official US AI Czar and dismiss everything that’s not maximally supportive of AI and tech bros. Maybe AI governance people in their private conversations are adding in stuff like “and let’s make sure we personally give an expensive gift to Trump through his lackeys when we meet with the agency, otherwise we’ll be dismissed outright,” but I’m not seeing public acknowledgements of how to deal with Trump being the president from those whose plans and desires route through the US executive taking bold international action when it comes to AI.
Also, very many (definitely a majority of) users on the EA Forum, and even top brass at GiveWell, seemed shocked and entirely unprepared when USAID was shut down. I don’t have all the links handy right now, but this certainly seems to reflect a failure to predict what the Trump administration would do, even though Project 2025 talked a fair bit about how to restructure and crack down on USAID. Perhaps you wouldn’t consider the EA and rationality communities to be the same, but the overlap seems quite substantial to me.
Are you somehow implying the community isn’t extremely predominantly left? If I remember the stats correctly, for US rationalists, it’s like 60% Democrats, 30% libertarians, <10% Republicans. The reason why nobody wrote a “Rats for Harris” post is that it would be a very weird framing when the large majority of the community votes pretty stably Democratic.
Almost the entirety of my most recent comment is just about the “rationalists were/weren’t miscalibrated about the anti-intellectualism etc of the Trump campaign.”
Trump is good at making people see whatever they want to see in him, even if it is different things for different people. That’s what makes him a successful politician.
Many rationalists enjoy uncritical contrarianism: they say things that defy common sense to signal how much smarter they are, and even if that’s not the best way to make predictions, it is a way to occasionally make a weird prediction that turns out to be correct, so you can be proud of it and conveniently forget the many other similar predictions that turned out to be wrong.
So yeah, this is a bad combination, because no matter how much evidence we get, the game of pretending that everything Trump does is a 5D-chess move is too enjoyable. Trump does things; if some of them happen to be good, it is “I told you so”, and if some of them happen to be bad, it is “just wait, I am sure this is all a part of a greater plan”. But the only plan is to get more power for Trump; the consequences for the economy, society, education, science, etc. are mere side effects. Anyone who still doesn’t get it is too addicted to wishful thinking.
(I wonder about Project 2025. I don’t know the details, but it wouldn’t surprise me to find out that even its authors are disappointed by Trump. At least this review on the EA Forum sounds to me much smarter and more coherent than anything the Trump administration actually did.)