I’ve struggled to engage very productively with the rest of the AI safety community about politics—e.g. there are a lot of disagreeing votes and comments on this post. By and large people have been respectful and polite (as per the high upvote count), but right now it doesn’t feel like LW is a place where I can collaboratively make intellectual progress on this very important topic.
This is sad, so before I stop trying I want to attempt to go meta, by explaining how this seems analogous to the ways that the alignment community struggled to engage with mainstream ML researchers throughout the 2010s and early 2020s. My explanation of those dynamics ended up growing into this post; in this shortform I’ll discuss the analogy to politics specifically.
The main point is the following. With enough discussion, you could often get a mainstream ML researcher to admit that something like situational awareness or recursive self-improvement might in principle be possible. But it’s very hard to get them to take it seriously enough that it propagates through their ontology—and so after that one conversation they’d typically just go back to their standard ML research. This is for a bunch of reasons—in part because it’s genuinely hard to update one’s ontology, in part because their social incentives and identity push away from doing so, in part because they’re scared about the implications of propagating this belief.
Similarly, I think AI safety people (especially those in the LW cluster) are usually intellectually honest enough to be able to acknowledge that various heretical political beliefs might be true. However, there’s an additional step of propagating it through their ontology which typically doesn’t happen due to mental blocks.
For example, in this post Scott Alexander is willing to mention the possibility that heritable racial IQ gaps exist as one of four hypotheses for observed racial disparities. However, observe how he immediately distances himself from the position:
White people have average IQ 100, black people have average IQ 85, this IQ difference accurately predicts the different proportions of whites and blacks in most areas, most IQ differences within race are genetic, maybe across-race ones are genetic too. I love Hitler and want to marry him.
This is particularly striking because I’m pretty confident he himself believes some version of this hypothesis! So he’s basically calling himself a Nazi to defuse the discomfort of even raising a hypothesis this controversial (let alone endorsing it). This is objectively a very odd thing to do.
Now, Scott Alexander has historically been very brave in a bunch of ways that I wasn’t, so I don’t want to try to take any kind of moral high ground. I merely want to point out that it’s pretty obvious how this kind of mental block might make it hard to actually propagate your beliefs. Another example: when I talked to one of the most curious and polymathic people I know about a controversial topic, his immediate response was “but does that even matter?” On other topics, he would have followed his curiosity to play around with the ideas; on this one, he tried to block it off as quickly as possible.
I was in a pretty similar mindset for a long time, and required a strong sense of social safety before I managed to get myself out of it. My experience since then has been that, when you move from this kind of blocked partial acceptance of controversial beliefs, to a mindset where you’re actually able to follow them wherever they lead, there are a lot of important implications. I want to keep this post pretty meta so that more people feel comfortable engaging with it, but as one fairly milquetoast example: after seeing just how strong self-deception around taboos can be in humans, it seems pretty important to prevent anything similar from happening in AIs.
And just like with alignment, I think that all of this is giving us clues towards a whole new ontology that actually conveys very important principles about how the world works. Trying to do AI governance in the standard political ontology really feels to me like trying to do alignment research while stuck in the ML ontology. More on this in other posts, but for now I hope that this post helps other LWers better understand where I’m coming from.
It’s probably difficult to respond to this in a way that’s satisfying to you, because I and most other people are not paid independently to post on the internet, and so there are limits to what we can say in public. But every man under the age of 30 that I’ve ever met at Lighthaven, that I’ve had the opportunity to speak with privately, has completely and totally integrated IQ differences into their ontology. There’s no hemming and hawing; they’ve just accepted it as part of their worldview.
The reason I disagree-voted your post is not because it touches on ‘taboos’; it’s because it’s a vast inferential leap, developed piecemeal from assorted blogs and sociological studies you’ve gathered on the internet. Anybody who doesn’t share both your priors and your information feed almost exactly will naturally end up disagreeing with large portions of it, even if they agree with you that the cause of black poverty is genetics. And IMO for good reason, because large portions of the post are generated from claims like “Elites are pro-Hamas” that are just literally and obviously false, and that you’re treating as background knowledge I’m supposed to share.

Do you have a discussion about IQ differences with every man under 30 you meet at Lighthaven?
has completely and totally integrated IQ differences into their ontology. There’s no hemming and hawing; they’ve just accepted it as part of their worldview.
Just noting that these are different things, and I think Richard is attempting to point at the differences between them.
eg, he’s claiming that Scott probably believes that there are group differences in intelligence (it’s a part of his worldview), but is also flinching away from propagating all the implications.
(However, this doesn’t bear on the main thrust of your comment.)
eg, he’s claiming that Scott probably believes that there are group differences in intelligence (it’s a part of his worldview), but is also flinching away from propagating all the implications.
Right, and what I’m saying is that they’re very explicit about it and propagate the implications.
Some of these topics are just unsafe to openly discuss (both reputationally and partially because of externalities). In fact, in the post you’re referring to, you avoid making any direct or clear point about racial IQ differences or ethnonationalism (there’s a lot of “certain obvious facts” style phrasing) which means you also don’t include any convincing supporting evidence! This is certainly understandable, but it seems strange to blame the resulting unproductivity of the conversation mainly on lesswrong. The post is actually hard to productively engage with.
(Personally, I’d like to know what you’re actually saying and whether you’re right/wrong, but it seems hard to find out in the current epistemic environment, meaning the world not just lesswrong. Feel free to continue this conversation over DMs)
it seems strange to blame the resulting unproductivity of the conversation mainly on lesswrong
I didn’t intend to assign blame. If I had a different intellectual style (e.g. if I were more methodical about building up chains of logic) then I agree it’d be much easier for people to productively engage with me.
It seems to me like an important thing about individual taboo facts is that it’s not particularly advantageous to be correct on them, most of the time, because you can’t publicly work with others on the basis of those facts.
Are you arguing that there’s a political ontology which can only be understood by considering the fact that [redacted] is true, or one which can only be understood by considering the dynamics which lead to [redacted] being taboo despite being true? If it’s the latter, I can much more easily see how that ontology could be productive for acting in the world.
E.g. suppose you’re in Salem and know that there are no witches. You can gain some small benefit by e.g. not spending money on counter-curses, but that’s about it. Trying to spread and organize on the knowledge that there are no witches just gets you burned, but understanding why the trials are going on might actually help you survive.
It seems to me like an important thing about individual taboo facts is that it’s not particularly advantageous to be correct on them, most of the time, because you can’t publicly work with others on the basis of those facts.
Compared with most other facts that are similarly not widely believed, but are not taboo, it seems clear that taboo facts are far more advantageous to know. That’s because most uncontroversial not-widely-believed facts are very unimportant, while taboo facts tend to be highly important and have wide implications (just consider atheism, or belief in evolution, when those were highly taboo: it made everyone act on the assumption that they were false), even if it is often not easy to directly act on them. Individual knowledge of those facts is a necessary requirement for un-tabooing them and making them common knowledge.
I think you might be over-updating from your original post. You had a lot of somewhat unrelated and potentially politically sensitive statements (ethnonationalism, IQ, managerial class, ethics, government debt, taboos, egalitarianism, AI stuff). Even if one agrees with the majority of your points, it is tempting to agreement-downvote due to the minority, especially as those have high valence due to their sensitive nature.
I don’t think it’s specific to sensitive topics, Richard just does a lot of sloppy thinking when he tries to engage with politics. His post/talk on more mundane political topics also led to a lot of people on LW & the EA Forum pointing out things he got wrong.
For the record: I do agree that a bunch of my political thinking is sloppy. Right now it feels like I’m facing a tradeoff between speed of conceptual progress and precision of thinking, and I’m optimizing primarily for the former.
One reason I discussed the analogy to ML above is because I hoped it would help people understand why I’m making this tradeoff. For example, I suspect that many LWers remember their thinking about AGI being called sloppy by the mainstream ML community because it didn’t have equations. I think in hindsight it was the correct choice for LW to focus on this kind of “sloppy” exploratory thinking.
Having said that, it’s clearly possible to go too far in this direction, and I regret giving the EAG talk in particular. More generally, there’s a difference between doing sloppy thinking with intellectual collaborators vs broadcasting sloppy thinking to the world. Part of what I’m trying to figure out is the extent to which I should think of LW posts as the former vs the latter.
I regret giving the example of the disagree-votes, it’s not that important to me, and I agree there are all sorts of reasons you might want to disagree-vote my previous post. I’m trying to point at a broader dynamic (and elaborate more on it in this reply to Raemon).
I didn’t know we are allowed to discuss politics here. I thought that was banned? Sorry I didn’t see the previous post. Anyway, here are some intellectual contributions:
For much of the twentieth century, mass media was policed by the Fairness Doctrine, a policy that required media outlets to cover controversial issues in a balanced manner representing both sides of the argument. It was used against populist broadcasters like Carl McIntire (like a more Catholic version of Bill O’Reilly or Rush Limbaugh, and one who also opposed the Civil Rights Movement).

https://en.wikipedia.org/wiki/Fairness_doctrine

The fairness doctrine of the United States Federal Communications Commission (FCC), introduced in 1949, was a policy that required the holders of broadcast licenses both to present controversial issues of public importance and to do so in a manner that fairly reflected differing viewpoints. In 1987, the FCC abolished the fairness doctrine, prompting some to urge its reintroduction through either Commission policy or congressional legislation. The FCC removed the rule that implemented the policy from the Federal Register in August 2011.
Rush Limbaugh’s radio program went national shortly after the policy was abolished under Reagan in 1987:

https://en.wikipedia.org/wiki/Rush_Limbaugh

The FCC’s repeal of the fairness doctrine—which had required that stations provide free air time for responses to any controversial opinions that were broadcast—on August 5, 1987, meant stations could broadcast editorial commentary without having to present opposing views. Daniel Henninger wrote, in a Wall Street Journal editorial, “Ronald Reagan tore down this wall [the fairness doctrine] in 1987 … and Rush Limbaugh was the first man to proclaim himself liberated from the East Germany of liberal media domination.”
Media outlets have incentives to sensationalize the news to agree with their audiences’ preconceived notions, amplifying political polarization. There was a period in the 20th century when The New York Times brought the rigor of academic science to news reportage, literally hiring an astrophysicist as editor (Carr Van Anda, hired by Adolph Ochs soon after Ochs’s purchase of the NYT). But that was a lucky aberration, and it needed to be sustained for much of the twentieth century by the Fairness Doctrine (and by funding through advertisers interested in selling products, rather than subscriptions from readers who want to have their beliefs confirmed).
While attempts to stifle intellectual discussion of sensitive topics have long been present on the extreme political Left, the last fifteen years or so represent a sudden increase in the prevalence and force of these attempts. This Tablet article by Zach Goldberg documents statistical evidence of a cultural shift somewhere near the middle of the previous decade: usage of terms related to racism in major newspapers increased by as much as 1500% in this period. While Goldberg does not mention it in the article, the triggering events for this phenomenon seem to be the formation of Black Lives Matter and its first major actions in 2014. In particular, the Eric Garner incident took place in New York, and the New York Times, in addition to being highly influential, is one of the papers included in Goldberg’s analysis, which likely makes it the source of the spike in coverage and the larger cultural shift.

https://en.wikipedia.org/wiki/Black_Lives_Matter

In July 2013, the movement began with the use of the hashtag #BlackLivesMatter on social media after the acquittal of George Zimmerman in the shooting death of African-American teen Trayvon Martin 17 months earlier, in February 2012. The movement became nationally recognized for street demonstrations following the 2014 deaths of two African Americans: Michael Brown—resulting in protests and unrest in Ferguson, Missouri, a city near St. Louis—and Eric Garner in New York City.
James Bennet argues persuasively that this shift in news coverage was due to a change in the financial model of the NYT; specifically, that the shift to paywalled subscriptions demanded playing to the audience’s political leanings:

https://www.economist.com/1843/2023/12/14/when-the-new-york-times-lost-its-way

It became one of Dean Baquet’s frequent mordant jokes that he missed the old advertising-based business model, because, compared with subscribers, advertisers felt so much less sense of ownership over the journalism. I recall his astonishment, fairly early in the Trump administration, after Times reporters conducted an interview with Trump. Subscribers were angry about the questions the Times had asked. It was as if they’d only be satisfied, Baquet said, if the reporters leaped across the desk and tried to wring the president’s neck. The Times was slow to break it to its readers that there was less to Trump’s ties to Russia than they were hoping, and more to Hunter Biden’s laptop, that Trump might be right that covid came from a Chinese lab, that masks were not always effective against the virus, that shutting down schools for many months was a bad idea.
There are also evident cultural shifts with regard to free speech in the general population over this same period, which seem to be a result of this elite-led shift. The Tablet article by Goldberg linked above briefly mentions data from the General Social Survey (GSS), which gathers data on a set of questions measuring “Free Speech Values” among the American populace. The phenomenon seems to be limited to race issues: the GSS also collects data on free-speech tolerance of Communist, Militarist, Homosexual, and Muslim expression, and it is exclusively with tolerance of Racist expression that we see a notable decrease starting sometime between 2012 and 2014. It is also notable that this decrease in tolerance was especially pronounced among the most educated respondents. In 2012, 26% of college-educated respondents favored removing a racist book from libraries; the preference for removal peaked at 43% in 2022, and in 2024 (the most recent available data) it ticked back slightly to 41%.

https://gssdataexplorer.norc.org/trends

(The data is under the “free speech” subheading under “Civil Liberties”; you must use the drop-down menu to get to these questions. Direct links are unfortunately not possible on the contemporary GSS site. Of course this is also in the raw data.)
I’d also be remiss not to mention the association of wealth inequality with political extremism. There seems to be a regular historical coincidence between periods of high economic inequality and political extremism, such as the early part of the twentieth century: both communism and the KKK had periods of high popularity in early-20th-century America (and of course Europe had its own, more consequential extremism at that time). The mechanisms for this association are more difficult to trace than those for direct media regulation or behavior.
So my argument is that the recent change in political culture, away from the more openly liberal popular discussion of the mid-twentieth century, is due to the abolition of the Fairness Doctrine, which allowed media outlets to pander to their audiences’ worst, most self-absorbed flaws for profit. This directly led to a change in the wider culture. Furthermore, we should be concerned about economic inequality, as it is likely an important contributing factor to political extremism.
I didn’t know we are allowed to discuss politics here.
Politics in the American horse-race sense (for non-Americans, this means party competition between Republicans and Democrats), and “yay political ingroup, boo political outgroup” content in general, remains at least taboo, and rightly would be downvoted into oblivion even where it isn’t strictly banned.
But the fully general “politics is the mindkiller” blanket ban on even mentioning politically connected subjects hasn’t been true for years. Among other things, AI is now a highly political subject, but analyzing administrative actions surrounding it, policy consequences, even things like actual political campaigns aimed at legislation (like the vetoed California law) are completely fair game. Direct political science style questions have been popular for years.
At this point, I claim our norms are strong enough that political topics are fine, provided they otherwise follow the norms. I would still support the mods nuking any posts that come pre-mindkilled, of course.
I don’t see it happen often, which is probably a combination of pre-mindkilled people not being attracted to LessWrong, our reputation for being intolerant of it preceding us, and our mod team being very good.

Well that’s it. I guess I am making this a frontpage post now.

It isn’t banned, it’s discouraged. Politics is the Mind-Killer, so we try to avoid it.
What would it look like if you were getting the engagement you wanted?
I’m not sure how much you care about that vs “this seems like it highlights something wrong/sad with the LW community”, but, this part feels like an easier problem to solve.
The sort of thing I actually expect to work (for purposes of LW/the rationalsphere being a useful intellectual ground for you) is more like finding a few people who are on a similar page as you, and exploring the ideas proactively. So far I think your posts have felt like “okay, Richard is on some journey where he figures stuff out, and someday he might have more concrete takeaways, or I might find myself working on problems where I think his frame is helpful.”
Fwiw I have had some of your models in the background as I think about US politics right now. I see how, if one has those models, you might look at the situation fairly differently than most of my friends are looking at it. But, I think there are some details you believe that I don’t believe that lead to pretty different assessments on what to do. (idk if it makes sense to get into the object level here). I don’t know if you think your writing or linked references thus far should be sufficient to change my strategic frame.
Thanks for asking. I think the underlying issue here is that I’m in a period of boggling at wtf is going on with society. I have a sense that there’s a bunch of insane stuff happening all over the place. Funnily enough, one of the people who’s most sharply articulated a similar sense is Eliezer, when he wrote (I think in some glowfic) about how Earth is fractally disequilibriated, and the whole planet is made out of coordination failure.
But I think Eliezer and many rationalists maybe just take “the world is inadequate” as some kind of brute fact that doesn’t really have a clear socio-historical-political explanation. Like, we used to be able to do Manhattan projects, and now the US govt is nowhere near coordinated enough to do that, but… eh, that’s just how entropy works. Whereas it seems to me that actually it might be possible to trace the historical forces that contributed towards this, and the social principles that maintain it, and so on, to develop a fairly principled understanding of the situation.
However, this is a sufficiently ambitious project that my default strategy is to do a lot of exploration in a bunch of directions, which then leads to a lot of individual claims that people think are sloppy, which then leads to the kind of engagement that’s frustrating on both sides—where to them it feels like I’m just throwing out crazy takes, and to me it feels like they’re not trying to engage with the core ideas. (I don’t think these ideas should be sufficient to change people’s strategic frame, yet, but I do think they should be sufficient to make people confused.)
As one example: in response to this shortform, a bunch of people have commented about why they disagree-voted my previous post, how I should interpret that, and so on. But literally zero people have mentioned either my ML analogy, or the thing where Scott Alexander is calling himself a Nazi, which to me were the two most substantive parts of the shortform, that are pointing at an extremely important dynamic. So in hindsight it feels like even just mentioning my previous post derailed this one, and the move of going meta was insufficient to defuse this.
Basically I think the kind of engagement I want is more like “riff with me”, but that’s just unrealistic to expect from a community in public on controversial topics (at least without requiring me to put a level of care into phrasing things that would make it no longer “riffing” on my end).
Whereas it seems to me that actually it might be possible to trace the historical forces that contributed towards this, and the social principles that maintain it, and so on, to develop a fairly principled understanding of the situation.
As an example, I think the influence of the Soviet Union on the loss of American confidence / positive self-conception is underrated. (Like, I think it’s obvious to Europeans that WWI / WWII did a lot to destroy European confidence / positive self-conception, but I really don’t think it had the same impact on the US; our psychic collapse came much later and for different reasons.) This itself is hard to talk about because it’s deliberate enemy action, which includes attempts to disguise itself / prevent consensus-creation on its existence (and source). And, like, the USSR collapsed, so how much value is there in litigating the historical source rather than the current facts?
[IMO a nontrivial amount; I think there’s a correlated-updates thing where it’s worth invalidating the cache and recalculating a lot of things. But that recalculation is probably better done from the standpoint of a positive vision rather than a negative one, and that’s its own project...]
I think I’ve seen you in two modes around your more controversial opinions:
Demurring / ‘not wanting to get into it’ / ‘expecting to be attacked’
Aggressively shoe-horning in political examples without making their relevance over less-charged examples clear.
If you continue to hold this set of beliefs, my hope is that you come to feel less persecuted, such that you can unselfconsciously weave the true-to-you version of them into conversation, without drawing too much or too little attention.
However, I don’t know if I’m capable of perceiving a third thing, given my beliefs. Like, maybe there is no ‘sweet spot’ and it will always seem to me that you’re either being an evasive crypto fascist or awkwardly insisting on centering a -phobic/-ist line of argument, because my brain is broken. Afaict, this is what you believe about (people like) me, and I’m not really sure how to rule it out.
curious as to the strong negative response here; usually when I get downvoted a lot I kind of expect it ahead of time, but this one was surprising!
If anyone has guesses as to why this was so unpopular, I’d be interested to hear them.
Edit: the above comment was at −12 within 15 minutes of posting. My best guess now is 1 or 2 strong downvotes? Currently at −2. (I don’t care terminally; I just really don’t know if the comment is bad vs somehow narrowly offensive)
Edit 2: swung up to +17 (not totally sure that’s the exact number), and now down to +2. Glad to be controversial, I suppose. Renewing my bid for anyone to tell me why this comment would be divisive. Genuinely confused.

You pierced the veil. The iron law around here is not to mention the author’s intent/motivation, but only the content of their essay.

This framing is helpful.
I think the median position in rationalist circles is probably the following: There’s no reason to care about heritable IQ gaps, and good reason to not publicly discuss them. E.g., in this comment.
If one was to survey all the frontpage articles on LessWrong over the last 6-9 months, how many turn on the heritable IQ gap? Very few, as far as I can tell.
Showing I am wrong in this assessment (e.g., with a short post collecting 4-5 highly upvoted posts and showing how omitting heritable IQ from their world models has caused confusion or mistakes) is more likely to succeed than introducing it as a new current of debate.
Thanks, good comment. The quick low-effort version that doesn’t require actually writing the posts is that without taking heritable IQ into account, I think you will be confused about:
Various ways in which post-apartheid South Africa is a bad place to live.
Why so many countries have market-dominant minorities.
Why Israel is so good at defending itself even against far larger countries surrounding it (and the last few centuries of Jewish history more generally).
Why the growth curves for East Asia and Africa looked so different over the last century.
One of the big issues with IQ discussions is that you end up taking the public’s struggle with statistics, which is already terrible even when they’re trying, and then adding a whole bunch of bad-faith intention and discourse on top.
Like, this is just anecdotal, but the number of people I’ve seen in this and similar discussions seemingly unable to recognize that averages aren’t necessarily that meaningful to any individual experience is crazy. A group with an average IQ of 95 is still going to have lots of individuals within it that are over 100! If you take a random person from the average-95 group and a random person from the average-100 group, there’s a not-that-far-from-equal chance that the former person is higher than the latter. Less than 50/50, but it’s close (roughly 41%, if both groups are normal with an SD of 15). “Less than a coin flip” is not equivalent to rare.
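As a quick sanity check of that “not far from 50/50” claim, here’s a minimal sketch, assuming (hypothetically) that both groups are normally distributed with an SD of 15:

```python
from math import erf, sqrt

def p_first_exceeds_second(mean_a: float, mean_b: float, sd: float = 15.0) -> float:
    """P(A > B) for independent A ~ N(mean_a, sd^2) and B ~ N(mean_b, sd^2).

    A - B is normal with mean (mean_a - mean_b) and SD sd * sqrt(2),
    so P(A > B) is the standard normal CDF of (mean_a - mean_b) / (sd * sqrt(2)).
    """
    z = (mean_a - mean_b) / (sd * sqrt(2))
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF via erf

print(p_first_exceeds_second(95, 100))  # ~0.407, i.e. about 41%
```

So a 5-point gap in group means only shifts the head-to-head odds from 50% to about 41%.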
I’d still be interested to see your response to the question I asked! Like, more generally: if we’re going to discuss unwelcome features of what is, I’d like to see us establish some sort of way to have some confidence that we’re going in at all a similar direction about what should be.
I had trouble figuring out which part of your post was intended to be the main question. I’ve left a few comments responding to various parts of it.
Re what outcomes I’m aiming for, honestly a lot of what motivates my thinking is how much I care about cooperation. I just expect that for cooperation to work at large scales and over the long term, you need to do a bunch of exclusion/separation at smaller scales.

Why not just talk about this instead?
I had trouble figuring out which part of your post was intended to be the main question. I’ve left a few comments responding to various parts of it.
Appreciated! I have a longer comment replying to your reply over there I’ll send in about a day, after some editing to remove my signature word vomit style. It will be more specific than this comment.
a lot of what motivates my thinking is how much I care about cooperation
This is a promising thing to say! It somewhat reduces my probability that your followup statement is as concerning as it sounds.
you need to do a bunch of exclusion/separation at smaller scales.
I could buy this literal statement, probably in a different form than you mean it. I’ve thought for a long time that, if we get an aligned overwhelming superintelligence and can worry about mundane things like this, then people who want human monocultures (eg, a culture with only 6-fingered people, to make up an example) may need to move to isolated locations to get them, though that raises difficulties around ensuring newly made kids can give informed consent. And to spell it out, policies like that in the presence of an overwhelming superintelligence would look quite different from similar-sounding ones implemented today.
My impression is that a lot of current bad social situations have no separation-based winning moves available that don’t involve worsening conflict or screwing many people over at once. Moving people around is generally wildly expensive to do well (I imagine something like $10k/person, conditional on moving, is a lower bound on how expensive doing it morally might be), and thus has historically, basically universally, happened in ways that range from “kinda bad” to “top 5 worst things that have ever happened”. So, like, no relocation-based policy seems like it could be good.
I do think people who want to prevent that from happening again would be more successful in doing so if they’re able to discuss controversial topics without causing meltdowns.
On the thread-starter shortform: Right now, my impression is that politics is a pretty heavily loaded game in a way it wasn’t even recently, and that there are many otherwise ordinary things it’s not safe to say, to a much greater degree than was true two years ago. I’ve been avoiding politics topics in general, except for ones where I think it’s very important to make a comment because it’s unlikely anyone else tries to bridge worldviews in a way I’d like to; your comments on the topic have risen to that standard for me. I’ll have to think about whether there’s anything more specific I want to say.
but right now it doesn’t feel like LW is a place where I can collaboratively make intellectual progress on this very important topic.
Looking at the post in question, I think this was basically true, but not for the reason you seem to imply. On an ultra-short post about the decline of Western civilization using very high-level historical hot takes, ~no first-level comments cite non-obvious historical facts. Instead, it’s mostly vibes-based rationalism.
But to some extent, you do not seem to engage much with takes that do exhibit substance. E.g. there was one comment that pointed out that the IQ+race debate is pretty American and does not apply to Britain; similarly, there was a comment with a pretty detailed criticism of your description of the gold standard’s effect, but you explicitly chose not to engage with the object-level part of that comment and focused on the meta point of whether ethnonationalism is okay to discuss.
This is particularly striking because I’m pretty confident he himself believes some version of this hypothesis! So he’s basically calling himself a Nazi to defuse the discomfort of even raising a hypothesis this controversial (let alone endorsing it). This is objectively a very odd thing to do.
I think it’s quite plausible that he believes that half of the difference is genetic and thus he does not hold the position that most of the difference is genetic. I think he probably justified it to himself by saying that the claim “most” is more extreme than what he holds.
When it comes to the discourse around IQ the social norms seem to be strong enough that even Elon’s Grok says that there are no racial IQ differences.
By and large people have been respectful and polite (as per the high upvote count), but right now it doesn’t feel like LW is a place where I can collaboratively make intellectual progress on this very important topic.
Where do you plan to write about politics, etc. then?
But most of my collaboration on this stuff is via 1:1 discussions, reading groups, etc. I host a politics reading group which has been very productive for me (separate from the 21civ.com groups, which have also been interesting).
Yes, there is reticence about lots of topics, not just intelligence differences. But the fact that some people are unwilling to fully engage with the implications of that (I guess as an explanation of problems happening in society) does not make the point more valid.
Mental blocks generally happen when people avoid thinking about bad scenarios, when there is discomfort (e.g. not having enough money for X, or a plan going bad, or a bridge breaking, or social pressure). I guess the trick you pulled is removing some bad scenario, which allowed you to further explore certain topics; this does not mean you are immune from “bad” scenarios blocking you on that same topic (e.g. reverting to the mean of that topic).
As a personal observation, I keep noticing on this forum the frequency of your eloquent “Pindaric” flights to nudge conversation towards IQ. I wonder why. But I will refrain from commenting further, only suggesting that it might be good to travel around the world and to engage with real people.
I’ve struggled to engage very productively with the rest of the AI safety community about politics—e.g. there are a lot of disagreeing votes and comments on this post. By and large people have been respectful and polite (as per the high upvote count), but right now it doesn’t feel like LW is a place where I can collaboratively make intellectual progress on this very important topic.
This is sad, so before I stop trying I want to attempt to go meta, by explaining how this seems analogous to the ways that the alignment community struggled to engage with mainstream ML researchers throughout the 2010s and early 2020s. My explanation of those dynamics ended up growing into this post; in this shortform I’ll discuss the analogy to politics specifically.
The main point is the following. With enough discussion, you could often get a mainstream ML researcher to admit that something like situational awareness or recursive self-improvement might in principle be possible. But it’s very hard to get them to take it seriously enough that it propagates through their ontology—and so after that one conversation they’d typically just go back to their standard ML research. This is for a bunch of reasons—in part because it’s genuinely hard to update one’s ontology, in part because their social incentives and identity push away from doing so, in part because they’re scared about the implications of propagating this belief.
Similarly, I think AI safety people (especially those in the LW cluster) are usually intellectually honest enough to be able to acknowledge that various heretical political beliefs might be true. However, there’s an additional step of propagating it through their ontology which typically doesn’t happen due to mental blocks.
For example, in this post Scott Alexander is willing to mention the possibility that heritable racial IQ gaps exist as one of four hypotheses for observed racial disparities. However, observe how he immediately distances himself from the position:
This is particularly striking because I’m pretty confident he himself believes some version of this hypothesis! So he’s basically calling himself a Nazi to defuse the discomfort of even raising a hypothesis this controversial (let alone endorsing it). This is objectively a very odd thing to do.
Now, Scott Alexander has historically been very brave in a bunch of ways that I wasn’t, so I don’t want to try to take any kind of moral high ground. I merely want to point out that it’s pretty obvious how this kind of mental block might make it hard to actually propagate your beliefs. Another example: when I talked to one of the most curious and polymathic people I know about a controversial topic, his immediate response was “but does that even matter?” On other topics, he would have followed his curiosity to play around with the ideas; on this one, he tried to block it off as quickly as possible.
I was in a pretty similar mindset for a long time, and required a strong sense of social safety before I managed to get myself out of it. My experience since then has been that, when you move from this kind of blocked partial acceptance of controversial beliefs, to a mindset where you’re actually able to follow them wherever they lead, there are a lot of important implications. I want to keep this post pretty meta so that more people feel comfortable engaging with it, but as one fairly milquetoast example: after seeing just how strong self-deception around taboos can be in humans, it seems pretty important to prevent anything similar from happening in AIs.
And just like with alignment, I think that all of this is giving us clues towards a whole new ontology that actually conveys very important principles about how the world works. Trying to do AI governance in the standard political ontology really feels to me like trying to do alignment research while stuck in the ML ontology. More on this in other posts, but for now I hope that this post helps other LWers better understand where I’m coming from.
It’s probably difficult to respond to this in a way that’s satisfying to you, because I and most other people are not paid independently to post on the internet, and so there’s limits to what we can say in public. But every man under the age of 30 that I’ve ever met at Lighthaven, that I’ve had the opportunity to speak with privately, has completely and totally integrated IQ differences into their ontology. There’s no hem and hawing, they’ve just accepted it as part of their worldview.
The reason I disagree voted with your post is not because it touches on ‘taboos’, it’s because it’s a vast inferential leap, developed piecemeal from assorted blogs and sociological studies you’ve gathered on the internet. Anybody that doesn’t both share your priors and information feed almost exactly will naturally end up disagreeing with large portions of it, even if they agree with you that the cause of black poverty is genetics. And IMO for good reason, because large portions of the post are generated from claims like “Elites are pro-Hamas” that are just literally and obviously false, and that you’re treating as background knowledge that I’m supposed to share.
Do you have a discussion about iq differences with every man under 30 you meet at Lighthaven?
Just noting that these are different things, and I think Richard is attempting to point at the differences between them.
eg, he’s claiming that Scott probably believes that there are group differences in intelligence (it’s a part of his worldview), but is also flinching away from propagating all the implications.
(However this doesn’t bear on the main thrust of your comment.
Right, and what I’m saying is that they’re very explicit about it and propagate the implications.
Some of these topics are just unsafe to openly discuss (both reputationally and partially because of externalities). In fact, in the post you’re referring to, you avoid making any direct or clear point about racial IQ differences or ethnonationalism (there’s a lot of “certain obvious facts” style phrasing) which means you also don’t include any convincing supporting evidence! This is certainly understandable, but it seems strange to blame the resulting unproductivity of the conversation mainly on lesswrong. The post is actually hard to productively engage with.
(Personally, I’d like to know what you’re actually saying and whether you’re right/wrong, but it seems hard to find out in the current epistemic environment, meaning the world not just lesswrong. Feel free to continue this conversation over DMs)
I didn’t intend to assign blame. If I had a different intellectual style (e.g. if I were more methodical about building up chains of logic) then I agree it’d be much easier for people to productively engage with me.
It seems to me like an important thing about individual taboo facts is that it’s not particularly advantageous to be correct on them, most of the time, because you can’t publicly work with others on the basis of those facts.
Are you arguing that there’s a political ontology which can only be understood by considering the fact that [redacted] is true, or one which can only be understood by considering the dynamics which lead to [redacted] being taboo despite being true? If it’s the latter, I can much more easily see how that ontology could be productive for acting in the world.
E.g. suppose you’re in Salem and know that there are no witches. You can gain some small benefit by e.g. not spending money on counter-curses, but that’s about it. Trying to spread and organize on the knowledge that there are no witches just gets you burned, but understanding why the trials are going on might actually help you survive.
Compared with most other facts that are similarly not widely believed, but are not taboo, it seems clear that taboo facts are far more advantageous to know. That’s because most uncontroversial not-widely-believed facts are very unimportant, while taboo facts tend to be highly important and have wide implications (just consider atheism, or belief in evolution, when those where highly taboo: it made everyone act on the assumption that they were false), even if it is often not easy to directly act on them. Individual knowledge of those facts is a necessary requirement for un-tabooing them and making them common knowledge.
I think you might be over-updating from your original post. You had a lot of somewhat unrelated and potentially politically sensitive statements (ethnonationalism, IQ, managerial class, ethics, government debt, taboos, egalitarianism, AI stuff). Even if one agrees with the majority of your points, it is tempting to agreement-downvote due to the minority, especially as they have high valency due to sensitive nature.
I don’t think it’s specific to sensitive topics, Richard just does a lot of sloppy thinking when he tries to engage with politics. His post/talk on more mundane political topics also led to a lot of people on LW & the EA Forum pointing out things he got wrong.
For the record: I do agree that a bunch of my political thinking is sloppy. Right now it feels like I’m facing a tradeoff between speed of conceptual progress and precision of thinking, and I’m optimizing primarily for the former.
One reason I discussed the analogy to ML above is because I hoped it would help people understand why I’m making this tradeoff. For example, I suspect that many LWers remember their thinking about AGI being called sloppy by the mainstream ML community because it didn’t have equations. I think in hindsight it was the correct choice for LW to focus on this kind of “sloppy” exploratory thinking.
Having said that, it’s clearly possible to go too far in this direction, and I regret giving the EAG talk in particular. More generally, there’s a difference between doing sloppy thinking with intellectual collaborators vs broadcasting sloppy thinking to the world. Part of what I’m trying to figure out is the extent to which I should think of LW posts as the former vs the latter.
I regret giving the example of the disagree-votes, it’s not that important to me, and I agree there are all sorts of reasons you might want to disagree-vote my previous post. I’m trying to point at a broader dynamic (and elaborate more on it in this reply to Raemon).
I didn’t know we are allowed to discuss politics here. I thought that was banned? Sorry I didn’t see the previous post. Anyway, here are some intellectual contributions:
For much of the twentieth century mass media was policed by The Fairness Doctrine. The Fairness Doctrine was a policy that required media outlets to cover controversial issues in a balanced manner that represented both sides of the argument. This was used against populist broadcasters like Carl McIntire (like a more Catholic version of Bill O’Reilly or Rush Limbaugh who also opposed the Civil Rights Movement).
https://en.wikipedia.org/wiki/Fairness_doctrine
Rush Limbaugh’s radio program went national shortly after Reagan abolished the policy in 1987:
https://en.wikipedia.org/wiki/Rush_Limbaugh
Media outlets have incentives to sensationalize the news to agree with their audiences’ preconceived notions, amplifying political polarization. There was a period in the 20th century when The New York Times brought the rigor of academic science to news reportage by literally hiring an astrophysicist to be the editor (Carr Van Anda was hired by Adolf Ochs soon after Ochs’s purchase of the NYT), but that was a lucky aberration and needed to be sustained for much of the twentieth century by The Fairness Doctrine (and funding through advertisers interested in selling products rather than subscriptions from readers that want to have their beliefs confirmed).
While attempts to stifle intellectual discussion on sensitive topics has been present for a long time on the extreme political Left the last approximately fifteen years represent a sudden increase in the prevalence and force of these attempts. This Tablet article by Zach Goldberg documents statistical evidence of a cultural shift at some point near the middle of the previous decade. Usage of terms related to racism in major newspapers increased as much as 1500% in this period. While Goldberg does not mention it in the article, the triggering events for this phenomena in the data seem to definitely be the formation of Black Lives Matter and their first major actions in 2014. In particular, the fact that the Eric Garner incident took place in New York and the New York Times in addition to being highly influential is also one of the papers included in Goldberg’s analysis likely means this is the source of the spike in coverage and larger cultural shift.
https://en.wikipedia.org/wiki/Black_Lives_Matter
James Douglas Bennett argues persuasively that this shift in news coverage was due to a change in the financial model of the NYT, specifically that the shift to paywalled subscription demanded playing to the audience’s political leanings:
https://www.economist.com/1843/2023/12/14/when-the-new-york-times-lost-its-way
There are also evident cultural shifts with regards to free speech in the general population in this same time period which seem to be a result of this elite-led shift. The Tablet article by Goldberg linked above briefly mentions data from the General Social Survey (GSS). The GSS gathers data on a set of questions measuring “Free Speech Values” among the American populace. The phenomenon seems to be limited to race issues; the GSS also collects data on Free Speech tolerance of Communist, Militarist, Homosexual, and Muslim expression and it is exclusively with Free Speech tolerance of Racism that we see a notable decrease of tolerance starting sometime between 2012 and 2014. It is also notable that this decrease in tolerance was especially pronounced among the most educated respondents. In 2012 college educated respondents favored removal of a racist book from libraries at a rate of 26%, the preference for removal peaked in 2022 at 43% although in 2024 (the most recent available data) it slightly ticked back to 41%.
https://gssdataexplorer.norc.org/trends
(The data is under the “free speech” subheading under “Civil Liberties,” you must use the drop-down menu to get to these questions. Direct links are unfortunately not possible on the contemporary GSS site. Of course this is also in the raw data.)
I’d also be remiss if I don’t mention the association of wealth inequality with political extremism. There seems to be a regular historical coincidence between periods of high economic inequality and political extremism, such as the early part of the twentienth century. Both communism and the KKK had periods of high popularity in early 20th century America (and of course Europe had its own more consequential extremism at that time as well). The mechanisms for this association are more difficult to trace than for direct media regulation or behavior.
So my argument is that the change in political culture observed recently from the more openly liberal popular discussion of the mid-twentieth century is due to abolishing The Fairness Doctrine which allowed media outlets to pander to their audiences’ worst most self-absorbed flaws for profit. This directly leads to a change in wider culture. Furthermore, we should be concerned about economic inequality as this is likely an important contributing factor to political extremism.
Politics in the American political horse-race sense (for non-Americans this means political party competition between Republicans and Democrats) and of the “yay political ingroup, boo political outgroup” in general remains at least taboo, and rightly would be downvoted into oblivion even where it isn’t strictly banned.
But the fully general “politics is the mindkiller” blanket ban on even mentioning politically connected subjects hasn’t been true for years. Among other things, AI is now a highly political subject, but analyzing administrative actions surrounding it, policy consequences, even things like actual political campaigns aimed at legislation (like the vetoed California law) are completely fair game. Direct political science style questions have been popular for years.
At this point, I claim our norms are strong enough that political topics are fine, provided they otherwise follow the norms. I would still support the mods nuking any posts that come pre-mindkilled, of course.
I don’t see it happen often, which is probably a combination of pre-mindkilled people not being attracted to LessWrong, our reputation for being intolerant of it proceeding us, and our mod team being very good.
Well that’s it. I guess I am making this a frontpage post now.
It isn’t banned, it’s discouraged. Politics is the Mind-Killer so we try to avoid it.
What would it look like if you were getting the engagement you wanted?
I’m not sure how much you care about that vs “this seems like it highlights something wrong/sad with the LW community”, but, this part feels like an easier problem to solve.
The sort of thing I actually expect to work (for purposes of LW/rationalsphere being useful like a useful intellectual ground for you) is more like finding a few people who on a similar page as you, exploring the ideas proactively. So far I think your posts have felt like “okay, Richard is on some journey where he figures stuff out, and someday he might have more concrete takeaways or I might find myself working on problems where I think his frame is helpful.”
Fwiw I have had some of your models in the background as I think about US politics right now. I see how, if one has those models, you might look at the situation fairly differently than most of my friends are looking at it. But, I think there are some details you believe that I don’t believe that lead to pretty different assessments on what to do. (idk if it makes sense to get into the object level here). I don’t know if you think your writing or linked references thus far should be sufficient to change my strategic frame.
Thanks for asking. I think the underlying issue here is that I’m in a period of boggling at wtf is going on with society. I have a sense that there’s a bunch of insane stuff happening all over the place. Funnily enough, one of the people who’s most sharply articulated a similar sense is Eliezer, when he wrote (I think in some glowfic) about how Earth is fractally disequilibriated, and the whole planet is made out of coordination failure.
But I think Eliezer and many rationalists maybe just take “the world is inadequate” as some kind of brute fact that doesn’t really have a clear socio-historical-political explanation. Like, we used to be able to do Manhattan projects, and now the US govt is nowhere near coordinated enough to do that, but… eh, that’s just how entropy works. Whereas it seems to me that actually it might be possible to trace the historical forces that contributed towards this, and the social principles that maintain it, and so on, to develop a fairly principled understanding of the situation.
However, this is a sufficiently ambitious project that my default strategy is to do a lot of exploration in a bunch of directions, which then leads to a lot of individual claims that people think are sloppy, which then leads to the kind of engagement that’s frustrating on both sides—where to them it feels like I’m just throwing out crazy takes, and to me it feels like they’re not trying to engage with the core ideas. (I don’t think these ideas should be sufficient to change people’s strategic frame, yet, but I do think they should be sufficient to make people confused.)
As one example: in response to this shortform, a bunch of people have commented about why they disagree-voted my previous post, how I should interpret that, and so on. But literally zero people have mentioned either my ML analogy, or the thing where Scott Alexander is calling himself a Nazi, which to me were the two most substantive parts of the shortform, that are pointing at an extremely important dynamic. So in hindsight it feels like even just mentioning my previous post derailed this one, and the move of going meta was insufficient to defuse this.
Basically I think the kind of engagement I want is more like “riff with me”, but that’s just unrealistic to expect from a community in public on controversial topics (at least without requiring me to put a level of care into phrasing things that would make it no longer “riffing” on my end).
As an example, I think the influence of the Soviet Union is underrated on the loss of American confidence / positive-self-conception. (Like I think it’s obvious to Europeans that WWI / WWII did a lot to destroy European confidence / positive-self-conception, but I really don’t think it had the same impact on the US, and our psychic collapse came much later and for different reasons.) This itself is hard to talk about because it’s deliberate enemy action, which includes it attempting to disguise itself / prevent consensus-creation on its existence (and source). And, like, the USSR collapsed, so how much value is there in litigating the historical source rather than the current facts?
[IMO a nontrivial amount; I think there’s a correlated updates thing where it’s worth invalidating the cache and recalculating a lot of things. But that recalculation is probably better done from the standpoint of a positive vision rather than a negative one, and that’s it’s own project...]
I think I’ve seen you in two modes around your more controversial opinions:
Demurring / ‘not wanting to get into it’ / ‘expecting to be attacked’
Aggressively shoe-horning in political examples without making their relevance over less-charged examples clear.
If you continue to hold this set of beliefs, my hope is that you come to feel less persecuted, such that you can unselfconsciously weave the true-to-you version of them into conversation, without drawing too much or too little attention.
However, I don’t know if I’m capable of perceiving a third thing, given my beliefs. Like, maybe there is no ‘sweet spot’ and it will always seem to me that you’re either being an evasive crypto fascist or awkwardly insisting on centering a -phobic/-ist line of argument, because my brain is broken. Afaict, this is what you believe about (people like) me, and I’m not really sure how to rule it out.
curious as to the strong negative response here; usually when I get downvoted a lot I kind of expect it ahead of time, but this one was surprising!
If anyone has guesses as to why this was so unpopular, I’d be interested to hear them.
Edit: the above comment was at −12 within 15 minutes of posting. My best guess now is 1 or 2 strong downvotes? Currently at −2. (I don’t care terminally; I just really don’t know if the comment is bad vs somehow narrowly offensive)
Edit 2: swung up to +17 (not totally sure that’s the exact number), and now down to +2. Glad to be controversial, I suppose. Renewing my bid for anyone to tell me why this comment would be divisive. Genuinely confused.
You pierced the veil. The iron law around here is not to mention the author’s intent/motivation, but only the content of their essay.
This framing is helpful.
I think the median position in rationalist circles is probably the following: There’s no reason to care about heritable IQ gaps, and good reason to not publicly discuss them. E.g., in this comment.
If one were to survey all the frontpage articles on LessWrong over the last 6–9 months, how many turn on the heritable IQ gap? Very few, as far as I can tell.
Showing that I am wrong in this assessment (e.g., with a short post collecting 4–5 highly upvoted posts and showing how omitting heritable IQ from their world models has caused confusion or mistakes) is more likely to succeed than introducing it as a new current of debate.
Thanks, good comment. The quick low-effort version that doesn’t require actually writing the posts is that without taking heritable IQ into account, I think you will be confused about:
Various ways in which post-apartheid South Africa is a bad place to live.
Why so many countries have market-dominant minorities.
Why Israel is so good at defending itself even against far larger countries surrounding it (and the last few centuries of Jewish history more generally).
Why the growth curves for East Asia and Africa looked so different over the last century.
One of the big issues with IQ discussions is that you start from the public’s grasp of statistics, which is already terrible even when they’re trying, and then add a whole bunch of bad-faith intent and discourse on top.
Like, this is just anecdotal, but the number of people I’ve seen in this and similar discussions seemingly unable to recognize that averages aren’t necessarily that meaningful to any individual is crazy. A group with an average IQ of 95 is still going to have lots of individuals who are over 100! If you take a random person from the 95-average group and a random person from the 100-average group, the chance that the former scores higher than the latter is not far from even. Less than 50/50, but close. “Less than a coin flip” is not equivalent to rare.
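To put a rough number on that (a quick back-of-the-envelope check; the assumption that both groups are normally distributed with the conventional SD of 15 is mine, not anything established in this thread):

$$X \sim \mathcal{N}(95,\,15^2),\quad Y \sim \mathcal{N}(100,\,15^2) \;\Rightarrow\; X - Y \sim \mathcal{N}(-5,\,2\cdot 15^2),$$
$$P(X > Y) \;=\; 1 - \Phi\!\left(\frac{5}{15\sqrt{2}}\right) \;\approx\; 1 - \Phi(0.24) \;\approx\; 0.41.$$

So roughly a 41% chance that the person from the lower-average group scores higher: less than a coin flip, but nowhere near rare.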
I’d still be interested to see your response to the question I asked!
Like, more generally: if we’re going to discuss unwelcome features of what is, I’d like to see us establish some way to gain confidence that we’re headed in at least roughly the same direction about what should be.
I had trouble figuring out which part of your post was intended to be the main question. I’ve left a few comments responding to various parts of it.
Re what outcomes I’m aiming for, honestly a lot of what motivates my thinking is how much I care about cooperation. I just expect that for cooperation to work at large scales and over the long term, you need to do a bunch of exclusion/separation at smaller scales.
Why not just talk about this instead?
Appreciated! I have a longer comment replying to your reply over there, which I’ll send in about a day, after some editing to remove my signature word-vomit style. It will be more specific than this comment.
This is a promising thing to say! It somewhat reduces my probability that your followup statement is quite as concerning as it sounds.
I could buy this literal statement, probably in a different form than you mean it. I’ve thought for a long time that, if we get an aligned overwhelming superintelligence and can worry about mundane things like this, then people who want human monocultures (a culture with only six-fingered people, to make up an example) may need to move to isolated locations to get them, though that raises difficulties around ensuring newly made kids can give informed consent. And to spell it out, policies like that in the presence of an overwhelming superintelligence would look quite different from similar-sounding ones implemented today.
My impression is that a lot of current bad social situations have no separation-based winning moves available that don’t involve worsening conflict or screwing many people over at once. Moving people around is generally wildly expensive to do well (I’d guess something like $10k/person, conditional on the move happening, is a lower bound on how expensive doing it morally might be), and so historically it has almost universally happened in ways that range from “kinda bad” to “top-5 worst things that have ever happened”. So, like, no relocation-based policy seems like it could be good.
I do think people who want to prevent that from happening again would be more successful in doing so if they’re able to discuss controversial topics without causing meltdowns.
On the thread-starter shortform: right now, my impression is that politics is a pretty heavily loaded game in a way it wasn’t even recently, and that there are many otherwise ordinary things it’s not safe to say, to a much greater degree than was true two years ago. I’ve been avoiding politics topics in general, except for ones where I think it’s very important to comment because it’s unlikely anyone else will try to bridge worldviews in the way I’d like; your comments on the topic have risen to that standard for me. I’ll have to think about whether there’s anything more specific I want to say.
Looking at the post in question, I think this was basically true, but not for the reason you seem to imply. On an ultra-short post about the decline of Western civilization built on very high-level historical hot takes, ~no first-level comments cite non-obvious historical facts. Instead, it’s mostly vibes-based rationalism.
But to some extent, you do not seem to engage much with takes that do exhibit substance. E.g., there was one comment pointing out that the IQ+race debate is pretty American and does not apply to Britain; similarly, there was a comment with a pretty detailed criticism of your description of the gold standard’s effect, but you explicitly chose not to engage with the object-level part of that comment and focused on the meta point of whether ethnonationalism is okay to discuss.
I think it’s quite plausible that he believes half of the difference is genetic, and thus does not hold the position that most of it is. He probably justified it to himself by saying that the claim “most” is more extreme than what he actually holds.
When it comes to the discourse around IQ, the social norms seem to be strong enough that even Elon’s Grok says that there are no racial IQ differences.
Where do you plan to write about politics, etc. then?
Twitter + my blog (mindthefuture.info).
But most of my collaboration on this stuff is via 1:1 discussions, reading groups, etc. I host a politics reading group which has been very productive for me (separate from the 21civ.com groups, which have also been interesting).
Yes, there is reticence about lots of topics, not just intelligence differences. But the fact that some people are unwilling to fully engage with its implications (as an explanation of problems happening in society, I take it) does not make that point any more valid.
Mental blocks generally happen when people avoid thinking about bad scenarios, i.e. when there is discomfort (e.g. not having enough money for X, a plan going bad, a bridge breaking, or social pressure). I guess the trick you pulled off was removing some bad scenario, which allowed you to explore certain topics further; this does not mean you are immune to ‘bad’ scenarios blocking you on that same topic later (e.g. reverting to the mean on that topic).
As a personal observation, I keep noticing on this forum how frequently your eloquent “Pindaric” flights nudge the conversation towards IQ. I wonder why. But I will refrain from commenting further, only suggesting that it might be good to travel around the world and engage with real people.