Book Review: How Minds Change

In 2009, Eliezer Yudkowsky published Raising the Sanity Waterline. It was the first article in his Craft and the Community sequence, which is about the rationality movement itself, and it served as something of a mission statement. The rough thesis behind the article—really, the thesis behind the entire rationalist movement—can be paraphrased as something like this:

We currently live in a world where even the smartest people believe plainly untrue things. Religion is a prime example: its supernatural claims are patently untrue, and yet a huge number of people at the top of our institutions—scholars, scientists, leaders—believe otherwise.

But religion is just a symptom. The real problem is humanity’s lack of rationalist skills. We have bad epistemology, bad meta-ethics, and we don’t update our beliefs based on evidence. If we don’t master these skills, we’re doomed to replace religion with something just as ridiculous.

We have to learn these skills, hone them, and teach them to others, so that people can make accurate decisions and predictions about the world without getting caught up in the fallacies so typical of human reasoning.

The callout of religion dates the piece: it comes from the era when the early English-speaking internet was a battlefield between atheism and religion. Religion has slowly receded from public life since then, but the rationality community stuck around, in places like this site, SSC/ACX, and the Effective Altruism community.

I hope you’ll excuse me, then, if I say that the rationalist community has been a failure.

Sorry! Put down your pitchforks. That’s not entirely true. There’s a very real sense in which it’s been a success. The community has spread and expanded to immense levels. Billions of dollars flow through Effective Altruist organizations to worthy causes. Rationalist and rationalist-adjacent people have written several important and influential books. And pockets of the Bay Area and other major cities have self-sustaining rationalist social circles, filled with amazing people doing ambitious and interesting things.

But that wasn’t the point of the community. At least not the entire point. From Less Wrong’s account of its own history:

After failed attempts at teaching people to use Bayes’ Theorem, [Yudkowsky] went largely quiet from [his transhumanist mailing list] to work on AI safety research directly. After discovering he was not able to make as much progress as he wanted to, he changed tack to focus on teaching the rationality skills necessary to do AI safety research until such time as there was a sustainable culture that would allow him to focus on AI safety research while also continuing to find and train new AI safety researchers.

In short: the rationalist community was intended as a way of preventing the rise of unfriendly AI.

The results on this goal have been mixed, to say the least.

The ideas of AI Safety have made their way out there. Many people who are into AI have heard of ideas like the paperclip maximizer. Several AI Safety organizations have been founded, and a nontrivial chunk of Effective Altruists are actively trying to tackle this problem.

But the increased promulgation of the idea of transformational AI also caught the eye of some powerful and rich people, some of whom proceeded to found OpenAI. Most people of a Yudkowskian bent consider this a major “own goal”: although it’s good to have one of the world’s leading AI labs be a sort-of-non-profit that openly says it cares about AI Safety, they’ve also created a race to AGI, accelerating AI timelines like never before.

https://twitter.com/ESYudkowsky/status/1446562238848847877

And it’s not just AI. Outside of the rationalist community, the sanity waterline hasn’t gotten much better. Sure, religion has retreated, but just as predicted, it’s been replaced by things that are at least as ridiculous, if not worse. Politics, both in the US and abroad, has gone insane and become more polarized than ever. Arguments are soldiers and debate is war. Worse, unlike religions, which are explicitly beliefs about the supernatural, these are beliefs about reality and the natural world, held with about as much rigor as religious dogma. I’ll leave it up to you to decide which outgroup belief you think this is a dogwhistle for.

This isn’t where the community is supposed to have ended up. If rationality is systematized winning, then the community has failed to be rational.


How did it end up here? How could the community go so well in some ways, but so poorly in others?

I think the answer is that the community was successful by means of selection. The kind of people who flocked to the community had high intellectual curiosity, were willing to tolerate weird ideas, and, if I had to guess, maybe had trouble finding like-minded people who’d understand them outside the community.

Surveys on Less Wrong put the average IQ at 138, and even the 2022 ACX survey had an average IQ of 137 (among people who filled that question out). That’s higher than 99.3% of the world population. Even if you, like Scott Alexander, assume that’s inflated by about 10–15 points, it’s still higher than around 95% of the population. Having read Yudkowsky’s writing, I’m not surprised: I consider myself smart (who doesn’t?), but his writing is dense in a way that sometimes makes it hard even for me to grasp what he’s saying until I’ve read it several times.
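(For the curious, here is the back-of-the-envelope math as a quick Python sketch. It assumes IQ follows the textbook Normal(100, 15) distribution, which is roughly how percentile figures like these are usually derived; the surveys themselves may have used slightly different norms.)

```python
# Rough percentile check, assuming IQ ~ Normal(mean=100, sd=15).
# These are my own back-of-the-envelope numbers, not the surveys' methodology.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

print(f"IQ 137: higher than {iq.cdf(137):.1%} of the population")  # ~99.3%
print(f"IQ 138: higher than {iq.cdf(138):.1%}")                    # ~99.4%
print(f"IQ 128 (deflated by 10 points): {iq.cdf(128):.1%}")        # ~96.9%
print(f"IQ 123 (deflated by 15 points): {iq.cdf(123):.1%}")        # ~93.7%
```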

All this led to a community which is influential but insular. Its communication is laser-focused on a particular kind of person, and it has been extraordinarily successful at getting that kind of person on board, while leaving the rest of the world behind.

Ideas have filtered out, sure, but they get distorted along the way, and it’s happening more slowly than is needed to make progress on AI Safety. Some people are better than others at getting ideas out of the bubble, like Scott Alexander, but it’s worth remembering that he’s kept himself grounded among more average people via his day job in psychiatry, and that he spent his teenage years learning how to manipulate consensus reality.

Not only that, but anecdotally it feels like the insularity has exacerbated bad tendencies in some people. They’ve found their tribe, and they feel that talking to people outside it is pointless. Even in my minimal IRL interactions with the community, I’ve heard people deriding “normies”, or recklessly spending their weirdness points in a way that throws up red flags for anyone who’s worried that they may have stumbled into a cult.

On the AI front, I’ve seen some people, including one person who works as a career advisor at 80,000 Hours, assume that most of the important people in AI capabilities research either already understand the arguments made by Yudkowsky et al., or wouldn’t be receptive to them anyway, and thus that there’s minimal point in trying to reach or convince them.

Are they right? Are most people beyond hope? Is all this pointless?


David McRaney started out as a blogger in 2009, the same year LessWrong was founded.

Although he doesn’t (to my knowledge) consider himself a part of the rationalist community, his blog, You Are Not So Smart, focused on a sort of pop-culture form of some of the same ideas that were floating around Less Wrong and related communities. The thrust of it was to highlight a bunch of ways in which your brain can fool itself and act irrationally, everything from confirmation bias, to deindividuation in crowds and riots, to procrastination, to inattentional blindness and the invisible gorilla.

In 2011, he collected much of this into a book of the same name. In 2012, his blog became a podcast that’s been running continuously ever since. In 2014, he published a follow-up book titled You Are Now Less Dumb. But in those early years, the thrust of it remained more or less the same: people are dumb in some very specific and consistent ways, including you. Let’s have some fun pointing all this out, and maybe learn something in the process.

He didn’t think people could be convinced. He’d grown up in Mississippi prior to the rise of the internet, where, in his words:

The people in movies and television shows seemed to routinely disagree with the adults who told us the South would rise again, homosexuality was a sin, and evolution was just a theory. Our families seemed stuck in another era. Whether the issue was a scientific fact, a social norm, or a political stance, the things that seemed obviously true to my friends, the ideas reaching us from far away, created a friction in our home lives and on the holidays that most of us learned to avoid. There was no point in trying to change some people’s minds.

And then he learned about Charlie Veitch.

Charlie was one of several 9/11 truthers—people who thought that the US government had planned and carried out the September 11th attacks—starring in a BBC documentary called Conspiracy Road Trip. He wasn’t just a conspiracy theorist: he was a professional conspiracy theorist, famous online among the conspiracy community.

The premise of the BBC show was to take a handful of conspiracy theorists and bring them to see various experts around the world who would patiently answer their questions, giving them information and responding to every accusation that the conspiracy theorists threw at them.

For Charlie and the other 9/11 truthers, the show put them in front of experts in demolition and officials from the Pentagon, and even sat them in a commercial flight simulator. Still, at the end of the show, every single one of the conspiracy theorists had dismissed everything the experts had told them, and refused to admit they were wrong.

Except Charlie.

Charlie was convinced, and admitted to the BBC that he had changed his mind.

This wasn’t without cost. After the episode aired, Charlie became a pariah in the conspiracy community. People attacked him and his family, accusing him of being paid off by the BBC and the FBI. Someone made a YouTube channel called “Kill Charlie Veitch”. Someone else found pictures of his two young nieces and photoshopped nudity onto them. Alex Jones claimed Charlie had been a double agent all along, and urged his fans to stay vigilant.

Charlie kept to his newfound truth, and left the community for good.

David did not understand why Charlie had changed his mind. As he writes in the book:

For one thing, from writing my previous books, I knew the idea that facts alone could make everyone see things the same way was a venerable misconception. […]

In science communication, this used to be called the information deficit model, long debated among frustrated academics. When controversial research findings on everything from the theory of evolution to the dangers of leaded gasoline failed to convince the public, they’d contemplate how to best tweak the model so the facts could speak for themselves. But once independent websites, then social media, then podcasts, then YouTube began to speak for the facts and undermine the authority of fact-based professionals like journalists, doctors, and documentary filmmakers, the information deficit model was finally put to rest. In recent years, that has led to a sort of moral panic.

As described in the book, this set David on a new path of research. He infiltrated the hateful Westboro Baptist Church to attend a sermon, and later talked to an ex-member. He interviewed a Flat Earther live on stage. He even met with Charlie Veitch himself.

At the same time, he started interviewing various people who were trying to figure out what caused people to change their minds, as well as experts who were investigating why disagreements happen in the first place, and why attempts to change people’s minds sometimes backfire.

He documented a lot of this in his podcast. Ultimately, he wrapped this all up in a book published just last year, How Minds Change.


I hesitate to summarize the book too much. I’m concerned about oversimplifying the points it’s trying to make, and leaving you with the false sense that you’ve gotten a magic bullet for convincing anyone of anything you believe in.

If you’re interested in this at all, you should just read the whole thing. (Or listen to the audiobook, narrated by David himself, where he puts the skills he’s developed over years of hosting his podcast to good use.) It’s a reasonably enjoyable read, and David does a good job of basing each chapter around a story, interview, or personal experience of his to help frame the science.

But I’ll give you a taste of some of what’s inside.


He talks to neuroscientists from NYU about how people end up with different beliefs in the first place. The neuroscientists tie it back to “The Dress”, pictured above. Is it blue and black, or is it white and gold? There’s a boring factual answer to that question, which is “it’s blue and black in real life”, but the more interesting question is why people disagreed about the colors in the picture in the first place.

The answer is that people have different priors. If you spend a lot of time indoors, where lighting tends to be yellow, you’d likely see the dress as black and blue. If you spend more time outdoors, where lighting is bluer, you’d be more likely to see the dress as white and gold. The image left it ambiguous as to what the lighting actually was, so people interpreted the same factual evidence in different ways.

The researchers figured this out so well that they were able to replicate the effects in a new image. Are the crocs below gray, and the socks green? Or are the crocs pink, and the socks white, but illuminated by green light? Which one you perceive depends on how used you are to white tube socks!

This generalizes beyond just images: every single piece of factual information you consume is interpreted in light of your life experience and the priors encoded from it. If your experience differs from someone else’s, it can seem like they’re making a huge mistake. Maybe even a mistake so obvious that it can only be a malicious lie!
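If you like seeing that in Bayesian terms, here is a toy sketch of the idea (my own illustration, not the book’s, and every number in it is made up): two observers apply the same likelihoods to the same ambiguous pixels, but because they hold different priors about the lighting, they reach opposite conclusions.

```python
# Toy Bayesian model of "The Dress" (my own illustration; all numbers invented).
# Hypothesis A: a blue/black dress under yellowish indoor light.
# Hypothesis B: a white/gold dress under bluish daylight.
# The image explains both about equally well, so the posterior is driven
# almost entirely by each viewer's prior over the lighting.

def p_blue_black(prior_indoor: float) -> float:
    """P(blue/black dress | pixels), given a prior on yellowish indoor lighting."""
    likelihood_a = 0.9  # how well hypothesis A explains the ambiguous pixels
    likelihood_b = 0.9  # how well hypothesis B explains them (just as well)
    joint_a = likelihood_a * prior_indoor
    joint_b = likelihood_b * (1.0 - prior_indoor)
    return joint_a / (joint_a + joint_b)

print(f"Mostly-indoors viewer  (prior 80% indoor light): {p_blue_black(0.8):.0%} blue/black")
print(f"Mostly-outdoors viewer (prior 20% indoor light): {p_blue_black(0.2):.0%} blue/black")
```

Same pixels, same update rule, opposite verdicts: the disagreement lives entirely in the priors.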


Then, he explores what causes people to defend incorrect beliefs. This probably won’t surprise you, but it has to do with how much of their life, social circle, and identity is riding on those beliefs. The more those are connected, and the more they’re perceived as being under threat, the more strongly people will be motivated to avoid changing their minds.

As one expert quoted in the book said, when they put people in an MRI and then challenged them on a political wedge issue, “The response in the brain that we see is very similar to what would happen if, say, you were walking through the forest and came across a bear.”

For a slight spoiler, this is what happened to Charlie Veitch. He’d started getting involved in an unrelated new-age-spiritualist sort of social scene, which meant dropping 9/11 trutherism was much less costly to him.

This is also what happened to the ex-Westboro members he talked to: they left the church for a variety of reasons first, established social connections and communication with those outside, and only then changed their minds about key church doctrine like attitudes towards LGBT folks.

To put it in Yudkowskian terms, these were people who, for one reason or another, had a Line of Retreat.


Then he talks about the mechanics of debate and persuasion.

He dives into research showing that people are way better at picking apart other people’s reasoning than their own. This suggests that reasoning can be approached as a social endeavor. Other people Babble arguments, and you Prune them, or vice versa.

He then interviews some experts about the “Elaboration Likelihood Model”. It describes two modes of persuasion:

  1. The “Peripheral Route”, in which people don’t think too hard about an argument and instead base their attitudes on quick, gut feelings like whether the person is attractive, famous, or confident-sounding.

  2. The “Central Route”, which involves a much slower and more effortful evaluation of the content of the argument.

To me, this sounds a lot like System 1 and System 2 thinking. But the catch is that it takes motivation to get people to expend the effort to move to the Central Route. Nobody’s ever motivated to think too hard about ads, for example, so they almost always get processed via the Peripheral Route. But if you’re actually trying to change people’s minds, especially on something they hold dear, you need to get them to the Central Route.


Then there are the most valuable parts of the book: techniques you can employ to do this for people in your own life.

David interviews three different groups of people who independently pioneered three surprisingly similar techniques:

  • Deep Canvassing is a technique pioneered by the Los Angeles LGBT Center. Volunteers developed it as a way to do door-to-door canvassing in support of LGBT rights and same-sex marriage in the wake of California’s Prop 8, the 2008 constitutional amendment that banned same-sex marriage.

  • Street Epistemology is an outgrowth of the New Atheist movement, whose adherents went from trying to convince people that religion is false to a much more generalized approach: helping people explore the underpinnings of any of their beliefs and whether those underpinnings are solid.

  • Smart Politics is a technique developed by Dr. Karin Tamerius of the progressive-advocacy organization of the same name, based on a therapy technique called Motivational Interviewing.

(Sorry, it doesn’t look like the conservatives have caught on to this kind of approach yet.)

The key ingredients in all these techniques are kindness, patience, empathy, and humility. They’re all meant to be done in a one-on-one, real-time conversation. Each of them has some differences in focus:

Among the persuasive techniques that depend on technique rebuttal, street epistemology seems best suited for beliefs in empirical matters like whether ghosts are real or airplanes are spreading mind control agents in their chemtrails. Deep canvassing is best suited for attitudes, emotional evaluations that guide our pursuit of confirmatory evidence, like a CEO is a bad person or a particular policy will ruin the country. Smart Politics is best suited for values, the hierarchy of goals we consider most important, like gun control or immigration reform. And motivational interviewing is best suited for motivating people to change behaviors, like getting vaccinated to help end a pandemic or recycling your garbage to help stave off climate change.

But all of them follow a similar structure. Roughly speaking, it’s something like this:

  1. Establish rapport. Make the other person feel comfortable, assure them you’re not out to shame or attack them, and ask for consent to work through their reasoning.

  2. Ask them for their belief. For best results, ask them for a confidence level from one to ten. Repeat it back to them in your own words to confirm you understand what they’re saying. Clarify any terms they’re using.

  3. Ask them why they believe what they do. Why isn’t their confidence lower or higher?

  4. Repeat their reasons back to them in your own words. Check that you’ve done a good job summarizing.

  5. Continue like this for as long as you like. Listen, summarize, repeat.

  6. Wrap up, thank them for their time, and wish them well. Or, suggest that you can continue the conversation later.

Each technique has slightly different steps, which you can find in full in Chapter 9.

David also suggests adding a “Step 0” above all those, which is to ask yourself why you want to change this person’s mind. What are your goals? Is it for a good reason?

The most important step is step 1, establishing rapport. If they say their confidence level is a one or a ten, that’s a red flag that they’re feeling threatened or uncomfortable and you should probably take a step back and help them relax. If done right, this should feel more like therapy than debate.

Why is this so important? Partly to leave them a line of retreat. If you can make them your friend, if you can show them that, yes, even if they abandon this belief, there is still room for them to be accepted and respected for who they are, then they’ll have much less reason to hold on to their previous beliefs.


Several things about these techniques are exciting to me.

The first is that they’ve been shown to work. Deep Canvassing in particular was studied academically by the political scientists David Broockman and Joshua Kalla in Miami. The results, even with inexperienced canvassers and ten-minute conversations:

When it was all said and done, the overall shift Broockman and Kalla measured in Miami was greater than “the opinion change that occurred from 1998 to 2012 towards gay men and lesbians in the United States.” In one conversation, one in ten people opposed to transgender rights changed their views, and on average, they changed that view by 10 points on a 101-point “feelings thermometer,” as they called it, catching up to and surpassing the shift that had taken place in the general public over the last fourteen years.

If one in ten doesn’t sound like much, you’re neither a politician nor a political scientist. It is huge. And before this research, after a single conversation, it was inconceivable. Kalla said a mind change of much less than that could easily rewrite laws, win a swing state, or turn the tide of an election.

The second is that in these techniques, people have to persuade themselves. You don’t just dump a bunch of information on people and expect them to take it in: instead, the techniques are all built around Socratic questioning.

This makes the whole conversation much more of a give-and-take: because you’re asking people to explain their reasoning, you can’t perform these techniques without letting them make their arguments to you. That means they have a chance to persuade you back!

This makes these techniques an asymmetric weapon in the pursuit of truth. You have to accept that you might be wrong, and that you’ll be the one whose opinion is shifted instead. You should embrace this, because that which can be destroyed by the truth, should be!

The third is that the people using these techniques have skin in the game.

These weren’t developed by armchair academics in a lab. They were developed by LGBT activists out in the hot California sun desperately trying to advocate for their rights, and by New Atheists doing interviews with theists that were livestreamed to the whole internet. If their techniques didn’t work, they’d notice, the same way a bodybuilder might notice their workouts aren’t helping them gain muscle.

And the last is the potential for driving social change.

The last chapter in the book talks about how individual mind changes turn into broader social change. It’s not as gradual as you’d think: adoption of a belief seems to hit a tipping point, after which the belief cascades and becomes the norm.
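To get a feel for how that kind of cascade can work, here is a minimal threshold-model simulation in the spirit of Granovetter’s classic model of collective behavior (my own sketch, not anything from the book; the threshold distribution is invented): each person adopts a belief once enough people around them have, and whether the whole population tips depends on whether the early adopters clear a critical mass.

```python
# Toy threshold-cascade simulation (my own illustration, not the book's model).
# Each person has a personal threshold: the fraction of adopters they need to
# see before adopting the belief themselves.
import random

def final_adoption(thresholds, seed_fraction):
    """Fraction of the population that eventually adopts, given initial adopters."""
    n = len(thresholds)
    adopted = int(n * seed_fraction)  # people persuaded one conversation at a time
    while True:
        convinced = sum(1 for t in thresholds if t <= adopted / n)
        convinced = max(convinced, adopted)  # nobody un-adopts in this toy model
        if convinced == adopted:
            return adopted / n
        adopted = convinced

random.seed(0)
# Invented assumption: most people need to see roughly 30% adoption first.
thresholds = [random.gauss(0.30, 0.10) for _ in range(10_000)]

for seed in (0.05, 0.15, 0.25):
    print(f"{seed:.0%} early adopters -> {final_adoption(thresholds, seed):.0%} eventual adoption")
```

With these made-up thresholds, the tipping point sits somewhere between 15% and 25%: below it, one-on-one persuasion only nudges the margins; above it, the same effort tips nearly the whole population.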


As you can probably tell, I’ve been a long-time fan of David’s from back in the early days. As a result, it’s a bit hard for me to do an unbiased review. I don’t want to oversell this book: he’s a journalist, not a psychology expert, and he relies on second-hand knowledge from those he interviews. Similarly, I’m not a psychology expert myself, so it’s hard for me to spot-check the specific neurological and psychological details he brings up.

But I still think this would be valuable for more people in the rationalist community to read. David isn’t as smart as Yudkowsky—few people are—but I do think he caught on to some things that Yudkowsky missed.

There’s an undercurrent in Yudkowsky’s writing where I feel like he underestimates more average people, as if everyone from the middle of the IQ bell curve on leftward blurs together for him. Take this excerpt from his famous (fan)fiction, Harry Potter and the Methods of Rationality (Ch. 90):

“That’s what I’d tell you if I thought you could be responsible for anything. But normal people don’t choose on the basis of consequences, they just play roles. […] People like you aren’t responsible for anything, people like me are, and when we fail there’s no one else to blame.”

I know Yudkowsky doesn’t necessarily endorse everything his fictional characters say, but I can’t help but wonder whether that’s an attitude he still keeps beneath the surface.

It’s not just his fiction. Recently he went on what he thought was a low-stakes crypto podcast and was surprised that the hosts wanted to actually hear him out when he said we were all going to die soon:

https://twitter.com/ESYudkowsky/status/1632140760828235777

The other thing that seemed to surprise him was the popularity of the episode. This led to him doing more podcasts and more press, including a scathing op-ed in Time magazine that’s been getting a lot of attention lately.

I’m sorry to do this to you, Eliezer, but in your own words, surprise is a measure of a poor hypothesis. I think you have some flaws in your model of the public.

In short: I don’t believe that the average AI researcher, let alone the public at large, fully grasps the arguments for near-to-mid-term AI extinction risk (e.g. within the next 10–50 years). I also don’t think that trying to reach them is hopeless. You just have to do it right.

This isn’t just based on hearsay, either. I’ve personally tried these techniques on a small number of AI capabilities researchers I’ve gotten to know. Results have been somewhat mixed, but not as universally negative as you’d expect. I haven’t gotten to do much of it yet—there’s only so much emotional energy I can spend meditating on the non-trivial chance that neither I nor anyone I know will have our children live to adulthood—but at least one person was grateful for the time and space I gave them to think this kind of stuff through, time they wouldn’t otherwise get, and came out mostly agreeing with me.

The book, and my own experiences, have started to make me think that rationality isn’t a solo sport. Maybe Eliezer and people like him can sit down alone and end up with new conclusions. But most of the rest of the world benefits from having someone sit with them and probe their thoughts and their reasons for believing what they believe.

If you’re the kind of person who’s been told you’re a good listener, the kind of person who ends up as your friends’ informal therapist, or the kind of person who is more socially astute, even if that means you’re more sensitive to the weirder aspects of the rationalist movement, those are all signs that you might be particularly well suited to playing this kind of role for people. Even if you aren’t, you could try anyway.

And there may be some benefit to doing it soon. That last part, about social change? With AI chatbots behaving badly around the world, and the recent open letter calling for a pause in AI training making the news, I think we’re approaching a tipping point on AI existential risk.

The more people we have who understand the risks, be they AI researchers or otherwise, the better that cascade will go. The Overton window is shifting, and we should be ready for it.