Give Me Your Data: The Rationalist Mind Meld
I don’t want your rationality. I can supply my own, thank you very much. I want your data. If you spot a logical error in my thinking, then please point it out. But short of that, among mostly-rational people, I think most disagreements come down to a difference of intuitions, which are rooted in differences in the data people have been exposed to. Instead of presenting a logical counter-argument, you’re better off doing a “Rationalist mind meld” where you share your data.
I find myself making this mistake a lot. Let’s say I came across this (fake) comment:
There aren’t really classes in America anymore, because there are no legal class distinctions and everyone has the same opportunities. And class mobility is very high.
They’re wrong and I know it. Maybe my instinct would be to reply like this:
Actually, the lack of a legally enforced class system doesn’t imply there are no classes. There is a lot of wealth inequality in America, and children born to poorer families don’t have the same opportunities as children of richer families. Class mobility is low in America, and classes are hugely significant.
This is some combination of:
Asserting what I believe without justifying it.
Justifying what I believe based on simple facts everyone in the conversation already knows.
Pointing out logical errors (“lack of legally enforced classes doesn’t imply there are no classes”)
The problem with this kind of response is that my own beliefs about class were not formed by this kind of logic. My own beliefs about class were largely informed by this Slate Star Codex essay on class, the essays it links to, and Paul Fussell’s book Class. All this material describes in detail how the different classes act, look, and earn their money, and how easy it is to move between classes. I’m expecting the person I’m talking with to believe what I believe about class while having heard almost none of what I’ve heard about class!
In the Rationalist community, there’s the concept of a crux, which is essentially the core reason why you disagree, and double-crux, a pattern where two people search for each other’s cruxes to find out the source of their disagreement. As has been pointed out, this doesn’t work so well in practice because disagreements often cannot be “traced to a single underlying consideration”. In the case of class above, I don’t disagree with them because of one specific fact or belief. I disagree because I have formed a complex and robust world model about class that the person I’m talking to just doesn’t have.
The correct way to resolve this is not with logical arguments, but with a mind meld!
In Star Trek, Vulcans like Spock can create a telepathic link between themselves and someone else to exchange thoughts and memories directly. In the show this is used for various purposes including coming to a better understanding of the perspectives and desires of hostile alien species.
If I were to do a mind meld with the person who thinks class doesn’t exist in America, it might look like this:
My intuition is different from yours, so I’d appreciate it if we could mind meld here. For my part, my intuition mostly comes from this Slate Star Codex essay on class, the essays it links to, and Paul Fussell’s book Class. Probably the Slate Star Codex essay alone should be enough to give you a general idea. Are there any sources informing your intuition that you’d like to share?
Bam! MIND MELD!
It doesn’t have to be explicit like that, by the way. You don’t have to use the term “mind meld”. And it doesn’t have to be just essays. It can be any information that contributed to your understanding, whether in the form of scientific studies, books, blog posts, videos, datasets, podcasts, or even descriptions of personal experiences you’ve had. The latter is actually quite common on Hacker News. People there often provide little anecdotes instead of directly commenting on the main post. Here’s a short one from the many I saw just today:
Of all the impressive software developers I had the pleasure to meet before 2012, I never met one who did it for money. They loved their work, the craft, the science, and the sheer joy of the creative process. That culture ended pretty quickly around 2012–15, but I never figured out why.
Or this one:
This was a nice trick to protect text from copying. For instance, student assignments. Students could still use a digital camera on a CRT display, but 20 years ago cameras were costly and students did not have them. And typing the text from scratch was a tedious job. So assignments served online were not shared too fast.
While you can’t necessarily trust a single anecdote, that’s not the point. The point is that instead of dealing solely with logical arguments — which have their place, such as if someone has visibly committed an error in logic — you’re also experiencing a sort of gradual mind meld with the whole community.
Most people don’t form their beliefs on the basis of pure logic. Instead, belief formation often looks like this:
People expose themselves to a large amount of data.
That creates an intuition.
People generate logical reasons to try to explain why their intuition is true.
The third step is the easy part. It mostly exists so you can communicate with others, and as a sort of sanity check on your intuition. Your intuition can lead you astray, and your logical mind exists to correct things when that happens.

Think of your logical mind as your CPU and your intuition as your GPU. Your main goal is to train your GPU software to be rational, and your CPU exists to facilitate that training. You can’t rely on your CPU for many things because it’s too weak. No amount of reasoning about the rules of chess will allow you to beat me after I’ve played a few hundred games. No amount of reading analyses of fashion can compare to looking at 1000 pictures of well-dressed people.

To train your GPU, you need to find good, high-quality training data, and that’s where the mind meld comes in. If we focus not only on pointing out failures in logic, but also on sharing our training data, we’ll all end up more rational in the end.
Agreed that more people should share anecdotes.
We don’t have to bring logic into it; I think logical reasoning is good and possible and there’s no need to insist that “most people don’t do it” (and thus that we shouldn’t either??)
Anecdotes are way better than arguments because they point to the history of how someone came to believe a thing (causally, why, how come you believe that) rather than focusing on the legitimacy of believing that thing.
If I want to understand your perspective, and figure out what I think about it, I can suss that out more efficiently by understanding what examples or details motivated you. Maybe the anecdotes will be enough to change my mind. Maybe I’ll be like “Oh, OK, I’m familiar with those and ALSO many other things that point in the opposite direction, so my opinion is unchanged.” Definitely, if the claim being made is an abstract one like “class is important”, motivating examples help narrow down in what sense the person thinks class is important. You just get more new information faster, in most cases, if someone is honestly tracing the origin of their beliefs, instead of trying to convince you of them.
This approach allows you to share intuitions in a subject where you aren’t an expert but have read a few articles, but it doesn’t allow you to share intuitions in a subject where you have actual expertise.
When it comes to seeking medical advice, a fellow rationalist has an easy time reading a handful of articles about the topic and forming an opinion based on those articles. If they have good source management, they can tell you about the articles. At the same time, they don’t have the same expertise that a doctor has. The doctor’s intuition comes from having spent years in medical school, in their internship, and treating patients.
I work in medical research and know many healthcare practitioners. They often share anonymized stories about their patients and higher level summaries of patterns they see across their patient population or in their institution.
I couldn’t learn to be a doctor from these occasional stories, but I understand the intimate details of their work much better than I would from articles, especially the social side.
For example, my geneticist friend’s complaints about companies selling unregulated genetic tests helped me understand why doctors are so much more conservative than researchers when it comes to new and unregulated medical tech. Researchers see developing new tests as innovation, doctors as often injecting more noise and confusion into an already overwhelming system.
That was a crucial insight for me as a biomedical researcher thinking about how to make a clinical impact.
Sometimes your beliefs can’t be traced to a few specific sources, I agree. You just have this complex world model formed by years of study, and you’re not sure what specific info is leading to your intuition. And it’s not like you can mind meld an entire medical degree. But if your opinion is really based on a deep, complex, irreducible expertise, you definitely can’t convince someone with a logical argument either, because that also won’t transmit your deep expertise to them. At that point, there’s not much you can do but either try your best to mind meld, or just move on.
But I think most online debates don’t have this problem. There’s usually some specific topic (e.g. circadian rhythm disorders) and even an expert should be able to trace which parts of their education and experience are most relevant and share them (e.g. a specific study, a specific patient, or a common experience they’ve had several times with patients).
I have had this problem a lot with consulting clients. I’ll start a project and be 90% sure within an hour or two of what the shape of the overall result will be, but it takes hundreds of hours of work to collect and present the information in a way that will be (or should be) convincing to a third party. Partly it’s a matter of needing to articulate the source of intuitions. Partly it’s a matter of needing a few high-quality pieces of clear, unambiguous data, whereas the intuition is built on many individually ambiguous pieces of data.
To the extent that this is true, it’s often because the participants in most online debates don’t really know what they are talking about. It just becomes a problem for the people who do know what they’re talking about and want to participate in online debates.
I’m currently writing a book that’s partly about fascia, and I have plenty of intuitions about what’s true. Frequently, when I have an intuition about what’s true, I don’t refer to my personal experience; instead I ask ChatGPT to do background research on the question and then cite studies that document the facts I’m pointing toward, even though those studies aren’t the reason I formed my opinion in the first place. Frequently, engaging with the studies it brings up also makes me refine my position.
I had an online discussion with someone who mistakenly thought that LLMs need to be specifically taught to translate between languages and don’t pick up the ability to translate if you just feed them a corpus of text in both languages that does not include direct translations.
Explaining my intuition for why LLMs can do such translation is quite complex. It’s much easier to go to ChatGPT, ask for studies that demonstrate that LLMs can pick up the ability to translate, and make my argument that way.
A good part of what science is about is people having some understanding of the world and needing to build a logical argument that backs up their insight about the world so that other people accept that insight.
I feel like sometimes I have a hard time keeping track of the experiences that formed my intuitive beliefs. Sometimes I want to explain an abstract idea or situation and I would like to bring up some examples… and often I have a hard time thinking of any? Even though I know the belief was formed by encountering multiple such situations in real life. It would be cool if my brain could list the “top 5 most relevant examples” that influenced a certain intuitive belief, but, in the language of this article, it seems to just throw away the training data after it has trained on it.
Case in point: I cannot easily think of a past situation right now where I tried to explain some belief and failed to come up with examples...
Communities like HN and some subreddits that have a mind meld culture are wonderful resources. I bookmark those comment sections for technologies I’m considering using or ideas about how to code, and consider the comment section a critical component of the post they’re discussing.
Counterpoint: I’m usually pretty skeptical of people who say something like, “just read this book, it explains it better than I can.” Telling me you read a book and didn’t particularly understand it isn’t a great sell. I’m also not interested in doing the labor to engage with your point of view when I don’t even think you’re right in the first place.
It’s probably still better to have that convo though, if your alternative is to argue nonsensically.
In practice the resolution is probably “here’s what informed me, I understand you may not be compelled to read it, but if you do and want to discuss it let me know.”
Better I think would be to talk about a few of the points from the book that you thought were most important. This shows you understood the book and which bits might be most interesting to your interlocutor.
That’s a great point. I also don’t like that. It’s like an “isolated demand for labor” to get someone off your back. A tactic I have definitely noticed a few people using on purpose, too. Maybe citing a whole book is just too much for most blog post conversations.
I think humans also have some natural variance in how they form intuitions in response to the same evidence. Speculation: in evolution, if everyone did this the same way, there’d be correlated failures in critical situations (though also correlated successes), and in general people would correlatedly try the same things; I think this sort of thing is why human minds vary so much.
I think I sense or get a vibe of another’s intuition-forming after enough communication from them, though it’s hard for me to be sure that I can really detect this.