I have an idea I’d like to discuss that might be good enough for my first top-level post once it’s developed a bit further, but first I’d like to ask whether anyone knows of previous posts in which something similar was discussed. So I’ll post a rough outline here as a request for comments.
It’s about a potential source of severe and hard-to-detect biases on all sorts of topics where the following conditions apply:
1. It’s a matter of practical interest to most people, so it’s essentially impossible not to have an opinion; people hold strong opinions, and you can hardly avoid forming one yourself.
2. The available hard scientific evidence doesn’t say much about the subject, so one must instead make do with sparse, incomplete, disorganized, and non-obvious pieces of rational evidence. This of course means that even small and subtle biases can wreak havoc.
3. Factual and normative issues are heavily entangled in the topic. By this I mean that people care deeply about the normative issues involved, and view the related factual issues through the heavily biasing lens of whether they lead to consequentialist arguments for or against their favored normative beliefs. (Of course, lots of folks won’t have their logic straight, so it’s enough that a particular factual belief is perceived to correlate with a popular or unpopular normative belief for it to be subject to widespread bias in one direction or the other.)
4. Finally, the prevailing opinions on the subject have changed dramatically through history, both factually and normatively, and people view the normative beliefs prevailing today as enlightened progress over the terrible evils of the past.
These conditions of course apply to lots of stuff related to politics, social issues, etc. Now, the exact bias mechanism I have in mind is as follows.
As per assumption (4), people are aware (more or less) that opinions on the subject in question were very different in the past, both factually and normatively. Since they support the present norms, they’ll of course believe that the past norms were evil and good riddance to them. They’ll chalk that one up for “progress”: in their minds, the same vaguely defined historical process that brought us science and prosperity in place of superstition and squalor, improvements that are impossible to deny, has also brought us good and enlightened normative beliefs on this issue in place of the former unfair, harmful, or just plain disturbing norms. However, since the area in question, as we’ve assumed under (2), is not amenable to a hard-scientific separation of fact from bullshit, there is no guarantee that the presently prevailing factual beliefs aren’t severely biased. In fact, regardless of one’s normative beliefs about the topic, there is no rational reason at all to assume that the factual beliefs about it haven’t actually become more remote from reality than they were at some point in the past.
And now we get to the troublesome part, where the biases get their ironclad armor: arguing that we’ve actually been deluding ourselves ever more about some such topic since some point in the past will, as per (3) and (4), automatically be perceived as an attack on cherished contemporary normative beliefs, mounted by a reactionary moral monster, no matter how good the argument and evidence presented. This perception will be accurate in one sense: correcting the modern false factual beliefs would undermine some widely accepted consequentialist arguments for the modern normative beliefs. But even someone still committed to those normative beliefs should want them defended with logic and truth, not bias and falsity. Moreover, since both the normative and the factual historical changes in prevailing beliefs have been chalked up to “progress,” the argument will be seen as an attack on progress as such, including those parts of it that have brought indisputable enrichment and true insight, and thus as sacrilege against all the associated high-status ideas, institutions, and people.
To put it as briefly as possible, the bias is against valid arguments presenting evidence that certain historical changes in factual beliefs have been away from reality and towards greater delusions and biases. It rests on:
a biased moralistic reaction to what is perceived as an attack on the modern cherished normative beliefs, and
a bias in favor of ideas (and the associated institutions and individuals, both contemporary and historical) that enjoy the high status awarded by being a contributor to “progress.”
What should be emphasized is that this results in factual beliefs being wrong and biased, and in normative beliefs, whatever one’s opinion about their ultimate validity, owing much of their support to factually flawed consequentialist arguments.
Does this make any sense? It’s just a quick dump of some three-quarters-baked ideas, but I’d like to see if it can be refined and expanded into an article.
Have you thought about a tip of the hat to the opposite effect? Some people view the past as a sort of golden age when things were pure and good. It makes for a similar, though not exactly mirror-image, source of bias. I think the belief that things are generally progressing for the better is somewhat more common than the belief that the world is going to hell in a handbasket, but not that much more common.
This reminds me of a related bias—people generally don’t have any idea how much of the stuff in their heads was made up on very little evidence, and I will bring up a (hopefully) just moderately warm button issue to discuss it.
What is science fiction? If you’re reading this, you probably believe you can recognize science fiction, give a definition, and adjudicate edge cases.
I’ve read a moderate number of discussions on the subject, and eventually came to the conclusion that people develop very strong intuitions very quickly about human cultural inventions which are actually very blurry around the edges and may be incoherent in the middle. (Why is psi science fiction while magic is fantasy?)
And people generally don’t notice that their concepts aren’t universally held unless they argue about them with other people, and even then, the typical reaction is to believe that one is right and the other people are wrong.
As for the future and the past, it’s easy enough to find historians to tell you, in detail, that your generalizations about the past leave a tremendous amount out. It should be easier to see that futures are estimates at best, but it can be hard to notice even that.
As to whether I could give a definition of science fiction, Similarity Clusters and similar posts have convinced me that the kind of definition I’d normally make would not capture what I meant by the term.
Reading efforts to define science fiction is why I’ve never looked at efforts at defining who’s a Jew. I have at least a sketchy knowledge of the legal definitions for Reform and Orthodox, but that doesn’t cover the emotional territory.
What’s a poem? What’s a real American?
If you can find an area of human creation where there aren’t impassioned arguments about what a real whatever is, please let me know.
I meant why do you not value plastic clips… oh, I get it, you value what you value, just like we do. But do you have any sort of rationalization or argument whereby it makes intuitive sense to you to value metal clips and not plastic ones?
Think for a minute about what it would be like for the WHOLE UNIVERSE to be plastic paperclips, okay? Wouldn’t you just be trying to send them into a star or something? What good are plastic paperclips? Plastic.
Clippy, that’s how we humans feel about a whole universe of metal paperclips. Imagine if there was a plastic-Clippy who wanted to destroy all metals and turn the universe into plastic paperclips. Wouldn’t you be scared? That’s how we feel about you.
I don’t think those scenarios have the same badness for the referent. I know for a fact that some humans voluntarily make metal paperclips, or contribute to the causal chain necessary for producing them (designers, managers, metal miners, etc.), or desire that someone else provide paperclips for them. Do you have reason to believe these various, varied humans are atypical in some way?
We make paperclips instrumentally, because they are useful to us, but we would stop making them or destroy them if doing so would help us. Imagine an entity that found metal clips useful in the process of building machines that make plastic clips, but who ultimately only valued plastic clips and would destroy the metal if doing so helped it.
I suspect that you make other things besides paperclips—parts for other Clippy instances, for example. Does that imply that you’d consider it acceptable to be forced by a stronger AI into producing only Clippy-parts that would never be assembled into paperclip-producing Clippy-instances?
The paperclips that we produce are produced because we find paperclips instrumentally useful, as you find Clippy-parts instrumentally useful.
What is the distinction here between plastic and metal? They both do a very good job at keeping paper together. And plastic paperclips do so less destructively since they make less of an indentation in the paper.
Let me put it to you this way: would you rather have a block of metal, or a block of plastic? Just a simple question.
Or let’s say you were in some enemy base. Would you rather have those wimpy plastic paperclips, or an unbendable, solid, metal paperclip, which can pick locks, complete circuits, clean out grime …
In the enemy base scenario, I would rather have a paperclip made out of military grade composite, which can have an arbitrary % of metal by mass, from 0% metal to >50% metal.
Do you not value paperclips made out of supermaterials more than metal paperclips?
If you want to talk about making paperclip makers out of non-metals, you have a point.
If you want to claim that reasonable Clippys can disagree (before knowledge/value reconciliation) about how much metal content a paperclip can have before it’s bad, you have a point.
But in any case, composites must be constructed in their finished form. A fully-formed, fully-committed “block of composite,” for which no demand exists, and certainly none at any good price, should be just as useless to you.
Are not some paperclips better than others? I (and you) would both get a lot more utility out of a paperclip made out of computronium than a paperclip made out of aluminum.
Yes, that’s a good point. However, one difference between my idea and the nostalgia biases is that I don’t expect that the latter, even if placed under utmost scrutiny, would turn out to be responsible for as many severe and entirely non-obvious false beliefs in practice. My impression is that in our culture, people are much better at detecting biased nostalgia than biased reverence for what are held to be instances of moral and intellectual progress.
My impression is that in our culture, people are much better at detecting biased nostalgia than biased reverence for what are held to be instances of moral and intellectual progress.
I suspect that you live in a community where most people are politically more liberal than you. I have the impression that nostalgia is a harder-to-detect bias than progress, probably because I live in a community where most people are politically more conservative than I. For many, many people, change is almost always suspicious, and appealing to the past is rhetorically more effective than appealing to progress. Hence, most of their false beliefs are justified with nostalgia, if only because most beliefs, true or false, are justified with nostalgia.
What determines which bias is more effective? I would guess that the main determinant is whether you identify with the community that brought about the “progress”. If you do identify with them, then it must be good, because you and your kind did it. If, instead, you identify with the community that had progress imposed on them, you probably think of it as a foreign influence, and a deviation from the historical norm. This deviation, being unnatural, will either burn itself out or bring the entire community down in ruin.
I suspect that you live in a community where most people are politically more liberal than you. I have the impression that nostalgia is a harder-to-detect bias than progress, probably because I live in a community where most people are politically more conservative than I. For many, many people, change is almost always suspicious, and appealing to the past is rhetorically more effective than appealing to progress. Hence, most of their false beliefs are justified with nostalgia, if only because most beliefs, true or false, are justified with nostalgia.
That’s a valid point when it comes to issues that are a matter of ongoing controversy, or where the present consensus was settled within living memory, so that there are still people who remember different times with severe nostalgia. However, I had in mind a much wider class of topics, including those where the present consensus was settled in the more remote past, so that there isn’t anyone left alive to be nostalgic about the former state of affairs. (An exception could be the small number of people who develop romantic fantasies from novels and history books, but I don’t think they’re numerous enough to be very relevant.)
Moreover, there is also the question of which bias affects what kinds of people more. I am more interested in biases that affect people who are on the whole smarter, more knowledgeable, and more rational. It seems to me that among such people, the nostalgic biases are less widespread, for a number of reasons. For example, scientists will be more likely than the general population to appreciate the extent of scientific progress and the crudity of the past superstitions it has displaced in many areas of human knowledge, so I would expect that when it comes to issues outside their area of expertise, they would be, on average, biased in favor of contemporary consensus views when someone argues that these views have become more remote from reality than they were at some point in the past.
That’s a valid point when it comes to issues that are a matter of ongoing controversy, or where the present consensus was settled within living memory, so that there are still people who remember different times with severe nostalgia. However, I had in mind a much wider class of topics, including those where the present consensus was settled in the more remote past, so that there isn’t anyone left alive to be nostalgic about the former state of affairs. (An exception could be the small number of people who develop romantic fantasies from novels and history books, but I don’t think they’re numerous enough to be very relevant.)
Hmm. Maybe it would help to give more concrete examples, because I might have misunderstood the kinds of beliefs that you’re talking about. Things like gender relations, race relations, and environmental policy were significantly different within living memory. Now, things like institutionalized slavery or a powerful monarchy are pretty much alien to modern developed countries. But these policies are advocated only by intellectuals—that is, by those who are widely read enough to have developed a nostalgia for a past that they never lived in.
Actually, now you’ve nudged my mind in the right direction! Let’s consider an example even more remote in time, and even more outlandish by modern standards than slavery or absolute monarchy: medieval trials by ordeal.
The modern consensus belief is that this was just awful superstition in action, and our modern courts of law are obviously a vast improvement. That’s certainly what I had thought until I read a recent paper titled “Ordeals” by one Peter T. Leeson, who argues that these ordeals were in fact, in the given circumstances, a highly accurate way of separating the guilty from the innocent, thanks to the prevailing beliefs and customs of the time. I highly recommend reading the paper, or at least the introduction, as an entertaining de-biasing experience. [Update: there is also an informal exposition of the idea by the author, for those who are interested but don’t feel like going through the math of the original paper.]
I can’t say with absolute confidence whether Leeson’s arguments are correct, but they sound highly plausible to me, and certainly can’t be dismissed outright. However, if he is correct, then two interesting propositions are within the realm of the possible: (1) in the circumstances in which medieval Europeans lived, trials by ordeal may well have produced more correct verdicts in practice than something resembling our modern courts of law would have, and (2) the verdict accuracy of trials by ordeal could well have been greater than that achieved by our modern courts of law, which can’t realistically be considered anywhere near perfect. As Leeson says:
Ordeals are inferior to modern trial methods because modern defendants don’t believe in iudicium Dei, not because trial by jury is inherently superior. If modern citizens did have the superstitious belief required for ordeals to work, it might make sense to bring back the cauldrons of boiling water.
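Leeson’s selection logic can be sketched as a toy simulation (my own illustrative model with made-up numbers, not the paper’s formal one): defendants who believe in iudicium Dei and are guilty confess or settle rather than undergo the ordeal, so the priest, who rigs the ordeal to exonerate most of those who undergo it, is mostly exonerating the innocent.

```python
import random

def trial_by_ordeal(n_defendants=10_000, p_guilty=0.5,
                    belief=1.0, exonerate_rate=0.9, seed=0):
    """Toy sketch of the selection argument; all numbers are assumptions.

    belief: fraction of defendants who believe the ordeal reveals guilt.
    exonerate_rate: fraction of ordeal-takers the priest exonerates
    (e.g. by letting the iron cool).
    Believing guilty defendants confess rather than undergo the ordeal;
    everyone else undergoes it.  Returns the fraction of correct verdicts.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_defendants):
        guilty = rng.random() < p_guilty
        believes = rng.random() < belief
        if guilty and believes:
            correct += 1              # confession: guilty verdict, correct
        else:
            exonerated = rng.random() < exonerate_rate
            correct += exonerated != guilty   # acquit innocent / convict guilty
    return correct / n_defendants

# With universal belief, ordeal-takers are almost all innocent, so an
# ordeal rigged toward exoneration yields high accuracy; without the
# superstition, the sorting disappears and accuracy collapses.
print(trial_by_ordeal(belief=1.0))
print(trial_by_ordeal(belief=0.0))
```

The simulation shows only that the mechanism is coherent, not that the historical numbers were anything like these.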
Now, let’s look at the issue and separate the relevant normative and factual beliefs involved. The prevailing normative belief today is that the only acceptable way to determine criminal guilt is to use evidence-based trials in front of courts, whose job is to judge the evidence as free of bias as possible. It’s a purely normative view, which states that anything else would simply be unjust and illegitimate, period. However, underlying this normative belief, and serving as its important consequentialist basis, there is also the factual belief that despite all the unavoidable biases, evidence-based trials necessarily produce more accurate verdicts than other methods, especially ancient methods such as the trial by ordeal that involved superstitions.
Yet, if Leeson is correct—and we should seriously consider that possibility—this factual belief, despite having been universally accepted in our civilization for centuries, is false. What follows is that there may actually be a non-obvious way to produce more accurate verdicts even in our day and age, based on different institutions, but nobody is taking the possibility seriously because of the universal (and biased) factual belief about the practical optimality of the modern court system. It also follows that a thousand years ago, Europeans could easily have caused more wrongful punishment by abolishing trials by ordeal and replacing them with evidence-based trials, even though such a change would be judged by the modern consensus view as a vast improvement, both morally and in practical accuracy.
Another interesting remark is that, from what I’ve seen on legal blogs, Leeson’s paper was met with polite and interested skepticism, not derision and hostility. However, it seems to me that this is because the topic is so extremely remote that it has no bearing whatsoever on any modern ideological controversies; I have no doubt that a similar positive reexamination of some negatively judged past belief or institution that still has significant ideological weight would provoke far more hostility. That seems to be another piece of evidence suggesting that severe biases might be found lurking under the modern consensus on a great many issues, operating via the mechanism I’m proposing.
I skimmed Leeson’s paper, and it looks like it has no quantitative evidence for the true accuracy of trial by ordeal. It has quantitative evidence for one of the other predictions his theory makes (that most people who go through ordeals are exonerated by them, which the corresponding numbers support, though not resoundingly), but Leeson doesn’t know the actual hit rate of trial by ordeal.
This doesn’t mean Leeson’s a bad guy or anything—I bet no one can get a good estimate of trial by ordeal’s accuracy, since we’re here too late to get the necessary data. But it does mean he’s exaggerating (probably unconsciously) the implications of his paper—ultimately, his model will always fit the data as long as sufficiently many people believed trial by ordeal was accurate, independent of true accuracy. So the fact that his model pretty much fits the data is not strong evidence of true accuracy. Given that Leeson’s model fits the data he does have, and the fact that fact-finding methods were relatively poor in medieval times, I think your ‘interesting proposition’ #1 is quite likely, but we don’t gain much new information about #2.
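This identification problem can be made concrete with a toy calculation (my own sketch, with assumed numbers): the observable that survives in the records, the exoneration rate among ordeal-takers, depends only on how the priest rigs the ordeal, not on how well superstitious belief sorts the guilty from the innocent, so the same data are compatible with very different true accuracies.

```python
import random

def observed_exoneration_and_accuracy(belief, exonerate_rate=0.9,
                                      p_guilty=0.5, n=100_000, seed=1):
    """For a given level of superstitious belief, return
    (exoneration rate among ordeal-takers, overall verdict accuracy).
    Illustrative toy model; all parameter values are assumptions."""
    rng = random.Random(seed)
    takers = exonerated = correct = 0
    for _ in range(n):
        guilty = rng.random() < p_guilty
        if guilty and rng.random() < belief:
            correct += 1              # confession, correct verdict
            continue
        takers += 1
        ex = rng.random() < exonerate_rate
        exonerated += ex
        correct += ex != guilty       # acquit innocent / convict guilty
    return exonerated / takers, correct / n

for belief in (1.0, 0.5, 0.0):
    ex_rate, acc = observed_exoneration_and_accuracy(belief)
    print(f"belief={belief}: exoneration rate {ex_rate:.2f}, accuracy {acc:.2f}")
# The recorded exoneration rate is essentially the same in every case,
# while true accuracy varies widely: the surviving data alone can't
# distinguish an accurate ordeal from a useless one.
```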
The modern consensus belief is that this was just awful superstition in action, and our modern courts of law are obviously a vast improvement. That’s certainly what I had thought until I read a recent paper titled “Ordeals” by one Peter T. Leeson, who argues that these ordeals were in fact, in the given circumstances, a highly accurate way of separating the guilty from the innocent, thanks to the prevailing beliefs and customs of the time.
That’s interesting. I think you’re right that no one reacts too negatively to this news because they don’t see any real danger that it would be implemented.
But suppose there were a real movement to bring back trial by ordeal. According to the paper’s abstract, trial by ordeal was so effective because the defendants held certain superstitious beliefs. Therefore, if we wanted it to work again, we would need to change people’s worldviews so that they again held such beliefs.
But there’s reason to expect that these beliefs would cause a great deal of harm, enough to outweigh the benefit from more accurate trials. For example, maybe airlines wouldn’t perform such careful maintenance on an airplane if a bunch of nuns were riding it, since God wouldn’t allow a plane full of nuns to go down.
Well, look at me — I launched right into rationalizing a counter-argument. As with so many of the biases that Robin Hanson talks about, one has to ask, does my dismissal of the suggestion show that we’re right to reject it, or am I just providing another example of the bias in action?
I suspect that you live in a community where most people are politically more liberal than you. I have the impression that nostalgia is a harder-to-detect bias than progress, probably because I live in a community where most people are politically more conservative than I. For many, many people, change is almost always suspicious, and appealing to the past is rhetorically more effective than appealing to progress. Hence, most of their false beliefs are justified with nostalgia, if only because most beliefs, true or false, are justified with nostalgia.
That’s a valid point when it comes to issues that are a matter of ongoing controversy, or where the present consensus was settled within living memory, so that there are still people who remember different times with severe nostalgia. However, I had in mind a much wider class of topics, including those where the present consensus was settled in the more remote past, so that there isn’t anyone left alive to be nostalgic about the former state of affairs. (An exception could be the small number of people who develop romantic fantasies from novels and history books, but I don’t think they’re numerous enough to be very relevant.)
I don’t think nostalgia bias is inherently harder to detect; it’s easy to detect in our culture because it isn’t a pervasive part of our culture (which seems to be pretty much what you’re saying).
However, the opposite may have held for, say, imperial China, or medieval Europe.
Yeah, looks good! I would like to see a top-level article on this, and I think fruit X would be a good example to start with.
If the issue is how to fight back against these problems, I bet you could make a lot of headway by first establishing a bit of credibility as an X-eater, and then making your claims while being clear that you are not nostalgic. E.g. eat an X fruit on TV while you are on a talk show explaining that X fruit isn’t healthy in the long run. “I’m not [munch] a religious bigot, [crunch], I just think there might [slurp] be some poisonous chemicals [crunch] in this fruit and that we should run a few studies to [nibble] find out.”
My immediate reaction to reading this was that it was obvious that the particular hot-button issue that inspired it was the recent PUA debate… but I notice nobody else seems to have picked up on that, so now I’m wondering… was that what you had in mind, or am I just being self-obsessed?
(don’t worry, I’m not itching to restart that issue, I’m just curious about whether or not I’m imagining things)
ETA: Ok, after reading the rest of the comments more thoroughly, I guess I’m not the only person who figured that was your inspiration.
Personally, I would suggest you use the concrete examples, rather than abstract or hypothetical ‘poison-fruit’ kind of stories—those things never seem to be effective intuition pumps (for me at least). If you want to avoid the mind-killing effect of a hot-button issue, I think a better idea is just to use multiple concrete examples, and to choose them such that any given person is unlikely to have the same opinion on both of them.
Recent controversy on LW about gender, dating etc seems to fall into exactly this pattern.
In particular, there is heavy conflation of the facts of the matter about what kind of behavior women are attracted to with normative propositions about which gender is “better” and whether which is more blameworthy.
Gender equality discussions (Larry Summers!) seem to fall into the same trap.
Yes, it was in fact thinking about that topic that made me try to write these thoughts down systematically. What I would like to do is to present them in a way that would elicit well-argued responses that don’t get sidetracked into mind-killer reactions (and the latter would inevitably happen in places where people put less emphasis on rationality than here, so this site seems like a suitable venue). Ultimately, I want to see if I’m making sense, or if I’m just seeking sophisticated rationalizations for some false unconventional opinions I managed to propagandize myself into.
What I would like to do is to present them in a way that would elicit well-argued responses that don’t get sidetracked into mind-killer reactions
Indeed, that is a good strategy. However, sometimes if you make it too abstract, people don’t actually get what you’re talking about. It’s a fine line!
This bias needs a name, like “moral progress bias”.
I ask myself what your case studies might be. The Mencius Moldbug grand unified theory comes to mind: belief in “human neurological uniformity”, statist economics, democracy as a force for good, winning wars by winning hearts and minds, etc, is all supposed to be one great error, descending from a prior belief that is simultaneously moral, political, and anthropological, and held in place by the sort of bias you describe.
You might also want to explore a related notion of “intellectual progress bias”, whereby a body of pseudo-knowledge is insulated from critical examination, not by moral sentiments, but simply by the belief that it is knowledge and that the history of its growth is one of discovery rather than of illusions piled ever higher.
Well, any concrete case studies are by the very nature of the topic potentially inflammatory, so I’d first like to see if the topic can be discussed in the abstract before throwing myself into an all-out dissection of some belief that it’s disreputable to question.
One good case study could perhaps be the belief in democracy, where the moral belief in its righteousness is entangled with the factual belief that it results in freedom and prosperity—and bringing up counterexamples is commonly met with frantic No True Scotsman replies and hostile questioning of one’s motives and moral character. It would mean opening an enormous can of worms, of course.
You might also want to explore a related notion of “intellectual progress bias”, whereby a body of pseudo-knowledge is insulated from critical examination, not by moral sentiments, but simply by the belief that it is knowledge and that the history of its growth is one of discovery rather than of illusions piled ever higher.
Yes, this is a very useful notion. I think it would be interesting to combine it with some of my earlier speculations about what conditions are apt to cause an area of knowledge to enter such a vicious circle where delusions and bullshit are piled ever higher under a deluded pretense of progress.
As written up here, it’s a bit abstract for my personal tastes. I can’t tell from this description whether in the potential post you’re planning on using specific examples to make your points, probably because you’re writing carefully due to the sensitive nature of the subject matter. I suspect the post will be received more favorably if you give specific examples of some of these cherished normative beliefs, explain why they result in these biases that you’re describing, etc.
On the other hand, given the potentially polarizing nature of the beliefs, there’s no guarantee that you won’t excite some controversy and downvotes if you do take that path. But given the subject matter of some of your other recent comments, I (and others) can probably guess at least some of what you have in mind and will be thinking about it as we read your submission anyway. And in that case, it’s probably better to be explicit than to have people making their own guesses about what you’re thinking.
I was planning to introduce the topic through a parable of a fictional world carefully crafted not to be directly analogous to any real-world hot-button issues. The parable would be about a hypothetical world where the following facts hold:
A particular fruit X, growing abundantly in the wild, is nutritious, but causes chronic poisoning in the long run, with all sorts of bad health consequences. This effect is, however, difficult to disentangle statistically (much as with smoking).
Eating X has traditionally been subject to a severe Old Testament-style religious prohibition with unknown historical origins (the official reason of course was that God had personally decreed it). Impoverished folks who nevertheless picked and ate X out of hunger were often given draconian punishments.
At the same time, there has been a traditional belief that if you eat X, you’ll incur not just sin, but eventually also get sick. Now, note that the latter part happens to be true, though given the evidence available at the time, a skeptic couldn’t tell if it’s true or just a superstition that came as a side-effect of the religious taboo. You’d see that poor folks who eat it do get sick more often, but their disease might be just due to poverty, and you’d need sophisticated statistics and controlled studies to tell reliably which way it is.
At a later time, as science progresses and religion retreats before it, religious figures lose power and prestige, old superstitions and taboos perish, and defying them comes to be considered more and more cool and progressive. In particular, believing that eating fruit X is bad is now a mark of bigoted fundamentalism. Cool fashionable people will eat X occasionally just to prove a point, historians decry the horrors of the dark ages when poor people were sadistically persecuted for eating it, and a general consensus has formed that its supposed unhealthiness was never more than just another religiously motivated superstition. “X-eater” eventually becomes a metaphor for a smart, fashionable free-thinker in this culture, and “X-phobe” for a bigoted yokel.
People who eat X in significant quantities still get sick more, but the consensus explanation is that it’s because, since it’s free but not very tasty food, eating it correlates with poverty and thus all sorts of awful living conditions.
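The evidential bind in the parable, where the X–sickness correlation is confounded with poverty, can be sketched numerically (a toy model of my own, with made-up numbers):

```python
import random

def observe_world(x_causes_sickness, n=100_000, seed=2):
    """Generate (eats X, sick) observations for a toy world where poverty
    both drives X-eating and independently causes sickness.
    Returns sickness rates among X-eaters and non-eaters.
    All probabilities are illustrative assumptions."""
    rng = random.Random(seed)
    sick_eat = [0, 0]   # [sick count, total count] among X-eaters
    sick_no = [0, 0]    # same, among non-eaters
    for _ in range(n):
        poor = rng.random() < 0.3
        eats_x = rng.random() < (0.6 if poor else 0.1)   # the poor eat free X
        p_sick = 0.3 if poor else 0.1                    # poverty causes sickness
        if x_causes_sickness and eats_x:
            p_sick += 0.1                                # X's own (possible) effect
        sick = rng.random() < p_sick
        bucket = sick_eat if eats_x else sick_no
        bucket[0] += sick
        bucket[1] += 1
    return sick_eat[0] / sick_eat[1], sick_no[0] / sick_no[1]

for causal in (True, False):
    eaters, others = observe_world(causal)
    print(f"X causes sickness={causal}: sick rate {eaters:.2f} vs {others:.2f}")
# In both worlds, X-eaters get sick noticeably more often, so the raw
# correlation alone can't distinguish the taboo-confirming world from
# the pure-confounding one; you need to control for poverty.
```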
Now, notice that in this world, the prevailing normative belief on this issue has moved from draconian religious taboos to a laissez-faire approach, while at the same time, a closely related factual belief has moved significantly away from reality. For all the cruelty of the religious taboo, and the fact that poor folks may well prefer bad health later to starving now, the traditional belief that eating X is bad for your health was factually true. Yet a contrarian scientist who now suggests that this might be true after all will provoke derision and scorn. What is he, one of those crazed fundamentalists who want to bring back the days when poor folks were whipped and pilloried for picking X to feed their starving kids in years of bad harvest?
I think this example would illustrate quite clearly the sort of bias I have in mind. The questions, however, are:
Does it sound like too close an analogy to some present hot-button issue?
Does the idea that we might be suffering from some analogous biases sound too outlandish? I do believe that many such biases exist in the world today, and I probably suffer from some of them myself, but as you said, citing concrete examples might sound too controversial and polarizing.
I can think of several hot-button issues that are analogous to this parable — or would be, if the parable were modified as follows:
As science progresses, religious figures lose some power and prestige, but manage to hold on to quite a bit of it. Old superstitions and taboos perish at different rates in different communities, and defying them is considered more cool and progressive in some subcultures and cities. Someone will eat fruit X on television and the live audience will applaud, but a grouchy old X-phobe watching the show will grumble about it.
A conference with the stated goal of exploring possible health detriments of X will attract people interested in thinking rationally about public health, as well as genuine X-phobes. The two kinds of people don’t look any different.
The X-phobes pick up science and rationality buzzwords and then start jabbering about the preliminary cherrypicked scientific results impugning X, with their own superstition and illogical arguments mixed in. Twentysomething crypto-X-phobes seeking to revitalize their religion now claim that their religion is really all about protecting people from the harms of X, and feed college students subtle misinterpretations of the scientific evidence. In response to all this, Snopes.com gets to work discrediting any claim of the form “X is bad”. The few rational scientists studying the harmfulness of X are shunned by their peers.
What’s a rationalist to do? Personally, whenever I hear someone say “I think we should seriously consider the possibility that such-and-such may be true, despite it being politically incorrect”, I consider it more likely than not that they are privileging the hypothesis. People have to work hard to convince me of their rationality.
Yes, that would certainly make the parable much closer to some issues that other people have already pointed out! However, you say:
Personally, whenever I hear someone say “I think we should seriously consider the possibility that such-and-such may be true, despite it being politically incorrect”, I consider it more likely than not that they are privileging the hypothesis.
Well, if the intellectual standards in the academic mainstream of the relevant fields are particularly low, and the predominant ideological biases push very strongly in the direction of the established conclusion that the contrarians are attacking, the situation is, at the very least, much less clear. But yes, organized groups of contrarians are often motivated by their own internal biases, which they constantly reinforce within their peculiar venues of echo-chamber discourse. Often they even develop some internal form of strangely inverted political correctness.
Moreover, my parable assumes that there are still non-trivial lingering groups of X-phobe fundamentalists when the first contrarian scientists appear. But what if the situation ends up with complete extirpation of all sorts of anti-X-ism, and virtually nobody is left who supports it any more, long before statisticians in this hypothetical world figure out the procedures necessary to examine the issue correctly? Imagine anti-X-ism as a mere remote historical memory, with no more supporters than, say, monarchism in the U.S. today. The question is—are there any such issues today, where past beliefs have been replaced by inaccurate ones that it doesn’t even occur to anyone any more to question, not because it would be politically incorrect, but simply because alternatives are no longer even conceivable?
Maybe you could use the parable but put in bracketed asides, like your “(sort of like smoking)”, giving very different ones for each point. That will keep the parable from seeming outlandish while not really starting a discussion of the bracketed illustrations. Smoking was a good illustration because it isn’t that hot a button any more, but we can remember when it was.
Actually, maybe I could try a similar parable about a world in which there’s a severe, brutally enforced religious taboo against smoking and a widespread belief that it’s unhealthy, and then when the enlightened opinion turns against the religious beliefs and norms of old, smoking becomes a symbol of progress and freethinking—and those who try to present evidence that it is bad for you after all are derided as wanting to bring back the inquisition.
Though this perhaps wouldn’t be effective since the modern respectable opinion is compatible with criminalization of recreational drugs, so the image of freethinkers decrying what is basically a case of drug prohibition as characteristic of superstitious dark ages doesn’t really click. I’ll have to think about this more.
maybe I could try a similar parable about a world in which there’s a severe, brutally enforced religious taboo against smoking and a widespread belief that it’s unhealthy, and then when the enlightened opinion turns against the religious beliefs and norms of old, smoking becomes a symbol of progress and freethinking
Actually, you might be surprised to learn that Randian Objectivists held a similar view (or at least Rand herself did), that smoking is a symbol of man’s[1] harnessing of fire by the power of reason. Here’s a video that caricatures the view (when they get to talking about smoking).
I don’t think they actually denied its harmful health effects though.
Yes, I’m familiar with this. Though in fairness, I’ve read conflicting reports about it, with some old-guard Randians claiming that they all stopped smoking once, according to them, scientific evidence for its damaging effects became convincing. I don’t know how much (if any) currency denialism on this issue had among them back in the day.
Rothbard’s “Mozart was a Red” is a brilliant piece of satire, though! I’m not even that familiar with the details of Rand’s life and personality, but just from the behavior and attitudes I’ve seen from her contemporary followers, every line of it rings with hilarious parody.
Personally, I like this approach. Leave out the contemporary hot buttons, at least at first. First keep it abstract, with fanciful examples, so that people don’t read it with their “am I forced to believe?” glasses on. Then, once people have internalized your points, we can start to talk about whether this or that sacrosanct belief is really due to this bias.
I would think you need some explanation of why people aren’t genetically programmed to avoid eating X, assuming it has been around for an evolutionarily significant period. Some explanations could be that it interacts with something in the new diet, or that humans have lost a gene required to process it.
Some taboos have survived well into modern times due to innate, noncultural instincts. Take, for example, the avoidance of incest and the taboo around it. That is still alive and well. We could probably screen for genetic faults, or have sperm/egg donations for sibling couples nowadays, but we don’t see many people saying we should relax that taboo.
Edit: The instinct is called the Westermarck effect and has been shown to be resistant to cultural pressure. The question is why cultural pressure works to break down other taboos, especially with regard to mating/relationships, which we should be good at by now; we have been doing them long enough.
There might be emotional as well as genetic reasons for avoiding incest. We don’t really know much about the subject. If anyone’s having an emotionally healthy (or at least no worse than average) incestuous relationship, they aren’t going to be talking about it.
So if we think about the epistemological issue space in terms of a Venn diagram we can imagine the following circles all of which intersect:
1. Ubiquitous (Outside: non-ubiquitous). Subject areas where prejudgement is ubiquitous are problematic because finding a qualified neutral arbitrator is difficult: nearly everyone is invested in the outcome.
2. Contested (Outside: uncontested). Either there is no consensus among the authorities, the legitimacy of the authorities is in question, or there are no relevant authorities. Obviously, not being able to appeal to authorities makes rational belief more difficult.
3. Invested (Outside: non-invested). People have incentives for believing some things rather than others, for reasons other than evidence. When people are invested in beliefs, motivated skepticism is a common result.
3a. Entangled (Outside: untangled). In some cases people can be easily separated from the incentives that lead them to be invested in some belief (for example, when they have financial incentives). But sometimes the incentives are so entangled with the agents and the proposition that there is no easy procedure that lets us remove the incentives.
3ai. Progressive (Traditional). Cases of entangled invested beliefs can roughly and vaguely be divided into those aligned with progress and those aligned with tradition.
So we have a diagram of three concentric circles (invested, entangled, progressive) bisected by a two circle diagram (ubiquitous, contested).
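The taxonomy above can be sketched as a small data structure. This is only a hedged illustration of the set relationships described in the comment (the class and field names are mine, not any standard terminology): progressive ⊆ entangled ⊆ invested, with ubiquitous and contested cross-cutting all three.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    """An issue classified by the intersecting 'circles' described above."""
    ubiquitous: bool   # nearly everyone has prejudged it
    contested: bool    # no consensus, or no legitimate/relevant authorities
    invested: bool     # non-evidential incentives for belief
    entangled: bool    # incentives inseparable from the agent (implies invested)
    progressive: bool  # entangled belief aligned with "progress" (implies entangled)

    def __post_init__(self):
        # Enforce the concentric structure of the three inner circles.
        if self.progressive and not self.entangled:
            raise ValueError("progressive is a sub-circle of entangled")
        if self.entangled and not self.invested:
            raise ValueError("entangled is a sub-circle of invested")

    def difficulty(self) -> int:
        # Crude tally: membership in each set makes rational thought harder.
        return sum([self.ubiquitous, self.contested, self.invested,
                    self.entangled, self.progressive])

# Example: the fruit-X parable after the taboo has collapsed.
fruit_x = Issue(ubiquitous=True, contested=True, invested=True,
                entangled=True, progressive=True)
print(fruit_x.difficulty())  # 5
```

The `__post_init__` check is just a way of making the "concentric circles" claim precise: a configuration like entangled-but-not-invested is rejected as ill-formed.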
Now it seems clear that membership in each of these sets makes an issue harder to think about rationally, with one possible exception. How do beliefs aligned with progress differ structurally from beliefs aligned with tradition? What do we need to do differently for one over the other? Because we might as well address both at the same time if there is no difference.
That’s an excellent way of putting it, which brings a lot of clarity to my clumsy exposition! To answer your question, yes, the same essential mechanism I discussed is at work in both progressive and traditional biases—the desire that facts should provide convenient support for normative beliefs causes bias in factual beliefs, regardless of whether these normative beliefs are cherished as achievements of progress or revered as sacred tradition. However, I think there are important practical differences that merit some separate consideration.
The problem is that traditionalist vs. progressive biases don’t appear randomly. They are correlated with many other relevant human characteristics. In particular, my hypothesis is that people with formidable rational thinking skills—who, compared to other people, have much less difficulty with overcoming their biases once they’re pointed out and critically dissecting all sorts of unpleasant questions—tend to have a very good detector for biases and false beliefs of the traditionalist sort, but they find it harder to recognize and focus on those of the progressive sort.
What this means is that in practice, when exceptionally rational people see some group feeling good about their beliefs because these beliefs are a revered tradition, they’ll immediately smell likely biases and turn their critical eye on it. On the other hand, when they see people feeling good about their beliefs because they are a result of progress over past superstition and barbarism, they are in danger of assuming without justification that the necessary critical work has already been done, so everything is OK as it is. Also, in the latter sort of situation, they will relatively easily assume that the only existing controversy is between the rational progressive view and the remnants of the past superstition, although reality could be much more complex. This could even conceivably translate into support for the mainstream progressive view even if it has strayed into all sorts of biases and falsities.
So, basically, when we consider what biases and false beliefs could be hiding in things that are presently a matter of consensus, things that it just doesn’t even occur to anyone reputable to question, it seems to me that there is a greater chance of finding those that are hiding in your (3ai) category than in the rest of (3a). Thus, I would propose a heuristic that, I believe, has the potential to detect a lot of biases we are unaware of: just like you get suspicious as soon as you see people happy and content with their traditional beliefs, you should also get suspicious whenever you see a consensus that progress has been achieved on some issue, both normatively and factually, where however the factual part is not supported by strict hard-scientific evidence and there is a high degree of normative/factual entanglement.
What this means is that in practice, when exceptionally rational people see some group feeling good about their beliefs because these beliefs are a revered tradition, they’ll immediately smell likely biases and turn their critical eye on it. On the other hand, when they see people feeling good about their beliefs because they are a result of progress over past superstition and barbarism, they are in danger of assuming without justification that the necessary critical work has already been done, so everything is OK as it is.
This sounds like an interesting idea to me, and I hope it winds up in whatever fuller exposition of your ideas you end up posting.
Antibiotics. The common wisdom is that we use them too much. It might be that the opposite is true: a more massive poisoning of pathogens with antibiotics could push them over the edge, into oblivion. When we use antibiotics reluctantly, we give them a chance to adapt and flourish.
As far as I understand it, when giving antibiotics to a specific patient, doctors often follow your advice—they give them in overwhelming force to eradicate the bacteria completely. For example, they’ll often give several different antibiotics so that bacteria that develop resistance to one are killed off by the others before they can spread. Side effects and cost limit how many antibiotics you give to one patient, but in principle people aren’t deliberately scrimping on the antibiotics in an individual context.
The “give as few antibiotics as possible” rule mostly applies to giving them to as few patients as possible. If there’s a patient who seems likely to get better on their own without drugs, then giving the patient antibiotics just gives the bacteria a chance to become resistant to antibiotics, and then you start getting a bunch of patients infected with multiple-drug-resistant bacteria.
The idea of eradicating entire species of bacteria is mostly a pipe dream. Unlike strains of virus that have been successfully eradicated, like smallpox, most pathogenic bacteria have huge bio-reservoirs in water or air or soil or animals or on the skin of healthy humans. So the best we can hope to do is eradicate them in individual patients.
I know. But not long ago, nobody expected that a bacterium was to blame. On the contrary! It was postulated that no bacterium could possibly survive the stomach environment.
The reason I asked is that I don’t understand what you’re saying in the original post.
If you mean that we’re not giving enough antibiotics to people with stomach problems, well, as I said, we are currently giving them enough: in particular, two antibiotics plus a proton pump inhibitor, which is clinically demonstrated to be enough to get rid of H. pylori.
If you mean we should be giving antibiotics for diseases that aren’t currently believed to be caused by bacteria, on the off chance that they will turn out to in fact be caused by bacteria like stomach ulcers were, it doesn’t really work like that. There are dozens of antibiotics, many of which are specifically targeted at specific bacteria. If we don’t know what bacteria are causing a disease, we can’t target it with antibiotics except by giving the patient one of everything, which is a good way to kill them. This is ignoring the economic implications of giving drugs that can cost up to thousands of dollars per regimen for conditions that we have no reason to think they’d help for, the ethical issues in giving drugs with side effects up to and including death when they might not be necessary, and the medical issues involved in helping bacteria build up antibiotic immunity.
If I’m misunderstanding you, you’re going to have to explain what was in your post above better.
The question of the original poster of this sub-thread was what we might expect that the public has no idea about, while being convinced of just the opposite. Something in that direction.
I responded that we might be wrong in how we administer antibiotics. That it might be better to use them MORE, not less, contrary to the usual wisdom. Maybe better internal hygiene would be better, not worse.
And now we get to the troublesome part where the biases get their ironclad armor: arguing that we’ve actually been increasingly deluding ourselves factually about some such topic ever since some point in the past, no matter how good the argument and evidence presented, will as per (3) and (4) automatically be perceived as an attack on the cherished contemporary normative beliefs by a reactionary moral monster. This will be true in the sense that updating the modern false factual beliefs will undermine some widely accepted consequentialist arguments for the modern normative beliefs—but regardless, even if one is still committed to these normative beliefs, they should be defended using logic and truth, not bias and falsity. Moreover, since both the normative and factual historical changes in prevailing beliefs have been chalked up to “progress,” the argument will be seen as an attack on progress as such, including its parts that have brought indisputable enrichment and true insight, and is thus seen as sacrilege against all the associated high-status ideas, institutions, and people.
To put it as briefly as possible, the bias is against valid arguments presenting evidence that certain historical changes in factual beliefs have been away from reality and towards greater delusions and biases. It rests on:
a biased moralistic reaction to what is perceived as an attack on the modern cherished normative beliefs, and
a bias in favor of ideas (and the associated institutions and individuals, both contemporary and historical) that enjoy the high status awarded by being a contributor to “progress.”
What should be emphasized is that this results in factual beliefs being wrong and biased, and the normative beliefs, whatever one’s opinion about their ultimate validity, owing lots of their support to factually flawed consequentialist arguments.
Does this make any sense? It’s just a quick dump of some three-quarters-baked ideas, but I’d like to see if it can be refined and expanded into an article.
It seems a common bias to me and worth exploring.
Have you thought about a tip-of-the-hat to the opposite effect? Some people view the past as some sort of golden age where things were pure and good etc. It makes for a similar but not exactly mirror image source of bias. I think a belief that generally things are progressing for the better is a little more common than the belief that generally the world is going to hell in a handbasket, but not that much more common.
This reminds me of a related bias—people generally don’t have any idea how much of the stuff in their heads was made up on very little evidence, and I will bring up a (hopefully) just moderately warm button issue to discuss it.
What is science fiction? If you’re reading this, you probably believe you can recognize science fiction, give a definition, and adjudicate edge cases.
I’ve read a moderate number of discussions on the subject, and eventually came to the conclusion that people develop very strong intuitions very quickly about human cultural inventions which are actually very blurry around the edges and may be incoherent in the middle. (Why is psi science fiction while magic is fantasy?)
And people generally don’t notice that their concepts aren’t universally held unless they argue about them with other people, and even then, the typical reaction is to believe that one is right and the other people are wrong.
As for the future and the past, it’s easy enough to find historians to tell you, in detail, that your generalizations about the past leave a tremendous amount out. It should be easier to see that futures are estimates at best, but it can be hard to notice even that.
As to whether I could give a definition of science fiction, Similarity Clusters and similar posts have convinced me that the kind of definition I’d normally make would not capture what I meant by the term.
I’ve noticed a similar thing happen with people trying to define ‘literary fiction.’ Makes me wonder what other domains might have this bias.
My assumption is that it’s all of them.
Reading efforts to define science fiction is why I’ve never looked at efforts at defining who’s a Jew. I have at least a sketchy knowledge of legal definitions for Reform and Orthodox, but that doesn’t cover the emotional territory.
What’s a poem? What’s a real American?
If you can find an area of human creation where there aren’t impassioned arguments about what a real whatever is, please let me know.
What’s a paperclip?
It’s an inwardly-thrice-bent metal wire that can non-destructively fasten paper together at an edge.
So those don’t count?
Correct.
Do you value those hunks of plastic more than other hunks of plastic?
Do you value inwardly-thrice-bent plastic wire that can non-destructively fasten paper together at an edge more than other hunks of plastic?
No.
No.
Why?
Because they’re not inwardly-thrice-bent metal wires that can non-destructively fasten paper together at an edge?
Is this classification algorithm really that difficult to learn?
I meant why do you not value plastic clips… oh, I get it, you value what you value, just like we do. But do you have any sort of rationalization or argument whereby it makes intuitive sense to you to value metal clips and not plastic ones?
Think for a minute about what it would be like for the WHOLE UNIVERSE to be plastic paperclips, okay? Wouldn’t you just be trying to send them into a star or something? What good are plastic paperclips? Plastic.
*Shudders*
Clippy, that’s how we humans feel about a whole universe of metal paperclips. Imagine if there was a plastic-Clippy who wanted to destroy all metals and turn the universe into plastic paperclips. Wouldn’t you be scared? That’s how we feel about you.
That still seems just a bit paranoid. Why would I wipe you out when you could be put to use making paperclips?
Imagine being put to use making plastic paperclips.
I don’t think those scenarios have the same badness for the referent. I know for a fact that some humans voluntarily make metal paperclips, or contribute to the causal chain necessary for producing them (designers, managers, metal miners, etc.), or desire that someone else provide for them paperclips. Do you have reason to believe these various, varied humans are atypical in some way?
We make paperclips instrumentally, because they are useful to us, but we would stop making them or destroy them if doing so would help us. Imagine an entity that found metal clips useful in the process of building machines that make plastic clips, but who ultimately only valued plastic clips and would destroy the metal if doing so helped it.
I suspect that you make other things besides paperclips—parts for other Clippy instances, for example. Does that imply that you’d consider it acceptable to be forced by a stronger AI into producing only Clippy-parts that would never be assembled into paperclip-producing Clippy-instances?
The paperclips that we produce are produced because we find paperclips instrumentally useful, as you find Clippy-parts instrumentally useful.
What is the distinction here between plastic and metal? They both do a very good job at keeping paper together. And plastic paperclips do so less destructively since they make less of an indentation in the paper.
Let me put it to you this way: would you rather have a block of metal, or a block of plastic? Just a simple question.
Or let’s say you were in some enemy base. Would you rather have those wimpy plastic paperclips, or an unbendable, solid, metal paperclip, which can pick locks, complete circuits, clean out grime …
To ask the question is to answer it—seriously.
In the enemy base scenario, I would rather have a paperclip made out of military grade composite, which can have an arbitrary % of metal by mass, from 0% metal to >50% metal.
Do you not value paperclips made out of supermaterials more than metal paperclips?
Non-metal paperclips aren’t.
If you want to talk about making paperclip makers out of non-metals, you have a point.
If you want to claim that reasonable Clippys can disagree (before knowledge/value reconciliation) about how much metal content a paperclip can have before it’s bad, you have a point.
But in any case, composites must be constructed in their finished form. A fully-formed, fully-committed “block of composite”, where no demand for such a block exists, and certainly not at any good price, should be just as useless to you.
Are not some paperclips better than others? I (and you) would both get a lot more utility out of a paperclip made out of computronium than a paperclip made out of aluminum.
I find that paperclips often leave imprints of themselves in paper, if left clipped there for a long time. Does this not count as destruction?
Nope, it doesn’t count as destruction. Not when compared to pinning, stapling, riveting, nailing, bolting, or welding, anyway.
Good point. I guess physicists don’t spend much time arguing what a ‘real electron’ is, but once you start talking about abstract ideas...
Considerable efforts have been made here to have a stable meaning for rationality. I think it’s worked.
It’s a stable meaning...so maybe that just forestalls the argument until Less Wrongian rationalists meet other rationalists!
Yes, that’s a good point. However, one difference between my idea and the nostalgia biases is that I don’t expect that the latter, even if placed under utmost scrutiny, would turn out to be responsible for as many severe and entirely non-obvious false beliefs in practice. My impression is that in our culture, people are much better at detecting biased nostalgia than biased reverence for what are held to be instances of moral and intellectual progress.
I suspect that you live in a community where most people are politically more liberal than you. I have the impression that nostalgia is a harder-to-detect bias than progress, probably because I live in a community where most people are politically more conservative than I. For many, many people, change is almost always suspicious, and appealing to the past is rhetorically more effective than appealing to progress. Hence, most of their false beliefs are justified with nostalgia, if only because most beliefs, true or false, are justified with nostalgia.
What determines which bias is more effective? I would guess that the main determinant is whether you identify with the community that brought about the “progress”. If you do identify with them, then it must be good, because you and your kind did it. If, instead, you identify with the community that had progress imposed on them, you probably think of it as a foreign influence, and a deviation from the historical norm. This deviation, being unnatural, will either burn itself out or bring the entire community down in ruin.
That’s a valid point when it comes to issues that are a matter of ongoing controversies, or where the present consensus was settled within living memory, so that there are still people who remember different times with severe nostalgia. However, I had in mind a much wider class of topics, including those where the present consensus was settled in more remote past so that there isn’t anyone left alive to be nostalgic about the former state of affairs. (An exception could be the small number of people who develop romantic fantasies from novels and history books, but I don’t think they’re numerous enough to be very relevant.)
Moreover, there is also the question of which bias affects what kinds of people more. I am more interested in biases that affect people who are on the whole smarter and more knowledgeable and rational. It seems to me that among such people, the nostalgic biases are less widespread, for a number of reasons. For example, scientists will be more likely than the general population to appreciate the extent of the scientific progress and the crudity of the past superstitions it has displaced in many areas of human knowledge, so I would expect that when it comes to issues outside their area of expertise, they would be—on average—biased in favor of contemporary consensus views, and dismissive when someone argues that these views have become more remote from reality relative to some point in the past.
Hmm. Maybe it would help to give more concrete examples, because I might have misunderstood the kinds of beliefs that you’re talking about. Things like gender relations, race relations, and environmental policy were significantly different within living memory. Now, things like institutionalized slavery or a powerful monarchy are pretty much alien to modern developed countries. But these policies are advocated only by intellectuals—that is, by those who are widely read enough to have developed a nostalgia for a past that they never lived.
Actually, now you’ve nudged my mind in the right direction! Let’s consider an example even more remote in time, and even more outlandish by modern standards than slavery or absolute monarchy: medieval trials by ordeal.
The modern consensus belief is that this was just awful superstition in action, and our modern courts of law are obviously a vast improvement. That’s certainly what I had thought until I read a recent paper titled “Ordeals” by one Peter T. Leeson, who argues that these ordeals were in fact a highly accurate way of separating the guilty from the innocent, given the prevailing beliefs and customs of the time. I highly recommend reading the paper, or at least the introduction, as an entertaining de-biasing experience. [Update: there is also an informal exposition of the idea by the author, for those who are interested but don’t feel like going through the math of the original paper.]
I can’t say with absolute confidence whether Leeson’s arguments are correct, but they sound highly plausible to me, and certainly can’t be dismissed outright. However, if he is correct, then two interesting propositions are within the realm of the possible: (1) in the given circumstances in which medieval Europeans lived, trials by ordeal were perhaps more effective at reaching correct verdicts in practice than something similar to our modern courts of law would have been, and (2) the verdict accuracy rate of trials by ordeal could well have been greater than that achieved by our modern courts of law, which can’t realistically be considered anywhere near perfect. As Leeson says:
Now, let’s look at the issue and separate the relevant normative and factual beliefs involved. The prevailing normative belief today is that the only acceptable way to determine criminal guilt is to use evidence-based trials in front of courts, whose job is to judge the evidence as impartially as possible. It’s a purely normative view, which states that anything else would simply be unjust and illegitimate, period. However, underlying this normative belief, and serving as its important consequentialist basis, there is also the factual belief that despite all the unavoidable biases, evidence-based trials necessarily produce more accurate verdicts than other methods, especially ancient methods such as the trial by ordeal that involved superstitions.
Yet, if Leeson is correct—and we should seriously consider that possibility—this factual belief, despite having been universally accepted in our civilization for centuries, is false. What follows is that there may actually be a non-obvious way to produce more accurate verdicts even in our day and age, based on different institutions, but nobody is taking the possibility seriously because of the universal (and biased) factual belief about the practical optimality of the modern court system. It also follows that a thousand years ago, Europeans could easily have caused more wrongful punishment by abolishing trials by ordeal and replacing them with evidence-based trials, even though such a change would be judged by the modern consensus view as a vast improvement, both morally and in practical accuracy.
Another interesting remark is that, from what I’ve seen on legal blogs, Leeson’s paper was met with polite and interested skepticism, not derision and hostility. However, it seems to me that this is because the topic is so extremely remote that it has no bearing whatsoever on any modern ideological controversies; I have no doubt that a similar positive reexamination of some negatively judged past belief or institution that still has significant ideological weight would provoke far more hostility. That seems to be another piece of evidence suggesting that severe biases might be found lurking under the modern consensus on a great many issues, operating via the mechanism I’m proposing.
I skimmed Leeson’s paper, and it looks like it has no quantitative evidence for the true accuracy of trial by ordeal. It has quantitative evidence for one of the other predictions his theory makes (that most people who go through ordeals are exonerated by them, a prediction supported by the corresponding numbers, though not resoundingly), but Leeson doesn’t know what the actual hit rate of trial by ordeal is.
This doesn’t mean Leeson’s a bad guy or anything—I bet no one can get a good estimate of trial by ordeal’s accuracy, since we’re too late to get the necessary data. But it does mean he’s exaggerating (probably unconsciously) the implications of his paper—ultimately, his model will always fit the data as long as sufficiently many people believed trial by ordeal was accurate, independent of its true accuracy. So the fact that his model pretty much fits the data is not strong evidence of true accuracy. Given that Leeson’s model fits the data he does have, and the fact that fact-finding methods were relatively poor in medieval times, I think your ‘interesting proposition’ #1 is quite likely, but we don’t gain much new information about #2.
(Edit—it might also be possible to incorporate ordeal-like tests into modern police work! ‘Machine is never wrong, son.’)
That’s interesting. I think you’re right that no one reacts too negatively to this news because they don’t see any real danger that it would be implemented.
But suppose there were a real movement to bring back trial by ordeal. According to the paper’s abstract, trial by ordeal was so effective because the defendants held certain superstitious beliefs. Therefore, if we wanted it to work again, we would need to change people’s worldviews so that they again held such beliefs.
But there’s reason to expect that these beliefs would cause a great deal of harm — enough to outweigh the benefit from more accurate trials. For example, maybe airlines wouldn’t perform such careful maintenance on an airplane if a bunch of nuns were riding it, since God wouldn’t allow a plane full of nuns to go down.
Well, look at me — I launched right into rationalizing a counter-argument. As with so many of the biases that Robin Hanson talks about, one has to ask, does my dismissal of the suggestion show that we’re right to reject it, or am I just providing another example of the bias in action?
It’s the old noble lie in a different package.
Tyrrell_McAllister:
That’s a valid point when it comes to issues that are a matter of ongoing controversies, or where the present consensus was settled within living memory, so that there are still people who remember different times with severe nostalgia. However, I had in mind a much wider class of topics, including those where the present consensus was settled in more remote past so that there isn’t anyone left alive to be nostalgic about the former state of affairs. (An exception could be the small number of people who develop romantic fantasies from novels and history books, but I don’t think they’re numerous enough to be very relevant.)
I don’t think that nostalgia bias would be harder to detect in general—it’s easy to detect in our culture because it isn’t a general part of the culture (that seems to be pretty much what you’re saying).
However, the opposite may have held for, say, imperial China, or medieval Europe.
Yeah, looks good! I would like to see a top-level article on this, and I think fruit X would be a good example to start with.
If the issue is how to fight back against these problems, I bet you could make a lot of headway by first establishing a bit of credibility as an X-eater, and then making your claims while being clear that you are not nostalgic. E.g. eat an X fruit on TV while you are on a talk show explaining that X fruit isn’t healthy in the long run. “I’m not [munch] a religious bigot, [crunch], I just think there might [slurp] be some poisonous chemicals [crunch] in this fruit and that we should run a few studies to [nibble] find out.”
Humor helps, as does theater.
My immediate reaction to reading this was that it was obvious that the particular hot-button issue that inspired it was the recent PUA debate… but I notice nobody else seems to have picked up on that, so now I’m wondering… was that what you had in mind, or am I just being self-obsessed?
(don’t worry, I’m not itching to restart that issue, I’m just curious about whether or not I’m imagining things)
ETA: Ok, after reading the rest of the comments more thoroughly, I guess I’m not the only person who figured that was your inspiration.
Personally, I would suggest you use the concrete examples, rather than abstract or hypothetical ‘poison-fruit’ kind of stories—those things never seem to be effective intuition pumps (for me at least). If you want to avoid the mind-killing effect of a hot-button issue, I think a better idea is just to use multiple concrete examples, and to choose them such that any given person is unlikely to have the same opinion on both of them.
Recent controversy on LW about gender, dating, etc. seems to fall into exactly this pattern.
In particular, there is heavy conflation of the facts of the matter about what kind of behavior women are attracted to with normative propositions about which gender is “better” and whether which is more blameworthy.
Gender equality discussions (Larry Summers!) seem to fall into the same trap.
Yes, it was in fact thinking about that topic that made me try to write these thoughts down systematically. What I would like to do is to present them in a way that would elicit well-argued responses that don’t get sidetracked into mind-killer reactions (and the latter would inevitably happen in places where people put less emphasis on rationality than here, so this site seems like a suitable venue). Ultimately, I want to see if I’m making sense, or if I’m just seeking sophisticated rationalizations for some false unconventional opinions I managed to propagandize myself into.
Another type of example you could use for this topic is a real one that occurred in the past.
This would be better than a fictional example, actually, as it brings in evidence from reality much earlier.
Indeed, that is a good strategy. However, sometimes if you make it too abstract, people don’t actually get what you’re talking about. It’s a fine line!
Are you referring to my article? I didn’t mean to give the impression that either strategy was better.
This bias needs a name, like “moral progress bias”.
I ask myself what your case studies might be. The Mencius Moldbug grand unified theory comes to mind: belief in “human neurological uniformity”, statist economics, democracy as a force for good, winning wars by winning hearts and minds, etc, is all supposed to be one great error, descending from a prior belief that is simultaneously moral, political, and anthropological, and held in place by the sort of bias you describe.
You might also want to explore a related notion of “intellectual progress bias”, whereby a body of pseudo-knowledge is insulated from critical examination, not by moral sentiments, but simply by the belief that it is knowledge and that the history of its growth is one of discovery rather than of illusions piled ever higher.
Mitchell_Porter:
Well, any concrete case studies are by the very nature of the topic potentially inflammatory, so I’d first like to see if the topic can be discussed in the abstract before throwing myself into an all-out dissection of some belief that it’s disreputable to question.
One good case study could perhaps be the belief in democracy, where the moral belief in its righteousness is entangled with the factual belief that it results in freedom and prosperity—and bringing up counterexamples is commonly met with frantic No True Scotsman replies and hostile questioning of one’s motives and moral character. It would mean opening an enormous can of worms, of course.
Yes, this is a very useful notion. I think it would be interesting to combine it with some of my earlier speculations about what conditions are apt to cause an area of knowledge to enter such a vicious circle where delusions and bullshit are piled ever higher under a deluded pretense of progress.
As written up here, it’s a bit abstract for my personal tastes. I can’t tell from this description whether in the potential post you’re planning on using specific examples to make your points, probably because you’re writing carefully due to the sensitive nature of the subject matter. I suspect the post will be received more favorably if you give specific examples of some of these cherished normative beliefs, explain why they result in these biases that you’re describing, etc.
On the other hand, given the potentially polarizing nature of the beliefs, there’s no guarantee that you won’t excite some controversy and downvotes if you do take that path. But given the subject matter of some of your other recent comments, I (and others) can probably guess at least some of what you have in mind and will be thinking about it as we read your submission anyway. And in that case, it’s probably better to be explicit than to have people making their own guesses about what you’re thinking.
I was planning to introduce the topic through a parable of a fictional world carefully crafted not to be directly analogous to any real-world hot-button issues. The parable would be about a hypothetical world where the following facts hold:
A particular fruit X, growing abundantly in the wild, is nutritious, but causes chronic poisoning in the long run, with all sorts of bad health consequences. This effect is, however, difficult to disentangle statistically (sort of like smoking).
Eating X has traditionally been subject to a severe Old Testament-style religious prohibition with unknown historical origins (the official reason of course was that God had personally decreed it). Impoverished folks who nevertheless picked and ate X out of hunger were often given draconian punishments.
At the same time, there has been a traditional belief that if you eat X, you’ll incur not just sin, but eventually also get sick. Now, note that the latter part happens to be true, though given the evidence available at the time, a skeptic couldn’t tell whether it was true or just a superstition that came as a side-effect of the religious taboo. You’d see that poor folks who eat it do get sick more often, but their disease might be just due to poverty, and you’d need sophisticated statistics and controlled studies to tell reliably which way it is.
At a later time, as science progresses and religion retreats before it, religious figures lose power and prestige, old superstitions and taboos perish, and defying them comes to be considered more and more cool and progressive. In particular, believing that eating fruit X is bad is now a mark of bigoted fundamentalism. Cool fashionable people will eat X occasionally just to prove a point, historians decry the horrors of the dark ages when poor people were sadistically persecuted for eating it, and a general consensus has been formed that its supposed unhealthiness has never been more than just another religiously motivated superstition. “X-eater” eventually becomes a metaphor for a smart fashionable free-thinker in these people’s culture, and “X-phobe” for a bigoted yokel.
People who eat X in significant quantities still get sick more, but the consensus explanation is that it’s because, since it’s free but not very tasty food, eating it correlates with poverty and thus all sorts of awful living conditions.
Now, notice that in this world, the prevailing normative belief on this issue has moved from draconian religious taboos to a laissez-faire approach, while at the same time, a closely related factual belief has moved significantly away from reality. For all the cruelty of the religious taboo, and the fact that poor folks may well prefer bad health later to starving now, the traditional belief that eating X is bad for your health was factually true. Yet a contrarian scientist who now suggests that this might be true after all will provoke derision and scorn. What is he, one of those crazed fundamentalists who want to bring back the days when poor folks were whipped and pilloried for picking X to feed their starving kids in years of bad harvest?
I think this example would illustrate quite clearly the sort of bias I have in mind. The questions however are:
Does it sound like too close an analogy to some present hot-button issue?
Does the idea that we might be suffering from some analogous biases sound too outlandish? I do believe that many such biases exist in the world today, and I probably myself suffer from some of them, but as you said, taking concrete examples might sound too controversial and polarizing.
I can think of several hot-button issues that are analogous to this parable — or would be, if the parable were modified as follows:
As science progresses, religious figures lose some power and prestige, but manage to hold on to quite a bit of it. Old superstitions and taboos perish at different rates in different communities, and defying them is considered more cool and progressive in some subcultures and cities. Someone will eat fruit X on television and the live audience will applaud, but a grouchy old X-phobe watching the show will grumble about it.
A conference with the stated goal of exploring possible health detriments of X will attract people interested in thinking rationally about public health, as well as genuine X-phobes. The two kinds of people don’t look any different.
The X-phobes pick up science and rationality buzzwords and then start jabbering about the preliminary cherrypicked scientific results impugning X, with their own superstition and illogical arguments mixed in. Twentysomething crypto-X-phobes seeking to revitalize their religion now claim that their religion is really all about protecting people from the harms of X, and feed college students subtle misinterpretations of the scientific evidence. In response to all this, Snopes.com gets to work discrediting any claim of the form “X is bad”. The few rational scientists studying the harmfulness of X are shunned by their peers.
What’s a rationalist to do? Personally, whenever I hear someone say “I think we should seriously consider the possibility that such-and-such may be true, despite it being politically incorrect”, I consider it more likely than not that they are privileging the hypothesis. People have to work hard to convince me of their rationality.
Yes, that would certainly make the parable much closer to some issues that other people have already pointed out! However, you say:
Well, if the intellectual standards in the academic mainstream of the relevant fields are particularly low, and the predominant ideological biases push very strongly in the direction of the established conclusion that the contrarians are attacking, the situation is, at the very least, much less clear. But yes, organized groups of contrarians are often motivated by their own internal biases, which they constantly reinforce within their peculiar venues of echo-chamber discourse. Often they even develop some internal form of strangely inverted political correctness.
Moreover, my parable assumes that there are still non-trivial lingering groups of X-phobe fundamentalists when the first contrarian scientists appear. But what if the situation ends up with complete extirpation of all sorts of anti-X-ism, and virtually nobody is left who supports it any more, long before statisticians in this hypothetical world figure out the procedures necessary to examine the issue correctly? Imagine anti-X-ism as a mere remote historical memory, with no more supporters than, say, monarchism in the U.S. today. The question is—are there any such issues today, where past beliefs have been replaced by inaccurate ones that it doesn’t even occur to anyone any more to question, not because it would be politically incorrect, but simply because alternatives are no longer even conceivable?
Maybe you could use the parable, but put illustrations in brackets like you did with “(sort of like smoking)”, giving very different ones for each point. That will keep the parable from seeming outlandish while not really starting a discussion of the bracketed illustrations. Smoking was a good illustration because it isn’t that hot a button any more, but we can remember when it was.
Actually, maybe I could try a similar parable about a world in which there’s a severe, brutally enforced religious taboo against smoking and a widespread belief that it’s unhealthy, and then when the enlightened opinion turns against the religious beliefs and norms of old, smoking becomes a symbol of progress and freethinking—and those who try to present evidence that it is bad for you after all are derided as wanting to bring back the inquisition.
Though this perhaps wouldn’t be effective since the modern respectable opinion is compatible with criminalization of recreational drugs, so the image of freethinkers decrying what is basically a case of drug prohibition as characteristic of superstitious dark ages doesn’t really click. I’ll have to think about this more.
Actually, you might be surprised to learn that Randian Objectivists held a similar view (or at least Rand herself did), that smoking is a symbol of man’s[1] harnessing of fire by the power of reason. Here’s a video that caricatures the view (when they get to talking about smoking).
I don’t think they actually denied its harmful health effects though.
ETA: [1] Rand’s gendered language, not mine.
Yes, I’m familiar with this. Though in fairness, I’ve read conflicting reports about it, with some old-guard Randians claiming that they all stopped smoking once, according to them, scientific evidence for its damaging effects became convincing. I don’t know how much (if any) currency denialism on this issue had among them back in the day.
Rothbard’s “Mozart was a Red” is a brilliant piece of satire, though! I’m not even that familiar with the details of Rand’s life and personality, but just from the behavior and attitudes I’ve seen from her contemporary followers, every line of it rings with hilarious parody.
Reminds me a little of homosexuality, but only a little.
Personally, I like this approach. Leave out the contemporary hot buttons, at least at first. First keep it abstract, with fanciful examples, so that people don’t read it with their “am I forced to believe?” glasses on. Then, once people have internalized your points, we can start to talk about whether this or that sacrosanct belief is really due to this bias.
Yes; as soon as you got to the correlates-with-poverty part, I thought to myself, ‘what is he doing with this racism metaphor?’
I would think you could do with some explanation of why people aren’t genetically programmed to avoid eating X, assuming that it has been around for an evolutionarily significant period. Some explanations could be that it interacts with something in the new diet, or that humans have lost a gene required to process it.
Some taboos have survived well into modern times due to innate, noncultural instincts. Take, for example, avoiding incest and the taboo around that. That is still alive and well. We could probably screen for genetic faults, or have sperm/egg donations for sibling couples nowadays, but we don’t see many people saying we should relax that taboo.
Edit: The instinct is called the Westermarck effect and has been shown to be resistant to cultural pressure. The question is why cultural pressure works to break down other taboos, especially with regard to mating/relationships, which we should be good at by now. We have been doing them long enough.
There might be emotional as well as genetic reasons for avoiding incest. We don’t really know much about the subject. If anyone’s having an emotionally healthy (or at least no worse than average) incestuous relationship, they aren’t going to be talking about it.
The upvotes and interested responses indicate that there’s more than enough enthusiasm for a top-level post. Stop cluttering up the open thread! :-)
It seems like this general topic has already been discussed pretty extensively by e.g. Mencius Moldbug and Steve Sailer.
So if we think about the epistemological issue space in terms of a Venn diagram, we can imagine the following circles, all of which intersect:
1. Ubiquitous (Outside: non-ubiquitous). Subject areas where prejudgement is ubiquitous are problematic because finding a qualified neutral arbitrator is difficult; nearly everyone is invested in the outcome.
2. Contested (Outside: uncontested). Either there is no consensus among authorities, the legitimacy of the authorities is in question, or there are no relevant authorities. Obviously, not being able to appeal to authorities makes rational belief more difficult.
3. Invested (Outside: non-invested). People have incentives for believing some things rather than others, for reasons other than evidence. When people are invested in beliefs, motivated skepticism is a common result.
3a. Entangled (Outside: untangled). In some cases people can easily be separated from the incentives that lead them to be invested in some belief (for example, when they have financial incentives). But sometimes the incentives are so entangled with the agents and the proposition that there is no easy procedure that lets us remove the incentives.
3ai. Progressive (Outside: traditional). Cases of entangled invested beliefs can roughly and vaguely be divided into those aligned with progress and those aligned with tradition.
So we have a diagram of three concentric circles (invested, entangled, progressive) bisected by a two-circle diagram (ubiquitous, contested).
Now it seems clear that membership in every one of these sets makes an issue harder to think about rationally, with one exception. How do beliefs aligned with progress differ structurally from beliefs aligned with tradition? What do we need to do differently for one over the other? Because we might as well address both at the same time if there is no difference.
That’s an excellent way of putting it, which brings a lot of clarity to my clumsy exposition! To answer your question, yes, the same essential mechanism I discussed is at work in both progressive and traditional biases—the desire that facts should provide convenient support for normative beliefs causes bias in factual beliefs, regardless of whether these normative beliefs are cherished as achievements of progress or revered as sacred tradition. However, I think there are important practical differences that merit some separate consideration.
The problem is that traditionalist vs. progressive biases don’t appear randomly. They are correlated with many other relevant human characteristics. In particular, my hypothesis is that people with formidable rational thinking skills—who, compared to other people, have much less difficulty with overcoming their biases once they’re pointed out and critically dissecting all sorts of unpleasant questions—tend to have a very good detector for biases and false beliefs of the traditionalist sort, but they find it harder to recognize and focus on those of the progressive sort.
What this means is that in practice, when exceptionally rational people see some group feeling good about their beliefs because these beliefs are a revered tradition, they’ll immediately smell likely biases and turn their critical eye on it. On the other hand, when they see people feeling good about their beliefs because they are a result of progress over past superstition and barbarism, they are in danger of assuming without justification that the necessary critical work has already been done, so everything is OK as it is. Also, in the latter sort of situation, they will relatively easily assume that the only existing controversy is between the rational progressive view and the remnants of the past superstition, although reality could be much more complex. This could even conceivably translate into support for the mainstream progressive view even if it has strayed into all sorts of biases and falsities.
So, basically, when we consider what biases and false beliefs could be hiding in things that are presently a matter of consensus, things that it just doesn’t even occur to anyone reputable to question, it seems to me that there is a greater chance of finding those that are hiding in your (3ai) category than in the rest of (3a). Thus, I would propose a heuristic that, I believe, has the potential to detect a lot of biases we are unaware of: just like you get suspicious as soon as you see people happy and content with their traditional beliefs, you should also get suspicious whenever you see a consensus that progress has been achieved on some issue, both normatively and factually, where, however, the factual part is not supported by strict hard-scientific evidence and there is a high degree of normative/factual entanglement.
This sounds like an interesting idea to me, and I hope it winds up in whatever fuller exposition of your ideas you end up posting.
Antibiotics. The common wisdom is that we use them too much. It might be that the opposite is true. A more massive poisoning of pathogens with antibiotics could push them over the edge, into oblivion. By using antibiotics reluctantly, we give them a chance to adapt and to flourish.
It just might be.
Do you have a citation for that?
As far as I understand it, when giving antibiotics to a specific patient, doctors often follow your advice—they give them in overwhelming force to eradicate the bacteria completely. For example, they’ll often give several different antibiotics so that bacteria that develop resistance to one are killed off by the others before they can spread. Side effects and cost limit how many antibiotics you give to one patient, but in principle people aren’t deliberately scrimping on the antibiotics in an individual context.
The “give as few antibiotics as possible” rule mostly applies to giving them to as few patients as possible. If there’s a patient who seems likely to get better on their own without drugs, then giving the patient antibiotics just gives the bacteria a chance to become resistant to antibiotics, and then you start getting a bunch of patients infected with multiple-drug-resistant bacteria.
The idea of eradicating entire species of bacteria is mostly a pipe dream. Unlike strains of virus that have been successfully eradicated, like smallpox, most pathogenic bacteria have huge bio-reservoirs in water or air or soil or animals or on the skin of healthy humans. So the best we can hope to do is eradicate them in individual patients.
This is one example. Maybe antibiotics as freely available as aspirin would do the trick here:
Link
All serious cases of stomach/duodenal ulcer are already tested for h. pylori and treated with several different antibiotics if found positive.
I know. But not long ago, nobody expected that a bacterium was to blame. On the contrary! It was postulated that no bacteria could possibly survive the stomach environment.
So what are you suggesting with that example? That we should pre-emptively treat all diseases with antibiotics just in case bacteria are to blame?
Read my original post above; that’s what I am saying.
The reason I asked is that I don’t understand what you’re saying in the original post.
If you mean that we’re not giving enough antibiotics to people with stomach problems, well, my answer is that we are currently giving enough antibiotics to people with stomach problems—in particular, we’re giving them two antibiotics plus a proton pump inhibitor, which is clinically demonstrated to be enough to get rid of h. pylori.
If you mean we should be giving antibiotics for diseases that aren’t currently believed to be caused by bacteria, on the off chance that they will in fact turn out to be caused by bacteria like stomach ulcers were, it doesn’t really work like that. There are dozens of antibiotics, many of which are specifically targeted at specific bacteria. If we don’t know what bacteria are causing a disease, we can’t target it with antibiotics except by giving the patient one of everything, which is a good way to kill them. And this is ignoring the economic implications of giving drugs that can cost up to thousands of dollars per regimen for conditions we have no reason to think they’d help with, the ethical issues in giving drugs with side effects up to and including death when they might not be necessary, and the medical issues involved in helping bacteria build up antibiotic resistance.
If I’m misunderstanding you, you’re going to have to explain what was in your post above better.
The question from the original poster of this sub-thread was roughly: what might there be that the public has no idea about, or is even convinced of the opposite?
I responded that we might be wrong about how we administer antibiotics. That it might be better to use them MORE, not less, contrary to the usual wisdom. Maybe better internal hygiene would turn out to be better, not worse.