More thoughts on assertions

Response to: The “show, don’t tell” nature of argument

Morendil says not to trust simple assertions. He’s right, at least for the particular class of simple assertions he’s talking about. But to see why, let’s look at different types of assertions and see how useful it is to believe each of them.

Summary:
- Hearing an assertion can be strong evidence if you know nothing else about the proposition in question.
- Hearing an assertion is not useful evidence if you already have a reasonable estimate of how many people do or don’t believe the proposition.
- An assertion by a leading authority is stronger than an assertion by someone else.
- An assertion plus an assertion that there is evidence makes no factual difference, but is a valuable signal.

Unsupported assertions about non-controversial topics

Consider my assertion: “The Wikipedia featured article today is on Uriel Sebree”. Even if you haven’t checked Wikipedia today and have no evidence on this topic, you’re likely to believe me. Why would I be lying?

This can be nicely modeled in Bayesian terms—you start with a prior evenly distributed across Wikipedia topics, the probability of me saying this conditional on it being false is pretty low, and the probability of me saying it conditional on it being true is pretty high. So noting that I said it nicely concentrates probability mass in the worlds where it’s true. You’re totally justified in believing it. The key here is that you have no reason to believe there’s a large group of people who go around talking about Uriel Sebree being on Wikipedia regardless of whether or not he really is.
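
To make this concrete, here is a minimal sketch of that update. All the numbers (the prior over topics and the chance I’d make the assertion in each world) are invented for illustration:

```python
def posterior(prior, p_assert_if_true, p_assert_if_false):
    """Bayes' rule: P(claim is true | I asserted it)."""
    evidence = p_assert_if_true * prior + p_assert_if_false * (1 - prior)
    return p_assert_if_true * prior / evidence

# Invented numbers: thousands of plausible featured-article topics, a decent
# chance I'd mention today's article if this were it, and almost no chance
# I'd assert this particular falsehood out of the blue.
prior = 1 / 5000
p_assert_if_true = 0.5
p_assert_if_false = 1 / 500_000

print(posterior(prior, p_assert_if_true, p_assert_if_false))  # ~0.98
```

The exact numbers don’t matter much; as long as the assertion is far more likely in worlds where it’s true, most of the probability mass ends up there.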

Unsupported assertions about controversial topics

The example given in Morendil’s post is that some races are biologically less intelligent than others. Let’s say you have no knowledge of this whatsoever. You’re so naive you don’t even realize it might be controversial. In this case, someone who asserts “some races are biologically less intelligent than others” is no less believable than someone who asserts “some races have slightly different frequencies of pancreatic cancer than others.” You’d accept the second as the sort of boring but reliable biological fact that no one is particularly prone to lie about, and you’d do the same with the first.

Now let’s say you’re familiar with controversies in sociology and genetics, you already know that some people believe some races are biologically more intelligent, and other people don’t. Let’s say you gauge the people around you and find that about 25% of people agree with the statement and 75% disagree.

This survey could be useful. You have to ask yourself—is this statement about race and genetics more likely to have the support of a majority of people in a world where it’s true than in a world where it’s false? “No” is a perfectly valid answer here—you might think people are so interested in signalling that they’re not racist that they’ll completely suspend their rational faculties. But “yes” is also a valid answer here if you think that the people around you have reasonably intelligent opinions on the issue. In that case, this would be a good time to update your probability accordingly (here, downward, since only a quarter of them agree).
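
Here is the same kind of calculation applied to the survey itself, with invented likelihoods standing in for how surprising a 25% agreement rate would be in each world:

```python
# Invented numbers: if popular opinion tracks truth at all, a mere 25%
# agreement rate should be somewhat surprising in a world where the statement
# is true, and fairly expected in a world where it's false.
prior = 0.5               # agnostic before the survey
p_survey_if_true = 0.2
p_survey_if_false = 0.6

evidence = p_survey_if_true * prior + p_survey_if_false * (1 - prior)
posterior = p_survey_if_true * prior / evidence

print(posterior)  # 0.25 -- the survey pushed you toward "false"
# If you answered "no" above, the two likelihoods are equal and the survey
# doesn't move you at all.
```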

Now I, a perfectly average member of the human race, make the assertion that I believe that statement. But from your survey, you already have information that screens off any evidence from my belief: given that the statement is false and there’s a 25% belief rate, there’s a 25% chance I would agree with it, and given that the statement is true and there’s a 25% belief rate, there’s the same 25% chance I would agree with it. If you’ve already updated on your survey, my assertion is equally likely under both hypotheses and doesn’t shift probability one way or the other.
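
In sketch form: once the survey fixes the belief rate at 25% regardless of the statement’s truth, one more person agreeing has a likelihood ratio of 1 (numbers again invented):

```python
# Assumed: after the survey, you expect 25% of people to voice agreement
# whether or not the statement is actually true.
p_agree_if_true = 0.25
p_agree_if_false = 0.25

prior = 0.30                                           # wherever the survey left you
likelihood_ratio = p_agree_if_true / p_agree_if_false  # = 1.0

# Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio
prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(posterior)  # 0.30 -- one more assertion moves nothing
```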

Unsupported assertions on extremely unusual topics

There is a case, I think, in which a single person asserting ze believes something can increase your probability. Imagine that I say, truthfully, that I believe that a race of otter-people from Neptune secretly controls the World Cup soccer tournament. If you’ve never heard this particular insane theory before, your estimate of the number of people who believed it was probably either zero, or so low that you wouldn’t expect anyone you actually meet (even for values of “meet” including online forums) to endorse it. My endorsing it actually raises your estimate of the percent of the human race who endorse it, and this should raise your probability of it being true. Clearly, it should not raise it very much, and it need not raise it at all to the extent that you can show I have reasons other than truth for making the assertion (in this case, most of the probability mass generated by the assertion would leak off into the proposition that I was insane), but it can raise it a little bit.
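
A toy version of that, with numbers invented purely to show the shape of the update:

```python
# Invented numbers, just to show the shape of the update.
prior_true = 1e-9        # otter-people running the World Cup: basically impossible
p_assert_if_true = 1e-4  # if true, a few in-the-know believers might mention it
p_assert_if_false = 1e-6 # if false, only a joker or a crank would assert it

numerator = p_assert_if_true * prior_true
evidence = numerator + p_assert_if_false * (1 - prior_true)
posterior_true = numerator / evidence

print(posterior_true)  # ~1e-7: about 100x the prior, still utterly negligible
# Nearly all of the remaining probability mass sits with "false, and the
# speaker is joking, insane, or asserting it for reasons other than truth."
```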

Unsupported assertions by important authorities

This effect becomes more important when the person involved has impressive credentials. If someone with a Ph.D in biology says that race plays a part in intelligence, this could shift your estimate. In particular, it would shift it if you previously thought the race-intelligence connection was such a fringe theory that its supporters would be unlikely to get even one good biologist on their side. But if you already knew that this theory was somewhat mainstream and had at least a tiny bit of support from the scientific community, the assertion would give you no extra information. Consider this the Robin Hanson Effect, because a lot of the good Robin Hanson does comes from being a well-credentialed guy with a Ph.D willing to endorse theories that formerly sounded so crazy that people would not have expected even one Ph.D to endorse them.

In cases of the Hanson Effect, the way you found out about the credentialed supporter is actually pretty important. If you Googled “Ph.D who supports transhumanism” and found Robin’s name, then all it tells you is that there is at least one Ph.D who supports transhumanism. But if you were at a bar, and you found out the person next to you was a Ph.D, and you asked zir out of the blue if ze supported transhumanism, and ze said yes, then you know that there are enough Ph.Ds who support transhumanism that randomly running into one at the bar is not that uncommon an event.
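
A sketch of why the two discovery methods differ, assuming (purely for illustration) two candidate base rates of support among Ph.Ds and a pool of a million Ph.Ds to search over:

```python
# Invented setup: is support among Ph.Ds "fringe" (1 in 10,000) or
# "not that rare" (1 in 20)? Start 50/50 between the two hypotheses.
rate_fringe, rate_common = 1e-4, 0.05
prior_common = 0.5
n_phds = 1_000_000

def update(prior, likelihood_if_common, likelihood_if_fringe):
    """Posterior that support is 'not that rare', by Bayes' rule."""
    evidence = likelihood_if_common * prior + likelihood_if_fringe * (1 - prior)
    return likelihood_if_common * prior / evidence

# Case 1: Googling turns up at least one supporting Ph.D. That's near-certain
# under either base rate, so it barely moves you.
p_find_fringe = 1 - (1 - rate_fringe) ** n_phds   # ~1.0
p_find_common = 1 - (1 - rate_common) ** n_phds   # ~1.0
print(update(prior_common, p_find_common, p_find_fringe))  # ~0.5

# Case 2: a randomly encountered Ph.D turns out to support it. That's 500x
# more likely under "not that rare", so it moves you a lot.
print(update(prior_common, rate_common, rate_fringe))  # ~0.998
```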

An extreme case of the Hanson Effect is hearing that the world’s top expert supports something. If there’s only one World’s Top Expert, then that person’s opinion is always meaningful. This is why it was such a big deal when Watson came out in favor of a connection between race and intelligence. Now, I don’t know if Watson actually knows anything about human genetic variation. He could have just had one clever insight about biochemistry way back when, and be completely clueless about the rest of the field. But if we imagine he really is the way his celebrity status makes him seem—the World’s Top Expert in the field of genetics—then his opinion carries special weight for two reasons: first of all, it’s the only data point we have in the field of “what the World’s Top Expert thinks”, and second, it suggests that a large percentage of the rest of the scientific community agrees with him (his status as World’s Top Expert makes him something of a randomly chosen data point, and it would be very odd if we had randomly picked the only data point that shares this opinion).

Assertions supported by unsupported claims of “evidence”

So much for completely unsupported assertions. Seeing as most people are pretty good at making up “evidence” that backs their pet beliefs, does it add anything to say “...and I arrived at this conclusion using evidence” if you refuse to say what the evidence is?

Well, it’s a good signal for sanity. Instead of telling you only that at least one person believes in this hypothesis, you now know that at least one person who is smart enough to understand that ideas require evidence believes it.

This is less useful than it sounds. Disappointingly, there are not too many ideas that are believed solely by stupid people. As mentioned before, even creationism can muster a list of Ph.Ds who support it. When I was much younger, I was once quite impressed to hear that there were creationist Ph.Ds with a long list of scientific accomplishments in various fields. Since then, I’ve learned about compartmentalization. So all that this “...and I have evidence for this proposition” can do on a factual level is highlight the existence of compartmentalization for people who weren’t already aware of it.

But on a nonfactual level...again, it signals sanity. The difference between “I believe some races are less intelligent than others” and “I believe some races are less intelligent than others, and I arrived at this conclusion using evidence” is that the second person is trying to convince you ze’s not some random racist with an axe to grind; ze’s an amateur geneticist addressing an interesting biological question. I don’t evaluate the credibility of the two statements any differently, but I’d much rather hang out with the person who made the second one (assuming ze wasn’t lying or trying to hide real racism behind a scientific veneer).

Keep in mind that most communication is done not to convince anyone of anything, but to signal the character of the person arguing (source: I arrived at this conclusion using evidence). One character signal may interfere with other character signals, and “I arrived at this belief through evidence” can be a powerful backup. I have a friend who’s a physics Ph.D, an evangelical Christian with a strong interest in theology, and an American living abroad. If he tries to signal that he’s an evangelical Christian, he’s very likely to get shoved into the “redneck American with ten guns and a Huckabee bumper sticker” box unless he immediately adds something like “and I base this belief on sound reasoning.” That is one very useful signal there, and if he hadn’t given it, I probably would never have bothered talking to him further. It’s not a signal that his beliefs are actually based on sound reasoning, but it’s a signal that he’s the kind of guy who realizes beliefs should be based on that sort of thing and is probably pretty smart.

You can also take this the opposite way. There’s a great Dilbert cartoon where Dilbert’s date says something like “I know there’s no scientific evidence that crystals can heal people, but it’s my point of view that they do.” This is a different signal; something along the lines of “I’d like to signal my support for New Agey crystal medicine, but don’t dock me points for ignoring the scientific evidence against it.” This is more of a status-preserving maneuver than the status-claiming “I have evidence for this” one, but astoundingly it seems to work pretty well (except on Dilbert, who responded, “When did ignorance become a point of view?”).