How to use “philosophical majoritarianism”

The majority of people would hold more accurate beliefs if they simply believed the majority. To state this in a way that doesn’t risk information cascades: we’re talking about averaging impressions and arriving at the same belief.

To the degree that you and others come up with different averages of the impressions, you acknowledge that your belief was itself just your impression of the average, average those meta-impressions, and get closer to convergence. You can repeat this until you get bored, but if you’re doing it right, your beliefs should get closer and closer to agreement, and you shouldn’t be able to predict who is going to fall on which side.
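
To make the iteration concrete, here is a toy sketch in Python (my own illustration, not a formal procedure from the post): each agent forms a noisy meta-impression of the current group average and shifts partway toward it, and the spread of beliefs shrinks every round.

```python
import random

def iterate_toward_agreement(impressions, rounds=20, noise=0.05, seed=0):
    """Toy model: each round, every agent forms a noisy impression of the
    current group average and moves their belief halfway toward it.
    Beliefs converge, and which side of the final average a given agent
    lands on is not predictable in advance."""
    rng = random.Random(seed)
    beliefs = list(impressions)
    for _ in range(rounds):
        avg = sum(beliefs) / len(beliefs)
        # Each agent's meta-impression of the average is itself noisy.
        beliefs = [0.5 * b + 0.5 * (avg + rng.gauss(0, noise)) for b in beliefs]
    return beliefs

print(iterate_toward_agreement([0.2, 0.9, 0.5]))
```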

Of course, most of us are atypical cases, and as good rationalists, we need to update on this information. Even if our impressions were (on average) no better than the average, there are certain cases where we know that the majority is wrong. If we’re going to selectively apply majoritarianism, we need to figure out the rules for when to apply it, to whom, and how the weighting works.

This much I think has been said again and again. I’m gonna attempt to describe how.

Imagine for a moment that you are a perfectly rational Bayesian, and you just need data.

First, realize that “duplicate people” don’t count double. If you make a maximum-precision copy of someone, that doesn’t make him any more likely to be right; clearly we can do better than averaging over all people with equal weighting. By the same reasoning, finding out that a certain train of thought leading to a certain belief is common shouldn’t make you proportionally more confident in that belief. The only reason it might make you any more confident is the possibility that its truth contributes to its proliferation, so that its popularity is (weak) evidence.
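
A quick sketch of the point about duplicates, assuming we can label each belief by the train of thought that produced it (the labels here are hypothetical): adding copies of an existing person shifts a naive headcount average, but leaves a “one vote per line of reasoning” average untouched.

```python
def naive_average(beliefs):
    """Average over people: every head counts once."""
    return sum(b for _, b in beliefs) / len(beliefs)

def one_vote_per_reasoning(beliefs):
    """Average over distinct trains of thought: duplicates add nothing new."""
    by_reasoning = {}
    for reasoning, belief in beliefs:
        by_reasoning.setdefault(reasoning, belief)
    return sum(by_reasoning.values()) / len(by_reasoning)

population = [("argument A", 0.9), ("argument B", 0.2)]
copies = population + [("argument A", 0.9)] * 10  # ten exact copies of one person

print(naive_average(copies))            # drifts toward 0.9 as copies pile up
print(one_vote_per_reasoning(copies))   # stays at 0.55
```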

This explains why we can dismiss the beliefs of the billions of theists. First, their beliefs are so well correlated that all the useful information can be learned from only a handful of theists. Second, we understand their arguments and how they formed their beliefs, and we have already taken both into account. The reason they continue to disagree is that the situation isn’t symmetric: they don’t understand the opposing arguments or the causal path that leads one to be a reductionist atheist.

No wonder “majoritarianism” doesn’t seem to work here.

Since we’re still pretending to be perfect Bayesians, we only care about people who are fairly predictable (given access to their information) and who have information that we don’t have. If they don’t have any new information, then we can just follow their causal path and say, “and here, sir, is where you went wrong.” Even if we don’t understand their minds perfectly, we don’t take them seriously, since it is clear that whatever they were doing, they were doing it wrong. On the other hand, if the other person has a lot of data but we have no idea how that data affects their beliefs, then we can’t extract any useful information.

We only change our beliefs to more closely match theirs when they are not only predictable, but predictably rational. If you know someone is always wrong, then reversing his stupidity can help you get more accurate beliefs, but it won’t bring you closer to agreement; quite the opposite!

If we stop kidding ourselves and realize that we aren’t perfect Bayesians, then we have to start giving credit to how other people think. If you and an epistemic peer come upon the same data set and reach different conclusions, then you have no reason to think that your way of thinking is any more accurate than his (that’s what we mean by an epistemic peer). While you may have different initial impressions, you had better be able to converge to the same belief. And again, on each iteration, it shouldn’t be predictable who is going to fall on which side.

If we revisit cases like religion, you still understand how theists came to their beliefs and exactly why those beliefs fail. So to the extent that you believe you can recognize stupidity when you see it, you still stick to your own belief. Even though you aren’t a perfect Bayesian, for this case you’re good enough.

One-sentence summary: you want to shift your belief toward the average over answers given by predictably rational “Rituals of Cognition”/data set pairs [1], not people [2].

You weight the different “Rituals of Cognition”/data pairs by how much you trust the ROC and by how large the data set is. You must, however, keep in mind that to trust yourself more than average, you need a better-than-average reason to think that you’re better than average.

To the extent that everyone has a unique take on the subject, counting people and counting cognitive rituals are equivalent. But when it comes to a group whose members all think in pretty much the same way, they only get one “vote”.
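
Putting the weighting rule in code form (a hedged sketch; the specific weight formula here, trust times a log of the data size, is my own assumption rather than anything the post pins down):

```python
import math

def majoritarian_belief(pairs):
    """pairs: one entry per distinct ritual-of-cognition / data set pair,
    each a dict with keys 'answer', 'trust', and 'data_size'. People who
    think in essentially the same way share a single entry (one "vote")."""
    weights = [p["trust"] * math.log1p(p["data_size"]) for p in pairs]
    total = sum(weights)
    return sum(w * p["answer"] for w, p in zip(weights, pairs)) / total

pairs = [
    {"answer": 0.8, "trust": 0.9, "data_size": 100},    # your own ritual and data
    {"answer": 0.3, "trust": 0.6, "data_size": 1000},   # a peer's different ritual
    {"answer": 0.5, "trust": 0.2, "data_size": 50},     # a ritual you barely trust
]
print(majoritarian_belief(pairs))
```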

You can get “bonus points” if you can predict how irrational people’s beliefs respond to data and update based on that. For practical purposes, though, I don’t think much of this happens, as not many people are intelligently stupid.

ETA: This takes the anthropomorphism out of the loop. We’re looking at valid ROCs, and polling human beliefs is just a cheap way to find them. If we can come up with other ways of finding them, I expect that to be very valuable. The smart people who impress me most aren’t the ones who learn slightly quicker, since everyone else gets there too. The smart people who impress me most are the ones who come in where everyone else is stumped and chop the Gordian knot in half with their unique way of thinking about the problem. Can we train this skill?

Footnotes:
1. I’m fully aware of how hokey this sounds without any real math there, but it seems like it should be formalizable. If you’re just trying to improve human rationality (as opposed to programming an AI), the real math would have to be interpreted again anyway, and I’m not gonna spend the time right now.

2. Just as thinking identically to your twin doesn’t help you get the right answer (and is therefore weighted less), if you can come up with more than one valid way of looking at things, you can justifiably expect to be weighted as strongly as a small group of people.