How to use “philosophical majoritarianism”

The majority of people would hold more accurate beliefs if they simply believed the majority. To state this in a way that doesn’t risk information cascades, we’re talking about averaging impressions and coming up with the same belief.

To the degree that you come up with different averages of the impressions, you acknowledge that your belief was just your impression of the average, and you average those meta-impressions and get closer to belief convergence. You can repeat this until you get bored, but if you’re doing it right, your beliefs should get closer and closer to agreement, and you shouldn’t be able to predict who is going to fall on which side.
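A minimal sketch of that iteration, under some illustrative assumptions of my own: each person’s impression is a number, each round everyone updates toward their own (noisy) impression of the group average, and the noise shrinks as the averaging gets easier. None of this modeling is from the post; it is just one way to see the convergence, and the unpredictability of who lands on which side.

```python
import random

def iterate_to_agreement(impressions, rounds=10, noise=0.05):
    """Each round, everyone moves to their own noisy impression of the
    group average of the latest reports; the spread shrinks every round."""
    reports = list(impressions)
    for _ in range(rounds):
        avg = sum(reports) / len(reports)
        # Nobody observes the average exactly; each person forms an impression
        # of it, and those meta-impressions get averaged on the next round.
        reports = [avg + random.gauss(0, noise) for _ in reports]
        noise *= 0.5  # assumption: impressions of the average improve each round
    return reports

random.seed(0)
print(iterate_to_agreement([0.2, 0.9, 0.5]))  # the reports end up nearly identical
```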

Of course, most of us are atypical cases, and as good rationalists, we need to update on this information. Even if our impressions were (on average) no better than the average, there are certain cases where we know that the majority is wrong. If we’re going to selectively apply majoritarianism, we need to figure out the rules for when to apply it, to whom, and how the weighting works.

This much, I think, has been said again and again. I’m going to attempt to describe how.

Imagine for a moment that you are a perfectly rational Bayesian, and you just need data.

First, realize that “duplicate people” don’t count double. If you make a maximum-precision copy of someone, that doesn’t make him any more likely to be right, so clearly we can do better than averaging over all people with equal weighting. By the same token, finding out that a certain train of thought leading to a certain belief is common shouldn’t make you proportionally more confident in that idea. The only reason it might make you any more confident in it is the possibility that its truth leads to its proliferation, and therefore its popularity is (weak) evidence.
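To make this concrete, here is a toy poll in which each respondent is tagged with the train of thought that produced their belief (the tags and answers are invented; real beliefs don’t arrive labeled this conveniently). Duplicated reasoning collapses to a single vote:

```python
# Hypothetical poll: (train_of_thought, belief).  A thousand people who
# reached "yes" by the same argument contribute one data point, not a thousand.
poll = [
    ("argument-A", "yes"), ("argument-A", "yes"), ("argument-A", "yes"),
    ("argument-B", "no"),
    ("argument-C", "yes"),
]

def votes_by_reasoning(poll):
    """Collapse duplicate trains of thought down to one 'vote' each."""
    seen = {}
    for reasoning, belief in poll:
        seen.setdefault(reasoning, belief)  # extra copies of an argument add nothing
    return list(seen.values())

print(votes_by_reasoning(poll))  # -> ['yes', 'no', 'yes']: three votes, not five
```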

This explains why we can dismiss the beliefs of billions of theists. First of all, their beliefs are very well correlated, so all the useful information can be learned from only a handful of theists. Second of all, we understand their arguments and we understand how they formed their beliefs, and we have already taken them into account. The reason they continue to disagree is that the situation isn’t symmetric: they don’t understand the opposing arguments or the causal path that leads one to be a reductionist atheist.

No wonder “majoritarianism” doesn’t seem to work here.

Since we’re still pretending to be perfect Bayesians, we only care about people who are fairly predictable (given access to their information) and have information that we don’t have. If they don’t have any new information, then we can just follow the causal path and say, “and here, sir, is where you went wrong.” Even if we don’t understand their mind perfectly, we don’t take them seriously, since it is clear that whatever they were doing, they were doing it wrong. On the other hand, if the other person has a lot of data but we have no idea how the data affects their beliefs, then we can’t extract any useful information.

We only change our beliefs to more closely match theirs when they are not only predictable, but predictably rational. If you know someone is always wrong, then reversing his stupidity can help you get more accurate beliefs, but it won’t bring you closer to agreement: quite the opposite!

If we stop kidding ourselves and realize that we aren’t perfect Bayesians, then we have to start giving credit to how other people think. If you and an epistemic peer come upon the same data set and reach different conclusions, then you have no reason to think that your way of thinking is any more accurate than his (since we assumed he’s an epistemic peer). While you may have different initial impressions, you had better be able to converge to the same belief. And again, on each iteration, it shouldn’t be predictable who is going to fall on which side.

If we revisit cases like religion, you still understand how theists came to their beliefs and exactly why those beliefs fail. So to the extent that you believe you can recognize stupidity when you see it, you still stick to your own belief. Even though you aren’t perfect, for this case you’re good enough.

One-sentence summary: You want to shift your belief to the average over answers given by predictably rational “Rituals of Cognition”/data-set pairs¹, not people².

You weight the different “Rituals of Cognition”/data pairs by how much you trust the ROC and by how large the data set is. You must, however, keep in mind that to trust yourself more than average, you have to have a better-than-average reason to think that you’re better than average.

To the extent that everyone has a unique take on the subject, counting people and counting cognitive rituals are equivalent. But when it comes to a group in which everyone thinks pretty much the same way, the whole group only gets one “vote”.
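Putting the last two paragraphs together, here is a sketch with made-up numbers: each distinct ritual-of-cognition/data-set pair appears once, however many people share it, and is weighted by trust in the ritual times the size of its data set. That particular weighting is just one simple choice for illustration, not a formula from the post.

```python
# Each distinct "ritual of cognition" / data-set pair appears once,
# no matter how many people share it.
# Fields: (answer, trust_in_the_ritual, data_set_size) -- all illustrative.
roc_data_pairs = [
    (0.80, 0.9, 1000),  # a ritual you trust a lot, run on lots of data
    (0.30, 0.5,  200),  # a so-so ritual with a modest data set
    (0.55, 0.2,   50),  # a ritual you barely trust, with little data
]

def weighted_belief(pairs):
    """Average the answers, weighting each pair by trust * data_set_size."""
    weights = [trust * size for _, trust, size in pairs]
    answers = [answer for answer, _, _ in pairs]
    return sum(a * w for a, w in zip(answers, weights)) / sum(weights)

print(round(weighted_belief(roc_data_pairs), 3))  # -> about 0.748
```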

You can get “bonus points” if you can predict how irrational people will respond to data and update based on that. For practical purposes, though, I don’t think much of this happens, as not many people are intelligently stupid.

ETA: This takes the anthropomorphism out of the loop. We’re looking at valid ROCs, and polling human beliefs is just a cheap way to find them. If we can come up with other ways of finding them, I expect that to be very valuable. The smart people who impress me most aren’t the ones who learn slightly quicker, since everyone else gets there too. The smart people who impress me most are the ones who come in where everyone else is stumped and chop the Gordian knot in half with their unique way of thinking about the problem. Can we train this skill?

Footnotes:
1. I’m fully aware of how hokey this sounds without any real math behind it, but it seems like it should be formalizable. If you’re just trying to improve human rationality (as opposed to programming an AI), the real math would have to be interpreted again anyway, and I’m not going to spend the time right now.

2. Just as thinking identically to your twin doesn’t help you get the right answer (and therefore is weighted less), if you can come up with more than one valid way of looking at things, you can justifiably expect to be weighted as strongly as a small group of people.