# Asch’s Conformity Experiment

Solomon Asch, with ex­per­i­ments origi­nally car­ried out in the 1950s and well-repli­cated since, high­lighted a phe­nomenon now known as “con­for­mity.” In the clas­sic ex­per­i­ment, a sub­ject sees a puz­zle like the one in the nearby di­a­gram: Which of the lines A, B, and C is the same size as the line X? Take a mo­ment to de­ter­mine your own an­swer . . .

[Image: the line-judging diagram: a reference line X beside three comparison lines A, B, and C, one of which matches its length.]

The gotcha is that the sub­ject is seated alongside a num­ber of other peo­ple look­ing at the di­a­gram—seem­ingly other sub­jects, ac­tu­ally con­fed­er­ates of the ex­per­i­menter. The other “sub­jects” in the ex­per­i­ment, one af­ter the other, say that line C seems to be the same size as X. The real sub­ject is seated next-to-last. How many peo­ple, placed in this situ­a­tion, would say “C”—giv­ing an ob­vi­ously in­cor­rect an­swer that agrees with the unan­i­mous an­swer of the other sub­jects? What do you think the per­centage would be?

Three-quar­ters of the sub­jects in Asch’s ex­per­i­ment gave a “con­form­ing” an­swer at least once. A third of the sub­jects con­formed more than half the time.

In­ter­views af­ter the ex­per­i­ment showed that while most sub­jects claimed to have not re­ally be­lieved their con­form­ing an­swers, some said they’d re­ally thought that the con­form­ing op­tion was the cor­rect one.

Asch was dis­turbed by these re­sults:1

That we have found the ten­dency to con­for­mity in our so­ciety so strong . . . is a mat­ter of con­cern. It raises ques­tions about our ways of ed­u­ca­tion and about the val­ues that guide our con­duct.

It is not a triv­ial ques­tion whether the sub­jects of Asch’s ex­per­i­ments be­haved ir­ra­tionally. Robert Au­mann’s Agree­ment The­o­rem shows that hon­est Bayesi­ans can­not agree to dis­agree—if they have com­mon knowl­edge of their prob­a­bil­ity es­ti­mates, they have the same prob­a­bil­ity es­ti­mate. Au­mann’s Agree­ment The­o­rem was proved more than twenty years af­ter Asch’s ex­per­i­ments, but it only for­mal­izes and strength­ens an in­tu­itively ob­vi­ous point—other peo­ple’s be­liefs are of­ten le­gi­t­i­mate ev­i­dence.

If you were look­ing at a di­a­gram like the one above, but you knew for a fact that the other peo­ple in the ex­per­i­ment were hon­est and see­ing the same di­a­gram as you, and three other peo­ple said that C was the same size as X, then what are the odds that only you are the one who’s right? I lay claim to no ad­van­tage of vi­sual rea­son­ing—I don’t think I’m bet­ter than an av­er­age hu­man at judg­ing whether two lines are the same size. In terms of in­di­vi­d­ual ra­tio­nal­ity, I hope I would no­tice my own se­vere con­fu­sion and then as­sign >50% prob­a­bil­ity to the ma­jor­ity vote.
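The arithmetic behind that ">50%" can be sketched with a toy Bayes update. The 95% per-person accuracy below is an invented number for illustration, not anything measured by Asch: assume each honest viewer (me included) independently reports the true match with probability ACC.

```python
# Toy Bayes update for the line-judging scenario above.
# ACC is an assumed per-person accuracy on an easy visual task.
ACC = 0.95
prior_b = 0.5                        # prior that the true match is B, not C

# I see B; three honest others all say C.
like_b = ACC * (1 - ACC) ** 3        # P(these four reports | truth is B)
like_c = (1 - ACC) * ACC ** 3        # P(these four reports | truth is C)
post_b = prior_b * like_b / (prior_b * like_b + (1 - prior_b) * like_c)

print(f"P(truth is B | reports) = {post_b:.4f}")  # ≈ 0.0028
```

Under these made-up numbers, even a small chance of my own misperception pushes nearly all of the probability onto the majority's answer, which is more than the essay's ">50%" claim requires.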

In terms of group rationality, it seems to me that the proper thing for an honest rationalist to say is, “How surprising, it looks to me like B is the same size as X. But if we’re all looking at the same diagram and reporting honestly, I have no reason to believe that my assessment is better than yours.” The last sentence is important—it’s a much weaker claim of disagreement than, “Oh, I see the optical illusion—I understand why you think it’s C, of course, but the real answer is B.”

So the con­form­ing sub­jects in these ex­per­i­ments are not au­to­mat­i­cally con­victed of ir­ra­tional­ity, based on what I’ve de­scribed so far. But as you might ex­pect, the devil is in the de­tails of the ex­per­i­men­tal re­sults. Ac­cord­ing to a meta-anal­y­sis of over a hun­dred repli­ca­tions by Smith and Bond . . . 2

. . . Con­for­mity in­creases strongly up to 3 con­fed­er­ates, but doesn’t in­crease fur­ther up to 10–15 con­fed­er­ates. If peo­ple are con­form­ing ra­tio­nally, then the opinion of 15 other sub­jects should be sub­stan­tially stronger ev­i­dence than the opinion of 3 other sub­jects.
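Under a naive model in which each confederate were an independent honest reporter (exactly the assumption the experiment is designed to break), the evidential weight would grow exponentially with group size. A quick sketch, with an invented per-person accuracy:

```python
# If n independent honest viewers all report the same answer, the odds
# in its favor multiply by (ACC / (1 - ACC)) per viewer, so 15 agreeing
# viewers would be enormously stronger evidence than 3. ACC is assumed.
ACC = 0.95

def likelihood_ratio(n: int) -> float:
    """Odds update from n independent agreeing reports."""
    return (ACC / (1 - ACC)) ** n

print(likelihood_ratio(3))   # ~6.9e3
print(likelihood_ratio(15))  # ~1.5e19
```

The exponential gap between 3 and 15 reporters is what makes the flat conformity curve look non-Bayesian.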

Ad­ding a sin­gle dis­sen­ter—just one other per­son who gives the cor­rect an­swer, or even an in­cor­rect an­swer that’s differ­ent from the group’s in­cor­rect an­swer—re­duces con­for­mity very sharply, down to 5–10% of sub­jects. If you’re ap­ply­ing some in­tu­itive ver­sion of Au­mann’s Agree­ment to think that when 1 per­son dis­agrees with 3 peo­ple, the 3 are prob­a­bly right, then in most cases you should be equally will­ing to think that 2 peo­ple will dis­agree with 6 peo­ple.3 On the other hand, if you’ve got peo­ple who are emo­tion­ally ner­vous about be­ing the odd one out, then it’s easy to see how adding a sin­gle other per­son who agrees with you, or even adding a sin­gle other per­son who dis­agrees with the group, would make you much less ner­vous.

Un­sur­pris­ingly, sub­jects in the one-dis­sen­ter con­di­tion did not think their non­con­for­mity had been in­fluenced or en­abled by the dis­sen­ter. Like the 90% of drivers who think they’re above-av­er­age in the top 50%, some of them may be right about this, but not all. Peo­ple are not self-aware of the causes of their con­for­mity or dis­sent, which weighs against any at­tempts to ar­gue that the pat­terns of con­for­mity are ra­tio­nal.4

When the sin­gle dis­sen­ter sud­denly switched to con­form­ing to the group, sub­jects’ con­for­mity rates went back up to just as high as in the no-dis­sen­ter con­di­tion. Be­ing the first dis­sen­ter is a valuable (and costly!) so­cial ser­vice, but you’ve got to keep it up.

Con­sis­tently within and across ex­per­i­ments, all-fe­male groups (a fe­male sub­ject alongside fe­male con­fed­er­ates) con­form sig­nifi­cantly more of­ten than all-male groups. Around one-half the women con­form more than half the time, ver­sus a third of the men. If you ar­gue that the av­er­age sub­ject is ra­tio­nal, then ap­par­ently women are too agree­able and men are too dis­agree­able, so nei­ther group is ac­tu­ally ra­tio­nal . . .

In­group-out­group ma­nipu­la­tions (e.g., a hand­i­capped sub­ject alongside other hand­i­capped sub­jects) similarly show that con­for­mity is sig­nifi­cantly higher among mem­bers of an in­group.

Con­for­mity is lower in the case of blatant di­a­grams, like the one at the be­gin­ning of this es­say, ver­sus di­a­grams where the er­rors are more sub­tle. This is hard to ex­plain if (all) the sub­jects are mak­ing a so­cially ra­tio­nal de­ci­sion to avoid stick­ing out.

Fi­nally, Paul Crowley re­minds me to note that when sub­jects can re­spond in a way that will not be seen by the group, con­for­mity also drops, which also ar­gues against an Au­mann in­ter­pre­ta­tion.

1Solomon E. Asch, “Stud­ies of In­de­pen­dence and Con­for­mity: A Minor­ity of One Against a Unan­i­mous Ma­jor­ity,” Psy­cholog­i­cal Mono­graphs 70 (1956).

2Rod Bond and Peter B. Smith, “Cul­ture and Con­for­mity: A Meta-Anal­y­sis of Stud­ies Us­ing Asch’s (1952b, 1956) Line Judg­ment Task,” Psy­cholog­i­cal Bul­letin 119 (1996): 111–137.

3This isn’t au­to­mat­i­cally true, but it’s true ce­teris paribus.

4For ex­am­ple, in the hy­poth­e­sis that peo­ple are so­cially-ra­tio­nally choos­ing to lie in or­der to not stick out, it ap­pears that (at least some) sub­jects in the one-dis­sen­ter con­di­tion do not con­sciously an­ti­ci­pate the “con­scious strat­egy” they would em­ploy when faced with unan­i­mous op­po­si­tion.

• I don’t see this ex­er­cise as be­ing so much about ra­tio­nal­ity as it is about our re­la­tion­ship with dis­so­nance. Peo­ple in my com­mu­nity (con­text-driven soft­ware testers) are ex­pected to treat con­fu­sion or con­tro­versy as it­self ev­i­dence of a po­ten­tially se­ri­ous prob­lem. For the re­spon­si­ble tester, such ev­i­dence must be in­ves­ti­gated and prob­a­bly raised as an is­sue to the client.

In short, in the situ­a­tion given in the ex­er­cise, I would not an­swer the ques­tion, but rather raise some ques­tions.

I drive tele­phone sur­vey­ors nuts in this way. They just don’t know what to do with a guy who an­swers “no opinion” or “I don’t know” or “can’t an­swer” to ev­ery sin­gle ques­tion in their poorly worded and con­text-non-spe­cific ques­tion­naires.

• Robert Au­mann’s Agree­ment The­o­rem shows that hon­est Bayesi­ans can­not agree to dis­agree—if they have com­mon knowl­edge of their prob­a­bil­ity es­ti­mates, they have the same prob­a­bil­ity es­ti­mate.

Um, doesn’t this also de­pend on them hav­ing com­mon pri­ors?

James

• Yes. More im­por­tantly, it de­pends on them be­ing hon­est Bayesi­ans, which hu­mans are not.

• It feels like there was no ex­plicit rule not to ask ques­tions. It’s in­ter­est­ing what per­centage of sub­jects ac­tu­ally ques­tioned the pro­cess.

If peo­ple are con­form­ing ra­tio­nally, then the opinion of 15 other sub­jects should be sub­stan­tially stronger ev­i­dence than the opinion of 3 other sub­jects.

I don’t see how a moderate number of other wrong-answering subjects should influence the decision of a rational subject. Even if it is, strictly speaking, stronger evidence, uncertainty about your own sanity should be much lower than the probability of alternative explanations for the other subjects’ wrong answers.

• The video notes that when the sub­ject is in­structed to write their an­swers, con­for­mity drops enor­mously. That sug­gests we can set aside the hy­poth­e­sis that they con­form for the ra­tio­nal rea­son you set out.

• 90% of drivers can be bet­ter than the av­er­age.

• I took it to mean “You create some measurement that orders all of the N drivers (labeled with the natural numbers). They do not know their numbers. 90% of them will estimate that their number is >= the ceiling function of N/2”.

• Only in a hella skewed dis­tri­bu­tion, far from the ob­served dis­tri­bu­tion of ac­tual driv­ing be­hav­ior.

• Depends on how you mea­sure it. For ex­am­ple, 99.9% of drivers have caused a be­low-av­er­age num­ber of road fatal­ities.

• Even a more sane and more con­tin­u­ously dis­tributed mea­sure could yield that re­sult, de­pend­ing on how you fit the scale. If you mea­sure the like­li­hood of mak­ing a mis­take (so zero would be a perfect driver, and one a ra­bid lemur), I ex­pect the dis­tri­bu­tion to be hella skewed. Most peo­ple drive in a sane way most of the time. But it’s the few reck­less idiots you re­mem­ber—and so does ev­ery sin­gle one of the thou­sand other drivers who had the mis­for­tune to en­counter them. It would not sur­prise me if driv­ing mis­takes fol­lowed more-or-less a Pareto dis­tri­bu­tion.
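That intuition is easy to check with a sketch. The Pareto tail index below is an arbitrary assumption for illustration, not an estimate from real driving data:

```python
# With a heavy-tailed "mistake rate" (Pareto is one assumption), far
# more than half of drivers fall below the mean rate, because a few
# reckless outliers drag the mean up.
import random

random.seed(0)
ALPHA = 1.2  # assumed tail index; the mean exists only for ALPHA > 1
rates = [random.paretovariate(ALPHA) for _ in range(100_000)]
mean = sum(rates) / len(rates)
below = sum(r < mean for r in rates) / len(rates)
print(f"fraction below the mean: {below:.2f}")  # well above 0.5
```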

• ‘This may come as some surprise’ to Asch & Aumann, but rationality is not the design point of the human brain (otherwise this blog would have no reason to exist); getting by in the real world is. And getting by in the real world involved, for our ancestors through tens of millennia, group belonging, hence group conformity. See J. Harris, ‘No Two Alike’, Chaps. 8 & 9 for a discussion which references the Asch work. This does not mean of course that group conformity was the only adaptation factor. Being right and being ‘in’ both had (and have...) fitness value, and it’s perfectly natural that both tendencies exist, in tension.

• traditional culture ≠ the human brain

• At an ap­plied level, this re­minds me of Dr. Jerry B. Har­vey’s dis­cus­sion of the “Abilene Para­dox” in man­age­ment, where group­think can take over and move an or­ga­ni­za­tion in a di­rec­tion that no-one re­ally wants to go. All it takes is one dis­sen­ter to break the spell.

• Surely there’s more than so­cial con­for­mity/​con­flict aver­sion at work here? In the ex­per­i­ment in the video, an ex­pec­ta­tion of pat­tern con­tinu­a­tion is set up. For most ques­tions, the 4 spo­ken words the sub­ject hears be­fore re­spond­ing do cor­re­spond to the ap­par­ently cor­rect spo­ken word re­sponse. I’d ex­pect sub­con­cious pro­cesses to start in­ter­pret­ing this as an in­di­ca­tor of the cor­rect an­swer re­gard­less of so­cial effects and be in­fluenced ac­cord­ingly, at least enough to cause con­fu­sion which would then in­crease sus­cep­ti­bil­ity to the so­cial effects.

I’d ex­pect this effect to also be re­duced where the sub­ject is writ­ing down his an­swers, as that takes out of the equa­tion the close con­nec­tion be­tween hear­ing spo­ken num­bers and speak­ing spo­ken num­bers.

• Au­mann’s Agree­ment The­o­rem was proved more than twenty years af­ter Asch’s ex­per­i­ments, but it only for­mal­izes and strength­ens an in­tu­itively ob­vi­ous point—other peo­ple’s be­liefs are of­ten le­gi­t­i­mate ev­i­dence.

No, other peo­ple’s be­liefs are of­ten treated as ev­i­dence, and very pow­er­ful ev­i­dence at that.

Belief is not suit­able as any kind of ev­i­dence when more-di­rect ev­i­dence is available, yet peo­ple tend to re­ject di­rect ev­i­dence in or­der to con­form with the be­liefs of oth­ers.

The hu­man goal usu­ally isn’t to pro­duce jus­tified pre­dic­tions of like­li­hood, but to in­gra­ti­ate our­selves with oth­ers in our so­cial group.

What are you at­tempt­ing to do, Eliezer?

• Isn’t this exactly what was said in Hug The Query? I’m not sure I understand why you were downvoted.

• “Belief is not suit­able as any kind of ev­i­dence when more-di­rect ev­i­dence is available …” is more like ‘You Can Only Ever Hug The Query By Your­self’.

• Cale­do­nian was a well-known LW troll who would fre­quently make vague, un­read­able, crit­i­cal, some­what hos­tile re­marks.

• So it’s guilt by as­so­ci­a­tion.

• I’d call it ad hominem

• FYI, if you look at Asch’s 1955 Scien­tific Amer­i­can ar­ti­cle, the lines on the cards were a lit­tle closer in length than in the ex­am­ple shown above.

• my vi­sion is so bad that i an­swered ‘none of the above’. i had to de­cide to mea­sure the lines. that meant i first had to get to where i did not think the trick was the ques­tion. that took a cup of tea. ‘trust the ruler, not the vi­sion’ has been added to my list of -ings.

• Isn’t it rea­son­able to find it more likely that peo­ple are ly­ing than that some­thing has gone that fla­grantly wrong with my abil­ity to judge sizes of lines?

• Not nec­es­sar­ily. Maybe your eyes are very bad, or you’ve suffered a stroke. (Though maybe you should be con­cerned about that and halt the ex­per­i­ment, rather than just agree­ing.)

• “Belief is not suit­able as any kind of ev­i­dence when more-di­rect ev­i­dence is available, yet peo­ple tend to re­ject di­rect ev­i­dence in or­der to con­form with the be­liefs of oth­ers.”

Caledonian, this is just wrong. Our ability to interpret evidence is not infallible, and is often fallible in ways that are not perfectly correlated across individuals. So even if we share the same ‘direct evidence’ as other observers of equal ability, their beliefs are still relevant.

• Except we’d have to take into account the idea that the others whose beliefs we are using as evidence may themselves have been using the same idea… That results in the weighting of the beliefs of an initial group being greatly amplified above and beyond what it should be, no?

• Robert Au­mann’s Agree­ment The­o­rem shows that hon­est Bayesi­ans can­not agree to dis­agree—if they have com­mon knowl­edge of their prob­a­bil­ity es­ti­mates, they have the same prob­a­bil­ity es­ti­mate.

In addition to what James Annan said, they also both have to know (with very high confidence) that they are in fact honest Bayesians. Both sides being honest isn’t enough if either suspects the other of lying.

• In terms of in­di­vi­d­ual ra­tio­nal­ity, I hope I would no­tice my own se­vere con­fu­sion and then as­sign >50% prob­a­bil­ity to the ma­jor­ity vote.

Notic­ing your own se­vere con­fu­sion should lead to in­ves­ti­gat­ing the rea­sons for the dis­agree­ment, not to im­me­di­ately go­ing along with the ma­jor­ity. Hon­est Bayesi­ans can­not agree to agree ei­ther. They must go through the pro­cess of shar­ing their in­for­ma­tion, not just their con­clu­sions.

• What are the odds, given today’s society, that a randomly selected group of people will include any honest Bayesians? Safer to assume that most of the group are either lying, self-deluded, confused, or have altered perceptions. Particularly so in a setting like a psychology experiment.

• Strict hon­est Bayesi­ans? ZERO. (Not even LW con­tains a sin­gle true hon­est Bayesian.)

Ap­prox­i­ma­tions of hon­est Bayesi­ans? Bet­ter than you might think. Cer­tainly LW is full of rea­son­ably good ap­prox­i­ma­tions, and in stud­ies about 80% of peo­ple are hon­est (though most peo­ple as­sume that only 50% of peo­ple are hon­est, a phe­nomenon known as the Trust Gap). The Bayesian part is harder, since peo­ple who are say, re­li­gious, or su­per­sti­tious, or be­lieve in var­i­ous other ob­vi­ously false things, clearly don’t qual­ify.

• peo­ple who are say, re­li­gious, or su­per­sti­tious, or be­lieve in var­i­ous other ob­vi­ously false things

Why do you think you know this?

• Check out this pa­per:

Gre­gory S. Berns, Jonathan Chap­pelow, Caroline F. Zink, Giuseppe Pagnoni, Me­gan E. Martin-Skurski, and Jim Richards, “Neu­ro­biolog­i­cal Cor­re­lates of So­cial Con­for­mity and In­de­pen­dence Dur­ing Men­tal Ro­ta­tion,” Biolog­i­cal Psy­chi­a­try 58 (2005), pp. 245-253.

It claims that the con­formists can, un­der some con­di­tions, ac­tu­ally come to see the world differ­ently.

• Oh, one other thing. I know it’s been brought up be­fore, but as far as the agree­ment the­o­rem, I don’t feel I can safely use it. What I mean is that it seems I don’t un­der­stand ex­actly when it can and can­not be used. Speci­fi­cally, I know that there’s some­thing I’m miss­ing here, some un­der­stand­ing be­cause I don’t know the cor­rect way to re­solve things like agree­ment the­o­rem vs quan­tum suicide.

It’s been dis­cussed, but I haven’t seen it re­solved, so un­til I know ex­actly why agree­ment the­o­rem does not ap­ply there (or why the ap­par­ently straight­for­ward (to me) way of com­put­ing the quan­tum suicide num­bers is wrong), I’d per­son­ally be re­ally hes­i­tant to use the agree­ment the­o­rem di­rectly.

• The quan­tum suicide num­bers are wrong be­cause of the Born prob­a­bil­ities, and also the fact that con­scious­ness is not an ei­ther-or phe­nomenon. The odds of los­ing 99% of your con­scious­ness may be suffi­ciently high that you effec­tively have no con­scious­ness left. (Also: Have you ever been un­con­scious? Ap­par­ently it is pos­si­ble for you to find your­self in a uni­verse where you WERE un­con­scious for a pe­riod of time.)

Also, I’m convinced that Many-Worlds is a dead end and Bohm was right, but I know I’m in the minority on LW.

• Perhaps Eliezer or someone else can check the math, but according to my calculations, if you use Nick Bostrom’s SSSA (Strong Self-Sampling Assumption), and make the reference class “observers after a quantum suicide experiment”, then if the prior probability of quantum immortality is 1/2, after a quantum suicide experiment has been performed with the person surviving, both the outside observer and the person undergoing the risk of death should update the probability of quantum immortality to 4/7, so that they end up agreeing.

This seems odd, but it is based on the calculation that if the probability of quantum immortality is 1/2, then the probability of ending up being an observer watching the experiment is 17/24, while the probability of being an observer surviving the experiment is 7/24. How did I derive this? Well, if Quantum Immortality is true, then the probability of being an observer watching the experiment is 2/3, because one observer watches someone die, one observer watches someone survive, and one observer experiences survival. Likewise if QI is true, the probability of being an observer surviving the experiment is 1/3. On the other hand, if QI is false, the probability of being an observer watching the experiment is 3/4 (I will leave this derivation to the reader), while the probability of being an observer surviving the experiment is 1/4.

From this it is not difficult to derive the probabilities above, that the probability of being a watcher is 17/24, and the probability of being a survivor 7/24. If you apply Bayes’s theorem to get the probability of QI given the fact of being a survivor, you will get 4/7. You will also get 4/7 if you update your probabilities both on the fact of being a watcher and on the fact of seeing a survivor. So the two end up agreeing.
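The fractions in this comment can be checked with exact arithmetic. A minimal sketch, taking the commenter’s conditional probabilities as given (prior P(QI) = 1/2, a 50%-survival experiment):

```python
from fractions import Fraction

# Exact-arithmetic check of the comment above, using its stated
# conditionals: P(survivor-moment | QI) = 1/3, P(survivor-moment | not QI) = 1/4.
prior_qi = Fraction(1, 2)
p_surv_qi = Fraction(1, 3)
p_surv_not = Fraction(1, 4)

p_survivor = prior_qi * p_surv_qi + (1 - prior_qi) * p_surv_not
p_watcher = 1 - p_survivor
post_qi = prior_qi * p_surv_qi / p_survivor   # Bayes: P(QI | survivor)

print(p_watcher, p_survivor, post_qi)  # 17/24 7/24 4/7
```

The arithmetic checks out; whether the reference class itself is legitimate is the contested part.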

In­tu­itive sup­port for this is the fact that if a QI ex­per­i­ment were ac­tu­ally performed, and we con­sider the view­point of the one sur­viv­ing 300 suc­ces­sive tri­als, he would cer­tainly con­clude that QI was true, and our in­tu­itions say that the out­side ob­servers should ad­mit that he’s right.

• In­ter­est­ing. If that’s right, then clearly QI is wrong, be­cause we’ve watched peo­ple die.

• In the above calcu­la­tion I for­got to men­tion that for sim­plic­ity I as­sumed that the ex­per­i­ment is such that one would nor­mally have a 50% chance of sur­vival. If this value is differ­ent, the val­ues above would be differ­ent, but the fact of agree­ment would be the same (al­though there would also be the difficulty that a chance other than 50% is not easy to rec­on­cile with a many-wor­lds the­ory any­way.)

• Quan­tum suicide vs. Au­mann has been dis­cussed a cou­ple times be­fore, and yes, it’s very con­fus­ing.

In­tu­itive sup­port for this is the fact that if a QI ex­per­i­ment were ac­tu­ally performed, and we con­sider the view­point of the one sur­viv­ing 300 suc­ces­sive tri­als, he would cer­tainly con­clude that QI was true, and our in­tu­itions say that the out­side ob­servers should ad­mit that he’s right.

My in­tu­itions say out­side ob­servers should not up­date their es­ti­mates one bit, and I’m pretty sure this is cor­rect, un­less they should also in­crease their prob­a­bil­ity of MWI on mak­ing the equiv­a­lent ob­ser­va­tion of a coin com­ing up heads 300 times in a row.

(although there would also be the difficulty that a chance other than 50% is not easy to reconcile with a many-worlds theory anyway.) http://www.hedweb.com/everett/everett.htm#probabilities http://hanson.gmu.edu/mangledworlds.html

• IMHO quan­tum im­mor­tal­ity and quan­tum suicide (un­like MWI) are non­sense, but I’m still try­ing to figure out a way to say this that con­vinces other peo­ple.

For prob­a­bil­ities in MWI I recom­mend the work of David Wal­lace.

• Nick, my ar­gu­ment didn’t de­pend on in­tu­ition ex­cept for sup­port; so it doesn’t bother me if your in­tu­ition differs. What was your opinion of the ar­gu­ment (or did I sim­ply omit too many of the de­tails to judge)?

• I think the most in­ter­est­ing ques­tion that arises from these ex­per­i­ments is what’s the differ­ence in per­son­al­ity be­tween peo­ple who dis­sent and peo­ple who con­form (aside from the ob­vi­ous).

• I would guess that if we did a study us­ing the usual Big Five, a sin­gle per­son­al­ity trait would drive most of the var­i­ance, the one called “agree­able­ness”. Un­for­tu­nately this is not ac­tu­ally one trait, we just treat it like it is; there’s no par­tic­u­lar rea­son to think that con­for­mity is cor­re­lated with em­pa­thy, for ex­am­ple, yet they are both con­sid­ered “agree­able­ness”. (This is similar to the prob­lem with the trait “Belief in a Just World”, which in­cludes both the be­lief that a just world is pos­si­ble and the be­lief that it is ac­tual. An ideal moral per­son would definitely be­lieve in the pos­si­bil­ity; but upon ob­serv­ing a sin­gle starv­ing child they would know that it is not ac­tual. Hence should they be high, or low, in “Belief in a Just World”?)

• Unknown: Hrm, hadn’t thought of using the SSSA. Thanks. Ran through it myself by hand now, and it does seem to result in the experimenter and test subject agreeing.

However, it produces an… oddity. Specifically, if using the SSSA, then by my calculations, when one takes into account that the external observer and the test subject are not the only people in existence, the actual strength of evidence extractable from a single quantum suicide experiment would seem to be relatively weak. If the ratio of non-test-subjects to test subjects is N, and the probability of the subject surviving simply by the nature of the quantum experiment is R, the likelihood ratio is (1+N)/(R+N) (which both the test subject and the external observer would agree on). Seeing a non-survival gives a MWI to ~MWI likelihood ratio of N/(R+N). At least, assuming I did the math right. :)

Anyways, so it looks like if the SSSA is valid, quantum suicide doesn’t actually give very strong evidence one way or the other at all, does it?

Hrm… I wonder if in principle it could be used to make estimates about the total population of the universe by doing it a bunch of times and then analyzing the ratios of observed results… *chuckles* May have just discovered the maddest way to do a census, well, ever.
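Taking the comment’s (1+N)/(R+N) formula at face value (it is the commenter’s own derivation, not an established result), a few evaluations show how quickly the evidence washes out as the bystander ratio N grows:

```python
# Hypothetical likelihood ratio from the comment above:
# N = ratio of non-subjects to test subjects, R = ordinary survival
# probability. As N grows, the ratio approaches 1 (no evidence).
def survival_lr(n: float, r: float = 0.5) -> float:
    return (1 + n) / (r + n)

for n in (0, 1, 10, 1_000_000):
    print(n, round(survival_lr(n), 6))
```

With even one bystander per subject the ratio is already modest, and with realistic population ratios it is indistinguishable from 1, which is the comment’s point about weak evidence.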

• Clearly it can’t ac­tu­ally mat­ter what the pop­u­la­tion of the uni­verse is. (There’s noth­ing about the ex­per­i­ment that is based on that! It would be this bizarre non­lo­cal phe­nomenon that pops out of the the­ory with­out be­ing put into it!) That’s the kind of weird­ness you come up with if you do an­thropic calcu­la­tions WRONG.

• Actually, if considering the SSSA instead of just the SSA, one has to take into account all the observer-moments, past and future, right? So there will be, in addition to the specific observer-moments of “immediately post-experiment test subject (or not), experimenter, everyone else...”, past and future versions thereof, and of other entities, so you’ll have K1 total “others” (other observer-moments, that is) in a MW universe, and K2 << K1 “others” in a single-world universe.

This’ll make the calcu­la­tion a bit more con­fus­ing.

• “… then what are the odds that only you are the one who’s right?”

If this is the rea­son­ing for peo­ple choos­ing the same an­swer then surely it be­comes a ques­tion of con­fi­dence rather than con­for­mity?

Choosing the same answer as the group in your argument is because you aren’t confident in your answer and are willing to defer to the majority answer. Not necessarily the same as conformity. By your own rationale you are going with the group because you think their answer is “better”, not because you want to be part of the group. I know you can argue that that is just your rationale for conformity, but I feel that conformity is more about doubting something you are sure you know, to side with a group, rather than doubting something you think you might know.

I feel pos­si­bly a more ac­cu­rate test (us­ing this rea­son­ing for con­for­mity) would be to take a group and tell all the mem­bers in­di­vi­d­u­ally that only they will know the right an­swer. Then give all bar one the same an­swer and one a differ­ent an­swer and see if they will con­form with the group.

• I believe that the subjects were of those of a non-matured state, thus making them of a “childish” mind and not able to process the situation. The subjects would simply say anything their peers would say or do. I am testing this experiment on my classmates. I am in the 10th grade and will respond back with the solution. I believe that a matured mind would not give in so easily with a simple question. It is not the question at hand that is making the subjects say something completely incorrect, it is the group pressure and the maturity of the subjects. If a child’s mind thinks he or she is to believe that of another subject, then it shall think of that at hand. Children’s minds are so open and naive that they will believe something as simple as Santa Claus coming down the chimney every year, then they will not hesitate to think of an answer to the question of this experiment. It is a simple and most uneducated experiment I had to present and test. A matured mind will think not of the group pressure but that of the question. I will be back with my results. Thank you.

Leeroy Jenkins

• “I be­lieve that the sub­jects were of those of a non-ma­tured state...”

I guess that’s the difference between being biased or not. I think your understanding of a “mature mind” equals an “unbiased mind”, which is not present in all adults. And of course the result of this experiment would have been different if it were conducted on the readers of this website.

• I don’t see why you think that 3 ex­tra peo­ple, no mat­ter if they’re hon­est or not, amount to any sig­nifi­cant amount of ev­i­dence when you can see the di­a­gram your­self.

Sure, maybe they’re good enough if you can’t see the di­a­gram; 3 peo­ple think­ing the same thing doesn’t of­ten hap­pen when they’re wrong. But when they are wrong, when you can see that they are wrong, then it doesn’t mat­ter how many of them there are.

Also: certainly the odds aren’t high that you’re right if we’re talking totally random odds about a proposition where the evidence is totally ambiguous. But since there is a diagram, the odds then shift to either the very low probability “My eyesight has suddenly become horrible in this one instance and no others” combined with the high probability “3/4 people are right about a seemingly easy problem”, versus the low probability “3/4 people are wrong about a seemingly easy problem”, versus the high probability “My eyesight is working fine”.

I don’t know the actual numbers for this, but it seems likely that the probability of your eyesight suddenly malfunctioning in strange and specific ways is lower than the probability of 3 other people getting an easy problem wrong. Remember, they can have whatever long-standing problems with their eyesight or perception or whatever anyone cares to make up. Or you could just take the results of Asch’s experiment as a prior and say that they’re not that much more impressive than 1 person going first.

(All this of course changes if they can ex­plain why C is a bet­ter an­swer; if they have a good log­i­cal rea­son for it de­spite how odd it seems, it’s prob­a­bly true. But un­til then, you have to rely on your own good log­i­cal rea­son for B be­ing a bet­ter an­swer.)

• “I hope I would no­tice my own se­vere con­fu­sion and then as­sign >50% prob­a­bil­ity to the ma­jor­ity vote.”

On a group level, I wouldn’t think it’s a particularly rational path to mimic the majority, even if you believe that they’re honestly reporting. If you had a group of, say, 10 people, and the first 5 all gave the wrong answer, there would then be a rational impetus for everyone subsequent to mimic that wrong answer, on the logic that “the last (5-9) people all said C, so clearly p(C) > 0.5”.

Far bet­ter to dis­sent and provide the group with new in­for­ma­tion.

• Ooh, that’s re­ally in­ter­est­ing. The best solu­tion might ac­tu­ally be to say the full state­ment, “I see B as equal, but since the other 5 peo­ple be­fore me said C, C is prob­a­bly ob­jec­tively more likely.” Then fu­ture peo­ple af­ter you can still hear what you saw, in­de­pen­dently of what you in­ferred based on oth­ers.

But I think there are a lot of other really interesting problems embedded in this, involving the feedback between semi-Bayesians trying to use each other to process evidence. (True Bayesians get the right answer; but what answer do semi-Bayesians get?)
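The “semi-Bayesians” question invites a toy simulation. The model below is the standard information-cascade setup (sequential agents, binary private signals), not anything from this thread, and every parameter is invented for illustration:

```python
import random

# Each agent's private signal points at the true answer (+1) with
# probability Q. "Choice-only" agents see predecessors' announced
# choices and copy the crowd once the net public evidence reaches 2;
# "signal-sharing" agents announce their raw signal, so the last agent
# can simply count. Q, group size, and trial count are all made up.
Q = 0.7

def last_choice(signals, share):
    """Choice announced by the final agent in the sequence."""
    k = 0                       # net public evidence for +1
    choice = signals[0]
    for s in signals:
        if share:
            k += s              # the raw signal itself becomes public
            choice = 1 if k > 0 else -1 if k < 0 else s
        elif k >= 2:
            choice = 1          # up-cascade: copy the crowd, reveal nothing
        elif k <= -2:
            choice = -1         # down-cascade
        else:
            choice = 1 if k + s > 0 else -1 if k + s < 0 else s
            k += choice         # announcement still reveals the signal
    return choice

def error_rate(share, trials=2000, n=20, seed=42):
    """Fraction of trials where the final agent picks the wrong answer."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(trials):
        signals = [1 if rng.random() < Q else -1 for _ in range(n)]
        wrong += last_choice(signals, share) == -1
    return wrong / trials

print("choice-only announcements:", error_rate(share=False))  # roughly 0.15
print("raw-signal sharing       :", error_rate(share=True))   # much lower
```

This is exactly the comment’s proposed fix: announcing “I see B, but I infer C” keeps each person’s raw observation public, and the sketch suggests that protocol aggregates far better than announcing inferred choices alone.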

• How do you face this situation as a rationalist? (3 Sep 2011 22:16 UTC)
• This gives us a very good rea­son to pub­li­cize dis­sent­ing opinions about just about any­thing—even per­haps when we think those dis­sents are wrong. Ap­par­ently the mere pres­ence of a dis­sen­ter dam­ages group­think and al­lows true an­swers a much bet­ter chance to emerge.

• I was all set to ask whether the result of female groups’ increased conformity had any explanatory power over the question of why there aren’t more women in the rationalist movement. Then as I read on, it became less likely that female psychology had anything to do with it. Rather, in-group vs. out-group psychology did. Males, being the socially more privileged gender, are more likely to see themselves as ‘just normal’ rather than as part of a particular group called ‘males’.

Of course, this lends itself to predictions. In a grouping that self-identifies strongly as that grouping (such as women, minority ethnicities, etc.), if the group is very into a particular subject, its members will also likely be into it. Whereas with a group that is less likely to self-identify (such as American Caucasians, Americans within American borders (but not abroad), and men), the conformity of interests will be less.

Have there been any stud­ies done to test this minor­ity vs ma­jor­ity group con­for­mity idea?

• I’m not up­set about los­ing points for this post, but I am a bit con­fused about it. Many out there know more about this stuff than I do. Did I say some­thing fac­tu­ally in­ac­cu­rate or en­gage in bad rea­son­ing? I want to know so that I don’t re­peat my mis­take.

• Your first paragraph mentions a highly contested thesis that you admit is irrelevant to the evidence. Your second paragraph seems to assert that dominant groups do not strongly self-identify, which seems empirically false; consider spontaneous chants of “USA, USA, USA”.

Also, you are us­ing some quasi-tech­ni­cal jar­gon less pre­cisely than the terms are usu­ally used—and your mi­suses seem to be di­rected at sup­port­ing a par­tic­u­lar ide­olog­i­cal po­si­tion.

• But that’s just the sense of someone who probably has a contrary ideological position, so I’m not sure I would recommend generalizing from my impression. (And the downvote is gone at the moment I’m writing this; was it just one? Just ignore those if you can’t figure them out.)

• Ah.

I had sus­pected that it might be be­cause some­one had tried to in­fer my po­si­tion on such mat­ters from my ask­ing of the ques­tion and didn’t like the im­pli­ca­tion. I did, af­ter all, ad­mit to in­clud­ing the the­sis that ‘the ob­served high con­for­mance of a group of fe­males is in­fluenced by an as­pect of fe­male psy­chol­ogy’ in my list of pos­si­ble ex­pla­na­tions for the high con­for­mance in that group, even though I ended up re­ject­ing that hy­poth­e­sis.

(I suspect that your position vis-à-vis whether either gender is superior is not that different from my own. But to be clear, my position is that both genders possess great capacity for type 2 cognition, which is the most important measure of human success. Any difference between healthy adults of either gender in their use of such cognition comes down to social factors, which can be changed to create a fairer society.)

I’m still surprised about the second paragraph’s inaccuracy, though. In my experience, the chants of “USA, USA, USA” occur at sporting matches against other countries. That’s not an ‘internal to America’ thing. Then again, I don’t live in America and haven’t for many years. I chose America because I was trying to tailor my words to my audience. Perhaps that was wrong and I should have spoken from experience instead. (I’m Australian.)

I want to use ev­ery word ac­cu­rately, so I would be most ap­pre­ci­a­tive if you could give me a few ex­am­ples of jar­gon I’ve used and a de­scrip­tion (or link to one) of the way it should ac­tu­ally be used.

Thanks, Avi

PS: Yes, it was just one vote, so maybe I got re-upvoted or something. Oh well. The experience alerted me to an issue. That’s all anyone could ask of it.

• Image is miss­ing from ar­ti­cle.

• Thank you, fixed! (And thanks to Said for hav­ing back­ups of all the images on readthe­se­quences.com)

• Glad someone’s paying attention to comments on old articles. There are actually quite a few examples of missing images like this. Sorry I didn’t mention the ones I’ve encountered so far; I will do so in the future.

• Yes, please do. I try to fix all bro­ken links and images in old con­tent that I can find.