Imaginary Positions

Every now and then, one reads an article about the Singularity in which some reporter confidently asserts, “The Singularitarians, followers of Ray Kurzweil, believe that they will be uploaded into techno-heaven while the unbelievers languish behind or are extinguished by the machines.”

I don’t think I’ve ever met a single Singularity fan, Kurzweilian or otherwise, who thinks that only believers in the Singularity will go to upload heaven and everyone else will be left to rot. Not one. (There are a very few pseudo-Randian types who believe that only the truly selfish who accumulate lots of money will make it, but they expect e.g. me to be damned with the rest.)

But if you start out thinking that the Singularity is a loony religious meme, then it seems like Singularity believers ought to believe that they alone will be saved. It seems like a detail that would fit the story.

This fittingness is so strong as to manufacture the conclusion without any particular observations. And then the conclusion isn’t marked as a deduction. The reporter just thinks that they investigated the Singularity, and found some loony cultists who believe they alone will be saved.

Or so I deduce. I haven’t actually observed the inside of their minds, after all.

Has any rationalist ever advocated behaving as if all people are reasonable and fair? I’ve repeatedly heard people say, “Well, it’s not always smart to be rational, because other people aren’t always reasonable.” What rationalist said they were? I would deduce: This is something that non-rationalists believe it would “fit” for us to believe, given our general blind faith in Reason. And so their minds just add it to the knowledge pool, as though it were an observation. (In this case I encountered yet another example recently enough to find the reference; see here.)

(Disclaimer: Many things have been said, at one time or another, by one person or another, over centuries of recorded history; and the topic of “rationality” is popularly enough discussed that some self-identified “rationalist” may have described “rationality” that way at one point or another. But I have yet to hear a rationalist say it, myself.)

I once read an article on Extropians (a certain flavor of transhumanist) which asserted that the Extropians were a reclusive enclave of techno-millionaires (yeah, don’t we wish). Where did this detail come from? Definitely not from observation. And considering the sheer divergence from reality, I doubt it was ever planned as a deliberate lie. It’s not just easily falsified, but a mark of embarrassment to give others too much credit that way (“Ha! You believed they were millionaires?”). One suspects, rather, that the proposition seemed to fit, and so it was added, without any warning label saying “I deduced this from my other beliefs, but have no direct observations to support it.”

There’s also a general problem with reporters, which is that they don’t write what happened; they write the Nearest Cliche to what happened—which is very little information for backward inference, especially if there are few cliches to be selected from. The distance from actual Extropians to the Nearest Cliche of “reclusive enclave of techno-millionaires” is kinda large. This may get a separate post at some point.

My actual nightmare scenario for the future involves well-intentioned AI researchers who try to make a nice AI but don’t do enough math. (If you’re not an expert you can’t track the technical issues yourself, but you can often also tell at a glance that they’ve put very little thinking into “nice”.) The AI ends up wanting to tile the galaxy with tiny smiley-faces, or reward-counters; the AI doesn’t bear the slightest hate for humans, but we are made of atoms it can use for something else. The most probable-seeming result is not Hell On Earth but Null On Earth, a galaxy tiled with paperclips or something equally morally inert.

The imaginary position that gets invented because it seems to “fit”—that is, fit the folly that the other believes is generating the position—is “The Singularity is a dramatic final conflict between Good AI and Evil AI, where Good AIs are made by well-intentioned people and Evil AIs are made by ill-intentioned people.”

In many such cases, no matter how much you tell people what you really believe, they don’t update! I’m not even sure this is a matter of any deliberate, justifying decision on their part, such as an explicit suspicion that you’re concealing your real beliefs. To me the process seems more like: They stare at you for a moment, think “That’s not what this person ought to believe!”, and then blink away the dissonant evidence and continue as before. If your real beliefs are less convenient for them, the same phenomenon occurs: words from the lips will be discarded.

There’s an obvious relevance to prediction markets—that if there’s an outstanding dispute, and the market-makers don’t consult both sides on the wording of the payout conditions, it’s possible that one side won’t take the bet because “That’s not what we assert!” In which case it would be highly inappropriate to crow “Look at those market prices!” or “So you don’t really believe it; you won’t take the bet!” But I would guess that this issue has already been discussed by prediction market advocates. (And that standard procedures have already been proposed for resolving it?)

I’m wondering if there are similar Imaginary Positions in, oh, say, economics—if there are things that few or no economists believe, but which people (or journalists) think economists believe because it seems to them like “the sort of thing that economists would believe”. Open general question.