# Rationality is Systematized Winning

Fol­lowup to: New­comb’s Prob­lem and Re­gret of Rationality

“Ra­tion­al­ists should win,” I said, and I may have to stop say­ing it, for it seems to con­vey some­thing other than what I meant by it.

Where did the phrase come from origi­nally? From con­sid­er­ing such cases as New­comb’s Prob­lem: The su­per­be­ing Omega sets forth be­fore you two boxes, a trans­par­ent box A con­tain­ing \$1000 (or the equiv­a­lent in ma­te­rial wealth), and an opaque box B that con­tains ei­ther \$1,000,000 or noth­ing. Omega tells you that It has already put \$1M in box B if and only if It pre­dicts that you will take only box B, leav­ing box A be­hind. Omega has played this game many times be­fore, and has been right 99 times out of 100. Do you take both boxes, or only box B?
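
As a quick sketch of the stakes, here is the expected-value arithmetic, treating Omega's track record (right 99 times out of 100) as the probability that Its prediction matches your actual choice. This is exactly the evidential reading that the two-boxing argument disputes; the numbers are only those given above.

```python
from fractions import Fraction

# Omega's observed accuracy, read as the probability that Its
# prediction matches the choice you actually make. Fractions keep
# the arithmetic exact.
ACCURACY = Fraction(99, 100)
SMALL, BIG = 1_000, 1_000_000

# One-box: with probability 99/100 Omega predicted it and filled box B.
ev_one_box = ACCURACY * BIG + (1 - ACCURACY) * 0

# Two-box: with probability 99/100 Omega predicted it and left B empty.
ev_two_box = ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

print(ev_one_box)  # 990000
print(ev_two_box)  # 11000
```

On this reading one-boxing is worth about ninety times as much, which is why the "winners" in this game are the one-boxers.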

A com­mon po­si­tion—in fact, the main­stream/​dom­i­nant po­si­tion in mod­ern philos­o­phy and de­ci­sion the­ory—is that the only rea­son­able course is to take both boxes; Omega has already made Its de­ci­sion and gone, and so your ac­tion can­not af­fect the con­tents of the box in any way (they ar­gue). Now, it so hap­pens that cer­tain types of un­rea­son­able in­di­vi­d­u­als are re­warded by Omega—who moves even be­fore they make their de­ci­sions—but this in no way changes the con­clu­sion that the only rea­son­able course is to take both boxes, since tak­ing both boxes makes you \$1000 richer re­gard­less of the un­chang­ing and un­change­able con­tents of box B.

And this is the sort of thinking that I intended to reject by saying, “Rationalists should win!”

Said Miyamoto Musashi: “The pri­mary thing when you take a sword in your hands is your in­ten­tion to cut the en­emy, what­ever the means. When­ever you parry, hit, spring, strike or touch the en­emy’s cut­ting sword, you must cut the en­emy in the same move­ment. It is es­sen­tial to at­tain this. If you think only of hit­ting, spring­ing, strik­ing or touch­ing the en­emy, you will not be able ac­tu­ally to cut him.”

Said I: “If you fail to achieve a cor­rect an­swer, it is fu­tile to protest that you acted with pro­pri­ety.”

This is the distinction I had hoped to convey by saying, “Rationalists should win!”

There is a meme which says that a certain ritual of cognition is the paragon of reasonableness and so defines what the reasonable people do. But alas, the reasonable people often get their butts handed to them by the unreasonable ones, because the universe isn’t always reasonable. Reason is just a way of doing things, not necessarily the most formidable; it is how professors talk to each other in debate halls, which sometimes works, and sometimes doesn’t. If a horde of barbarians attacks the debate hall, the truly prudent and flexible agent will abandon reasonableness.

No. If the “ir­ra­tional” agent is out­com­pet­ing you on a sys­tem­atic and pre­dictable ba­sis, then it is time to re­con­sider what you think is “ra­tio­nal”.

For I do fear that a “ra­tio­nal­ist” will clutch to them­selves the rit­ual of cog­ni­tion they have been taught, as loss af­ter loss piles up, con­sol­ing them­selves: “I have be­haved vir­tu­ously, I have been so rea­son­able, it’s just this awful un­fair uni­verse that doesn’t give me what I de­serve. The oth­ers are cheat­ing by not do­ing it the ra­tio­nal way, that’s how they got ahead of me.”

It is this that I in­tended to guard against by say­ing: “Ra­tion­al­ists should win!” Not whine, win. If you keep on los­ing, per­haps you are do­ing some­thing wrong. Do not con­sole your­self about how you were so won­der­fully ra­tio­nal in the course of los­ing. That is not how things are sup­posed to go. It is not the Art that fails, but you who fails to grasp the Art.

Likewise in the realm of epistemic rationality, if you find yourself thinking that the reasonable belief is X (because a majority of modern humans seem to believe X, or something that sounds similarly appealing) and yet the world itself is obviously Y, then it is time to reconsider what you call “reasonable”.

But peo­ple do seem to be tak­ing this in some other sense than I meant it—as though any per­son who de­clared them­selves a ra­tio­nal­ist would in that mo­ment be in­vested with an in­vin­cible spirit that en­abled them to ob­tain all things with­out effort and with­out over­com­ing dis­ad­van­tages, or some­thing, I don’t know.

Maybe there is an al­ter­na­tive phrase to be found again in Musashi, who said: “The Way of the Ichi school is the spirit of win­ning, what­ever the weapon and what­ever its size.”

“Ra­tion­al­ity is the spirit of win­ning”? “Ra­tion­al­ity is the Way of win­ning”? “Ra­tion­al­ity is sys­tem­atized win­ning”? If you have a bet­ter sug­ges­tion, post it in the com­ments.

The above captures the expected, systematic winning part, as you are considering the model of winning, not the accidental winning itself. It limits the scope to winning alone, leaving only secondary roles for parry, hit, spring, strike, or touch. Being a study of the real thing, rationality employs a set of tricks that allow one to work toward it, in special cases and at coarse levels of detail. Being about the real thing, rationality aims to give the means for actually winning.

• Per­son­ally, I think the word “win” might be the prob­lem. Win­ning is very bi­nary, which isn’t how ra­tio­nal­ity is defined. Per­haps “Ra­tion­al­ists max­i­mize”?

• Wikipe­dia has this right:

“a ra­tio­nal agent is speci­fi­cally defined as an agent which always chooses the ac­tion which max­imises its ex­pected perfor­mance, given all of the knowl­edge it cur­rently pos­sesses.”

Ex­pected perfor­mance. Not ac­tual perfor­mance. Whether its ac­tual perfor­mance is good or not de­pends on other fac­tors—such as how mal­i­cious the en­vi­ron­ment is, whether the agent’s pri­ors are good—and so on.
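
The quoted definition can be sketched in a few lines: the agent maximises *expected* performance under its current beliefs. The weather states, probabilities, and payoffs below are invented purely for illustration.

```python
# A minimal sketch of an agent that "always chooses the action which
# maximises its expected performance, given all of the knowledge it
# currently possesses". All numbers here are hypothetical.

def rational_choice(actions, states, prob, utility):
    """Return the action with the highest expected utility."""
    def expected_utility(action):
        return sum(prob[s] * utility(action, s) for s in states)
    return max(actions, key=expected_utility)

states = ["rain", "dry"]
beliefs = {"rain": 0.3, "dry": 0.7}   # the agent's priors
payoff = {
    ("umbrella", "rain"): 5, ("umbrella", "dry"): -1,
    ("no umbrella", "rain"): -10, ("no umbrella", "dry"): 2,
}

choice = rational_choice(["umbrella", "no umbrella"], states, beliefs,
                         lambda a, s: payoff[(a, s)])
print(choice)  # umbrella
```

If the priors are bad (say it never actually rains), the same procedure yields poor actual performance, which is exactly the gap between expected and actual performance being pointed at here.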

• Prob­lem with that in hu­man prac­tice is that it leads to peo­ple defend­ing their ru­ined plans, say­ing, “But my ex­pected perfor­mance was great!” Vide the failed trad­ing com­pa­nies say­ing it wasn’t their fault, the mar­ket had just done some­thing that it shouldn’t have done once in the life­time of the uni­verse. Achiev­ing a win is much harder than achiev­ing an ex­pec­ta­tion of win­ning (i.e. some­thing that it seems you could defend as a good try).

• Ex­pected perfor­mance is what ra­tio­nal agents are ac­tu­ally max­imis­ing.

Whether that cor­re­sponds to ac­tual perfor­mance de­pends on what their ex­pec­ta­tions are. What their ex­pec­ta­tions are typ­i­cally de­pends on their his­tory—and the past is not nec­es­sar­ily a good guide to the fu­ture.

Highly ra­tio­nal agents can still lose. Ra­tional ac­tions (that fol­low the laws of in­duc­tion and de­duc­tion ap­plied to their sense data) are not nec­es­sar­ily the ac­tions that win.

Ra­tional agents try to win—and base their efforts on their ex­pec­ta­tions. Whether they ac­tu­ally win de­pends on whether their ex­pec­ta­tions are cor­rect. In my view, at­tempts to link ra­tio­nal­ity di­rectly to “win­ning” miss the dis­tinc­tion be­tween ac­tual and ex­pected util­ity.

There are rea­sons for as­so­ci­a­tions be­tween ex­pected perfor­mance and ac­tual perfor­mance. In­deed, those as­so­ci­a­tions are why agents have the ex­pec­ta­tions they do. How­ever, the as­so­ci­a­tion is statis­ti­cal in na­ture.

Dis­sect the brain of a ra­tio­nal agent, and it is its ex­pected util­ity that is be­ing max­imised. Its ac­tual util­ity is usu­ally not some­thing that is com­pletely un­der its con­trol.

It’s im­por­tant not to define the “ra­tio­nal ac­tion” as “the ac­tion that wins”. Whether an ac­tion is ra­tio­nal or not should be a func­tion of an agent’s sense data up to that point—and should not vary de­pend­ing on en­vi­ron­men­tal fac­tors which the agent knows noth­ing about. Other­wise, the ra­tio­nal­ity of an ac­tion is not prop­erly defined from an agent’s point of view.

I don’t think that the excuses humans use for failures are an issue here.

Be­hav­ing ra­tio­nally is not the only virtue needed for suc­cess. For ex­am­ple, you also need to en­ter situ­a­tions with ap­pro­pri­ate pri­ors.

Only if you want ra­tio­nal­ity to be the sole virtue, should “but I was be­hav­ing ra­tio­nally” be the ul­ti­mate defense against an in­qui­si­tion.

Ra­tion­al­ity is good, but to win, you also need effort, per­sis­tence, good pri­ors, etc—and it would be very, very bad form to at­tempt to bun­dle all those into the no­tion of be­ing “ra­tio­nal”.

• Ex­pected perfor­mance is what ra­tio­nal agents are ac­tu­ally max­imis­ing.

Does that mean that I should mechanically overwrite my beliefs about the chance of a lottery ticket winning, in order to maximize my expectation of the payout? As Nesov says, rationality is about utility, which is why a rational agent in fact maximizes their expectation of utility, while trying to maximize utility (not their expectation of utility!).

It may help to un­der­stand this and some of the con­ver­sa­tions be­low if you re­al­ize that the word “try” be­haves a lot like “quo­ta­tion marks” and that hav­ing an ex­tra “pair” of quo­ta­tion “marks” can re­ally make “your” sen­tences seem a bit odd.

• I’m not sure I get this at all.

I offer you a bet. I’ll toss a coin, and give you £100 if it comes up heads; you give me £50 if it comes up tails. Presumably you take the bet, right? Because your expected return is £25. Surely this is the sense in which rationalists maximise expected utility. We don’t mean “the amount of utility they expect to win”, but expectation in the technical sense: i.e., the product of the likelihood of various events happening with their utility in the universes in which those events happen (or probably more properly an integral...)

If you ex­pect to lose £50 and you are wrong, that doesn’t ac­tu­ally say any­thing about the ex­pec­ta­tion of your win­nings.
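
The "technical sense" is just the probability-weighted payoff, which for this coin bet works out as:

```python
# Expectation in the technical sense for the bet above:
# +£100 on heads, -£50 on tails, fair coin.
outcomes = {"heads": 100, "tails": -50}
p = {"heads": 0.5, "tails": 0.5}

expected_return = sum(p[o] * outcomes[o] for o in outcomes)
print(expected_return)  # 25.0
```

No single toss ever pays £25; the expectation describes the bet, not any one outcome of it.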

• If you ex­pect to lose £50 and you are wrong, that doesn’t ac­tu­ally say any­thing about the ex­pec­ta­tion of your win­nings.

It does, however, say something about your expectation of your winnings. Expectation can be very knowledge-dependent. Let’s say someone rolls two six-sided dice, and then offers you a bet where you win \$100 if the sum of the dice is less than 5, but lose \$10 if the sum is greater than 5. You might perform various calculations to determine your expected value of accepting the bet. But if I happen to peek and see one of the dice has landed on 6, then I will calculate a different expected value than you will.

So we have differ­ent ex­pected val­ues for calcu­lat­ing the bet, be­cause we have differ­ent in­for­ma­tion.
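
A quick check of the numbers in the dice example (treating a sum of exactly 5, which the bet leaves unspecified, as a push):

```python
from fractions import Fraction
from itertools import product

# Expected value of the bet: +$100 if the sum is below 5, -$10 if it is
# above 5, nothing on exactly 5. Computed once from ignorance, and once
# after peeking at one die and seeing a 6.

def ev(rolls):
    total = Fraction(0)
    for a, b in rolls:
        s = a + b
        if s < 5:
            total += 100
        elif s > 5:
            total -= 10
    return total / len(rolls)

all_rolls = list(product(range(1, 7), repeat=2))
print(ev(all_rolls))                            # 85/9, about +$9.44
print(ev([r for r in all_rolls if r[0] == 6]))  # -10: the bet is now a sure loss
```

Same bet, same dice, two different (and both correct) expected values, because the two calculators hold different information.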

So EY’s point is that if a ra­tio­nal agent’s only pur­pose was to max­i­mize (their) ex­pected util­ity, they could eas­ily do this by se­lec­tively ig­nor­ing in­for­ma­tion, so that their calcu­la­tions turn out a spe­cific way.

But ac­tu­ally ra­tio­nal agents are not in­ter­ested in max­i­miz­ing (their) ex­pected util­ity. They are in­ter­ested in max­i­miz­ing real util­ity. Ex­cept it’s im­pos­si­ble to do this with­out perfect in­for­ma­tion, and so what agents end up do­ing is max­i­miz­ing ex­pected util­ity, al­though they are try­ing to max­i­mize real util­ity.

It’s like if I’m taking a history exam in school. I am trying to achieve 100% on the exam, but end up instead achieving only 60% because I have imperfect information. My goal wasn’t 60%, it was 100%. But the actual actions I took (the answers I selected) led me to arrive at 60% instead of my true goal.

Ra­tional agents are try­ing to max­i­mize real util­ity, but end up max­i­miz­ing ex­pected util­ity (by defi­ni­tion), even though that’s not their true goal.

• Re: Does that mean that I should me­chan­i­cally over­write my be­liefs about the chance of a lot­tery ticket win­ning, in or­der to max­i­mize my ex­pec­ta­tion of the pay­out?

No, it doesn’t. It means that the pro­cess go­ing on in the brains of in­tel­li­gent agents can be well mod­el­led as calcu­lat­ing ex­pected util­ities—and then se­lect­ing the ac­tion that cor­re­sponds to the largest one.

In­tel­li­gent agents are bet­ter mod­el­led as Ex­pected Utility Max­imisers than Utility Max­imisers. Whether they ac­tu­ally max­imise util­ity de­pends on whether they are in an en­vi­ron­ment where their ex­pec­ta­tions pan out.

• In­tel­li­gent agents are bet­ter mod­el­led as Ex­pected Utility Max­imisers than Utility Max­imisers.

By defi­ni­tion, in­tel­li­gent agents want to max­i­mize to­tal util­ity. In the ab­sence of perfect knowl­edge, they act on ex­pected util­ity calcu­la­tions. Is this not a mean­ingful dis­tinc­tion?

• I am in­clined to ar­gue along ex­actly the same lines as Tim, though I worry there is some­thing I am miss­ing.

• Prob­lem with that in hu­man prac­tice is that it leads to peo­ple defend­ing their ru­ined plans, say­ing, “But my ex­pected perfor­mance was great!”

It’s true that peo­ple make this kind of re­sponse, but that doesn’t make it valid, or mean that we have to throw away the no­tion of ra­tio­nal­ity as max­i­miz­ing ex­pected perfor­mance, rather than ac­tual perfor­mance.

In the case of failed trad­ing com­pa­nies, can’t we just say that de­spite their fan­tasies, their ex­pected perfor­mance shouldn’t have been so great as they thought? And the fact that their ac­tual re­sults differed from their ex­pected re­sults should cast sus­pi­cion on their ex­pec­ta­tions.

Perhaps we can say that expectations about performance must be epistemically rational, and only then can an agent who maximizes their expected performance be instrumentally rational.

Achiev­ing a win is much harder than achiev­ing an ex­pec­ta­tion of win­ning (i.e. some­thing that it seems you could defend as a good try).

Some ex­pec­ta­tions win. Some ex­pec­ta­tions lose. Yet not all ex­pec­ta­tions are cre­ated equal. Non-ac­ci­den­tal win­ning starts with some­thing that seems good to try (can ac­ci­den­tal win­ning be ra­tio­nal?). At least, there is some link be­tween ex­pec­ta­tions and ra­tio­nal­ity, such that we can call some ex­pec­ta­tions more ra­tio­nal than oth­ers, re­gard­less of whether they ac­tu­ally win or lose.

An ex­am­ple Soul­lessAu­toma­ton made was that we shouldn’t con­sider lot­tery win­ners ra­tio­nal, even though they won, be­cause they should not have ex­pected to. Con­versely, all sorts of in­duc­tive ex­pec­ta­tions can be ra­tio­nal, even though some­times they will fail due to the prob­lem of in­duc­tion. For in­stance, it’s ra­tio­nal to ex­pect that the sun will rise to­mor­row. If Omega de­cides to blow up the sun, my ex­pec­ta­tion will still have been ra­tio­nal, even though I turned out to be wrong.

• Yet not all ex­pec­ta­tions are cre­ated equal. Non-ac­ci­den­tal win­ning starts with some­thing that seems good to try (can ac­ci­den­tal win­ning be ra­tio­nal?).

In the real world, of course, most things are some mix­ture of con­trol­lable and ran­dom­ized. Depend­ing on your defi­ni­tion of ac­ci­den­tal, it can be ra­tio­nal to make low-cost steps to po­si­tion your­self to take ad­van­tage of pos­si­ble events you have no con­trol over. I wouldn’t call this ac­ci­den­tal, how­ever, be­cause the av­er­age ex­pected gain should be net pos­i­tive, even if one ex­pects (id est, with con­fi­dence greater than 50%) to lose.

I used the lot­tery as an ex­am­ple be­cause it’s gen­er­ally a clear-cut case where the ex­pected gain minus the cost of par­ti­ci­pat­ing is net nega­tive and the con­trol­lable fac­tor (how many tick­ets you buy) has ex­tremely small im­pact.
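
The clear-cut case can be made concrete with an invented lottery; the ticket price, prize, and odds below are all hypothetical:

```python
from fractions import Fraction

# A hypothetical lottery: a $2 ticket pays $1,000,000 with a
# 1-in-10,000,000 chance of winning.
p_win = Fraction(1, 10_000_000)
prize, ticket = 1_000_000, 2

ev_per_ticket = p_win * prize - ticket
print(ev_per_ticket)  # -19/10, i.e. about -$1.90 lost per ticket
```

A single ticket can still win, but since the expected value is negative, such a win is accidental in the sense being discussed: the controllable factor (buying tickets) only makes things worse on average.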

• Yes, and I liked your ex­am­ple for ex­actly this rea­son: the ex­pected value of buy­ing lot­tery tick­ets is nega­tive.

I think that this shows that it is ir­ra­tional to take an ac­tion where it’s clear-cut that the ex­pected value is nega­tive, even though due to chance, one iter­a­tion of that ac­tion might pro­duce a pos­i­tive re­sult. You are us­ing ac­ci­den­tal the same way I am: win­ning from an ac­tion with a nega­tive ex­pected value is what I would call ac­ci­den­tal, and win­ning with a pos­i­tive ex­pected value is non-ac­ci­den­tal.

Things are a bit more com­pli­cated when we don’t know the ex­pected value of an ac­tion. For ex­am­ple, in Eliezer’s ex­am­ples of failed trad­ing com­pa­nies, we don’t know the cor­rect ex­pected value of their trad­ing strate­gies, or whether they were pos­i­tive or nega­tive.

In cases where the ex­pected value of an ac­tion is un­known, per­haps the in­stru­men­tal ra­tio­nal­ity of the ac­tion is con­tin­gent on the epistemic ra­tio­nal­ity of our es­ti­ma­tion of its ex­pected value.

• I like your defi­ni­tion of an ac­ci­den­tal win, it matches my in­tu­itive defi­ni­tion and is stated more clearly than I would have been able to.

In cases where the ex­pected value of an ac­tion is un­known, per­haps the in­stru­men­tal ra­tio­nal­ity of the ac­tion is con­tin­gent on the epistemic ra­tio­nal­ity of our es­ti­ma­tion of its ex­pected value.

Yes. Ac­tu­ally, I think the “In cases where the ex­pected value of an ac­tion is un­known” clause is likely un­nec­es­sary, be­cause the ac­cu­racy of an ex­pected value calcu­la­tion is always at least slightly un­cer­tain.

Furthermore, the second-order calculation of the expected value of expending resources to increase epistemic rationality should be possible; and in the case that acting on a proposition is irrational due to low certainty, and the second-order value of increasing certainty is negative, the rational thing to do is shrug and move on.

• It sounds like the ob­jec­tion you’re giv­ing here is that “some peo­ple will mis­in­ter­pret ex­pected perfor­mance in the tech­ni­cal sense as ex­pected perfor­mance in the col­lo­quial sense (i.e., my guess as to how things will turn out).” That doesn’t seem like much of a crit­i­cism though, and it doesn’t sound se­vere enough to throw out what is a pretty stan­dard defi­ni­tion. Peo­ple will also mis­in­ter­pret your al­ter­nate defi­ni­tion, as we have seen.

Do you have other ob­jec­tions?

• What you say is im­por­tant: the vast ma­jor­ity of whin­ing “ra­tio­nal­ists” weren’t done dirty by a uni­verse that “no­body could have fore­seen” (the sub-prime mort­gage crisis/​pi­lot­ing jets into build­ings). If you sam­ple a ran­dom loser claiming such (my rea­son­ing was flawless, my pri­ors in­cor­po­rated all fea­si­bly available hu­man knowl­edge), an im­par­tial judge would in nearly all cases cor­rectly call them to task.

But clearly it’s not always the case that my rea­son­ing (and/​or pri­ors) is at fault when I lose. My up­dates shouldn’t over­shoot based on em­piri­cal noise and false hu­mil­ity. I think what you want to say is that most likely even (es­pe­cially?) the most proud ra­tio­nal­ists prob­a­bly shield them­selves from at­tribut­ing their loss to their own er­ror (“eat less salt”).

I’d like some quan­tifi­able demon­stra­tion of an ex­ter­nal­iz­ing bias, some cal­ibra­tion of my own per­sonal ten­dency to deny ev­i­dence of my own ir­ra­tional­ity (or of my wrong pri­ors).

• I’m not sure how you can im­ple­ment an ad­mo­ni­tion to Win and not just to (truly, sincerely) try. What is the em­piri­cal differ­ence?

I sup­pose you could use an ex­pected re­gret mea­sure (that is, the differ­ence be­tween the ideal re­sult and the re­sult of the de­ci­sion summed across the dis­tri­bu­tion of prob­a­ble fu­tures) in­stead of an ex­pected util­ity mea­sure.

Ex­pected re­gret tends to pro­duce more ro­bust strate­gies than ex­pected util­ity. For in­stance, in New­comb’s prob­lem, we could say that two-box­ing comes from ex­pected util­ity but one-box­ing comes from re­gret-min­i­miz­ing (since a “failed” two-box gives \$1,000,000-\$1,000=\$999,000 of re­gret, if you be­lieve Omega would have acted differ­ently if you had been the type of per­son to one-box, where a “failed” one-box gives \$1000-\$0=\$1,000 of re­gret).

Us­ing more ro­bust strate­gies may be a way to more con­sis­tently Win, though per­haps the true goal should be to know when to use ex­pected util­ity and when to use ex­pected re­gret (and there­fore to take ad­van­tage both of po­ten­tial bo­nan­zas and of risk-limit­ing mechanisms).
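
The regret comparison above can be written out directly, under the comment's stated assumption that Omega's prediction tracks your disposition (had you been a one-boxer, box B would have been full):

```python
# Payoffs from the comment's Newcomb analysis.
ONE_BOX_USUAL = 1_000_000   # Omega predicted one-box; B was full
TWO_BOX_USUAL = 1_000       # Omega predicted two-box; B was empty
FAILED_ONE_BOX = 0          # Omega mispredicted; you one-boxed an empty B

# "Failed" two-box: you pocket $1,000 where a one-boxer gets $1,000,000.
regret_two_box = ONE_BOX_USUAL - TWO_BOX_USUAL   # 999000
# "Failed" one-box: you get nothing where two-boxing would pay $1,000.
regret_one_box = TWO_BOX_USUAL - FAILED_ONE_BOX  # 1000

# Minimising worst-case regret therefore picks one-boxing.
choice = min({"one-box": regret_one_box, "two-box": regret_two_box}.items(),
             key=lambda kv: kv[1])[0]
print(choice)  # one-box
```

Note that this is the regret-minimising reading; a straight expected-utility dominance argument over fixed box contents would pick two-boxing, which is exactly the divergence the comment uses to motivate regret as the more robust criterion here.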

• I’m quite confident there is only a language difference between Eliezer’s description and the point a number of you have just made. Winning versus trying to win are clearly two different things, and it’s also clear that “genuinely trying to win” is the best one can do, based on the definition those in this thread are using. But Eliezer’s point on OB was that telling oneself “I’m genuinely trying to win” often results in less than genuinely trying. It results in “trying to try”...which means being satisfied by a display of effort rather than utility maximizing. So instead, he argues, why not say to oneself the imperative “Win!”, where the “try” part is baked into the implicit imperative. I agree Eliezer’s language usage here may be slightly nonstandard for most of us (me included) and therefore perhaps misleading to the uninitiated, but I’m doubtful we need to stress about it too much if the facts are as I’ve stated. Does anyone disagree? Perhaps one could argue Eliezer should have to say, “Rational agents should win_eli” and link to an explanation like this thread, if we are genuinely concerned about people getting confused.

• Eliezer seems to be talk­ing about ac­tu­ally win­ning—e.g.: “Achiev­ing a win is much harder than achiev­ing an ex­pec­ta­tion of win­ning”.

He’s been do­ing this pretty con­sis­tently for a while now—in­clud­ing on his ad­minis­tra­tor’s page on the topic:

That is why this dis­cus­sion is still hap­pen­ing.

• Here’s a func­tional differ­ence: Omega says that Box B is empty if you try to win what’s in­side it.

• Yes! This func­tional differ­ence is very im­por­tant!

In Logic, you begin with a set of non-contradictory assumptions and then build a consistent theory based on those assumptions. The deductions you make are analogous to being rational. If the assumptions are non-contradictory, then it is impossible to deduce something false in the system. (Analogously, it is impossible for rationality not to win.) However, you can get a paradox by having a self-referential statement. You can prove that every sufficiently complex theory is not closed: there are things that are true that you can’t prove from within the system. Along the same lines, you can build a paradox by forcing the system to try to talk about itself.

What Grob­stein has pre­sented is a clas­sic para­dox and is the clos­est you can come to ra­tio­nal­ity not win­ning.

• I un­der­stand all that, but I still think it’s im­pos­si­ble to op­er­a­tional­ize an ad­mo­ni­tion to Win. If

Omega says that Box B is empty if you try to win what’s in­side it.

then you sim­ply can­not im­ple­ment a strat­egy that will give you the pro­ceeds of Box B (un­less you’re us­ing some defi­ni­tion of “try” that is in­con­sis­tent with “choose a strat­egy that has a par­tic­u­lar ex­pected re­sult”).

I think that falls un­der the “rit­ual of cog­ni­tion” ex­cep­tion that Eliezer dis­cussed for a while: when Win­ning de­pends di­rectly on the rit­ual of cog­ni­tion, then of course we can define a situ­a­tion in which ra­tio­nal­ity doesn’t Win. But that is perfectly mean­ingless in ev­ery other situ­a­tion (which is to say, in the world), where the re­sult of the rit­ual is what mat­ters.

• Agents do try to win. They don’t necessarily actually win. For example, if they face a superior opponent. Kasparov was behaving in a highly rational manner in his battle with Deep Blue. He didn’t win. He did try to, though. Thus the distinction between trying to win and actually winning.

• It’s re­ally easy to con­vince your­self that you’ve truly, sincerely tried—try­ing to try is not nearly as effec­tive as try­ing to win.

• The in­tended dis­tinc­tion was origi­nally be­tween try­ing to win and ac­tu­ally win­ning. You are com­par­ing two kinds of try­ing.

• You are com­par­ing two kinds of try­ing.

I’m not sure how you can im­ple­ment an ad­mo­ni­tion to Win and not just to (truly, sincerely) try. What is the em­piri­cal differ­ence?

Based on the above, I be­lieve the dis­tinc­tion was be­tween two differ­ent kinds of ad­mo­ni­tions. I was point­ing out that an ad­mo­ni­tion to win will cause some­one to try to win, and an ad­mo­ni­tion to try will cause some­one to try to try.

• Right, but again, the topic is the defi­ni­tion of in­stru­men­tal ra­tio­nal­ity, and whether it refers to “try­ing to win” or “ac­tu­ally win­ning”.

What do “admonitions” have to do with things? Are you arguing that because telling someone to “win” may have some positive effect that telling someone to “try to win” lacks, we should define “instrumental rationality” to mean “winning”, and not “trying to win”?

Isn’t that an idiosyncrasy of human psychology, which surely ought to have nothing to do with the definition of “instrumental rationality”?

Con­sider the ex­am­ple of hand­i­cap chess. You start with no knight. You try to win. Ac­tu­ally you lose. Were you be­hav­ing ra­tio­nally? I say: you may well have been. Ra­tion­al­ity is more about the try­ing, than it is about the win­ning.

• The ques­tion was about ad­mo­ni­tions. I com­mented based on that. I didn’t mean any­thing fur­ther about in­stru­men­tal ra­tio­nal­ity.

• OK. I don’t think we have a dis­agree­ment, then.

I consider it to be a probably-true fact about human psychology that if you tell someone to “try” rather than telling them to “win”, then that introduces failure possibilities into their mind. That may have a positive effect, if they are naturally over-confident, or a negative one, if they are naturally wracked with self-doubt.

It’s the lat­ter group who buy self-help books: the former group doesn’t think it needs them. So the self-help books tell you to “win”—and not to “try” ;-)

• Thomblake’s in­ter­pre­ta­tion of my post matches my own.

• I agree. I’m just not­ing that an ad­mo­ni­tion to Win is strictly an ad­mo­ni­tion to try, phrased more strongly. Win­ning is not an ac­tion—it is a re­sult. All I can sug­gest are ac­tions that get you to that re­sult.

I can tell you “don’t be satis­fied with try­ing and failing,” but that’s not quite the same.

• As for the “Try­ing-to-try” page—an ar­gu­ment from Yoda and the Force? It reads like some­thing out of a self-help man­ual!

Sure: if you are try­ing to in­spire con­fi­dence in your­self in or­der to im­prove your perfor­mance, then you might un­der some cir­cum­stances want to think only of win­ning—and ig­nore the pos­si­bil­ity of try­ing and failing. But let’s not get our sub­jects in a mud­dle, here—the topic is the defi­ni­tion of in­stru­men­tal ra­tio­nal­ity, not how some new-age self-help man­ual might be writ­ten.

• Of course, this isn’t the first time I have pointed this out—see:

No­body seemed to have any co­her­ent crit­i­cism the last time around—and yet now we have the same is­sue all over again.

• I’m not com­plain­ing, just ob­serv­ing. I see you are us­ing the “royal we” again.

I won­der whether be­ing sur­rounded by agents that agree with you is helping.

• I agree with you that peo­ple shouldn’t drink fatal poi­son, and that 2+2=4. Should you feel wor­ried be­cause of that?

• If it were also the case that your friends all agreed with you, but the “main­stream/​dom­i­nant po­si­tion in mod­ern philos­o­phy and de­ci­sion the­ory” dis­agreed with you, then yes, you should prob­a­bly feel a bit wor­ried.

• Good point; my reply didn’t take that into account. It all depends on the depth of understanding, so to answer your remark, consider e.g. the supernatural, or UFOs.

• Is there re­ally such a dis­agree­ment about New­comb’s prob­lem?

The is­sue seems to be whether agents can con­vinc­ingly sig­nal to a pow­er­ful agent that they will act in some way in the fu­ture—i.e. whether it is pos­si­ble to make cred­ible promises to such a pow­er­ful agent.

I think that this is pos­si­ble—at least in prin­ci­ple. Eliezer also seems to think this is pos­si­ble. I per­son­ally am not sure that such a pow­er­ful agent could achieve the pro­posed suc­cess rate on un­mod­ified hu­mans—but in the con­text of ar­tifi­cial agents, I see few prob­lems—es­pe­cially if Omega can leave the ar­tifi­cial agent with the boxes in a cho­sen con­trol­led en­vi­ron­ment, where Omega can be fairly con­fi­dent that they will not be in­terfered with by in­ter­ested third par­ties.

Do many in “mod­ern philos­o­phy and de­ci­sion the­ory” re­ally dis­agree with that?

More to the point, do they have a co­her­ent counter-ar­gu­ment?

• Thanks for men­tion­ing ar­tifi­cial agents. If they can run ar­bi­trary com­pu­ta­tions, Omega it­self isn’t im­ple­mentable as a pro­gram due to the halt­ing prob­lem. Maybe this is rele­vant to New­comb’s prob­lem in gen­eral, I can’t tell.

• Surely not a serious problem: if the agent is going to hang around until the universal heat death before picking a box, then Omega’s prediction of its actions doesn’t matter.

• Sugges­tion: “Ra­tion­al­ists seek to Win, not to be ra­tio­nal”.

Sugges­tion: “If what you think is ra­tio­nal ap­pears less likely to Win than what you think is ir­ra­tional, then you need to re­assess prob­a­bil­ities and your un­der­stand­ing of what is ra­tio­nal and what is ir­ra­tional”.

Sugges­tion: “It is not ra­tio­nal to do any­thing other than the thing which has the best chance of win­ning”.

If I have a choice be­tween what I define as the “Ra­tional” course of ac­tion, and a course of ac­tion which I de­scribe as “ir­ra­tional” but which I pre­dict has a bet­ter chance of win­ning, I am ei­ther pre­dict­ing badly or wrongly defin­ing what is Ra­tional.

I am not sure my sug­ges­tions are Bet­ter, but I am grop­ing to­wards un­der­stand­ing and hope my grop­ings help.

EDIT: and the warn­ing is that we may de­ceive our­selves into think­ing that we are be­ing ra­tio­nal, when we are miss­ing some­thing, us­ing the wrong map, ar­gu­ing fal­la­ciously. So what about:

Sugges­tion: “If you are not Win­ning, con­sider whether you are re­ally be­ing ra­tio­nal”.

“If you are not Win­ning more than peo­ple you be­lieve to be ir­ra­tional, this may be ev­i­dence that you are not re­ally be­ing ra­tio­nal”.

On a differ­ent tack, “Ra­tion­al­ists win wher­ever ra­tio­nal­ity is an aid to win­ning”. I am not go­ing to win mil­lions on the Lot­tery, be­cause I do not play it.

• Sugges­tion: “If you are not Win­ning, con­sider whether you are re­ally be­ing ra­tio­nal”.

The prob­lem with this is that win­ning as a met­ric is swamped with ran­dom noise and differ­ent start­ing points.

Some­one win­ning the lot­tery when you don’t is not ev­i­dence that you are not be­ing ra­tio­nal.

Some­one whose par­ents were both high-paid lawyers mak­ing a for­tune in busi­ness when you don’t is not ev­i­dence that you are not be­ing ra­tio­nal.

• Yes, but...

Of course there is random noise and there are different starting points, but there is also some evidence of whether one is really rational. It is a question of epistemic rationality to work out which Wins should accrue to Rational people, and which wins (e.g., parentage, the lottery) should not.

• I dis­agree. Some­one win­ning the lot­tery when you don’t is ev­i­dence that you are not be­ing ra­tio­nal, if get­ting a large sum of money for lit­tle effort is a goal you’d shoot for. But on eval­u­a­tion, it should be seen as ev­i­dence that counts for lit­tle or noth­ing. Most of us have already done that eval­u­a­tion.

• I don’t fol­low.

I used the lot­tery as an ex­am­ple of very ran­dom­ized wins. It’s the “right place at the right time” fac­tor. Some events in life are, for all prac­ti­cal pur­poses, ran­dom­ized and out of an agent’s di­rect con­trol. By the cen­tral limit the­o­rem, some agents will seem to ac­cu­mu­late large wins due in large part to these kinds of ran­dom events, and some will ac­cu­mu­late large losses.

Most agents will be, by defi­ni­tion, near the cen­ter of the nor­mal dis­tri­bu­tion. The ex­is­tence of agents at the tails of the curve does not con­sti­tute ev­i­dence of one’s own ir­ra­tional­ity.
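The point about noise and the central limit theorem is easy to simulate. A sketch (the numbers are arbitrary): give 10,000 agents the identical "strategy" of experiencing 1,000 coin-flip events each, and the totals still spread out into a bell curve with impressive-looking winners and losers at the tails.

```python
import random
import statistics

random.seed(0)  # reproducible run

def lifetime_winnings(n_events=1000):
    """Total of many independent right-place-right-time events, each worth ±1."""
    return sum(random.choice([-1, 1]) for _ in range(n_events))

agents = [lifetime_winnings() for _ in range(10_000)]

sd = statistics.stdev(agents)                  # about sqrt(1000), i.e. ~32
big_winners = sum(a > 2 * sd for a in agents)  # roughly 2% of agents
# Every agent behaved identically, so a "big winner" is no evidence of a
# better strategy -- only of sitting in the tail of the distribution.
```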

• Right, but you could be wrong about it be­ing ran­dom­ized, or hav­ing nega­tive ex­pected value; not win­ning it can be taken as ev­i­dence that you’re not be­ing ra­tio­nal.

Sup­pose that ev­ery­one on your street other than you plays the lotto; you laugh at them for not be­ing ra­tio­nal. Every week, some­one on your street wins the lotto—by the end of the year, ev­ery­one else has be­come a mil­lion­aire. Doesn’t it seem like you might have mi­s­un­der­stood some­thing about the lot­tery?

Of course, it could be that you ex­am­ine it fur­ther and find that the lot­tery is in­deed ran­dom and you’ve just no­ticed a very im­prob­a­ble event. It was still ev­i­dence that was worth in­ves­ti­gat­ing.

• There’s a big differ­ence be­tween “some­one else wins the lot­tery” and “ev­ery­one else on your street wins the lot­tery”. One is likely, the other ab­surdly un­likely.

Given your cur­rent knowl­edge of how the lot­tery works, the ex­pected value is nega­tive, ergo not play­ing the lot­tery is ra­tio­nal. Some­one else win­ning the lot­tery (a re­sult pre­dicted by your cur­rent un­der­stand­ing) is it­self not ev­i­dence that this de­ci­sion is ir­ra­tional.

How­ever, if an ex­tremely im­prob­a­ble event oc­curs, such as ev­ery­one on your street win­ning the lotto, this is strong ev­i­dence that your knowl­edge of the lot­tery is mis­taken, and given the large po­ten­tial pay­off it then be­comes ra­tio­nal to ex­am­ine the mat­ter fur­ther, and al­ter your cur­rent un­der­stand­ing if nec­es­sary. Your ear­lier ac­tions may look ir­ra­tional in hind­sight, but that doesn’t change that they were ra­tio­nal based on your knowl­edge at the time.
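The "negative expected value" step is a one-line calculation. A sketch with made-up numbers (real lotteries differ, and this ignores split jackpots and lesser prizes):

```python
ticket_price = 2.00
jackpot = 1_000_000.00
p_win = 1 / 14_000_000   # odds in the ballpark of a 6-of-49 draw

expected_value = p_win * jackpot - ticket_price
# expected_value is about -1.93: each ticket loses nearly its full price
# in expectation, so under this model "don't play" is the rational choice.
# Evidence like "everyone on the street keeps winning" is evidence that
# p_win (i.e., the model) is wrong, not that the arithmetic is.
```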

• ...pre­sum­ing that your knowl­edge at the time was it­self ra­tio­nally ob­tained based on the ev­i­dence; and in the long run, we should not ex­pect too many times to find our­selves be­liev­ing with high con­fi­dence that the lot­tery has a tiny pay­out and then see­ing ev­ery­one on the street win­ning the lot­tery. If this mis­take re­curs, it is a sign of epistemic ir­ra­tional­ity.

I make this point be­cause a lot of suc­cess in life con­sists in hold­ing your­self to high stan­dards; and a lot of that is hunt­ing down the ex­cuses and kil­ling them.

• ...pre­sum­ing that your knowl­edge at the time was it­self ra­tio­nally ob­tained based on the ev­i­dence; and in the long run, we should not ex­pect too many times to find our­selves be­liev­ing with high con­fi­dence that the lot­tery has a tiny pay­out and then see­ing ev­ery­one on the street win­ning the lot­tery. If this mis­take re­curs, it is a sign of epistemic ir­ra­tional­ity.

Yes. I was gen­er­ally as­sum­ing in my com­ment that the rhetor­i­cal “you” is an im­perfect epistemic ra­tio­nal­ist with a rea­son­ably sen­si­ble set of pri­ors.

The point I was try­ing to make is to not hand­wave away the differ­ence be­tween mak­ing the most op­ti­mal known choices at a mo­ment in time vs. up­dat­ing one’s model of the world. It’s pos­si­ble (if silly) to be very ir­ra­tional on one and largely ra­tio­nal on the other.


• Choos­ing what gives the “best chance of win­ning” is good ad­vice for a two-val­ued util­ity func­tion, but I’m also in­ter­ested in re­duc­ing the sever­ity of my loss un­der un­cer­tainty and mis­for­tune.

I guess “max­i­miz­ing ex­pected util­ity” isn’t as sexy as “win­ning”.

• In­deed. For­get about “win­ning”. It is not sexy if it is wrong.

• I don’t think so. I take “winning” to be the actualization of one’s values, which encompasses minimizing loss.

Fur­ther­more, I think it ac­tu­ally helps to make the terms “sexy”, be­cause I am a heuris­tic hu­man; my brain is wired for nar­ra­tives and mo­ti­vated by drama and “cool­ness.” Fram­ing ideas as some­thing Grand and Awe­some makes them mat­ter to me emo­tion­ally, makes them a part of my iden­tity, and makes me more likely to ap­ply them.

Similarly, there are certain worthwhile causes for which I fight. They ARE worth fighting for, but I’m deluding myself if I act as if I’m so morally superior that I support them only because the problems are so pressing that I couldn’t possibly not do anything, that I have a duty to fulfill. That may be true, but it is also true that I am disposed to be a fighter, and I am looking for a cause for which to fight. Knowing this, dramatizing the causes that actually do matter (as great battles for the fate of the human species) will motivate me to pursue them.

I have to be care­ful (as with any­thing), not to al­low this sort of fram­ing to dis­tort my per­cep­tion of the real, but I think as long as I know what I am do­ing, and I con­tain my self-ma­nipu­la­tion to fram­ing (and not de­nial of facts), I am served by it.

• I think you’re defin­ing “win­ning” too strictly. Some­times a minor loss is still a win, if the al­ter­na­tive was a large one.

• Win­ning is a con­ven­tional dic­tio­nary word, though. You can’t eas­ily just re­define it with­out caus­ing con­fu­sion. “Win­ning” and “max­imis­ing” have differ­ent defi­ni­tions and con­no­ta­tions.

• The first defi­ni­tion from google—Be suc­cess­ful or vic­to­ri­ous in (a con­test or con­flict).

This is no different from how I or most people would define it, and I don’t think it contradicts how I used it.

• “Win­ning” refers to out­comes, not to ac­tions, so it should just be “max­i­miz­ing util­ity”.

• There is an is­sue never re­mem­bered here, about the ques­tion that we be­lieve the world is X but it is Y: Are you sure that ra­tio­nal­ity is pure product of brains… Are you sure that mind is pure product of brains… What if mind is product of a hid­den su­pe­rior nat­u­ral sys­tem whose bits-in­for­ma­tion are in­vad­ing our im­me­di­ate world and be­ing ag­gre­gated to our synapses… If so, ra­tio­nal­ity as pure product of mind will make the most evolved ra­tio­nal­ist a loser, by while… Or don t… (sorry, I have no punc­tu­a­tion mark in this key­board)

Here, in Ama­zon jun­gle, lays our real ori­gins. And you see here that this bio­sphere is product of chaos. We are product of chaos, not or­der. It seems to me that we are the flow of or­der lift­ing up from chaos. So, for long term win­ning, those that best rep­re­sents this flow will have bad times be­cause the forces of chaos are the strongest. Then, the win­ners now, are still rep­re­sen­tants of chaos, less evolved...

But it seems to me that above the chaotic bio­sphere I see Cos­mos at or­dered state. So, I sus­pect that this Cos­mos is the ” nat­u­ral” su­per-sys­tem send­ing bits-in­for­ma­tion and mod­el­ling this ter­res­trial chaos into a fu­ture state of or­der. It is act­ing over the last evolved sys­tem here, and I think it is the mind, not the brain. So, if one is be­ing driven for to be ra­tio­nal­ist (in re­la­tion to Cos­mos and or­dered state), he,she will be a loser in re­la­tion to this bio­sphere in chaotic state. The in­tel­li­gent best thing to do should to find a mid­dle al­ter­na­tive, fight­ing this world at the same time that do it with less sac­ri­fice. What do you think…

• What do you think...

More ordered states could prove to be unsustainable whether or not there’s some sort of overarching system such as you describe at play. Your assumptions seem to be quite complicated and thus get a low probability ahead of time, and there’s no specifically supporting evidence (indeed, it’s not even clear what supporting evidence for some super-system sending down information would look like).

Ba­si­cally the idea falls be­neath the noise level for me in terms of cred­i­bil­ity. Maybe or­dered sys­tems lose be­cause the mag­i­cal uni­corns have a love of chaos in their hearts. I con­sider the two ideas about as se­ri­ously.

• Thanks, Es­tar­ilo. I re­ally need to fix my world vi­sion and thoughts.

You said: ” More or­dered states could prove to be un­sus­tain­able whether or not there’s some sort of over­ar­ch­ing sys­tem such as you de­scribe at play.”

I think yes, more or­dered state must be un­sus­tain­able, eter­nally. But, chaos also must be un­sus­tain­able. If so, there are these cy­cles, when chaos pro­duces or­der and or­der pro­duces chaos. The fi­nal re­sults is evolu­tion, be­cause each cy­cle is a lit­tle bit more com­plex. There is hi­er­ar­chy of sys­tems. Over­ar­ch­ing sys­tems can be two types: 1) in re­la­tion to com­plex­ity and, 2) in re­la­tion to size, force. A lion is more strong than a hu­man, but hu­man is more com­plex. We have two sys­tems mod­el­ling evolu­tion at Earth. 1) the as­tro­nom­i­cal sys­tem (biggest size and less evolved), which is our an­ces­tor, but we are in­side it, he cre­ated us. This sys­tem is a perfect ma­chine, but not in­tel­li­gent, not ra­tio­nal like us. What­ever, he is the agent be­hind nat­u­ral se­lec­tion, be­cause he is the whole en­vi­ron­ment. 2) The sec­ond sys­tem is un­ten­able, but he must ex­ists, be­cause here there is mind, con­scious­nesses and our an­ces­tral as­tro­nomic has no mind. I don’t ac­cept that this Uni­verse cre­ates things that he has no in­for­ma­tion for, so, the sys­tem that made the emer­gence of mind here must be su­pe­rior to the Uni­verse. And if he is ex-ma­chine, makes no sense to talk about or­dered or chaotic states. He must be more sus­tain­able than the Uni­verse. I am not talk­ing about su­per­nat­u­ral gods, I am sug­gest­ing a nat­u­ral su­pe­rior sys­tem from which this thing called con­scious­ness is com­ing from..

You said: ” there’s no speci­fi­cally sup­port­ing ev­i­dence”

It is prob­a­ble be­cause we have a real known pa­ram­e­ter. An em­bryo gets ” mind” be­cause it comes from a su­pe­rior hi­er­ar­chic sys­tem that ex­ists be­yond his “Uni­verse” (the womb). The su­pe­rior sys­tem is the hu­man species, his par­ents. So, it is pos­si­ble that a nat­u­ral su­per-sys­tem ex­ist­ing be­yond our uni­verse have trans­mit­ted be­fore the Big Bang the in­for­ma­tions for the mind ap­pears here at the right time.

You said: ” Maybe or­dered sys­tems lose be­cause mag­i­cal uni­corns...”

In the al­ter­na­tion be­tween cy­cles, there are the al­ter­na­tions be­tween dom­i­nant and re­ces­sive. If chaos is dom­i­nant here and now, the or­dered state is weak and a loser, till the chaos be­ing ex­tinct. And ra­tio­nal­ity is more rel­a­tive to or­der than chaos. But ra­tio­nal­ity is not the wis­dom. Must have a third su­pe­rior state. What do you think ?

• I don’t ac­cept that this Uni­verse cre­ates things that he has no in­for­ma­tion for

It is pos­si­ble to cre­ate some­thing with­out hav­ing the in­for­ma­tion for it. The clas­sic ex­am­ple; if enough mon­keys type at ran­dom on enough type­writ­ers for long enough, then sooner or later (prob­a­bly much, much later) one of them will ran­domly type out the com­plete works of Shake­speare. Even if none of the mon­keys have ever heard of Shake­speare.

• I can’t grasp yours ex­am­ple. Typewrit­ers has the in­for­ma­tions. Let­ters are graphic sym­bols of sounds that are sig­nals of real things. My world vi­sion started with com­par­a­tive anatomy be­tween all nat­u­ral sys­tems and the uni­ver­sal pat­terns founded here were pro­jected for calcu­la­tions about uni­verses and first causes. As fi­nal re­sult we got the same the­ory of Hideki Yukawa calcu­lat­ing the nu­clear gluon, how pro­tons and neu­trons in­ter­acts. As re­sult, this uni­verse started with all in­for­ma­tions for ev­ery­thing here, like any new ori­gins of any hu­man be­ing started with prior in­for­ma­tion for cre­at­ing the em­bryo and its womb (his en­tire uni­verse). But these in­for­ma­tions for uni­verses are nat­u­ral. Two groups of vor­texes one spin right, other spin left. The in­ter­ac­tions be­tween then cre­ates the in­ter­me­di­ary move­ments. Each vor­texes has at least seven prop­er­ties which were the phys­i­cal brutes forces(ten­dency to in­er­tia, ten­dency to move­ment; ten­dency to grow, ten­dency to shorter; etc.). The differ­ent in­ten­si­ties of these forces and their in­ter­ac­tions pro­duces an in­finity of in­di­vi­d­ual types or vor­tex. Each vor­tex is one in­for­ma­tion, like genes. Th ere are genes that be­gins work­ing later, so, there are uni­ver­sal in­for­ma­tions in the air not ap­plied yet. Like those build­ing con­scious­ness here. But, my re­sults from these method is still the­o­ret­i­cal. It makes sense and one day will be falsifi­able

• I’m con­fused; I can’t un­der­stand what you are say­ing. I think that part of this is the lan­guage bar­rier (what is your first lan­guage, by the way?) and part of it is prob­a­bly an in­fer­en­tial dis­tance is­sue (that is, what you’re say­ing is far enough away from any­thing that I ex­pect that I’m hav­ing trou­ble mak­ing the men­tal leap).

Typewrit­ers has the in­for­ma­tions.

So… would this mean that a type­writer con­tains the in­for­ma­tion for any­thing that can be typed on a type­writer? In­clud­ing… say… the se­cret of im­mor­tal­ity, plans for a time ma­chine, and a way to de­tect the Higgs Bo­son? That seems a rather broad defi­ni­tion of ‘in­for­ma­tion’.

• So… would this mean that a type­writer con­tains the in­for­ma­tion for any­thing that can be typed on a type­writer? In­clud­ing… say… the se­cret of im­mor­tal­ity, plans for a time ma­chine, and a way to de­tect the Higgs Bo­son? That seems a rather broad defi­ni­tion of ‘in­for­ma­tion’.

Well, in in­for­ma­tion-the­o­retic terms, the in­for­ma­tion for those comes from who­ever looks over the mon­key’s work and se­lects Shake­speare (or what­ever.)

• But the out­put ex­ists, whether it is se­lected or not. (Ad­mit­tedly, there will al­most cer­tainly be sev­eral in­ac­cu­rate Shake­speare-like imi­ta­tions/​par­o­dies/​etc. that ex­ist as well by then).

• As fi­nal re­sult we got the same the­ory of Hideki Yukawa calcu­lat­ing the nu­clear gluon, how pro­tons and neu­trons in­ter­acts. As re­sult, this uni­verse started with all in­for­ma­tions for ev­ery­thing here,

Heavily compressed, mind, but it’s technically true that a superintelligence could deduce us. I’m pretty sure that doesn’t imply it was deliberately designed, though; we could just be an emergent property of the universe, not its object.

• I think yes, more or­dered state must be un­sus­tain­able, eter­nally. But, chaos also must be un­sus­tain­able. If so [...]

You’re putting the cart be­fore the horse here. You’ve said that they must be—why must they be? If they are then what pre­dic­tions does their be­ing so let you make and how have you tested them?

What, for that mat­ter, are your for­mal defi­ni­tions of or­der and chaos? The way I’d define them, chaos ex­ists mostly on a quan­tum level and when you start to gen­er­al­ise out cor­re­lates start show­ing up on a macro­scopic level re­ally quickly, and then it’s not chaos any­more be­cause it’s—at least in prin­ci­ple—pre­dictable.

I mean it’s not silly to sup­pose that se­lec­tion and mu­ta­tion—with the former be­ing the or­der en­forc­ing part of evolu­tion and the lat­ter be­ing the ‘chaotic’ part, op­er­ate in cy­cles. I be­lieve if you model evolu­tion of finite pop­u­la­tions us­ing Fokker Planck equa­tions you tend to have an in­creas­ing spread of phe­no­types be­tween pe­ri­ods of heavy se­lec­tion—but it’s not re­ally an area I’ve much in­ter­est in so I couldn’t say for sure.

We have two sys­tems mod­el­ling evolu­tion at Earth. 1) the as­tro­nom­i­cal sys­tem (biggest size and less evolved), which is our an­ces­tor, but we are in­side it, he cre­ated us. This sys­tem is a perfect ma­chine, but not in­tel­li­gent, not ra­tio­nal like us. What­ever, he is the agent be­hind nat­u­ral se­lec­tion, be­cause he is the whole en­vi­ron­ment. 2) The sec­ond sys­tem is un­ten­able, but he must ex­ists, be­cause here there is mind, con­scious­nesses and our an­ces­tral as­tro­nomic has no mind.

I don’t know what this means. You’re assigning an overarching system agency. But in English, “agency” tends to mean that something is alive and thinking: a human would be said to have agency, whereas a computer—at least in the common “I’ve got one under my desk” sense—wouldn’t. Systems don’t tend to be considered to have gender in English either. In French lots of words are gendered, but in English very few are. The only English things I can think of that are gendered, other than living creatures, are ships, traditionally thought of as female.

The sec­ond sys­tem just seems to be un­defined.

I don’t ac­cept that this Uni­verse cre­ates things that he has no in­for­ma­tion for, so, the sys­tem that made the emer­gence of mind here must be su­pe­rior to the Uni­verse.

If you want to find a hu­man how easy is that for you to do? Turn out of your front door and go to town and you’ll prob­a­bly find a fair num­ber of them. If you want to find a spe­cific hu­man how much in­for­ma­tion do you need? I be­lieve if you start off know­ing noth­ing about them other than that they’re some­where on Earth you only re­ally need some­thing like 32 bits of in­for­ma­tion but in any case it’s a lot more.

If you want to cre­ate a table you just make a table. It’s not hard. If you want to cre­ate a spe­cific table de­sign you need to know what it looks like at the very least.

If you want to create a child you need a partner. If you want to create a brown-haired, blue-eyed girl and no other kids besides … you’re probably going to be picking particular partners to up your chances, or running off to play with genetic engineering.

Gen­er­ally the rule is that the more picky you want to be the more info you need.

If you just wanted to cre­ate a per­son, and noth­ing else, you would re­quire a lot of in­for­ma­tion. If you wanted to cre­ate an en­tire uni­verse you would need very lit­tle in­for­ma­tion. The uni­verse is very large, and seems to con­sist mostly of rep­e­ti­tions of fairly sim­ple things, which sug­gests to me an in­for­ma­tion­ally sparse gen­e­sis.
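The “32 bits” guess above is easy to check. Assuming a population of about seven billion, the number of bits needed to single out one specific person is:

```python
import math

population = 7_000_000_000        # assumed world population
bits_needed = math.log2(population)
# log2(7e9) is about 32.7, so 33 bits suffice to name any one person --
# close to the "something like 32 bits" estimate, and tiny compared with
# the information needed to *build* that person from scratch.
```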

And if he is ex-ma­chine, makes no sense to talk about or­dered or chaotic states. He must be more sus­tain­able than the Uni­verse. I am not talk­ing about su­per­nat­u­ral gods, I am sug­gest­ing a nat­u­ral su­pe­rior sys­tem from which this thing called con­scious­ness is com­ing from..

Do you need to sup­pose a sys­tem at all? If what you’re talk­ing about can be defined en­tirely in terms of a con­flict be­tween or­der and chaos—which re­ally just seems to be evolu­tion in progress. What ex­plana­tory power does this sys­tem have?

So, it is pos­si­ble that a nat­u­ral su­per-sys­tem ex­ist­ing be­yond our uni­verse have trans­mit­ted be­fore the Big Bang the in­for­ma­tions for the mind ap­pears here at the right time.

Sure, any­thing’s pos­si­ble. But how prob­a­ble is it and what grounds do you have for be­liev­ing that it’s that prob­a­ble?

In the al­ter­na­tion be­tween cy­cles, there are the al­ter­na­tions be­tween dom­i­nant and re­ces­sive. If chaos is dom­i­nant here and now, the or­dered state is weak and a loser, till the chaos be­ing ex­tinct. And ra­tio­nal­ity is more rel­a­tive to or­der than chaos. But ra­tio­nal­ity is not the wis­dom. Must have a third su­pe­rior state. What do you think ?

Broadly you seem to be say­ing some­thing to the effect of: In the ab­sence of strong se­lec­tion pres­sures the trend is to­wards di­s­or­der and de­cay. Which I agree with. And I can see how that would re­late to ra­tio­nal­ity—there are sys­tems, like school­ing, that lose their pur­pose and es­sen­tially go in­sane in the ab­sence of strong de­mands. Why are schools so crappy? A large part of it seems to be be­cause adults don’t have an eco­nomic need for chil­dren at that age and it’s poli­ti­cally ex­pe­di­ent to con­duct ed­u­ca­tion in a cer­tain way that seems to pro­duce work—with­out ac­tu­ally test­ing whether that work is use­ful be­cause by that point the gov­ern­ment will be out of power.

I sus­pect ra­tio­nal­ity car­ries con­no­ta­tions in your lan­guage that it doesn’t nec­es­sar­ily have in English. If a chaotic/​ran­dom/​brute force method of travers­ing the search space turns out to be bet­ter suited to cer­tain situ­a­tions I’d as­sign it a re­ally high prior that peo­ple who define them­selves as ra­tio­nal­ists would make their de­ci­sions in that re­gard by throw­ing dice or some equiv­a­lent that in­tro­duced chaos into their ac­tions. Like my pass­words—what are my pass­words? I don’t know. Most of them are 128 char­ac­ter gib­ber­ish.
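The password example quantifies the point nicely. A sketch (assuming the usual 94-character printable-ASCII alphabet): deliberately injected randomness isn’t an abandonment of rationality, and its strength can be measured exactly, in bits.

```python
import math
import secrets
import string

alphabet = string.ascii_letters + string.digits + string.punctuation  # 94 symbols
bits_per_char = math.log2(len(alphabet))          # about 6.55 bits

password = ''.join(secrets.choice(alphabet) for _ in range(128))
total_entropy = 128 * bits_per_char               # about 839 bits of entropy
# Choosing gibberish on purpose isn't being "chaotic"; the randomness is
# an instrument selected because it wins against adversaries who guess.
```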

If you think of ra­tio­nal­ity as sys­tem­a­tised win­ning it seems more like: What­ever works. Than any­thing par­tic­u­larly tied to a spe­cific se­lec­tion/​mu­ta­tion ra­tio.

• An em­bryo gets ” mind” be­cause it comes from a su­pe­rior hi­er­ar­chic sys­tem that ex­ists be­yond his “Uni­verse” (the womb). The su­pe­rior sys­tem is the hu­man species, his par­ents.

Well, an em­bryo de­vel­ops a mind be­cause it’s got the ge­netic code for it—which, yes, comes from the larger ex­ter­nal sys­tem that evolved that code. Is that what you meant?

So, it is pos­si­ble that a nat­u­ral su­per-sys­tem ex­ist­ing be­yond our uni­verse have trans­mit­ted be­fore the Big Bang the in­for­ma­tions for the mind ap­pears here at the right time.

I must ad­mit, I don’t see how that fol­lows. Are you sug­gest­ing our uni­verse was de­signed speci­fi­cally as a “womb” to cre­ate us? That’s the only anal­ogy I can see, and evolu­tion­ary ad­van­tage seems a sim­pler rea­son for sen­tience to evolve—al­though I guess those aren’t mu­tu­ally ex­clu­sive, if this “nat­u­ral su­per-sys­tem ex­ist­ing be­yond our uni­verse” an­ti­ci­pated that would re­sult in us. But why pos­tu­late this? It could as eas­ily have de­signed the uni­verse as a “womb” to pro­duce muffins! We could as eas­ily be part of this muffin-womb. (Man, there’s a sen­tence I never ex­pected to type.)

If chaos is dom­i­nant here and now, the or­dered state is weak and a loser, till the chaos be­ing ex­tinct. And ra­tio­nal­ity is more rel­a­tive to or­der than chaos.

But sci­ence again and again has dis­cov­ered that what we thought was “chaos” is merely the com­plex re­sult of sim­ple rules—or­der, in other words, that we can ex­ploit with ra­tio­nal­ity.

And ra­tio­nal­ity is more rel­a­tive to or­der than chaos. But ra­tio­nal­ity is not the wis­dom. Must have a third su­pe­rior state. What do you think ?

If ra­tio­nal­ity works in or­dered states, what’s the ana­log that works in “chaotic” states?

• You said: “Well, an em­bryo de­vel­ops a mind be­cause it’s got the ge­netic code for it—which, yes, comes from the larger ex­ter­nal sys­tem that evolved that code. Is that what you meant?”

Our con­flict is due two differ­ent in­ter­pre­ta­tions of ge­netic code. You think that biolog­i­cal sys­tems (aka life) evolved a ge­netic code, so, you think that had no ge­netic code be­fore life. It is not what is sug­gest­ing the re­sults from my differ­ent method of in­ves­ti­ga­tion. There is no ” code” in the sense that are com­posed by sym­bols. Each hori­zon­tal base-pair of nu­cleotides is a deriva­tion with some lit­tle differ­ence of an an­ces­tor sys­tem, the origi­nal first galax­ies. (you need see the model of this galaxy and how it fits as nu­cleotide in my web­site). So, DNA is merely a pile of di­ver­sified copies of a unique an­ces­tor as­tro­nom­i­cal sys­tem, which pro­duces di­ver­sifi­ca­tion and func­tional biolog­i­cal sys­tems. But, galax­ies got their sys­tem’s con­figu­ra­tion from atoms sys­tem, and they got from par­ti­cles as sys­tems, so, the prior causes of this ” ge­netic makeup” seems to be be­yond the Big Bang. The in­for­ma­tions for build­ing the mind of an em­bryo came from a sys­tem out­side his womb; maybe in­for­ma­tions for build­ing minds in the whole uni­verse came from a nat­u­ral sys­tem out­side the uni­verse. Why not? con­figu­ra­tion from atoms sys­tem, and they got from par­ti­cles as sys­tems (Sorry, I need stop now but I will come back. Sheers...)

• Wait, you think hu­man ge­netic code has ex­isted, un­changed, since the be­gin­ning of time? Yeah, I can see how that would lead to hu­man ex­cep­tion­al­ism and such. Pretty sure it’s phys­i­cally im­pos­si­ble, though. Or do you just mean it’s the re­sult of a causal chain lead­ing back to the be­gin­ning of time?

• What if mind is product of a hid­den su­pe­rior nat­u­ral sys­tem whose bits-in­for­ma­tion are in­vad­ing our im­me­di­ate world and be­ing ag­gre­gated to our synapses

Well, our per­son­al­ities, mem­o­ries and so on can be af­fected by in­terfer­ing with the brain, and it cer­tainly looks like it’s do­ing some sort of in­for­ma­tion pro­cess­ing (as far as we can tell), so … seems un­likely, to be hon­est. Also, our minds do kind of look evolved to fit our biolog­i­cal niche.

If so, ra­tio­nal­ity as pure product of mind will make the most evolved ra­tio­nal­ist a loser

I’m having real trouble parsing this. Are you saying evolution will make us irrational? Or that rationality is incompatible with Lovecraftian puppetry? Or something completely different?

Here, in Ama­zon jun­gle, lays our real ori­gins.

You … realize humans didn’t evolve in the Amazon, right?

And you see here that this bio­sphere is product of chaos. We are product of chaos, not or­der. It seems to me that we are the flow of or­der lift­ing up from chaos. So, for long term win­ning, those that best rep­re­sents this flow will have bad times be­cause the forces of chaos are the strongest. Then, the win­ners now, are still rep­re­sen­tants of chaos, less evolved...

I’m not sure I’d char­ac­ter­ize the nat­u­ral world as “chaotic” as such. Com­plex, some­times, sure, but it fol­lows some pretty sim­ple rules, and when we de­duce these rules we can ma­nipu­late them.

But it seems to me that above the chaotic bio­sphere I see Cos­mos at or­dered state. So, I sus­pect that this Cos­mos is the ” nat­u­ral” su­per-sys­tem send­ing bits-in­for­ma­tion and mod­el­ling this ter­res­trial chaos into a fu­ture state of order

The uni­verse is definitely or­dered, but don’t for­get evolu­tion can pro­duce some pretty “de­signed” look­ing struc­tures.

What do you think...

I think you sound kind of like a crank, to be hon­est with you. You seem to be treat­ing “or­der” and “chaos” more like el­e­men­tal forces or some­thing, and gen­er­ally sound like you’ve got prob­lems with mag­i­cal think­ing. That said, I had some trou­ble un­der­stand­ing bits of what you wrote, so it’s pos­si­ble I’m in­ad­ver­tently ad­dress­ing a straw­man ver­sion of your claims. Tell me, are you a na­tive English speaker?

• Thanks, Mu­gaSofer, for yours con­struc­tive re­ply. No, I am not a na­tive English and my brain was hard-wired at the sal­vage jun­gle here, so, I think is a good op­por­tu­nity for me de­bat­ing our differ­ent ex­pe­riences and world views. I hope that it must be cu­ri­ous for you too.

You said: ” Well, our per­son­al­ities, mem­o­ries and so on can be af­fected by in­terfer­ing with the brain, and it cer­tainly looks like it’s do­ing some sort of in­for­ma­tion pro­cess­ing (as far as we can tell), so … seems un­likely, to be hon­est.”

Yes, these things (per­son­al­ity, mem­o­ries, etc.) com­poses our ” state of be­ing” and they are merely product of brains/​na­ture. But, we have a real phe­nom­ena where we watch the emer­gence of con­scious­nesses with­out be­ing product of brains: the em­bryo. There is no nat­u­ral ar­chi­tec­ture able to be con­scious of its ex­is­tence, nei­ther are the brains alone. So, where comes from the con­scious state of em­bryos? From a su­pe­rior hi­er­ar­chic sys­tem that ex­ists be­yond his uni­verse (the womb), and this sys­tem is called ” hu­man species” . So, it is not zero the prob­a­bil­ity that hu­man mind is product of a hid­den su­pe­rior nat­u­ral sys­tem whose bits-in­for­ma­tion are in­vad­ing our im­me­di­ate world and be­ing ag­gre­gated to our synapses, be­sides the pos­si­bil­ity that it was en­crypted into our genes (if my mod­els about Ma­trix/​DNA are right).

You said: “Are you say­ing evolu­tion will make us ir­ra­tional? Or that ra­tio­nal­ity is in­com­pat­i­ble with love­craf­tian pup­petry? Or some­thing com­pletely differ­ent?”

No, evolu­tion will make us more suit­able to real nat­u­ral world. But, due the al­ter­na­tion be­tween chaos and or­der, and due our ori­gins com­ing from chaos, the flow of or­der (which is the ba­sis for ra­tio­nal­ity) is the baby and weak force just now. Chaos is dy­ing, or­der is grow­ing, but now, chaos still is the strongest, so. ir­ra­tional­ity and ran­dom­ness are the win­ners, by while.

You said: ” You … re­al­ize hu­man’s didn’t evolve in the Ama­zon, right?”

I don’t un­der­stand your ques­tion. Be­ing still vir­gin and un­touch­able, the el­e­ments of Ama­zon hid­den niches are wit­ness of life’s ori­gins. And we see chaos here. So, our ori­gins came from ter­res­trial chaotic state of Na­ture, which came from or­dered state of Cos­mos… Cyclic al­ter­na­tions.

You said: ” I’m not sure I’d char­ac­ter­ize the nat­u­ral world as “chaotic” as such. Com­plex, some­times, sure, but it fol­lows some pretty sim­ple rules, and when we de­duce these rules we can ma­nipu­late them.”

Nat­u­ral world is the Uni­verse, not this ter­res­trial bio­sphere alone. This bio­sphere is a kind of dis­tur­bance, a noise, in re­la­tion to the or­dered state of Cos­mos. Bio­sphere is product of an en­tropic pro­cess, like the ra­di­a­tion of sun. So, the dis­tur­bance is cor­rected by the or­dered Cos­mos, from which is com­ing the emer­gence of those rules you are talk­ing about. The cu­ri­ous thing is that hu­mans are the car­ri­ers of those rules, we are bring­ing or­der to our sal­vage en­vi­ron­ment.

You said: "The universe is definitely ordered, but don't forget evolution can produce some pretty 'designed'-looking structures."

The Universe, as a conglomerate of galaxies, seems to be a mass with no shape, not a system. We don't know if there is a nucleus, relations among parts, etc. We can't know if it is ordered or chaotic. Evolution is the result of a flow of energy moving inside this Universe, like any fetus is under evolution due to a genetic flow producing more designed-looking structures. The source of this "evolution" is a natural system (the human species) living beyond the fetus' universe (the womb). This is the unique real natural parameter we have for theories about the universe.

You said: "You seem to be treating 'order' and 'chaos' more like elemental forces or something, and generally sound like you've got problems with magical thinking."

It is not magical thinking; it is the normal natural chain of causes and effects. Every system that reaches an ordered state is attacked by entropy, which produces chaos, from which order lifts up again, but each cycle is more complex than the ancestral cycles. At chaotic states, like our biosphere, generations of empty minds are more likely to be winners, while generations of reasonable minds must be losers in the short term and the final winners in the long term. But maybe the jungle is teaching me everything wrong. What do you think?

• Thanks, MugaSofer, for your constructive reply. No, I am not a native English speaker, and my brain was hard-wired in the savage jungle here, so I think this is a good opportunity for me to debate our different experiences and worldviews. I hope it is curious for you too.

It cer­tainly is that.

Yes, these things (personality, memories, etc.) compose our "state of being," and they are merely products of brains/nature.

So … what’s left? Doesn’t that ex­plain ev­ery­thing we mean by “mind”?

But we have a real phenomenon where we watch the emergence of consciousness without it being a product of brains: the embryo. There is no natural architecture able to be conscious of its existence, nor are brains alone. So where does the conscious state of embryos come from? From a superior hierarchical system that exists beyond its universe (the womb), and this system is called the "human species."

So the probability is not zero that the human mind is the product of a hidden superior natural system whose bits of information are invading our immediate world and being aggregated to our synapses, besides the possibility that it was encrypted into our genes (if my models about Matrix/DNA are right).

I’ve replied to this as­ser­tion el­se­where; hope I got the in­ter­pre­ta­tion right.

No, evolution will make us more suitable to the real natural world. But due to the alternation between chaos and order, and due to our origins coming from chaos, the flow of order (which is the basis for rationality) is the baby and weak force just now. Chaos is dying, order is growing, but for now chaos is still the strongest, so irrationality and randomness are the winners, for a while.

You know, I'm not sure what you mean by "chaos". If it's just randomness, rationality can tell you how to choose optimally using probabilities; perhaps that's not what you mean? Is it complexity?

I don't understand your question. Being still virgin and untouched, the elements of the Amazon's hidden niches are witnesses of life's origins. And we see chaos here. So our origins came from the terrestrial chaotic state of Nature, which came from the ordered state of the Cosmos… Cyclic alternations.

Oh, I think I get it; the Ama­zon is em­ble­matic of Earth be­fore civ­i­liza­tion, right? The an­ces­tral en­vi­ron­ment. Which is, nat­u­rally, where we evolved.

The natural world is the Universe, not this terrestrial biosphere alone. This biosphere is a kind of disturbance, a noise, in relation to the ordered state of the Cosmos. The biosphere is the product of an entropic process, like the radiation of the sun. So the disturbance is corrected by the ordered Cosmos, from which comes the emergence of those rules you are talking about. The curious thing is that humans are the carriers of those rules; we are bringing order to our savage environment.

But even the bio­sphere fol­lows laws, even if some­times the re­sults are so com­plex we have trou­ble dis­cern­ing them.

The Universe, as a conglomerate of galaxies, seems to be a mass with no shape, not a system. We don't know if there is a nucleus, relations among parts, etc. We can't know if it is ordered or chaotic. Evolution is the result of a flow of energy moving inside this Universe, like any fetus is under evolution due to a genetic flow producing more designed-looking structures. The source of this "evolution" is a natural system (the human species) living beyond the fetus' universe (the womb). This is the unique real natural parameter we have for theories about the universe.

Sorry; by “evolu­tion” I meant nat­u­ral se­lec­tion. You know, Dar­winism?

It is not magical thinking; it is the normal natural chain of causes and effects. Every system that reaches an ordered state is attacked by entropy, which produces chaos, from which order lifts up again, but each cycle is more complex than the ancestral cycles. At chaotic states, like our biosphere, generations of empty minds are more likely to be winners, while generations of reasonable minds must be losers in the short term and the final winners in the long term. But maybe the jungle is teaching me everything wrong. What do you think?

Well, I un­der­stand phys­i­cally en­tropy is always in­creas­ing, and repli­ca­tors tend to over­run available re­sources and im­prove via se­lec­tion, but I’m not clear on these “cy­cles”.

• Ra­tion­al­ists are the ones who win when things are fair, or when things are un­fair ran­domly over an ex­tended pe­riod. Ra­tion­al­ity is an ad­van­tage, but it is not the only ad­van­tage, not the supreme ad­van­tage, not an ad­van­tage at all in some con­ceiv­able situ­a­tions, and can­not rea­son­ably be ex­pected to pro­duce con­sis­tent win­ning when things are un­fair non-ran­domly. How­ever, it is a cul­tivable ad­van­tage, which is among the things that makes it in­ter­est­ing to talk about.

A ra­tio­nal­ist might be un­for­tu­nate enough that (s)he does not do well, but ce­teris paribus, (s)he will do bet­ter. Maybe that could be the slo­gan—“ra­tio­nal­ists do bet­ter”? With the im­plied par­en­thet­i­cal “(than they would do if they were not ra­tio­nal­ists, with the caveat that you can con­coct un­likely situ­a­tions in which ra­tio­nal­ity is an im­ped­i­ment to some val­ues of “do­ing well”)”.

• “You can’t re­li­ably do bet­ter than ra­tio­nal­ity in a non-patholog­i­cal uni­verse” is prob­a­bly closer to the math.

• It's impossible to add substance to "non-pathological universe." I suspect circularity: a non-pathological universe is one that rewards rationality; rationality is the disposition that lets you win in a non-pathological universe.

You need to at­tempt to define terms to avoid these traps.

• Patholog­i­cal uni­verses are ones like: where there is no or­der and the right an­swer is ran­domly placed. Or where the facts are mal­i­ciously ar­ranged to en­trap in a re­cur­sive red her­ring where the sim­plest well-sup­ported an­swer is always wrong, even af­ter try­ing to out-think the mal­ice. Or where the whole uni­verse is one flawless red her­ring (“God put the fos­sils there to test your faith”).

“No free lunch” de­mands they be math­e­mat­i­cally con­ceiv­able. But to as­sert that the real uni­verse be­haves like this is to go mad.

• Since we learn rea­son from the uni­verse we’re in, if we were in a uni­verse you’re refer­ring to as “patholog­i­cal”, we (well, sen­tients, if any) would have learned a method of ar­riv­ing at con­clu­sions which matched that. Like­wise, since the uni­verse pro­duced math, I don’t think it has any mean­ing to talk of whether uni­verses with differ­ent fun­da­men­tal rules are “math­e­mat­i­cally con­ceiv­able”.

• http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization

No search al­gorithm beats ran­dom pick­ing in the to­tally gen­eral case. This im­plies the to­tally gen­eral case must in­clude an equal bal­ance of pathol­ogy and san­ity. In­tu­itively, a prob­lem could be struc­tured so ev­ery good de­ci­sion gives a bad re­sult.

Edit: this post gives a perfect ex­am­ple of a patholog­i­cal prob­lem: there is only one de­ci­sion to be made, a Bayesian loses, a ran­dom picker gets it right half the time and an anti-Bayesian wins.

How­ever we seem to be liv­ing in a sane uni­verse.
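The no-free-lunch balance can be checked in miniature. The sketch below (a tiny invented domain and two arbitrary deterministic search orders; nothing here comes from the linked article) averages performance over every possible objective function and finds the two searches exactly tied:

```python
from itertools import product

domain = range(4)
# Two arbitrary deterministic, non-repeating search orders ("algorithms").
orders = {"ascending": [0, 1, 2, 3], "descending": [3, 2, 1, 0]}

def avg_best_after_k(order, k):
    # Average best value found after k evaluations, taken over ALL
    # 2**4 possible objective functions f: domain -> {0, 1}.
    funcs = list(product([0, 1], repeat=len(domain)))
    return sum(max(f[x] for x in order[:k]) for f in funcs) / len(funcs)

results = {name: [avg_best_after_k(order, k) for k in (1, 2, 3)]
           for name, order in orders.items()}
# The two searches are identical on average: any edge one algorithm has
# on some objectives is exactly repaid on the "pathological" ones.
```

Both rows come out [0.5, 0.75, 0.875]: averaged over every objective, performance is algorithm-independent, which is the no-free-lunch result in its smallest form.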

• (like­wise the fair­ness lan­guage of the par­ent post)

• Maybe that could be the slo­gan—“ra­tio­nal­ists do bet­ter”? With the im­plied par­en­thet­i­cal “(than they would do if they were not ra­tio­nal­ists, with the caveat that you can con­coct un­likely situ­a­tions in which ra­tio­nal­ity is an im­ped­i­ment to some val­ues of “do­ing well”)”.

By par­allel con­struc­tion with the epistemic ra­tio­nal­ity of the site’s name, per­haps “ra­tio­nal­ists make fewer mis­takes”?

• Ra­tion­al­ity seems like a good name for the ob­vi­ous ideal that you should be­lieve things that are true and use this true knowl­edge to achieve your goals. Be­cause so­cial or­ganisms are weird in ways whose de­tails are be­yond the scope of this com­ment, striv­ing to be more ra­tio­nal might not pay off for a hu­man seek­ing to move up in a hu­man world—but aside from this minor de­tail re­lat­ing to an ex­tremely patholog­i­cal case, it’s still prob­a­bly a good idea.

• Hmmm. Un­less you are sug­gest­ing a differ­ent defi­ni­tion for ra­tio­nal­ity, I think I dis­agree. If an athe­ist has the goal of gain­ing busi­ness con­tacts (or some­thing) and he can fur­ther this goal by join­ing a church, and im­per­son­at­ing the ir­ra­tional be­hav­iors he sees, he isn’t be­ing ir­ra­tional. While be­hav­iors that tend to have their ori­gins in ir­ra­tional thought are some­times re­warded by hu­man so­ciety, the ir­ra­tional­ity it­self never is. I think be­com­ing more ra­tio­nal will help a per­son move up in a hu­man sta­tus hi­er­ar­chy, if that is the ra­tio­nal­ist’s goal. I think we have this stereo­typed idea of ra­tio­nal­ists as Asperger’s-af­flicted know-it-alls who are un­able to deal with ir­ra­tional hu­mans. It sim­ply doesn’t have to be that way.

• I denotatively agree with your conclusion, but I think that many if not most aspiring rationalists are incapable of that level of Machiavellianism. Suppose that your typical human cares about both social status and being forthright, and that there are social penalties for making certain true but unpopular statements. Striving for rationality in this situation could very well mean having to choose between popularity and honesty, whereas the irrationalist can have her cake and eat it, too. So yes, some may choose popularity—but you see, it is a choice.

• I always thought that the ma­jor­ity of ex­po­si­tion in your New­comb ex­am­ple went to­wards, not “Ra­tion­al­ists should WIN”, but a weaker claim which seems to be a smaller in­fer­en­tial dis­tance from most would-be ra­tio­nal­ists:

Ra­tion­al­ists should not sys­tem­at­i­cally lose; what­ever sys­tem­at­i­cally loses is not ra­tio­nal­ity.

(Of course, one needs the log­i­cal caveat that we’re not deal­ing with a pure ir­ra­tional­ist-re­warder; but such things don’t seem to ex­ist in this uni­verse at the mo­ment.)

• Re: Ra­tion­al­ists should not sys­tem­at­i­cally lose; what­ever sys­tem­at­i­cally loses is not ra­tio­nal­ity.

Even if you are play­ing go with a 9-stone hand­i­cap against a shodan?

• “Lose” = “perform worse than an­other (us­able) strat­egy, all prefer­ences con­sid­ered”.

• Nick, show me a dictionary with this in and we can talk. Otherwise, it seems as though you are redefining a perfectly common and ordinary English word to mean something esoteric and counter-intuitive.

• Well, I don't think I'd fare better by thinking less rationally; and if I really needed to find a way to win, rationality at least shouldn't hurt me in the process.

I was hop­ing to be pithy by ne­glect­ing a few im­plicit as­sump­tions. For one, I mean that (in the ab­sence of di­rect re­wards for differ­ent cog­ni­tive pro­cesses) good ra­tio­nal­ists shouldn’t sys­tem­at­i­cally lose when they can see a strat­egy that sys­tem­at­i­cally wins. Of course there are Kobayashi Maru sce­nar­ios where all the ra­tio­nal­ity in the world can’t win, but that’s not what we’re talk­ing about.

• Re: “First, fore­most, fun­da­men­tally, above all else: Ra­tional agents should WIN.”

In an at­tempt to sum­marise the ob­jec­tions, there seem to be two fairly-fun­da­men­tal prob­lems:

1. Ra­tional agents try. They can­not nec­es­sar­ily win: win­ning is an out­come, not an ac­tion;

2. “Win­ning” is a poor syn­onym for “in­creas­ing util­ity”: some­times agents should min­imise their losses.

“Ra­tion­al­ists max­imise ex­pected util­ity” would be a less con­tro­ver­sial for­mu­la­tion.
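A minimal sketch of that formulation (the actions and payoffs are invented for illustration): maximising expected utility covers the loss-minimising case automatically, since the least-bad option is simply the one with the highest expected utility.

```python
# Two hypothetical actions, each a list of (probability, utility) outcomes.
actions = {
    "gamble": [(0.5, -100), (0.5, 0)],   # risky: may lose big
    "insure": [(1.0, -30)],              # certain small loss
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

# Every option loses here; the maximiser simply loses least.
best = max(actions, key=lambda a: expected_utility(actions[a]))
```

Here `best` comes out as `"insure"` (expected utility −30 versus −50): "minimising losses" is just expected-utility maximisation over negative utilities.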

• I agree with your two prob­lems, but the prob­lem with your al­ter­na­tive and so many oth­ers pre­sented here is that it doesn’t so strongly speak to the dis­tinc­tion which EY means to draw, be­tween want­ing to be seen to have fol­lowed the forms for max­imis­ing ex­pected util­ity and ac­tu­ally seek­ing to max­imise ex­pected util­ity.

Also, of course, one who at each mo­ment makes the de­ci­sion that max­imises ex­pected fu­ture util­ity defects against Clippy in both Pri­soner’s Dilemma and Parfit’s Hitch­hiker sce­nar­ios, and ar­guably two-boxes against Omega, and by EY’s defi­ni­tion that counts as “not win­ning” be­cause of the nega­tive con­se­quences of Clippy/​Omega know­ing that that’s what we do.

• Re: “it doesn’t so strongly speak to the dis­tinc­tion which EY means to draw”

I wasn’t try­ing to do that. It seems like a non-triv­ial con­cept. Is it im­por­tant to try and cap­ture that dis­tinc­tion in a slo­gan?

Re: “one who at each mo­ment makes the de­ci­sion that max­imises ex­pected fu­ture util­ity defects”

Ex­pected util­ity max­imis­ing agents don’t have com­mit­ment mechanisms, and can’t be trusted to make promises? I am scep­ti­cal. In my view, you can ex­press prac­ti­cally any agent as an ex­pected util­ity max­imiser. It seems easy enough to imag­ine com­mit­ment mechanisms. I don’t see where the prob­lem is.

• In the Least Con­ve­nient Pos­si­ble World, I imag­ine no­body has a com­mit­ment mechanism in the Pri­soner’s Dilemma.

• You can’t claim com­mit­ment mechanisms are not pos­si­ble when in fact they ev­i­dently are. “Always co­op­er­ate” is an ex­am­ple of a strat­egy which is com­mit­ted to co­op­er­ate in the pris­oner’s dilemma.

• “Com­mit­ment mechanism” typ­i­cally means some way to im­pose a cost on a party for break­ing the com­mit­ment, oth­er­wise it is, in the game the­o­rist’s par­lance, “cheap talk” in­stead. In the one-shot PD, there is by defi­ni­tion no com­mit­ment mechanism, and it was in this LCPW that Eliezer’s de­ci­sion the­o­ries are fre­quently tested.

You’re talk­ing about the re­peated PD with “always co­op­er­ate,” rather than the one-shot ver­sion, which was the sce­nario in which we found our­selves with Clippy. Please un­der­stand—I’m not say­ing EU-max­ing agents do not have com­mit­ment mechanisms in gen­eral, just that the PD was for­mu­lated ex­pressly to show the break­down of co­op­er­a­tion un­der cer­tain cir­cum­stances.

Re­gard­less, always co­op­er­ate definitely does not max­i­mize ex­pected util­ity in the vast ma­jor­ity of en­vi­ron­ments. In­deed, it is not part of liter­ally any sta­ble equil­ibrium in a finite-time RPD. But more to the point, AC is only “com­mit­ted” in the sense that, if given no op­por­tu­ni­ties af­ter­ward to make de­ci­sions, it will ap­pear to pro­duce com­mit­ted be­hav­ior. It is un­sta­ble pre­cisely be­cause it re­quires no fur­ther de­ci­sion points, where the RPD (in which it is played) has them ev­ery round.

• You and I are us­ing differ­ent defi­ni­tions of “com­mit­ment mechanism”, then.

The idea I am talk­ing about is demon­strat­ing to the other party that you are a nice, co­op­er­a­tive agent. For ex­am­ple by show­ing the other agent your source code. That con­cept has noth­ing to do with crime and pun­ish­ment.

The type of com­mit­ment mechanism I am talk­ing about is one that con­vinc­ingly demon­strates that you are com­mit­ted to a par­tic­u­lar course of ac­tion un­der some speci­fied cir­cum­stances. That in­cludes com­mit­ment via threat of re­tri­bu­tion—but also in­cludes some other things.

AC’s sta­bil­ity is tan­gen­tial to my point. If you want to com­plain that AC is un­sta­ble, per­haps con­sider TFT in­stead. That is ex­actly the same as AC on the first round.
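To make the AC/TFT contrast concrete, here is a minimal iterated-PD simulation (a sketch: the standard illustrative payoff matrix and a 10-round match, neither specified anywhere in the thread):

```python
# Iterated prisoner's dilemma sketch: "always cooperate" (AC) versus
# tit-for-tat (TFT) against a defector.  Payoffs are the usual
# illustrative values: mutual cooperation 3, mutual defection 1,
# sucker 0, temptation 5.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(opponent_moves):
    return "C"

def always_defect(opponent_moves):
    return "D"

def tit_for_tat(opponent_moves):
    # Cooperate first, then copy the opponent's last move.
    return opponent_moves[-1] if opponent_moves else "C"

def play(strat_a, strat_b, rounds=10):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(moves_b), strat_b(moves_a)  # each sees the other's past
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b
```

`play(always_cooperate, always_defect)` gives (0, 50), while `play(tit_for_tat, always_defect)` gives (9, 14): TFT is identical to AC on the first round, but leaves almost nothing for a defector to collect afterwards.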

• Also, of course, one who at each mo­ment makes the de­ci­sion that max­imises ex­pected fu­ture util­ity defects against Clippy in both Pri­soner’s Dilemma and Parfit’s Hitch­hiker sce­nar­ios, and ar­guably two-boxes against Omega, and by EY’s defi­ni­tion that counts as “not win­ning” be­cause of the nega­tive con­se­quences of Clippy/​Omega know­ing that that’s what we do.

I think I’m mi­s­un­der­stand­ing you here be­cause this looks like a con­tra­dic­tion. Why does mak­ing the de­ci­sion that max­i­mizes ex­pected util­ity nec­es­sar­ily have nega­tive con­se­quences? It sounds like you’re work­ing un­der a de­ci­sion the­ory that in­volves prefer­ence re­ver­sals.

• I’m talk­ing about the differ­ence be­tween CDT, which stiffs the lift-giver in Parfit’s Hitch­hiker and so never gets a lift, and other de­ci­sion the­o­ries.

• Oh, I see. I thought you were say­ing an op­ti­mal de­ci­sion the­ory stiffed the lift-giver.

• I hope I’ve be­come clearer in the four years since I wrote that!

• . . . did not no­tice the date-stamp. Good thing thread necros are al­lowed here.

• “Ra­tion­al­ists max­imise ex­pected util­ity” would be a less con­tro­ver­sial for­mu­la­tion.

But, alas, less catchy.

• As con­ta­gious memes go, “ra­tio­nal­ists should win” seems to be rather pathogenic to me. A pro­posed ra­tio­nal­ist slo­gan shouldn’t need so many foot­notes. For the sake of minds ev­ery­where, I think it would be best to try to kill it off in its early stages.

• I much pre­fer “ra­tio­nal­ists should win” be­cause it’s sim­ple, ac­cessible lan­guage. Makes this ar­ti­cle more pow­er­ful than it would oth­er­wise be. Every­one gets win­ning; how many peo­ple find terms like ex­pected util­ity max­imi­sa­tion mean­ingful on a gut level?

• Ra­tion­al­ity is made of win.

Duhhh!

(Cf.)

• It seems to me that the dis­agree­ment isn’t so much about win­ning as the ex­pec­ta­tion.

In fact I don't really agree with this winning-versus-belief framing of rationality.

Both ap­proaches are try­ing to max­i­mize their ex­pected pay­out. Eliezer’s ap­proach has a wider hori­zon of what it con­sid­ers when figur­ing out what the uni­verse is like.

The standard approach holds that since the contents of the boxes are already determined at the time of the choice, taking both will always put you \$1000 ahead.

Eliezer looks (I think) out to the most likely final outcomes (or looks back at how the chain of causality of one's decision is commingled with the chain of causality of Omega's decision).

I think the flaw in the standard approach is not 'not winning' but a false belief about the relationship between the boxes and your choices (the belief that there isn't any). Once you have the right answer, making the choice that wins is obvious.

The way we would know that the stan­dard ap­proach is the wrong one is by look­ing at re­sults. That a cer­tain set of choices con­sis­tently wins isn’t ev­i­dence that it is ra­tio­nal, it is ev­i­dence that it wins. Believ­ing that it wins is ra­tio­nal.

So maybe: “Ra­tion­al­ity is learn­ing how to win”
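Taking the "look at results" test literally, with the 99-in-100 predictor accuracy from the post, the expected winnings can be computed directly (a sketch that takes the observed frequencies at face value, with no stance on the causal dispute):

```python
# Expected winnings in Newcomb's problem, using the predictor's
# 99-in-100 track record as a plain frequency.
ACCURACY = 0.99
BOX_A, BOX_B = 1_000, 1_000_000

# One-boxers: predicted correctly with p = 0.99, so box B is full.
ev_one_box = ACCURACY * BOX_B + (1 - ACCURACY) * 0

# Two-boxers: predicted correctly with p = 0.99, so box B is empty.
ev_two_box = ACCURACY * BOX_A + (1 - ACCURACY) * (BOX_A + BOX_B)

# Roughly $990,000 versus $11,000: "believing that it wins" has a
# lot of observed results behind it.
```

On these numbers one-boxing dominates by nearly two orders of magnitude, which is exactly the "consistently wins" evidence the comment points at.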

• “Ra­tion­al­ity is learn­ing how to win”

I like that.

• I would like it too if it weren't for the fact that it, well, isn't.

• I guess when I look over the com­ments, the prob­lem with the phrase­ol­ogy is that peo­ple seem to in­evitably be­gin de­bat­ing over whether ra­tio­nal­ists win and ask­ing how much they win—the prop­er­ties of a fixed sort of crea­ture, the “ra­tio­nal­ist”—rather than say­ing, “What wins sys­tem­at­i­cally? Let us define ra­tio­nal­ity ac­cord­ingly.”

Not sure what sort of catch­phrase would solve this.

• Yes. Rationalism shouldn't be seen as a bag of discrete tricks, but rather as the means for achieving any given end—what it takes to do something you want to do. The particulars will vary, of course, depending on the end in question, but the rational individual should do better at figuring them out.

On a side note, I’m not sure com­ing up with bet­ter slo­gans, catch­phrases, and ne­ol­o­gisms is the right thing to be aiming for.

• Do not un­der­es­ti­mate the power of po­etry.

• ‘What­ever wins is ra­tio­nal’?

‘Win­ners are ra­tio­nal’?

‘Ra­tion­al­ity is win­ning’?

Hm. Slo­ga­neer­ing is harder than it looks.

• It runs into prob­lems el­se­where, but what about “Ra­tion­al­ism should win” ?

• Well, that’s wrong, but think­ing about why it’s wrong leads me to re­al­ize that maybe “Ra­tion­al­ity should win” would have been a bet­ter move.

But I did also want to con­vey the idea that as­piring to be a ra­tio­nal­ist means as­piring to be stronger, some­thing more formidable than a de­bat­ing style… well, I guess “ra­tio­nal­ity should win” con­veys a bit of that too.

• I don’t think I buy this for New­comb-like prob­lems. Con­sider Omega who says, “There will be \$1M in Box B IFF you are ir­ra­tional.”

Ra­tion­al­ity as win­ning is prob­a­bly sub­ject to a whole fam­ily of Rus­sell’s-Para­dox-type prob­lems like that. I sup­pose I’m not sure there’s a bet­ter no­tion of ra­tio­nal­ity.

• What you give is far harder than a New­comb-like prob­lem. In New­comb-like prob­lems, Omega re­wards your de­ci­sions, he isn’t look­ing at how you reach them. This leaves you free to op­ti­mize those de­ci­sions.

• What you give is far harder than a New­comb-like prob­lem. In New­comb-like prob­lems, Omega re­wards your de­ci­sions, he isn’t look­ing at how you reach them.

You mi­s­un­der­stand. In my var­i­ant, Omega is also not look­ing at how you reach your de­ci­sion. Rather, he is look­ing at you be­fore­hand—“scan­ning your brain”, if you will—and eval­u­at­ing the kind of per­son you are (i.e., how you “would” be­have). This, along with the choice you make, de­ter­mines your later re­ward.

In the clas­si­cal prob­lem, (un­less you just as­sume back­wards cau­sa­tion,) what Omega is do­ing is as­sess­ing the kind of per­son you are be­fore you’ve phys­i­cally in­di­cated your choice. You’re re­warded IFF you’re the kind of per­son who would choose only box B.

My var­i­ant is ex­actly sym­met­ri­cal: he as­sesses whether you are the kind of per­son who is ra­tio­nal, and re­sponds as I out­lined.

• We have such an Omega: we just re­fer to it differ­ently.

After all, we are used to treat­ing our genes and our en­vi­ron­ments as definite in­fluences on our abil­ity to Win. Taller peo­ple tend to make more money; Omega says “there will be \$1mil in box B if you have alle­les for height.”

If Omega makes de­ci­sions based on prop­er­ties of the agent, and not on the de­ci­sions ei­ther made or pre­dicted to be made by the agent, then Omega is no differ­ent from, well, a lot of the world.

Ra­tion­al­ity, then, might be bet­ter re­defined un­der these ob­ser­va­tions as “mak­ing the de­ci­sions that Win when­ever such de­ci­sions ac­tu­ally af­fect one’s prob­a­bil­ity of Win­ning,” though I pre­fer Eliezer’s more gen­eral rules plus the tacit un­der­stand­ing that we are only in­clud­ing situ­a­tions where de­ci­sions make a differ­ence.

• Quot­ing my­self:

(though I don’t see how you iden­tify any dis­tinc­tion be­tween “prop­er­ties of the agent” and “de­ci­sions . . . pre­dicted to be made by the agent” or why you care about it).

I’ll go fur­ther and say this dis­tinc­tion doesn’t mat­ter un­less you as­sume that New­comb’s prob­lem is a time para­dox or some other kind of back­wards cau­sa­tion.

This is all tan­gen­tial, though, I think.

• Yes, all well and good (though I don't see how you identify any distinction between "properties of the agent" and "decisions . . . predicted to be made by the agent", or why you care about it). My point is that a concept of rationality-as-winning can't have a definite extension, say, across the domain of agents, because of the existence of Russell's-Paradox problems like the one I identified.

This is perfectly ro­bust to the point that weird and seem­ingly ar­bi­trary prop­er­ties are re­warded by the game known as the uni­verse. Your pro­posed re­defi­ni­tion may ac­tu­ally dis­agree with EY’s the­ory of New­comb’s prob­lem. After all, your de­ci­sion can’t empty box B, since the con­tents of box B are de­ter­mi­nate by the time you make your de­ci­sion.

• After all, your de­ci­sion can’t empty box B, since the con­tents of box B are de­ter­mi­nate by the time you make your de­ci­sion.

Hello. My name is Omega. Un­til re­cently I went around claiming to be all-know­ing/​psy­chic/​what­ever, but now I un­der­stand ly­ing is Wrong, so I’m turn­ing over a new leaf. I’d like to offer you a game.

Here are two boxes. Box A contains \$1,000, box B contains \$1,000,000. Both boxes are covered by a touch-sensitive layer. If you choose box B only (please signal that by touching box B), it will send out a radio signal to box A, which will promptly disintegrate. If you choose both boxes (please signal that by touching box A first), a radio signal will be sent out to box B, which will disintegrate its contents, so opening it will reveal an empty box.

(I got the dis­in­te­grat­ing tech­nol­ogy from the wreck of a UFO that crashed into my barn, but that’s not rele­vant here.)

I'm afraid that if I or my gadgets detect any attempt to tamper with the operation of my boxes, I will be forced to disqualify you.

In case there is doubt, this is the same game I used to offer back in my de­ceit­ful days. The differ­ence is, now the player knows the rules are en­forced by cold hard elec­tron­ics, so there’s no temp­ta­tion to try and out­smart any­body.

So, what will it be?

• Yes, you are chang­ing the hypo. Your Omega dummy says that it is the same game as New­comb’s prob­lem, but it’s not. As VN notes, it may be equiv­a­lent to the ver­sion of New­comb’s prob­lem that as­sumes time travel, but this is not the clas­si­cal (or an in­ter­est­ing) state­ment of the prob­lem.

• What is your point? You seem to be giv­ing a metaphor for solv­ing the prob­lem by imag­in­ing that your ac­tion has a di­rect con­se­quence of chang­ing the past (and as a re­sult, con­tents of the box in the pre­sent). More about that in this com­ment.

• Naive ar­gu­ment com­ing up.

How Omega decides what to predict, or what makes its stated condition for B (aka the result of "prediction") come true, is not relevant. Ignoring the data that says it's always/almost always correct, however, seems … not right. Any decision must be made with the understanding that Omega is most likely to predict it. You can't outsmart it by failing to update its expected state of mind at the last second. The moment you decide to two-box is the moment Omega predicted, when it chose to empty box B.

Con­sider this:

Andy: “Sure, one box seems like the good choice, be­cause Omega would take the mil­lion away oth­er­wise. OK. … Now that the boxes are in front of me, I’m think­ing I should take both. Be­cause, you know, two is bet­ter than one. And it’s already de­cided, so my choice won’t change any­thing. Both boxes.”

Barry: "Sure, one box seems like the good choice, because Omega would take the million away otherwise. OK. … Now that the boxes are in front of me, I'm thinking I should take both. Because, you know, two is better than one. Of course the outcome still depends on what Omega predicted. Say I choose both boxes. So if Omega's prediction is correct this time, I will find an empty B. But maybe Omega was wrong THIS time. Sure, and maybe THIS time I will also win the lottery. How it would have known is not relevant. The fact that O already acted on its prediction doesn't make it more likely to be wrong. Really, what is the dilemma here? One box."

Ok, I don’t ex­pect that I’m the first per­son to say all this. But then, I wouldn’t have ex­pected any­body to two-box, ei­ther.

• ma­jor said:

Ig­nor­ing the data that says it’s always/​al­most always cor­rect, how­ever, seems … not right.

You’re not the only per­son to won­der this. Either I’m miss­ing some­thing, or two-box­ers just fail at in­duc­tion.

I have to won­der how two-box­ers would do on the “Hot Stove Prob­lem.”

In case you guys haven’t heard of such a ma­jor prob­lem in philos­o­phy, I will briefly ex­plain the Hot Stove Prob­lem:

You have touched a hot stove 100 times. 99 times you have been burned. Noth­ing has changed about the stove that you know about. Do you touch it again?
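For what it's worth, plain induction handles the stove. Under Laplace's rule of succession (this sketch assumes a uniform prior over the stove's burn propensity, which is an assumption of the example, not part of the comment), 99 burns in 100 touches give about a 98% chance of a burn next time:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    # P(next trial is a success) after observing `successes` in `trials`,
    # starting from a uniform prior (Laplace's rule): (k + 1) / (n + 2).
    return Fraction(successes + 1, trials + 2)

p_burn = rule_of_succession(99, 100)  # 100/102, about 0.98
```

So induction is overwhelming here: don't touch it again.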

• I can see the re­la­tion to New­comb—this is also a weird coun­ter­fac­tual that will never hap­pen. I haven’t de­liber­ately touched a hot stove in my adult life, and don’t ex­pect to. I cer­tainly won’t get to 99 times.

• If one defines ra­tio­nal­ity in some way that isn’t about win­ning, your ex­am­ple shows that ra­tio­nal­ists-in-such-a-sense might not win.

If one defines ra­tio­nal­ity as ac­tu­ally win­ning, your ex­am­ple shows that there are things that even Omega can­not do be­cause they in­volve log­i­cal con­tra­dic­tion.

If one defines ra­tio­nal­ity as some­thing like “ex­pected win­ning given one’s model of the uni­verse” (for quib­bles, see be­low), your ex­am­ple shows that you can’t co­her­ently carry around a model of the uni­verse that in­cludes a su­per­be­ing who de­liber­ately acts so as to in­val­i­date that model.

I find all three of these things rather un­sur­pris­ing.

The tra­di­tional form of New­comb’s prob­lem doesn’t in­volve a su­per­be­ing de­liber­ately act­ing so as to in­val­i­date your model of the uni­verse. That seems like a big enough differ­ence from your ver­sion to in­val­i­date in­fer­ences of the form “there’s no such thing as act­ing ra­tio­nally in grob­stein’s ver­sion of New­comb’s prob­lem; there­fore it doesn’t make sense to use any ver­sion of New­comb’s prob­lem in form­ing one’s ideas about what con­sti­tutes act­ing ra­tio­nally”.

• What is it, pray tell, that Omega can­not do?

Can he not scan your brain and de­ter­mine what strat­egy you are fol­low­ing? That would be odd, be­cause this is no stronger than the origi­nal New­comb prob­lem and does not seem to con­tain any log­i­cal im­pos­si­bil­ities.

Can he not com­pute the strat­egy, S, with the prop­erty “that at each mo­ment, act­ing as S tells you to act—given (1) your be­liefs about the uni­verse at that point and (2) your in­ten­tion of fol­low­ing S at all times—max­i­mizes your net util­ity [over all time]?” That would be very odd, since you seem to be­lieve a reg­u­lar per­son can com­pute S. If you can do it, why not Omega? (NB, no, it doesn’t help to define an ap­prox­i­ma­tion of S and use that. If it’s ra­tio­nal, Omega will pun­ish you for it. If it’s not, why are you do­ing it?)

Can he not com­pare your strat­egy to S, given that he knows the value of each? That seems odd, be­cause a push­down au­toma­ton could make the com­par­i­son. Do you re­quire Omega to be weaker than a push­down au­toma­ton?

No?

Then is it pos­si­ble, maybe, that the prob­lem is in the defi­ni­tion of S?

• What is it, pray tell, that Omega can­not do?

Well, for in­stance, he can­not make 1+1=3. And, if one defines ra­tio­nal­ity as ac­tu­ally win­ning then he can­not act in such a way that ra­tio­nal peo­ple lose. This is perfectly ob­vi­ous; and, in case you have mi­s­un­der­stood what I wrote (as it looks like you have), that is the only thing I said that Omega can­not do.

In the dis­cus­sion of strat­egy S, my claim was not about what Omega can do but about what you (a per­son at­tempt­ing to im­ple­ment such a strat­egy) can con­sis­tently in­clude in your model of the uni­verse. If you are an S-ra­tio­nal agent, then Omega may de­cide to screw you over, in which case you lose; that’s OK (as far as the no­tion of ra­tio­nal­ity goes; it’s too bad for you) be­cause S doesn’t pur­port to guaran­tee that you don’t lose.

What S does pur­port to do is to ar­range that, in so far as the uni­verse obeys your (in­com­plete, prob­a­bil­is­tic, …) model of it, you win on av­er­age. Omega’s malfea­sance is only a prob­lem for this if it’s in­cluded in your model. Which it can’t be. Hence:

what your ex­am­ple shows [...] is that you can’t con­sis­tently ex­pect Omega to act in a way that falsifies your be­liefs and/​or in­val­i­dates your strate­gies for act­ing.

(Actually, I think that’s not quite right. You could probably consistently expect that, provided your expectations about how he’s going to do it were vague enough.)

I did not claim, nor do I be­lieve, that a reg­u­lar per­son can com­pute a perfectly ra­tio­nal strat­egy in the sense I de­scribed. Nor do I be­lieve that a reg­u­lar per­son can play chess with­out mak­ing any mis­takes. None the less, there is such a thing as play­ing chess well; and there is such a thing as be­ing (im­perfectly, but bet­ter than one might be) ra­tio­nal. Even with a defi­ni­tion of the sort Eliezer likes.

• The ra­tio­nal­ity that doesn’t se­cure your wish isn’t the true ra­tio­nal­ity.

Win­ning has no fixed form. You’ll do what­ever is needed to suc­ceed, how­ever origi­nal or far fetched it would sound. How it sounds is ir­rele­vant, how it works is the crux.

And if at first what you tried didn’t work, then you’ll learn, adapt, and try again, making no pause for excuses. If you merely want to succeed, you’ll be firm as a rock, relentless in your attempts to find the path to success.

And if your winning didn’t go as smoothly or as well as you wanted or thought it should, then learn, adapt, and try again. Think outside of the box; self-recurse on winning itself. Eventually, you should refine and sharpen your methods into a tree, from general to specialized.

That tree will have a trunk of gen­eral cases and meth­ods used to solve those, and any case that lies ahead, up­wards on the tree; and the higher you go, the more spe­cial­ized the method, the rarer the case it solves. The tree isn’t fixed ei­ther, it can and will grow and change.

• Re: The ra­tio­nal­ity that doesn’t se­cure your wish isn’t the true ra­tio­nal­ity.

Again with the ex­am­ple of hand­i­cap chess. You start with no knight. You wish to win. Ac­tu­ally you lose. Does that mean you were be­hav­ing ir­ra­tionally? No, of course not! It is not whether you win or lose, but how you play the game.

• Yes!

Ra­tion­al­ity is Messy, Uncer­tain and Fum­bling.

The ex­pla­na­tion af­ter­wards looks Neat, Cer­tain and Cut ’n Dried.

• Say “Ra­tion­al­ists are” in­stead of “Ra­tion­al­ity is” and I’ll agree with that.

• I am won­der­ing, if this differ­ence makes a differ­ence.

“Ra­tion­al­ity” is of course a nom­i­nal­i­sa­tion—you can’t put it in a wheelbar­row—so it is an ab­strac­tion that can mean many things. “Ra­tion­al­ists” are more con­crete.

However, the activity of rationality is dependent on a carrier (the rationalist). No carrier, no rationality. The activity of rationality is messy, uncertain and fumbling. Would non-human carriers of rationality be less messy? Maybe they would be quicker, and the quickness would disguise the messiness. Maybe they would turn down fewer blind alleys, but surely they are as blind as us.

Thus I do not see a differ­ence that makes a differ­ence.

• What about cases where any ra­tio­nal course of ac­tion still leaves you on the los­ing side?

Although this may seem to be impossible according to your definition of rationality, I believe it’s possible to construct such a scenario because of the fundamental limitations of a human brain’s ability to simulate.

In previous posts you’ve said that, at worst, the rationalist can simply simulate the ‘irrational’ behaviour that is currently the winning strategy. I would contend that humans can’t simulate effectively enough for this to be an option. After all, we know that several biases stem from our inability to effectively simulate our own future emotions, so to effectively simulate an entire other being’s response to a complex situation would seem to be a task beyond the current human brain.

As a con­crete ex­am­ple I might sug­gest the abil­ity to lie. I be­lieve it’s fairly well es­tab­lished that hu­mans are not hugely effec­tive liars and there­fore the most effec­tive way to lie is to truly be­lieve the lie. Does this not strongly sug­gest that limi­ta­tions of simu­la­tion mean that a ra­tio­nal course of ac­tion can still be beaten by an ir­ra­tional one?

I’m not sure that even if this is true it should affect a universal definition of rationality, but it would place bounds on the effectiveness of rationality in beings of limited simulation capacity.

• If humans are imperfect actors, then in situations (such as a game of chicken) in which it is better to (1) be irrational and seen as irrational than it is to (2) be rational and seen as rational,

then the ra­tio­nal ac­tor will lose.

Of course hold­ing con­stant ev­ery­one else’s be­liefs about you, you always gain by be­ing more ra­tio­nal.

• Given that I one-box on New­comb’s Prob­lem and keep my word as Parfit’s Hitch­hiker, it would seem that the ra­tio­nal course of ac­tion is to not steer your car even if it crashes (if for some rea­son win­ning that game of chicken is the most im­por­tant thing in the uni­verse).

• You are play­ing chicken with your ir­ra­tional twin. Both of you would rather sur­vive than win. Your twin, how­ever, doesn’t un­der­stand that it’s pos­si­ble to die when play­ing chicken. In the game your twin both sur­vives and wins whereas you sur­vive but lose.

• Then you mur­der the twin prior to the game of chicken, and fake his suicide. Or you in­timi­date the twin, us­ing your ad­vanced ra­tio­nal skills to de­ter­mine how ex­actly to best fill them with fear and doubt.

But before murdering or risking an uncertain intimidation feint, there’s another question you need to ask yourself. How certain are you that the twin is irrational? The Cold War was (probably) a perceptual error; neither side realized that they were in a prisoner’s dilemma. They both assumed that the other side preferred “unbalanced armament” over “mutual armament” over “mutual disarmament;” in reality, the last two should have been switched.

Worst case sce­nario? You die play­ing chicken, be­cause the stakes were worth it. The Ra­tional path isn’t always nice.

(There are some eth­i­cal premises im­plicit in this ar­gu­ment, premises which I plan to ar­gue are nat­u­ral deriva­tives from Game The­ory… but I’m still work­ing on that ar­ti­cle.)

• My an­swer to that one is that I don’t play chicken in the first place un­less the stake is some­thing I’m pre­pared to die for.

• There are lots of chicken like games that don’t in­volve death. For ex­am­ple, your boss wants some task done and ei­ther you or a co-worker can do it. The worst out­come for both you and the co-worker is for the task to not get done. The best is for the other per­son to do the task.

• My an­swer still ap­plies—I’m not go­ing to make a song and dance about who does it, un­less the other guy has been sys­tem­at­i­cally not pul­ling his weight and it’s got to the point where that mat­ters more to me than this task get­ting done.

• For New­comb’s Prob­lem, is it fair to say that if you be­lieve the given in­for­ma­tion, the crux is whether you be­lieve it’s pos­si­ble (for Omega) to have a 99%+ cor­rect pre­dic­tion of your de­ci­sion based on the givens? Re­fusal to ac­cept that seems to me the only jus­tifi­ca­tion for two-box­ing. Per­haps that’s a sign that I’m less tied to a fixed set of “ra­tio­nal­ist” pro­ce­dures than a perfect ra­tio­nal­ist would be, but I would feel like I were pre­tend­ing to say oth­er­wise.

I also won­der if the many pub­lic af­fir­ma­tions I’ve heard of “I would one-box New­comb’s Prob­lem” are at­tempts at con­vinc­ing Omega to be­lieve us in the un­likely event of ac­tu­ally en­coun­ter­ing the Prob­lem. It does give a similar sort of thrill to “God will rap­ture me to heaven.”
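If you do take the stated track record at face value, the expected values fall straight out of the arithmetic. A minimal sketch, assuming the standard payoffs ($1,000 in box A, $1,000,000 potentially in box B) and treating “right 99 times out of 100” as a 99% chance that Omega predicted your actual choice:

```python
# Expected value (in dollars) of each strategy against a predictor
# that has been right 99 times out of 100.
RIGHT, TOTAL = 99, 100
BOX_A, BOX_B = 1_000, 1_000_000

# One-boxer: box B is full iff Omega predicted one-boxing (99/100 of the time).
ev_one_box = RIGHT * BOX_B // TOTAL                    # 990_000

# Two-boxer: always gets box A; box B is full only on a misprediction (1/100).
ev_two_box = BOX_A + (TOTAL - RIGHT) * BOX_B // TOTAL  # 11_000

print(ev_one_box, ev_two_box)
```

On these numbers the one-boxer expects $990,000 and the two-boxer $11,000, which is why refusing to accept the predictor’s accuracy is doing so much of the work in the two-boxer’s position.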

• +1 for “Ra­tion­al­ists win”. What is Parfit’s Hitch­hiker? I couldn’t find an an­swer on Google.

• It’s a test case for ra­tio­nal­ity as pure self-in­ter­est (re­ally it’s like an al­tru­is­tic ver­sion of the game of Chicken).

Sup­pose I’m purely self­ish and stranded on a road at night. A mo­torist pulls over and offers to take me home for \$100, which is a good deal for me. I only have money at home. I will be able to get home then IFF I can promise to pay \$100 when I get home.

But when I get home, the marginal benefit to pay­ing \$100 is zero (un­der as­sump­tion of pure self­ish­ness). There­fore if I be­have ra­tio­nally at the mar­gin when I get home, I can­not keep my promise.

I am bet­ter off over­all if I can com­mit in ad­vance to keep­ing my promise. In other words, I am bet­ter off over­all if I have a dis­po­si­tion which some­times causes me to be­have ir­ra­tionally at the mar­gin. Un­der the self-in­ter­est no­tion of ra­tio­nal­ity, then, it is ra­tio­nal, at the mar­gin of choos­ing your dis­po­si­tion, to choose a dis­po­si­tion which is not ra­tio­nal un­der the self-in­ter­est no­tion of ra­tio­nal­ity. (This is what Parfit de­scribes as an “in­di­rectly self-defeat­ing” re­sult; note that be­ing in­di­rectly self-defeat­ing is not a knock­down ar­gu­ment against a po­si­tion.)
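The structure of the dilemma can be made concrete with illustrative utilities. The $100 fare is from the story; the $1,000 value of getting home and the two disposition labels are assumptions chosen only to make the ordering visible:

```python
# Parfit's Hitchhiker with assumed utilities: getting home is worth
# $1,000 to you; staying stranded on the road is worth $0.
VALUE_OF_HOME = 1_000
FARE = 100

def outcome(disposition):
    """Net utility, assuming the driver can tell whether you'll really pay."""
    will_pay = (disposition == "keeps promises")
    if not will_pay:
        return 0                 # driver refuses the deal; you stay stranded
    return VALUE_OF_HOME - FARE  # you get home, then pay as promised

print(outcome("keeps promises"))          # 900
print(outcome("rational at the margin"))  # 0
```

The agent who is “rational at the margin” would break the promise once home, the driver predicts this, and so that agent never gets the ride: the promise-keeping disposition strictly wins overall.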

• Ah, thanks. I’m of the school of thought that says it is ra­tio­nal both to promise to pay the \$100, and to have a policy of keep­ing promises.

• I think it is both right and expected-utility-maximizing to promise to pay the \$100, right to pay the \$100, and not expected-utility-maximizing to pay the \$100 under standard assumptions of you’ll never see the driver again or whatnot.

• You’re as­sum­ing it does no dam­age to one­self to break one’s own promises. Virtue the­o­rists would dis­agree.

Break­ing one’s promises dam­ages one’s in­tegrity—whether you con­sider that a trait of char­ac­ter or merely a valuable fact about your­self, you will lose some­thing by break­ing your promise even if you never see the fel­low again.

• Your ar­gu­ment is equiv­a­lent to, “But what if your util­ity func­tion rates keep­ing promises higher than a mil­lion or­gasms, what then?”

The hypo is meant to be a very sim­ple model, be­cause sim­ple mod­els are use­ful. It in­cludes two goods: get­ting home, and hav­ing \$100. Any other spec­u­la­tive val­ues that a real per­son might or might not have are dis­trac­tions.

• Sim­ple mod­els are fine as long as we don’t for­get they are only ap­prox­i­ma­tions. Ra­tion­al­ists should win in the real world.

• Ex­cept that you men­tion both per­sons and promises in the hy­po­thet­i­cal ex­am­ple, so both things fac­tor into the cor­rect de­ci­sion. If you said that it’s not a per­son mak­ing the de­ci­sion, or that there’s no promis­ing in­volved, then you could dis­count in­tegrity.

• Yes, this seems unim­peach­able. The miss­ing piece is, ra­tio­nal at what mar­gin? Once you are home, it is not ra­tio­nal at the mar­gin to pay the \$100 you promised.

• This as­sumes no one can ever find out you didn’t pay, as well. In gen­eral, though, it seems bet­ter to as­sume ev­ery­thing will even­tu­ally be found out by ev­ery­one. This seems like enough, by it­self, to keep promises and avoid most lies.

• Right. The ques­tion of course is, “bet­ter” for what pur­pose? Which model is bet­ter de­pends on what you’re try­ing to figure out.

• Thank you, I too was cu­ri­ous.

We need names for these po­si­tions; I’d use hy­per-ra­tio­nal­ist but I think that’s slightly differ­ent. Per­haps a con­se­quen­tial­ist does what­ever has the max­i­mum ex­pected util­ity at any given mo­ment, and a meta-con­se­quen­tial­ist is a ma­chine built by a con­se­quen­tial­ist which is ex­pected to achieve the max­i­mum over­all util­ity at least in part through be­ing trust­wor­thy to keep com­mit­ments a pure con­se­quen­tial­ist would not be able to keep.

I guess I’m not sure why peo­ple are so in­ter­ested in this class of prob­lems. If you sub­sti­tute Clippy for my lift, and up the stakes to a billion lives lost later in re­turn for two billion saved now, there you have a prob­lem, but when it’s hu­man be­ings on a hu­man scale there are good or­di­nary con­se­quen­tial­ist rea­sons to hon­our such bar­gains, and those rea­sons are enough for the driver to trust my com­mit­ment. Does any­one re­ally an­ti­ci­pate a ver­sion of this situ­a­tion aris­ing in which only a meta-con­se­quen­tial­ist wins, and if so can you de­scribe it?

• I do think these prob­lems are mostly use­ful for pur­poses of un­der­stand­ing and (moreso) defin­ing ra­tio­nal­ity (“ra­tio­nal­ity”), which is per­haps a some­what du­bi­ous use. But look how much time we’re spend­ing on it.

• I very much recom­mend Rea­sons and Per­sons, by the way. A friend stole my copy and I miss it all the time.

• OK, thanks!

Your friend stole a book on moral philos­o­phy? That’s pretty spe­cial!

• It’s still in print and read­ily available. If you re­ally miss it all the time, why haven’t you bought an­other copy?

• It’s \$45 from Ama­zon. At that price, I’m go­ing to scheme to steal it back first.

OR MAYBE IT’S BECAUSE I’M CRAAAZY AND DON’T ACT FOR REASONS!

• Gosh. It’s only £17 in the UK.

(I wasn’t mean­ing to sug­gest that you’re crazy, but I did won­der about … hmm, not sure whether there’s a stan­dard name for it. Be­ing less pre­pared to spend X to get Y on ac­count of hav­ing done so be­fore and then lost Y. A sort of con­verse to the en­dow­ment effect.)

• Men­tal ac­count­ing has that effect in the short run, but seems un­likely to ap­ply here.

• Why don’t you ac­cept his dis­tinc­tion be­tween act­ing ra­tio­nally at a given mo­ment and hav­ing the dis­po­si­tion which it is ra­tio­nal to have, in­te­grated over all time?

EDIT: er, Parfit’s, that is.

• This is a clas­sic point and clearer than the re­lated ar­gu­ment I’m mak­ing above. In ad­di­tion to be­ing part of the ac­cu­mu­lated game the­ory learn­ing, it’s one of the types of ar­gu­ments that shows up fre­quently in Derek Parfit’s dis­cus­sion of what-is-ra­tio­nal­ity, in Ch. 1 of Rea­sons and Per­sons.

I feel like there are difficul­ties here that EY is not at­tempt­ing to tackle.

• James, when you say, “be ra­tio­nal”, I think this shows a mi­s­un­der­stand­ing.

It may be re­ally im­por­tant to im­press peo­ple with a cer­tain kind of reck­less courage. Then it is Ra­tional to play chicken as bravely as you can. This Wins in the sense of be­ing bet­ter than the al­ter­na­tive open to you.

Normally, I do not want to take the risk of being knocked down by a car. In that case it is not rational to play chicken, because not playing achieves what I want.

I do not see why a ra­tio­nal­ist should be less coura­geous, less able to es­ti­mate dis­tances and speeds, and so less likely to win at Chicken.

• No. The point is that you actually want to survive more than you want to win, so if you are rational about Chicken you will sometimes lose (consult your model for details). Given your preferences, there will always be some distance ε before the cliff where it is rational for you to give up.

There­fore, un­der these as­sump­tions, the strat­egy “win or die try­ing” seem­ingly re­quires you to be ir­ra­tional. How­ever, if you can cred­ibly com­mit to this strat­egy—be the kind of per­son who will win or die try­ing—you will beat a ra­tio­nal player ev­ery time.

This is a case where it is ra­tio­nal to have an ir­ra­tional dis­po­si­tion, a dis­po­si­tion other than do­ing what is ra­tio­nal at ev­ery mar­gin.

• But a per­son who truly cares more about win­ning than sur­viv­ing can be ut­terly ra­tio­nal in choos­ing that strat­egy.

• In chicken-like games in which one player is ra­tio­nal and the other ir­ra­tional:

The ra­tio­nal per­son cares more about sur­viv­ing than win­ning and so sur­vives and loses.

The ir­ra­tional per­son who doesn’t think through the con­se­quences of los­ing both sur­vives and wins.

• Agreed. In fact, the clas­sic game-the­o­retic model of chicken re­quires that the play­ers vastly pre­fer los­ing their pride to los­ing their lives. If win­ning/​los­ing > los­ing/​dy­ing, then in a situ­a­tion with im­perfect in­for­ma­tion, we would as­sign a pos­i­tive prob­a­bil­ity to play­ing ag­gres­sively.

And tech­ni­cally speak­ing, it is most ra­tio­nal, in the game-the­o­retic sense, to dis­able your steer­ing os­ten­ta­tiously be­fore the other player does so as well. In that case, you’ve won the game be­fore it be­gins, and there is no ac­tual risk.

• No, if you are ra­tio­nal the best ac­tion is to con­vince your op­po­nent that you have dis­abled your steer­ing when in fact you have not done so.

• Either a) your op­po­nent truly does be­lieve that you’ve dis­abled your steer­ing, in which case the out­comes are iden­ti­cal and the ac­tions are equally ra­tio­nal, or b) we ac­count for the (small?) chance that your op­po­nent can de­ter­mine that you ac­tu­ally have not dis­abled your steer­ing, in which case he os­ten­ta­tiously dis­ables his and wins. Only by set­ting up what is in effect a dooms­day de­vice can you en­sure that he will not be tempted to in­for­ma­tion-gath­er­ing brinks­man­ship.
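The brinksmanship argument in this sub-thread can be checked against the standard chicken payoff matrix. The payoff numbers below are assumptions; only their ordering (win > mutual swerve > lose face > crash) matters:

```python
# Chicken with the usual ordering: win > tie-swerve > lose > crash.
# Keys are (my move, their move); values are my payoff.
PAYOFFS = {
    ("swerve", "swerve"):     0,
    ("swerve", "straight"):  -1,   # I lose face, they win
    ("straight", "swerve"):   1,
    ("straight", "straight"): -10, # crash: worst outcome for both
}

def best_response(their_committed_move):
    """Payoff-maximizing reply to an opponent visibly committed to a move."""
    return max(["swerve", "straight"],
               key=lambda mine: PAYOFFS[(mine, their_committed_move)])

# If one player visibly disables their steering (commits to "straight"),
# the other's best response is to swerve: the committed player wins.
print(best_response("straight"))  # 'swerve'
print(best_response("swerve"))    # 'straight'
```

This is why the ostentatious commitment wins “before the game begins”: it changes the opponent’s best response, whereas a secret or doubted commitment leaves brinksmanship in play.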

• “Rationalists should win.” is what sold me on this site. It’s a good phrase.

• Alleged ra­tio­nal­ists should not find them­selves en­vy­ing the mere de­ci­sions of alleged non­ra­tional­ists, be­cause your de­ci­sion can be what­ever you like.

Eliezer said this in the New­comb’s Prob­lem post which in­tro­duced “Ra­tion­al­ists should win”.

Per­haps for a slo­gan, shorten it to: “Ra­tion­al­ists should not envy the mere de­ci­sions of non­ra­tional­ists.” This em­pha­sizes that ra­tio­nal­ity con­tributes to win­ning through good de­ci­sions.

A potential problem is that, in some circumstances, an alleged rationalist could find a factor that seems unrelated to their decisions to blame for losing, and therefore argue that their being rational is consistent with the slogan. For example, someone who blames losing on luck might need to reconsider the probability theory that is informing their decisions. Though this should not be a fully general counterargument, someone who wins more often than others in the same situation is likely doing something right, even if they do not win with probability 1.

• Both boxes might be transparent. In this case, you would see the money in both boxes only if you are rational enough to understand that you have to pick just B.

Wouldn’t that be an irrational move? Not at all! You have to understand that to be rational.

• That’s brilli­ant! (I’m not sure what you mean by un­der­stand though.)

In other words, Omega does one of the two things: it ei­ther offers you \$1000 + \$1, or only \$10. It offers you \$1000 + \$1 only if it pre­dicts that you won’t take the \$1, oth­er­wise it only gives you \$10.

This is a variant of counterfactual mugging, except that there is no chance involved. Your past self prefers to precommit to not taking the \$1, while your present self faced with that situation prefers to take the \$1.

• You have to un­der­stand this twist, to be able to call your­self ra­tio­nal, by my book.

You un­der­stood the twist, as I see.

• This re­ply is too mys­te­ri­ous to re­veal whether you got the crite­rion right.

• Hmmm… It looks like the de­ci­sion to take the \$1 de­ter­mines the situ­a­tion where you make that de­ci­sion out of re­al­ity. Effects of pre­com­mit­ment be­ing re­stricted to the coun­ter­fac­tual branches are a usual thing, but in this prob­lem they stare you right in the face, which is rather dar­ing.

• Another vari­a­tion, play­ing only on real/​coun­ter­fac­tual, with­out mo­ti­vat­ing the real de­ci­sion. Omega comes to you and offers \$1, if and only if it pre­dicts that you won’t take it. What do you do? It looks neu­tral, since ex­pected gain in both cases is zero. But the de­ci­sion to take the \$1 sounds rather bizarre: if you take the \$1, then you don’t ex­ist!

Agents self-con­sis­tent un­der re­flec­tion are coun­ter­fac­tual zom­bies, in­differ­ent to whether they are real or not.

• Seems roughly as dis­turb­ing as Wikipe­dia’s ar­ti­cle on Gaus­sian adap­ta­tion:

Gaus­sian adap­ta­tion as an evolu­tion­ary model of the brain obey­ing the Heb­bian the­ory of as­so­ci­a­tive learn­ing offers an al­ter­na­tive view of free will due to the abil­ity of the pro­cess to max­i­mize the mean fit­ness of sig­nal pat­terns in the brain by climb­ing a men­tal land­scape in anal­ogy with phe­no­typic evolu­tion.

Such a ran­dom pro­cess gives us lots of free­dom of choice, but hardly any will. An illu­sion of will may, how­ever, em­anate from the abil­ity of the pro­cess to max­i­mize mean fit­ness, mak­ing the pro­cess goal seek­ing. I. e., it prefers higher peaks in the land­scape prior to lower, or bet­ter al­ter­na­tives prior to worse. In this way an illu­sive will may ap­pear. A similar view has been given by Zo­har 1990. See also Kjel­lström 1999.

• If you want your source code to be self-con­sis­tent un­der re­flec­tion, you know what you have to do.

• Rationality is winning that doesn’t generate a surprise; randomly winning the lottery generates a surprise. A good measure of rationality is the amount of complexity involved in order to win, and the surprise generated by that win. If winning at a certain task requires that your method have many complex steps, and you win, non-surprisingly, then the method used was a very rational one.

• It seems to me that some of the kib­itz­ing is due to hu­man cog­ni­tive ar­chi­tec­ture mak­ing it difficult to be both episte­molog­i­cally and in­stru­men­tally ra­tio­nal in many con­texts, e.g., ex­pected over­con­fi­dence in so­cial in­ter­ac­tions, mo­ti­va­tion is­sues re­lated to op­ti­mism/​pes­simism, &c.

An ideal ra­tio­nal agent would not have this prob­lem, but hu­man cog­ni­tion is… sub­op­ti­mal.

“Ra­tion­al­ity is what­ever wins.”

If it’s not a win­ning strat­egy, you’re not do­ing it right. If it is a win­ning strat­egy, over­all in as long of terms as you can plan, then it’s ra­tio­nal­ity. It doesn’t mat­ter what the per­son thinks: whether they’d call them­selves ra­tio­nal­ists or not.

• All else be­ing equal, shouldn’t ra­tio­nal­ists, al­most by defi­ni­tion, win? The only way this wouldn’t hap­pen would be in a con­test of pure chance, in which ra­tio­nal­ity could con­fer no ad­van­tage. It seems like we’re just talk­ing se­man­tics here.

• If hu­man be­ings had perfect con­trol over their minds and bod­ies—e.g., could tweak Sys­tem 1 with­out limit and perform any phys­i­cally pos­si­ble act/​be­hav­ior -- your point would be stronger.

How­ever, as oth­ers have men­tioned el­se­where, there may be cases where we are just not ca­pa­ble of im­ple­ment­ing a strat­egy that ra­tio­nal­ity sug­gests is op­ti­mal (e.g., con­vinc­ingly pre­tend­ing to be more con­fi­dent than you are to the point that all rele­vant Sys­tem 1 im­pulses/​re­ac­tions are those of a per­son who is nat­u­rally over­con­fi­dent).

It may be the case that an uber­men­sch ra­tio­nal­ist can even­tu­ally learn to do any­thing that can be done via non-ra­tio­nal means, but that’s not clear a pri­ori, es­pe­cially if we con­sider finite lifes­pans and op­por­tu­nity costs.

• Agreed. Par­tic­u­larly in hy­po­thet­i­cal cases where one ra­tio­nally con­cludes that it would be in their best in­ter­est to be­have ir­ra­tionally, e.g., over-con­fi­dence in one­self or be­lief in God. Even if one ar­rived at those con­clu­sions, it’s not clear to me how any­one could de­cide to be­come ir­ra­tional in those ways. Pas­cal’s no­tion of “boot­strap­ping” one­self into re­li­gious be­lief never struck me as very plau­si­ble. In­ter­est­ingly though, “fak­ing” con­fi­dence in one­self of­ten does tend to lead to real con­fi­dence via some sort of feed­back mechanism, e.g., in­ter­ac­tions with women.

• As an an­swer to my and oth­ers’ con­stant nag­ging, your post feels strangely un­fulfilling. Just what prob­lems does the Art solve, and how do you check if the solu­tions are cor­rect? Of course the prob­lems can be the­o­ret­i­cal, not real-world—this isn’t the is­sue at all.

• I know what prob­lem my Art is in­tended to solve. You may feel that some progress has been ex­hibited, or not; it will cer­tainly pale by com­par­i­son to my fu­ture hopes, but it might not seem so pale by com­par­i­son to the av­er­age. The Art seems to be giv­ing me what I ask of it; I have hopes that this will hold true of oth­ers, and that I will be able to un­der­stand what they have in­vented.

Mean­while, there are plenty of peo­ple shoot­ing off their own feet in straight­for­ward ways; and it is a good deal eas­ier to do some­thing about that, than to pro­duce su­per­stars.

• If one val­ues win­ning above ev­ery­thing else, then ev­ery­thing that leads to win­ning is ra­tio­nal. The re­duc­tio to this is if tor­tur­ing a googol­plex of be­ings at max­i­mum du­ra­tion and in­creas­ing in­ten­sity leads to win­ning, then that’s what must be done.

Yet… perhaps winning then is not what we should most value? Perhaps we should value destroying the thing which values torturing a googolplex of beings. What if we need to torture half of a googolplex of beings to outcompete something willing to torture a googolplex of beings? What if outcompeting such a thing is impossible? What is the threshold for the number of beings tortured, in total? Such a question must by definition seem irrational to someone winning at all costs; this is the tradeoff one makes for valuing winning at all costs and calling it rationality. At which point does one say, “The most rational move is stopping all forward momentum immediately”? (“You are missing the point! Rationality is just your *independent* strategy!” That is missing the point.)

This does not appear to be a universe where a system which intends to maximize truth and ethics can win. I suspect that once we can transcend temporal bias and egocentric bias via convincing virtual experience, in the specific sense of living lives like Junko Furuta’s and Elisabeth Fritzl’s, we will not appreciate winning at all costs. The paradox here is that the thing which tends to reach convincing virtual simulations is not the thing which values simulating such things. That little voice in your head that says, “Error. Irrational appeal to emotion.” is the same voice which tortures the entire multiverse to win (if this is the winning strategy).

The conclusion here is that ethics and truth don’t win. The thing which is least hindered by a commitment to values other than winning, wins. If anything could be said to be bad, that is, if one is not a moral nihilist, then that would be bad news. Again it is worth noticing the little voice that rejects this word “bad”, which, upon having one’s hands planted into hot coals for no reason, would appreciate things differently and realize an objective property of consciousness that is as grounded as the most basic mathematical expression.

• You’re con­fus­ing ends with means, ter­mi­nal goals with in­stru­men­tal goals, moral­ity with de­ci­sion the­ory, and about a dozen other ways of ex­press­ing the same thing. It doesn’t mat­ter what you con­sider “good”, be­cause for any fixed defi­ni­tion of “good”, there are go­ing to be op­ti­mal and sub­op­ti­mal meth­ods of achiev­ing good­ness. Win­ning is sim­ply the task of iden­ti­fy­ing and car­ry­ing out an op­ti­mal, rather than sub­op­ti­mal, method.

• If there are ob­jec­tively cor­rect and false val­ues, then it mat­ters to the epistemic ra­tio­nal­ist which sub­jec­tive val­ues they have, be­cause they might be wrong. (it also mat­ters to the ER whether val­ues are sub­jec­tive).

Epistemic and instrumental rationality have never been the same thing. “Rationality is winning” cannot define them both, and, as it happens, it only defines IR.

• I’m not sure if it’s bet­ter, but here’s one that works well. Similar to the phrase, “Physi­cian, heal thy­self!” an­other way to say ra­tio­nal­ists should win is to say, “Ra­tion­al­ist, im­prove thy­self!”

If you aren’t ac­tu­ally im­prov­ing your­self and the world around you, then you aren’t us­ing the tools of ra­tio­nal­ity cor­rectly. And it fol­lows that to im­prove the world around you, you first have to be in a po­si­tion to do so by do­ing the same to your­self.

• I one-box Newcomb’s problem because the payoffs are too disproportionate to make it interesting. How about this? If Omega predicted you would two-box, they are both empty; if Omega predicted you would one-box, both boxes have \$1000.

• That payoff matrix doesn’t preserve the form of the problem. One of the features of the problem is that whatever is in box B, you’re better off two-boxing than one-boxing if you ignore the influence of Omega’s prediction. A better formulation would be that box A has \$1000, and box B has \$2000 iff Omega believes you will one-box. Box B has to potentially have more than box A, or there’s no point in one-boxing whatever DT you have.
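The dominance structure this reply leans on can be made explicit; a minimal sketch with the standard amounts (the function name is mine, for illustration):

```python
# The dominance argument: for each fixed content of box B, two-boxing
# yields exactly box A's $1,000 more than one-boxing.
BOX_A = 1_000

def gain_from_two_boxing(box_b):
    """Extra money from taking both boxes, holding box B's content fixed."""
    return (BOX_A + box_b) - box_b

# Whatever Omega already put in B, the per-state gain is the same...
print(gain_from_two_boxing(0))          # 1000
print(gain_from_two_boxing(1_000_000))  # 1000
# ...which is exactly why the one-boxer's case has to rest on the
# predictor's accuracy rather than on state-by-state comparison.
```

A variant that removes this per-state advantage has changed the problem, not solved it.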

• Let’s use one of Polya’s ‘how to solve it’ strate­gies and see if the in­verse helps: Ir­ra­tional­ists should lose. Ir­ra­tional­ity is sys­tem­atized los­ing.

On an­other note, ra­tio­nal­ity can re­fer to ei­ther be­liefs or be­hav­iors. Does be­ing a ra­tio­nal­ist mean your be­liefs are ra­tio­nal, your be­hav­iors are ra­tio­nal, or both? I think be­hav­ing ra­tio­nally, even with high prob­a­bil­ity pri­ors, is still very hard for us hu­mans in a lot of cir­cum­stances. Un­til we have full con­trol of our sub­con­scious minds and can re­pro­gram our cog­ni­tive sys­tems, it is a strug­gle to will our­selves to act in com­pletely ra­tio­nal ways.

To spread ra­tio­nal­ity, amongst hu­mans at least, we might want to con­sider a di­vide and con­quer ap­proach fo­cus­ing on peo­ple main­tain­ing ra­tio­nal be­liefs first, and max­i­miz­ing ra­tio­nal be­hav­ior sec­ond.

• Ra­tion­al­ity is the art of the op­ti­mal.

Ra­tion­al­ity is op­ti­mally sys­tem­atized sys­tem­atiz­ing. (:-))

• Like William, I think “winning” is the problem, though for different reasons: “winning” has extra connotations, and tends to call up the image of the guy who climbs the corporate ladder through dishonesty and betrayal rather than trying to lead a happy and fulfilling life. Or someone who tries to win all debates by humiliating his opponent till nobody wants to speak to him any more.

Winning often doesn’t mean getting what you want, but winning at something defined externally, or competing with others, which may indeed not always be the rational thing to do.

The thread on the Rationality Questionnaire seemed to have this problem: some questions seemed more focused on “winning as understood by society” rather than on getting what you want.

• Winning is all about choosing the right target. There will be disagreement about which target is right. After hitting the target it will sometimes be revealed that it was the wrong target. Not hitting the right target will sometimes be winning. Rationality lies in the evaluation before, after, and whilst aiming.

Some­what like the game of darts.

• “Abandon reasonableness” is never necessary, though I think we may be using “reasonable” somewhat differently. I think “reasonable” includes the idea of “appropriate to the situation.”

Quoting myself: “There is a supposed ‘old Chinese saying’: The wise man defends himself by never being attacked. Which is excellent, if incomplete, advice. I completed it myself with ‘But only an idiot counts on not being attacked.’ Don’t use violence unless you really need to, but if you need to don’t hold back.” http://williambswift.blogspot.com/2009/03/violence.html

As to your overall point, I agree that rationalists should win. General randomness, unknowns, and opposition from other agents prevent consistent victories in the real world. But if you are not winning more than losing, you definitely are not being rational.

• Don’t use vi­o­lence un­less you re­ally need to, but if you need to don’t hold back.

By corol­lary:

“Rule #6: If vi­o­lence wasn’t your last re­sort, you failed to re­sort to enough of it.” —The Seven Habits of Highly Effec­tive Pirates

• Stan­dard proverb: “If you would have peace, pre­pare for war.”

• There are two pos­si­ble in­ter­pre­ta­tions of “Ra­tion­al­ists should win”, and it’s likely the con­fu­sion is com­ing about from the sec­ond.

One use of “should” is to indicate a general social obligation: “people should be nice to each other.” The other is to indicate a personal entitlement: “you should be nice to me”; i.e., “should” = “I deserve it.”

It ap­pears that some peo­ple may be us­ing the lat­ter in­ter­pre­ta­tion, i.e., “I’m ra­tio­nal so I should win”—plac­ing the obli­ga­tion on the uni­verse rather than on them­selves.

Per­haps “Ra­tion­al­ists choose to win”, or “Win­ning is bet­ter than be­ing right”?

• “Win­ning is bet­ter than be­ing right”

I think Eliezer’s point is closer to “Win­ning is the same as be­ing right”; i.e., the ev­i­dence that you’re right is that you won.

• “Win­ning” and “be­ing right” are differ­ent con­cepts. That is the point of dis­t­in­guish­ing be­tween epistemic and in­stru­men­tal ra­tio­nal­ity.

• Ac­tu­ally the prob­lem is an am­bi­guity in “right”—you can take the “right” course of ac­tion (in­stru­men­tal ra­tio­nal­ity, or ethics), or you can have “right” be­lief (epistemic ra­tio­nal­ity).

• Ra­tion­al­ity leads di­rectly to effec­tive­ness. Or: Ra­tion­al­ity faces the truth and there­fore pro­duces effec­tive­ness. Or: Ra­tion­al­ity is mea­sured by how much effec­tive­ness it pro­duces.

• Typo re­port: “hoard of bar­bar­ians” should be re­placed by “horde of bar­bar­ians.”

• Hey, you never know when you might need a bar­bar­ian… you don’t want to run out!

• It seems that most of the discussion here is caught up on the idea that Omega being able to “predict” your decision would require reverse-time causality, which some models of reality cannot allow to exist.

Assuming that Omega is a “sufficiently advanced” powerful being, the boxes could act in exactly the way that the “reverse time” model stipulates without requiring any such bending of causality: either through technology that can destroy the contents of a box faster than human perception time, or through the classical many-worlds-interpretation method of ending the universes where things don’t work out the way you want (the universe doesn’t even need to end; something like a quantum vacuum collapse would have the same effect of stopping any information leakage from non-conforming universes).

This makes the not-quite-a-ra­tio­nal­ist ar­gu­ment of “the boxes are already what they are so my de­ci­sion doesn’t mat­ter, I’ll take both” no longer hold true.

• Your as­sump­tions mean that the more likely an­swer is “Omega is suffi­ciently pow­er­ful to mess with me any way it likes; why am I play­ing this game?”

That is, prob­lems con­tain­ing Omega are more con­trived and less rele­vant to any­thing re­sem­bling real life the more one looks at them.

Note that thinking too much about Omega can lead to losing in real life, as one forgets that Omega is hypothetical and cannot possibly exist, and actually goes so far as to attribute the qualities of Omega to what is in fact a manipulative human. One example that I found was quite jawdropping. This is a case I think could quite fairly be described as reasoning oneself more ineffective. People who act like that are a reason to get out of the situation, not to invoke TDT.

• One prob­lem is that “Ra­tion­al­ists should win” has two ob­vi­ous in­ter­pre­ta­tions for me:

1. Ra­tion­al­ists should win, there­fore if ra­tio­nal­ists aren’t win­ning, there’s some­thing wrong with the world.

2. Ra­tion­al­ists should win, there­fore if you aren’t win­ning, you’re not a ra­tio­nal­ist.

Com­pare with:

1. Peo­ple should donate money to Africa, there­fore if peo­ple aren’t donat­ing to Africa, there’s some­thing wrong with the world.

2. Peo­ple should donate money to Africa, there­fore if you don’t donate to Africa, you’re not a per­son.

and

1. Pro­tons should have an elec­tric charge of 1, there­fore if pro­tons don’t have an elec­tric charge of 1, there’s some­thing wrong with the world.

2. Pro­tons should have an elec­tric charge of 1, there­fore if you don’t have an elec­tric charge of 1, you are not a pro­ton.

• Should­ness in “Ra­tion­al­ists should win” is a much more de­tailed no­tion than cor­re­spon­dence to win­ning situ­a­tions. It refers to a prop­erty of achiev­ing goals, as seen un­der un­cer­tainty, in our case im­ple­mented by cog­ni­tive al­gorithms that search the solu­tion-space for the right plans. Ra­tion­al­ists should-win, have a good mea­sure of win-should-ness.

Look­ing over it all again, I should add that “ra­tio­nal­ity is about win­ning” is also an im­mensely sim­pler sen­ti­ment, that still seems to re­tain the gist of the mes­sage for which the “ra­tio­nal­ists should win” motto was de­vised.

• What “Ra­tion­al­ists should WIN” needs is a stake through its heart; it is mis­in­ter­preted so much more of­ten than it is cor­rectly used that we may need to do with­out it al­to­gether.

• Has it been set­tled then, that in this New­comb’s Prob­lem, ra­tio­nal­ity and win­ning are at odds? I think it is quite rele­vant to this dis­cus­sion whether or not they ever can be at odds.

My last com­ment got voted down—pre­sum­ably be­cause whether or not ra­tio­nal­ity and win­ning are ever in con­flict has been dis­cussed in the pre­vi­ous post. (I’m a quick study and would like feed­back as to why I get voted down.) How­ever, was there some kind of con­sen­sus in the pre­vi­ous post? Do we just as­sume here that it is pos­si­ble that ra­tio­nal­ity is not always the win­ning strat­egy? I can­not!

Look­ing through the com­ments, it sounds like many peo­ple think it is most ra­tio­nal to pick both boxes be­cause of some as­sump­tion about how phys­i­cal re­al­ity can’t be al­tered. In a hy­po­thet­i­cal re­al­ity where that as­sump­tion doesn’t hold, it would be ir­ra­tional to in­sist on ap­ply­ing it.

• Not ev­ery­one wants to win! Or let’s put it an­other way: Every­one wants to win, but in differ­ent ways. The ra­tio­nal part is win­ning as one defines it. At least some of the two box­ers are win­ning—as they define it.

I sense you mean that rationality is unputdownable: always right, always winning. But life is not a two-dimensional iterated PD. Fate plays with dice. Other players can be MAD. Sometimes a flower grows through the asphalt. We don’t live long enough to say on average that rationalists always win.

• Am I miss­ing some­thing? I think this an­swer is very sim­ple: ra­tio­nal­ity and win­ning are never at odds.

(The only ex­cep­tion is when a ra­tio­nal be­ing has in­com­plete in­for­ma­tion. If in­for­ma­tion tells him that the blue box has \$100 and the red box has \$0, and it is the other way around, it is ra­tio­nal for him to pick the blue box even though he doesn’t win.)

• The only ex­cep­tion is when a ra­tio­nal be­ing has in­com­plete information

Even ra­tio­nal be­ings usu­ally don’t have com­plete in­for­ma­tion.

• Yes, I agree. I think be­ing ra­tio­nal is always be­ing aware that ev­ery­thing you “know” is a house of cards based on your as­sump­tions. A change in as­sump­tions will re­quire re­build­ing the house, and a false room means you need to challenge an as­sump­tion.

I’m just ar­gu­ing that a false room never means that ra­tio­nal­ity (de­duc­tion) it­self was wrong (i.e., not win­ning).

All a ra­tio­nal be­ing can do is base de­ci­sions on the in­for­ma­tion they have. A ques­tion: is a ra­tio­nal po­si­tion based upon in­com­plete in­for­ma­tion that leads to not win­ning re­ally an ex­am­ple of “ra­tio­nal­ity” not win­ning? I think that in this dis­cus­sion we are talk­ing about the re­la­tion­ship be­tween ra­tio­nal­ity and win­ning in the con­text of “enough” in­for­ma­tion.

• I already un­der­stood what you meant by “ra­tio­nal­ists should win”, Eliezer, but I don’t find New­comb’s prob­lem very con­vinc­ing as an ex­am­ple. The way I see it, if you one-box you’ve lost. You could have got­ten an ex­tra \$1000 but you chose not to.

• And yet those who one-box get \$999000 more than those who don’t. What gives? If there is a sys­tem­atic, pre­dictable thing that offers one-box­ers \$1000000 and offers two-box­ers \$1000, and there is not a sys­tem­atic, pre­dictable thing that pro­vides some sort of coun­ter­ing offer to two-box­ers, by one-box­ing you still get more money.

I can’t think of some­thing of equal power level (ex­am­in­ing your de­ci­sions, not your method of ar­riv­ing at those de­ci­sions) which would be able to provide the coun­ter­ing offer to two-box­ers.
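For what it’s worth, the gap survives even at the 99% accuracy stated in the post rather than perfect prediction. A rough expected-value sketch (the accuracy figure is from the post; the simplifying assumption that Omega is equally accurate for both kinds of chooser is mine):

```python
MILLION = 1_000_000
THOUSAND = 1_000
ACCURACY = 0.99  # Omega "has been right 99 times out of 100"

def expected_value(choice, accuracy=ACCURACY):
    """Expected payout, assuming Omega predicts your actual choice
    with probability `accuracy`, whichever choice that is."""
    if choice == "one":
        # Box B holds $1M iff Omega correctly predicted one-boxing.
        return accuracy * MILLION
    # Two-boxing: box A's $1000 is guaranteed; box B holds $1M only
    # when Omega wrongly expected you to one-box.
    return THOUSAND + (1 - accuracy) * MILLION

# One-boxers expect about $990,000; two-boxers about $11,000.
```

The exact numbers depend on the modeling assumption, but any accuracy meaningfully above chance leaves one-boxers far ahead in expectation.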

• Of course the one-box­ers get more money: They were put in a situ­a­tion in which they could ei­ther get \$1 000 000 or \$1 001 000, whereas the two-box­ers were put in a situ­a­tion in which they could get \$0 or \$1000.

It makes no sense to com­pare the two de­ci­sions the way you and Eliezer do. It’s like or­ga­niz­ing a swim­ming com­pe­ti­tion be­tween an Olympic ath­lete who has to swim ten kilo­me­ters to win and an un­trained fatass who only has to swim a hun­dred me­ters to win, and con­clud­ing that be­cause the fatass wins more of­ten than the ath­lete, there­fore fatasses clearly make bet­ter swim­mers than ath­letes.

• Of course the one-box­ers get more money: They were put in a situ­a­tion in which they could ei­ther get \$1 000 000 or \$1 001 000, whereas the two-box­ers were put in a situ­a­tion in which they could get \$0 or \$1000.

When faced with this decision, you are either in the real world, in which case you can get an extra \$1000 by two-boxing, or you are in a simulation, in which case you can arrange it so your self in the real world gets an extra \$1,000,000 by one-boxing. Given that you can’t tell which of these is the case, and that you are deterministic, you will make the same decision in both situations. So your choice is to either one-box and gain \$1,000,000 or two-box and gain \$1000. If you like having more money, it seems clear which of those choices is more rational.

• But if you were put into said hy­po­thet­i­cal com­pe­ti­tion, and could some­how de­cide just be­fore the con­test be­gan whether to be an Olympic ath­lete or an un­trained fatass, which would you choose?

I think you’re get­ting overly dis­tracted by the de­tails of the prob­lem con­struc­tion and miss­ing the point.

• If my only goal were to win that par­tic­u­lar com­pe­ti­tion (and not to be a good swim­mer), of course I’d choose to turn into a fatass and lose all my train­ing. Like­wise, if I could choose to pre­com­mit to one-box­ing in New­comb-like prob­lems, I would, be­cause pre-com­mit­ment has an effect on what will be in box B (whereas the ac­tual de­ci­sion does not).

The de­tails are what makes New­comb’s prob­lem what it is, so I don’t see how it’s pos­si­ble to get “overly dis­tracted” by them. Cor­rect me if I’m wrong, but pre-com­mit­ment isn’t an op­tion in New­comb’s prob­lem, so the best, the most ra­tio­nal, the win­ning de­ci­sion is two-box­ing.

• Cor­rect me if I’m wrong, but pre-com­mit­ment isn’t an op­tion in New­comb’s prob­lem, so the best, the most ra­tio­nal, the win­ning de­ci­sion is two-box­ing.

By con­struc­tion, Omega’s pre­dic­tions are known to be es­sen­tially in­fal­lible. Given that, what­ever you choose, you can safely as­sume Omega will have cor­rectly pre­dicted that choice. To what ex­tent, then, is pre-com­mit­ment dis­t­in­guish­able from de­cid­ing on the spot?

In a sense there is an im­plicit pre-com­mit­ment in the struc­ture of the prob­lem; while you have not pre-com­mit­ted to a choice on this spe­cific prob­lem, you are es­sen­tially pre-com­mit­ted to a de­ci­sion-mak­ing al­gorithm.

Eliezer’s ar­gu­ment, if I un­der­stand it, is that any de­ci­sion-mak­ing al­gorithm that re­sults in two-box­ing is by defi­ni­tion ir­ra­tional due to giv­ing a pre­dictably bad out­come.

• In a sense there is an im­plicit pre-com­mit­ment in the struc­ture of the prob­lem; while you have not pre-com­mit­ted to a choice on this spe­cific prob­lem, you are es­sen­tially pre-com­mit­ted to a de­ci­sion-mak­ing al­gorithm.

That’s an in­ter­est­ing, and pos­si­bly fruit­ful, way of look­ing at the prob­lem.

• Pre-com­mit­ment is differ­ent from de­cid­ing on the spot be­cause once you’re on the spot, there is noth­ing, ab­solutely noth­ing you can do to change what’s in box B. It’s over. It’s a done deal. It’s be­yond your con­trol.

My un­der­stand­ing of Eliezer’s ar­gu­ment is the same as yours. My ob­jec­tion is that two-box­ing doesn’t ac­tu­ally give a bad out­come. It gives the best out­come you can get given the situ­a­tion you’re in. That you don’t know what situ­a­tion you’re in un­til af­ter you’ve opened box B doesn’t change that fact. As Eliezer is so fond of say­ing, the map isn’t the ter­ri­tory.

• Pre-com­mit­ment is differ­ent from de­cid­ing on the spot be­cause once you’re on the spot, there is noth­ing, ab­solutely noth­ing you can do to change what’s in box B.

If your de­ci­sion on the spot is 100 per­cent pre­dictable ahead of time, as is ex­plic­itly as­sumed in the prob­lem con­struc­tion, you are effec­tively pre-com­mit­ted as far as Omega is con­cerned. You, ap­par­ently, have es­sen­tially pre-com­mit­ted to open­ing two boxes.

My ob­jec­tion is that two-box­ing doesn’t ac­tu­ally give a bad out­come. It gives the best out­come you can get given the situ­a­tion you’re in.

And yet, ev­ery­one who opens one box does bet­ter than the peo­ple who open two boxes.

You seem to have a very pe­cu­liar defi­ni­tion of “best out­come”.

• If your de­ci­sion on the spot is 100 per­cent pre­dictable ahead of time, as is ex­plic­itly as­sumed in the prob­lem con­struc­tion, you are effec­tively pre-com­mit­ted as far as Omega is con­cerned. You, ap­par­ently, have es­sen­tially pre-com­mit­ted to open­ing two boxes.

What I meant by ‘pre-com­mit­ment’ is a de­ci­sion that we can make if and only if we know about New­comb-like prob­lems be­fore be­ing faced with one. In other words, it’s a de­ci­sion that can af­fect what Omega will put in box B. That Omega can de­duce what my de­ci­sion will be doesn’t mean that the de­ci­sion is already taken.

And yet, ev­ery­one who opens one box does bet­ter than the peo­ple who open two boxes.

And ev­ery fatass who com­petes against an Olympic ath­lete in the sce­nario I de­scribed above does ‘bet­ter’ than the ath­lete. So what? Un­less the ath­lete knows about the com­pe­ti­tion’s rules ahead of time and eats non-stop to turn him­self into a fatass, there’s not a damn thing he can do about it, ex­cept try his best once the com­pe­ti­tion starts.

You seem to have a very pe­cu­liar defi­ni­tion of “best out­come”.

It seems too ob­vi­ous to say, but I guess I have to say it. “The best out­come” in this con­text is “the best out­come that it is pos­si­ble to achieve by mak­ing a de­ci­sion”. If box B con­tains noth­ing, then the best out­come that it is pos­si­ble to achieve by mak­ing a de­ci­sion is to win a thou­sand dol­lars. If box B con­tains a mil­lion dol­lars, then the best out­come that it is pos­si­ble to achieve by mak­ing a de­ci­sion is to win one mil­lion and one thou­sand dol­lars.

Well, I don’t see how I can ex­plain my­self more clearly than this, so this will be my last com­ment on this sub­ject. In this thread. This week. ;)

• This ex­change has fi­nally im­parted a bet­ter un­der­stand­ing of this prob­lem for me.

If you pre-com­mit now to always one-box – and you be­lieve that about your­self – then de­cid­ing to one-box when Omega asks you is the best de­ci­sion.

If you are uncertain of your commitment then you probably haven’t really pre-committed! I haven’t tried to math it, but I think your actual decision when Omega arrives would depend on the strength of your belief about your own pre-commitment. [Though a more-inconvenient possible world is the one in which you’ve never heard of this puzzle, or similar ones!]

Now I grok why ra­tio­nal­ity should be self-con­sis­tent un­der re­flec­tion.

• Small nit­pick: If you’ve re­ally pre-com­mit­ted to one-box­ing, there is no de­ci­sion to be made once Omega has set up the boxes. In fact, the thought of mak­ing a de­ci­sion won’t even cross your mind. If it does cross your mind, you should two-box. But if you two-box, you now know that you haven’t re­ally pre-com­mit­ted to one-box­ing. Ac­tu­ally, even if you de­cide to (mis­tak­enly) one-box, you’ll still know you haven’t re­ally pre-com­mit­ted, or you wouldn’t have had to de­cide any­thing on the spot.

In other words, New­comb’s prob­lem can only ever in­volve a sin­gle true de­ci­sion. If you’re ca­pa­ble of pre-com­mit­ment (that is, if you know about New­comb-like prob­lems in ad­vance and if you have the means to re­ally pre-com­mit), it’s the de­ci­sion to pre-com­mit, which pre­cludes any ul­te­rior de­ci­sion. If you aren’t ca­pa­ble of pre-com­mit­ment (that is, if at least one of the above con­di­tions is false), it’s the on-the-spot de­ci­sion.

• Eliezer’s ar­gu­ment, if I un­der­stand it, is that any de­ci­sion-mak­ing al­gorithm that re­sults in two-box­ing is by defi­ni­tion ir­ra­tional due to giv­ing a pre­dictably bad out­come.

So he’s as­sum­ing the con­clu­sion that you get a bad out­come? Golly.

• True, we don’t know the out­come. But we should still pre­dict that it will be bad, due to Omega’s 99% ac­cu­racy rate.

Don’t mess with Omega.

• The re­sult of two-box­ing is a thou­sand dol­lars. The re­sult of one-box­ing is a mil­lion dol­lars. By defi­ni­tion, a mind that always one-boxes re­ceives a bet­ter pay­out than one that always two-boxes, and there­fore one-box­ing is more ra­tio­nal, by defi­ni­tion.

• See Ar­gu­ing “By Defi­ni­tion”. It’s par­tic­u­larly prob­le­matic when the defi­ni­tion of “ra­tio­nal” is pre­cisely what’s in dis­pute.

• The re­sult of two-box­ing is a thou­sand dol­lars more than you would have got­ten oth­er­wise. The re­sult of one-box­ing is a thou­sand dol­lars less than you would have got­ten oth­er­wise. There­fore two-box­ing is more ra­tio­nal, by defi­ni­tion.

What de­ter­mines whether you’ll be in a 1M/​1M+1K situ­a­tion or in a 0/​1K situ­a­tion is the kind of mind you have, but in New­comb’s prob­lem you’re not given the op­por­tu­nity to af­fect what kind of mind you have (by pre-com­mit­ing to one-box­ing, for ex­am­ple), you can only de­cide whether to get X or X+1K, re­gard­less of X’s value.

• Sup­pose for a mo­ment that one-box­ing is the Foo thing to do. Two-box­ing is the ex­pected-util­ity-max­i­miz­ing thing to do. Omega de­cided to try to re­ward those minds which it pre­dicts will choose to do the Foo thing with a de­ci­sion be­tween do­ing the Foo thing and gain­ing \$1000000, and do­ing the unFoo thing and gain­ing \$1001000, while giv­ing those minds which will choose to do the unFoo thing a de­ci­sion be­tween do­ing the Foo thing and gain­ing \$0 and do­ing the unFoo thing and gain­ing \$1000.

The rele­vant ques­tion is whether there is a gen­er­al­iza­tion of the com­pu­ta­tion Foo which we can im­ple­ment that doesn’t screw us over on all sorts of non-New­comb prob­lems. Drescher for in­stance claims that act­ing eth­i­cally im­plies, among other things, do­ing the Foo thing, even when it is ob­vi­ously not the ex­pected-util­ity-max­i­miz­ing thing.

• You’re as­sum­ing that you can just choose how you go about mak­ing de­ci­sions ev­ery time you make a de­ci­sion. If you’re not granted that as­sump­tion, Fur­cas’s anal­y­sis is spot on. Two-box­ers suc­ceed in other places and also on New­comb; one-box­ers fail in many situ­a­tions that are similar to New­comb but not as nice. So you need to de­cide what sort of de­ci­sions you’ll make in gen­eral, and that will (ar­guably) de­ter­mine how much money is in the boxes in this par­tic­u­lar ex­per­i­ment.

• one-box­ers fail in many situ­a­tions that are similar to New­comb but not as nice.

Such as?

(Is this meant to re­fer to failures of ev­i­den­tial de­ci­sion the­ory? There are other op­tions.)

• This premise is not ac­cepted by the 1-box con­tin­gent. Oc­ca­sion­ally they claim there’s a rea­son.

• Can you please elab­o­rate? I’m try­ing to catch up!

• You mean they don’t ac­cept that the de­ci­sion doesn’t af­fect what’s in box B?

• Sim­ple: most situ­a­tions in real life aren’t like this. If you be­lieve Omega and one-box, you’ll lose when he’s ly­ing. If your de­ci­sion the­ory works bet­ter in hy­po­thet­i­cal situ­a­tions and worse in real life, then it doesn’t make you win.

• Also, I don’t think Eliezer keeps harp­ing on New­comb’s prob­lem be­cause he an­ti­ci­pates ex­pe­rienc­ing pre­cisely that sce­nario. I see sev­eral im­por­tant points that I don’t think have been clearly made (not that I’m the one to do so):

1. We can choose whether and when to im­ple­ment cer­tain de­ci­sion al­gorithms, in­clud­ing clas­si­cal causal de­ci­sion the­ory (CCDT). This choice may in fact be triv­ial, or it may be sub­tle, but it is a wor­thy ques­tion for a ra­tio­nal­ist.

2. Although, for any fixed set of op­tions, im­ple­ment­ing CCDT max­i­mizes your re­turn, there are in fact cases where the op­tions you have de­pend on the out­come of a model of your de­ci­sion al­gorithm. I’m not talk­ing about Omega, I’m talk­ing about hu­man so­cial life. We base a large por­tion of our in­ter­ac­tions with oth­ers on our an­ti­ci­pa­tions of how they might re­spond. (This isn’t of­ten done ra­tio­nally by any­one’s stan­dards, but it can be.)

3. It gets con­fus­ing (in par­tic­u­lar, Hofs­tad­te­rian) here, but a plau­si­bly bet­ter out­come might be reached in the Pri­soner’s Dilemma by self­ish non-strangers mu­tu­ally mod­el­ing the other’s likely de­ci­sion pro­cess, and rec­og­niz­ing that only C-C and D-D are sta­ble out­comes un­der mu­tual mod­el­ing.

Of course, I still feel a bit un­com­fortable with this line of rea­son­ing.
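Point 3 above can be made concrete with textbook Prisoner’s Dilemma payoffs (the numbers are the standard ones, not from this thread): if mutual modeling guarantees that two selfish, similar reasoners end up making the same choice, only the symmetric outcomes are reachable, and C-C beats D-D even though defection dominates in the ordinary analysis.

```python
# Standard PD payoffs: PAYOFF[(my_move, their_move)] -> my utility.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Without mutual modeling, defection strictly dominates:
assert all(PAYOFF[("D", t)] > PAYOFF[("C", t)] for t in "CD")

# With mutual modeling (both agents provably reach the same decision),
# only C-C and D-D are on the table, and cooperation wins:
best_symmetric = max("CD", key=lambda move: PAYOFF[(move, move)])
```

The confusing (Hofstadterian) part is exactly the step hidden in the comment above: establishing that the other player really will mirror your decision.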

• Please … New­comb is a toy non-math­ema­ti­z­able prob­lem and not a valid ar­gu­ment for any­thing at all. There must be a bet­ter ex­am­ple, or the en­tire prob­lem is in­valid.

• There must be a bet­ter ex­am­ple, or the en­tire prob­lem is in­valid.

I’ve long thought that vot­ing in gen­eral is largely iso­mor­phic to New­comb’s. If you cop out and don’t vote, then ev­ery­one like you will rea­son the same way and not vote, and your fa­vored can­di­dates/​poli­cies will fail; but if you vote then the re­verse might hap­pen; and if you then carry it one more step… If you could just de­cide to one-box/​vote then maybe ev­ery­one else like you will.

• Sorry, in vot­ing you don’t play the sin­gu­lar boss role that you play in New­comb’s prob­lem. But it’s amus­ing how far democ­racy pro­po­nents will go to con­vince them­selves that their vote mat­ters. :-)

• I haven’t worked it out rigor­ously (else you would’ve seen a post on it by now!), but it seems to me in close elec­tions (Florida 2000, say) that thought pro­cess could be valid. Con­sid­er­ing how small the mar­gins some­times are, and how much of the elec­torate doesn’t vote, it doesn’t strike me as im­plau­si­ble that there are enough peo­ple think­ing like me to make a differ­ence.

And of course we could just spec­ify as a con­di­tion that you and yours are a bloc pow­er­ful enough to af­fect the elec­tion. (Maybe you’re nu­mer­ous, maybe there’re only a few elec­tors, what­ever.)

But it’s amus­ing how far democ­racy pro­po­nents will go to con­vince them­selves that their vote mat­ters.

The prob­lem with ir­rele­vant ad hominems is that they’re very of­ten based on flimsy ev­i­dence and so of­ten wrong. I didn’t even vote last year be­cause I figured my vote didn’t mat­ter. I was not sur­prised.

• In New­comb’s prob­lem you’re the boss, e.g. you can as­sign your­self a suit­able util­ity func­tion be­fore­hand to keep the mil­lion and screw the thou­sand. Not so in vot­ing—no mat­ter what you think, other peo­ple won’t change. They don’t have any­thing con­di­tioned on the out­come of your thought pro­cess, as in New­comb’s. No, not even if “peo­ple think­ing like you” are a bloc. You still can’t in­fluence them. It’s a co­or­di­na­tion game, not New­comb’s.

Your rea­son­ing re­sem­bles the “twins fal­lacy” in the Pri­soner’s Dilemma: the idea that just by choos­ing to co­op­er­ate you can mag­i­cally force your iden­ti­cal part­ner to do the same. Come to think of it, PD sounds like a bet­ter model for vot­ing to me.

Up­date: Eliezer seems to think PD and New­comb’s are re­lated. Not sure why.

• Please … New­comb is a toy non-math­ema­ti­z­able prob­lem and not a valid ar­gu­ment for any­thing at all.

Why?

• As far as I can tell, the Newcomb problem exists only in English, and only because a completely aphysical causality loop is introduced. Every mathematization I’ve ever seen collapses it to either a trivial one-boxing problem or a trivial two-boxing problem.

If any­body wants this prob­lem to be treated se­ri­ously, maths first to show the prob­lem is real! Other­wise, we’re re­ally not much bet­ter than if we were dis­cussing quotes from the Bible.

• If you’ve seen for­mal­iza­tions, then it is for­mal­iz­able. What are the for­mal­iza­tions?

Since I think the an­swer is ob­vi­ously one-box, it doesn’t sur­prise me that there is a for­mal­iza­tion in which that an­swer is ob­vi­ous. I have never seen a for­mal­iza­tion in which the an­swer is two-box. I have seen the ar­gu­ment that “causal de­ci­sion the­ory” (?) chooses to two-box. Peo­ple jump from that to the con­clu­sion that the an­swer is two-box, but that is an idiotic con­clu­sion. Given the premise, the cor­rect con­clu­sion is that this de­ci­sion the­ory is in­ad­e­quate. Any­how, I don’t be­lieve the ar­gu­ment. I in­ter­pret it sim­ply as the de­ci­sion the­ory failing to be­lieve the state­ment of the prob­lem. There is a dis­con­nect be­tween the words and the for­mal­iza­tion of that de­ci­sion the­ory.

The is­sue is not about for­mal­iz­ing New­comb’s prob­lem; the prob­lem is cre­at­ing a for­mal de­ci­sion the­ory that can un­der­stand a class of sce­nar­ios in­clud­ing New­comb’s prob­lem. (It should be pos­si­ble to tweak the usual de­ci­sion the­ory to make it ca­pa­ble of be­liev­ing New­comb’s prob­lem, but I don’t think that would be ad­e­quate for a larger class of prob­lems.)

• “If you fail to achieve a cor­rect an­swer, it is fu­tile to protest that you acted with pro­pri­ety.”

But “achiev­ing a cor­rect an­swer” isn’t the same thing as win­ning. Thus, the phrase “ra­tio­nal­ists should win” is not a proper equiv­alence for the idea you wished to com­mu­ni­cate. Some­times act­ing with pro­pri­ety in­volves los­ing—at least in a limited, spe­cific con­text. Ar­guably, if you act with pro­pri­ety, you always win.

It’s not about win­ning or los­ing, it’s how you play the game. Ex­cept that there may not be a game, and we’re not sure what the rules are, or even that they are.

• Some­times act­ing with pro­pri­ety in­volves los­ing—at least in a limited, spe­cific con­text. Ar­guably, if you act with pro­pri­ety, you always win.

Th­ese two sen­tences seem in­con­sis­tent. Care to un­pack?

EDIT: re­placed ‘con­tra­dic­tory’ with ‘in­con­sis­tent’. Log­i­cal quib­ble.

• De­stroy­ing an Em­pire to win a war is no vic­tory. And end­ing a bat­tle to save an Em­pire is no defeat. - at­tributed to Kahless the Unforgettable

There is such a thing as a Pyrrhic vic­tory. Like­wise, some kinds of failure can be more valuable than os­ten­si­ble suc­cess.

There is always a greater per­spec­tive. From that greater per­spec­tive, what a lesser per­spec­tive judges to be a win may be a loss, and vice versa.