# behemoth

Karma: 7 (LW), 0 (AF)
• Russ, I think that if you take the example literally, the price would be 91%, not 50%, and you wouldn’t expect to make money.

Eliezer, the PS definitely clarifies matters.

Although I also think the example is actually instructive if taken literally too. In particular, if you see nine heads in a row, each additional head means you expect a higher chance of heads on the next flip. But you do not expect an increase in the price of the contract that pays \$1 if heads comes up. THAT still has an expected price change of zero, even though we expect more heads going forward.

In other words, future EVENTS can be predictable, but future PRICES cannot.

• Eliezer, your main point is correct and interesting, but the coin flip example is definitely wrong. The market’s beliefs don’t affect the bias of the coin! The map doesn’t affect the territory.

The relevant FINANCE question is ‘how much would you pay for a contract that pays \$1 if the coin comes up heads?’. This is then the classic prediction-market type of contract.

The price should indeed be ten elevenths. Of course, you don’t expect to make money buying this contract, which was exactly your point.

What WILL be true is that the expected change in the price of the contract from one period to the next will be zero. This need not mean that it goes up 50% of the time, but the expected value next period (in this case) is the current price.

The first proof of this that I know of was given by Paul Samuelson in 1965, in his paper ‘Proof that Properly Anticipated Prices Fluctuate Randomly’.
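The ten-elevenths price and the zero-expected-change claim can be checked directly. A minimal sketch (my own addition, assuming the uniform prior on the coin’s bias that gives Laplace’s rule of succession):

```python
from fractions import Fraction

def price(heads, flips):
    # Posterior P(next flip is heads) under a uniform prior on the
    # coin's bias: Laplace's rule of succession, (h + 1) / (n + 2).
    return Fraction(heads + 1, flips + 2)

p = price(9, 9)        # after nine heads in a row: 10/11
up = price(10, 10)     # price if the tenth flip is heads: 11/12
down = price(9, 10)    # price if the tenth flip is tails: 10/12

# Expected price next period, weighting by the current probability p:
expected_next = p * up + (1 - p) * down

print(p)              # 10/11
print(expected_next)  # 10/11 -- the expected price change is zero
```

So the price drifts up after each head, yet at every moment its expected next value equals its current value, which is the martingale property Samuelson proved.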

• Nature sounds a bit like a version of Rory Breaker from ‘Lock, Stock and Two Smoking Barrels’:

“If you hold back anything, I’ll kill ya. If you bend the truth or I think you’re bending the truth, I’ll kill ya. If you forget anything, I’ll kill ya. In fact, you’re gonna have to work very hard to stay alive, Nick. Now do you understand everything I’ve said? Because if you don’t, I’ll kill ya.”

• You say ‘That’s not how it works.’ But I think that IS how it works!

If progress were only ever made by people as smart as E.T. Jaynes, humanity would never have gotten anywhere. Even with fat tails, intelligence is still roughly normally distributed, and there just aren’t that many 6-sigma events. The vast majority of scientific progress is incremental, notwithstanding that it’s only the revolutionary achievements that are salient.

The real question is: do you want Friendly A.I. to be achieved? Or do you just want Friendly A.I. to be achieved by YOU? There’s no shame in the latter, but the preclusion of the latter says little about progress towards the former (which I happen to think this blog is immensely valuable towards).

• While your point about a world that hadn’t used nuclear weapons being safer today is unclear, I think your claim that ‘you wouldn’t drop the bomb’ is driven by hindsight bias. At the time, the far more pressing issue from Truman’s perspective was how to end the war with a minimum loss of US life, and the long-term consequences of the bomb were far from clear.

I also think that memorials like Hiroshima Day pervert the overall moral perspective on World War 2. Because it was a large, salient act of destruction, it gets remembered. The Burma Railway and the Rape of Nanking (brutality which didn’t even serve any strategic purpose) don’t have any memorials. It is a gross distortion when Hiroshima lets the Japanese be viewed primarily as the victims of World War 2. EVERYTHING about World War 2 was horrible, but you can’t emphasise only one bit of that horror without affecting the overall perception.

As to the question of why you didn’t want to just have a show of force, I remember Victor Davis Hanson arguing that in order to prevent conflicts from re-starting later, there is a psychological importance in the enemy realising that they are well and truly beaten. Without this, he argued, it’s possible for revisionists to re-stoke the conflict later. Hitler did exactly this when he claimed that the German army in WW1 was on the verge of victory when it was stabbed in the back by politicians at home, instead of actually being days away from total defeat. Say what you will about the bomb, but it certainly let the Japanese know that they were beaten, and Japanese militarism hasn’t resurfaced since.

In a repeated game of Prisoner’s Dilemma, Tit-for-Tat seems to be a dominant strategy. With Hiroshima, the Japanese found out that payback’s a bitch. The only injustice is to the extent that the individuals who bore the brunt of the attack weren’t personally the ones who instigated it, but this is true of every war in history. I feel for their suffering, but no more or less than for any other civilians in World War 2, or anywhere else.

• I’m quite convinced by how you analyze the problem of what morality is and how we should think about it, up until the point about how universally it applies. I’m just not sure that humans’ different shards of godshatter add up to the same thing across people, a point that I think would become apparent as soon as you started to specify what the huge computation actually WAS.

I would think of the output as being not a yes/no answer, but something akin to ‘What percentage of human beings would agree that this was a good outcome, or could be convinced of it by some set of arguments?’. Some things, like saving a child’s life, would receive very widespread agreement. Others, like a global Islamic caliphate or widespread promiscuous sex, would have more disagreement, including potentially disagreement that cannot be resolved by presenting any conceivable argument to the parties.

The question of ‘how much’ each person views something as moral comes into play as well. If different people can’t all be convinced of a particular outcome’s morality, the question ends up seeming remarkably similar to the question in economics of how to aggregate many people’s preferences for goods. Because you never observe preferences in total, you let everyone trade and express their desires through revealed preference to get a Pareto solution. Here, a solution might be to assign each person a certain amount of morality dollars, let them spend it across outcomes as they wish, and add it all up. As in economics, there’s still the question of how to allocate the initial wealth (in this case, how much to weigh the opinions of each person).

I don’t know how much I’m distorting what you meant; it almost feels like we’ve just replaced ‘morality as preference’ with ‘morality as aggregate preference’, and I don’t think that’s what you had in mind.
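The morality-dollars mechanism above can be sketched in a few lines. To be clear, this is my own illustration: the people, outcomes, budgets, and spending fractions are all hypothetical, and the equal initial budgets encode exactly the contested weighting choice mentioned above:

```python
def aggregate(budgets, allocations):
    # budgets: person -> initial morality dollars (the weighting choice).
    # allocations: person -> {outcome: fraction of budget spent on it}.
    # Returns total morality dollars spent on each outcome.
    totals = {}
    for person, budget in budgets.items():
        for outcome, fraction in allocations[person].items():
            totals[outcome] = totals.get(outcome, 0) + budget * fraction
    return totals

# Hypothetical example: two people, equal budgets, two outcomes.
budgets = {"alice": 100, "bob": 100}
allocations = {
    "alice": {"save_child": 0.9, "caliphate": 0.1},
    "bob": {"save_child": 0.6, "caliphate": 0.4},
}
totals = aggregate(budgets, allocations)
print(totals)  # {'save_child': 150.0, 'caliphate': 50.0}
```

The interesting free parameter is `budgets`: change the initial wealth and the aggregate ranking can change, which is the initial-allocation problem the comment points at.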

• I’ll be interested to see what your metamorality is. The one thing that I think has been missing so far from the discussion is the question: without some metamorality, what language do we have to condemn someone else who chooses a different morality from ours? Obviously you can’t argue morality into a rock, but we’re not trying to do that, only to argue it into another human who shares a fundamentally similar architecture, but not necessarily a morality.

Moreover, that one person can abandon a metamorality without affecting their underlying morality doesn’t imply that society as a whole can ditch a particular metamorality (e.g. Judeo-Christian worldviews) and still expect the next generation’s morality to stay unchanged. If you explicitly reject any metamorality, why should your children bother to listen to what you say anyway? Isn’t their morality just as good as yours?

It may be that a religious metamorality serves as a basis to inculcate a particular set of moral teachings, which only then allows the original metamorality to be abandoned. E.g. it causes at least some of the population to do the right thing for the wrong reasons, when they otherwise might not have done the right thing at all.