# Open Thread June 2010, Part 2

The title says it all.

• Less Wrong Rationality Quotes since April 2009, sorted by points.

Pre-alpha, one hour of work. I plan to improve it.

EDIT: Here is the source code. 80 lines of Python. It makes raw text output; links and formatting are lost. It would be quite trivial to produce nice and spiffy HTML output.
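As a rough illustration of how trivial such HTML output could be, here is a minimal sketch. The quote structure (author, points, body) is an assumption about the scraper's data model, not the actual script:

```python
import html

def quotes_to_html(quotes):
    """Render (author, points, body) tuples as a simple HTML list.

    The tuple layout is a guess at what a quote scraper might produce;
    html.escape keeps quote bodies from breaking the markup.
    """
    items = []
    for author, points, body in quotes:
        items.append(
            '<li class="quote">'
            f'<blockquote>{html.escape(body)}</blockquote>'
            f'<p class="meta">{points} points - {html.escape(author)}</p>'
            '</li>'
        )
    return '<ol class="quotes">\n' + '\n'.join(items) + '\n</ol>'

print(quotes_to_html([("Example", 14, "A sample quote & nothing more.")]))
```

The CSS styling would then live in a separate stylesheet keyed on the `quote` and `meta` classes.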

EDIT2: I can do HTML output now. It is nice and spiffy, but it has a CSS bug: after the fifth quote the layout falls apart. This is my first time with CSS, and I hope it is also the last. Could somebody help me with this? Thanks.

EDIT3: Bug resolved. I wrote another top-level comment about the final version, because my access logs suggested that the EDITs have reached only very few people. Of course, an alternative explanation is that everybody who would have been interested in the HTML version had already checked out the txt version. We will soon find out which explanation is correct.

• Not having to side-scroll would be spiffy.

• It might make more sense to put this on the wiki. Two notes: First, some of the quotes have remarks contained in the posts which you have not edited out. I don't know if you intend to keep those. Second, some of the quotes are comments from quote threads that aren't actually quotes. 14 SilasBarta is one example. (And is it just me, or does that citation form read like a citation from a religious text?)

• On the wiki, this text will be dead, because nobody will be adding new items there by hand.

• I agreed with you; I even started to write a reply to JoshuaZ about the intricacies of human-machine cooperation in text-processing pipelines. But then I realized that it is not necessarily a problem if the text is dead. A Rationality Quotes, Best of 2010 Edition could be nice.

• Agreed. Best of 2009 can be compiled now and frozen, best of 2010 at the end of the year, and so on. It'd also be useful to publish the source code of whatever script was used to generate the rating on the wiki, as a subpage.

• Very cool idea.

It would be nice if links were preserved.

• Less Wrong Rationality Quotes since April 2009, sorted by points.

This version copies the visual style and preserves the formatting of the original comments.

Here is the source code.

I already wrote a top-level comment about the original raw-text version of this, but my access logs suggested that EDITs of older comments reach only very few people. See that comment for a bit more detail.

• This is great, even more so as you made it open source. I added it to References & Resources for LessWrong.

• You should make a short top-level post about this so more people see it.

• I'd vote you up again for handing out your source code as well as the quote list, but I can't, so an encouraging reply will have to do...

• You Are Not So Smart is a great little blog that covers many of the same topics as LessWrong, but in a much more bite-sized format and with less depth. It probably won't offer much to regular/long-time LW readers, but it's a great resource to give to friends/family who don't have the time/energy demanded by LW.

• It is a good blog, and it has a slightly wider topic spread than LW, so even if you're familiar with most of the standard failures of judgment there'll be a few new things worth reading. (I found the "introducing fines can actually increase a behavior" post particularly good, as I wasn't aware of that effect.)

• Thanks, this looks like an excellent supplement for LW.

• As an old quote from DanielLC says, consequentialism is "the belief that doing the right thing makes the world a better place". I now present some finger exercises on the topic:

1. Is it okay to cheat on your spouse as long as (s)he never knows?

2. If you have already cheated and managed to conceal it perfectly, is it right to stay silent?

3. If your spouse asks you to give a solemn promise never to cheat, and you know you will cheat perfectly discreetly, is it right to give the promise to make them happy?

4. If your wife loves you, but you only stay in the marriage because of the child, is it right to assure the wife you still love her?

5. If your husband loves you, but doesn't know the child isn't his, is it right to stay silent?

6. The people from #4 and #5 are actually married to each other. They seem to be caught in an uncomfortable equilibrium of lies. Would they have been better off as deontologists?

While you're thinking about these puzzles, be extra careful not to write the bottom line in advance and shoehorn the "right" conclusion into a consequentialist frame. For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontologism.

• For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontologism.

Just picking nits. Consequentialism ≠ maximizing happiness (the latter is a special case of the former). So one could be a consequentialist and place a high value on not lying. In fact, the answers to all of your questions depend on the values one holds.

Or what Nesov said below.

• For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontologism.

I disagree. Not lying, or not being lied to, might well be a terminal value; why not? The you that lies or doesn't lie is part of the world. A person may dislike being lied to, and value a world where such lying occurs less, irrespective of whether they know of said lying. (Correspondingly, the world becomes a better place even if you eliminate some lying without anyone knowing about it, so nobody becomes happier in the sense of actually experiencing different emotions, assuming nothing else that matters changes as well.)

Of course, if you can only eliminate a specific case of lying by making the outcome even worse on net for other reasons, it shouldn't be done (and some of your examples may qualify for that).

• A person may dislike being lied to, value the world where such lying occurs less, irrespective of whether they know of said lying.

In my opinion, this is a lawyer's attempt to masquerade deontologism as consequentialism. You can, of course, reformulate the deontologist rule "never lie" as the consequentialist "I assign an extremely high disutility to situations where I lie". In the same way you can cast consequentialist preferences as the deontologist rule "in any case, do whatever maximises your utility". But doing that, the point of the distinction between the two ethical systems is lost.

• But doing that, the point of the distinction between the two ethical systems is lost.

If so, maybe we want that.

• My comment argues about the relationship between the concepts "make the world a better place" and "makes people happier". cousin_it's statement:

For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontologism.

I saw this as an argument, in contrapositive form, for this: if we take a consequentialist outlook, then "make the world a better place" should be the same as "makes people happier". However, it's against the spirit of the consequentialist outlook, in that it privileges "happy people" and disregards other aspects of value. Taking "happy people" as a value through a deontological lens would be more appropriate, but that's not what was being said.

• Let's carry this train of thought to its logical extreme. Imagine two worlds, World 1 and World 2. They are in exactly the same state at present, but their past histories differ: in World 1, person A lied to person B and then promptly forgot about it. In World 2 this didn't happen. You seem to be saying that a sufficiently savvy consequentialist will value one of those worlds higher than the other. I think this is a very extreme position for a "consequentialist" to take, and the word "deontologism" would fit it far better.

IMO, a "proper" consequentialist should care about consequences they can (in principle, someday) see, and shouldn't care about something they can never ever receive information about. If we don't make this distinction, or something similar to it, there's no theoretical difference between deontologism and consequentialism—each one can be implemented perfectly on top of the other—and this whole discussion is pointless, and likewise a good chunk of LW. Is that the position you take?

• That the consequences are distinct according to one's ontological model is distinct from a given agent being able to trace those consequences. What if the fact of the lie being present or not was encrypted using a one-way injective function, with the original forgotten but the cipher retained? In principle, you can figure out which is which (decipher), but not in practice for many years to come. Does your inability to decipher this difference change the fact of one of these worlds being better? What if you are not given a formal cipher, but the way a butterfly flaps its wings 100 years later can be traced back to the event of lying/not lying through the laws of physics? What if the same can only be said of a record in an obscure historical text from 500 years ago, so that the event of lying was actually indirectly predicted/caused far in advance, and can in principle be inferred from that evidence?

The condition for the difference to be observable in principle is much weaker than you seem to imply. And since the ability to make logical conclusions from the data doesn't seem like the sort of thing that influences the actual moral value of the world, we might as well agree that you don't need to distinguish them at all, although it doesn't make much sense to introduce the distinction in value if no potential third-party beneficiary can distinguish them either (this would be just taking a quotient of ontology on the potential observation/action equivalence classes, in other words using ontological boxing of syntactic preference).

• The condition for the difference to be observable in principle is much weaker than you seem to imply.

It might be, but whether or not it is seems to depend on, among other things, how much randomness there is in the laws of physics. And the minutiae of micro-physics also don't seem like the kind of thing that can influence the moral value of the world, assuming that the psychological states of all actors in the world are essentially indifferent to those minutiae.

Can't we resolve this problem by saying that the moral value attaches to a history of the world rather than (say) a state of the world, or the deductive closure of the information available to an agent? Then we can be consistent with the letter if not the spirit of consequentialism by stipulating that a world history containing a forgotten lie gets lower value than an otherwise macroscopically identical world history not containing it. (Is this already your view, in fact?)

Now to consider cousin_it's idea that a "proper" consequentialist only cares about consequences that can be seen:

Even if all information about the lie is rapidly obliterated, and cannot be recovered later, it's still true that the lie and its immediate consequences are seen by the person telling it, so we might regard this as being 'sufficient' for a proper consequentialist to care about it. But if we don't, and all that matters is the indefinite future, then don't we face the problem that "in the long term we're all dead"? OK, perhaps some of us think that rule will eventually cease to apply, but for argument's sake, if we knew with certainty that all life would be extinguished, say, 1000 years from now (and that all traces of whether people lived well or badly would subsequently be obliterated), we'd want our ethical theory to be more robust than to say "Do whatever you like—nothing matters any more."

• This is correct, and I was wrong. But your last sentence sounds weird. You seem to be saying that it's not okay for me to lie even if I can't get caught, because then I'd be the "third-party beneficiary", but somehow it's okay to lie and then erase my memory of lying. Is that right?

• You seem to be saying that it's not okay for me to lie even if I can't get caught, because then I'd be the "third-party beneficiary"

Right. "Third-party beneficiary" can be seen as a generalized action, where the action is to produce an agent, or cause a behavior of an existing agent, that works towards optimizing your value.

but it's somehow okay to lie and then erase my memory of lying. Is that right?

It's not okay, in the sense that if you introduce the concept of you-that-decided-to-lie, existing in the past but not in the present, then you also have to morally color this ontological distinction, and the natural way to do that would be to label the lying option worse. The you-that-decided is the third-party "beneficiary" in that case, the one that distinguishes the states of the world containing lying and not-lying.

But it probably doesn't make sense for you to have that concept in your ontology if the states of the world that contain you-lying can't in principle (in the strong sense described in the previous comment) be distinguished from the ones that don't. You can even introduce ontological models for this case that, say, mark past-you-lying as better than past-you-not-lying and lead to exactly the same decisions, but that would be a non-standard model ;-)

• I suggest that eliminating lying would only be an improvement if people have reasonable expectations of each other.

• Less directly, a person may value a world where beliefs are more accurate—in such a world, both lying and bullshit would be negatives.

• I can't believe you took the exact cop-out I warned you against. Use more imagination next time! Here, let me make the problem a little harder for you: restrict your attention to consequentialists whose terminal values have to be observable.

• I can't believe you took the exact cop-out I warned you against.

Not surprisingly, as I was arguing with that warning, and cited it in the comment.

restrict your attention to consequentialists whose terminal values have to be observable.

What does this mean? Consequentialist values are about the world, not about observations (but your words don't seem to fit a disagreement with this position, hence the 'what does this mean?'). The consequentialist notion of values allows a third party to act for your benefit, in which case you don't need to know what the third party needs to know in order to implement those values. The third party knows you could be lied to or not, and tries to make it so that you are not lied to, but you don't need to know about these options in order to benefit.

• Is it okay to cheat on your spouse as long as (s)he never knows?

Is this actually possible? Imagine that 10% of people cheat on their spouses when faced with a situation 'similar' to yours. Then the spouses can 'put themselves in your place' and think: "Gee, there's about a 10% chance that I'd now be cheating myself. I wonder if this means my husband/wife is cheating on me?"

So if you are inclined to cheat, then spouses are inclined to be suspicious. Even if the suspicion doesn't correlate with the cheating, the net effect is to drive utility down.

I think similar reasoning can be applied to the other cases.

(Of course, this is a very "UDT-style" way of thinking—but then UDT does remind me of Kant's categorical imperative, and of course Kant is the arch-deontologist.)
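The argument above can be made concrete with a toy expected-utility calculation. Apart from the 10% base rate, every number and name here is invented for the sketch and carries no authority:

```python
# Toy model: if a fraction p_cheat of people in your situation cheat,
# spouses reasonably assign probability ~p_cheat to being cheated on,
# and that suspicion itself costs utility in every marriage, whether
# or not any particular spouse is actually cheating.
def expected_utility(p_cheat, cheat_gain=1.0, suspicion_cost=3.0):
    # Expected gain to cheaters minus the suspicion cost everyone pays.
    return p_cheat * cheat_gain - p_cheat * suspicion_cost

# Whenever suspicion_cost > cheat_gain, a world with more cheating has
# lower net utility, even if no individual cheater is ever caught.
print(expected_utility(0.10))  # negative under these assumed weights
print(expected_utility(0.0))
```

Under these (assumed) weights, the 10%-cheating world comes out strictly worse than the no-cheating world, which is the "net effect is to drive utility down" claim.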

• Your reasoning goes above and beyond UDT: it says you must always cooperate in the Prisoner's Dilemma to avoid "driving net utility down". I'm pretty sure you made a mistake somewhere.

• Your reasoning goes above and beyond UDT

Two things to say:

1. We're talking about ethics rather than decision theory. If you want to apply the latter to the former, then it makes perfect sense to take the attitude that "One util has the same ethical value, whoever that util belongs to. Therefore, we're going to try to maximize 'total utility' (whatever sense one can make of that concept)."

2. I think UDT does (or may, depending on how you set it up) cooperate in a one-shot Prisoner's Dilemma. (However, if you imagine a different game, "The Torture Game", where you're a sadist who gets 1 util for torturing while inflicting −100 utils, then of course UDT cannot prevent you from torturing. So I'm certainly not arguing that UDT, exactly as it is, constitutes an ethical panacea.)

• Another random thought:

The connection between "The Torture Game" and the Prisoner's Dilemma is actually very close: the Prisoner's Dilemma is just A and B simultaneously playing the Torture Game, with A as torturer and B as victim and vice versa, neither able to communicate to the other whether they've chosen to torture until both have committed themselves one way or the other.

I've observed that UDT happily commits torture when playing The Torture Game, and (imo) being able to cooperate in a one-shot Prisoner's Dilemma should be seen as one of the ambitions of UDT (whether or not it is ultimately successful).

So what about this then: two instances of The Torture Game, but rather than A and B moving simultaneously, first A chooses whether to torture and then B chooses. From B's perspective, this is almost the same as Parfit's Hitchhiker. The problem looks interesting from A's perspective too, but it's not one of the Standard Newcomblike Problems that I discuss in my UDT post.

I think, just as UDT aspires to cooperate in a one-shot PD, i.e. not to torture in the Simultaneous Torture Game, so UDT aspires not to torture in the Sequential Torture Game.

1. If we're talking about ethics, please note that telling the truth in my puzzles doesn't maximize total utility either.

2. UDT doesn't cooperate in the PD unless you see the other guy's source code and have a mathematical proof that it will output the same value as yours.

• A random thought, which once stated sounds obvious, but I feel like writing it down all the same:

One-shot PD = two parallel "Newcomb games" with flawless predictors, where the players swap boxes immediately prior to opening.

• Doesn't make sense to me. Two flawless predictors that condition on each other's actions can't exist. Alice does whatever Bob will do, Bob does the opposite of what Alice will do, whoops, contradiction. Or maybe I'm reading you wrong?

• Sorry—I guess I wasn't clear enough. I meant that there are two human players and two (possibly non-human) flawless predictors.

So in other words, it's almost like there are two totally independent instances of Newcomb's game, except that the predictor from game A fills the boxes in game B and vice versa.

• Yes, you can consider a two-player game as a one-player game with the second player an opaque part of the environment. In two-player games, ambient control is more apparent than in one-player games, but it's also essential in the Newcomb problem, which is why you make the analogy.

• This needs to be spelled out more. Do you mean that if A takes both boxes, B gets $1,000, and if A takes one box, B gets $1,000,000? Why is this a dilemma at all? What you do has no effect on the money you get.

• I don't know how to format a table, but here is what I want the game to be:

| A-action | B-action | A-winnings | B-winnings |
|----------|----------|------------|------------|
| 2-box    | 2-box    | $1         | $1         |
| 2-box    | 1-box    | $1001      | $0         |
| 1-box    | 2-box    | $0         | $1001      |
| 1-box    | 1-box    | $1000      | $1000      |

Now compare this with Newcomb's game:

| A-action | Prediction | A-winnings |
|----------|------------|------------|
| 2-box    | 2-box      | $1         |
| 2-box    | 1-box      | $1001      |
| 1-box    | 2-box      | $0         |
| 1-box    | 1-box      | $1000      |

Now, if the "Prediction" in the second table is actually a flawless prediction of a different player's action, then we obtain the first three columns of the first table.

Hopefully the rest is clear, and please forgive the triviality of this observation.
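The observation can be checked mechanically. This is a sketch with function names of our own invention, not anything from the thread; it verifies that two Newcomb games whose predictors each predict the *other* player reproduce the Prisoner's Dilemma table, with "2-box" playing the role of defection:

```python
def newcomb_payoff(action, prediction):
    """Winnings for one Newcomb game (same units as the tables above)."""
    table = {
        ("2-box", "2-box"): 1,
        ("2-box", "1-box"): 1001,
        ("1-box", "2-box"): 0,
        ("1-box", "1-box"): 1000,
    }
    return table[(action, prediction)]

def pd_payoffs(a_action, b_action):
    """Payoffs when each flawless predictor predicts the other player.

    A's boxes are filled according to a flawless prediction of B's
    choice, and vice versa, so each "prediction" is the other's action.
    """
    return (newcomb_payoff(a_action, b_action),
            newcomb_payoff(b_action, a_action))

# Matches the two-player table row by row:
assert pd_payoffs("2-box", "2-box") == (1, 1)
assert pd_payoffs("2-box", "1-box") == (1001, 0)
assert pd_payoffs("1-box", "2-box") == (0, 1001)
assert pd_payoffs("1-box", "1-box") == (1000, 1000)
```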

1. But that's exactly what I'm disputing. At this point, in a human dialogue I would "re-iterate", but there's no need because my argument is back there for you to re-read if you like.

2. Yes, and how easy it is to arrive at such a proof may vary depending on circumstances. But in any case, recall that I merely said "UDT-style".

• UDT doesn't cooperate in the PD unless you see the other guy's source code and have a mathematical proof that it will output the same value as yours.

UDT doesn't specify how exactly to deal with logical/observational uncertainty, but in principle it does deal with them. It doesn't follow that if you don't know how to analyze the problem, you should therefore defect. Human-level arguments operate on the level of simple approximate models allowing for uncertainty in how they relate to the real thing; decision theories should apply to analyzing these models in isolation from the real thing.

• It doesn't follow that if you don't know how to analyze the problem, you should therefore defect.

This is intriguing, but sounds wrong to me. If you cooperate in a situation of complete uncertainty, you're exploitable.

• What's "complete uncertainty"? How exploitable you are depends on who tries to exploit you. The opponent is also uncertain. If the opponent is Omega, you probably should be absolutely certain, because it'll find the single exact set of circumstances that makes you lose. But if the opponent is also fallible, you can count on the outcome not being the worst-case scenario, and therefore not being able to estimate the value of that worst-case scenario is not fatal. An almost formal analogy is the analysis of algorithms in the worst case and average case: worst-case analysis applies to the optimal opponent, average-case analysis to a random opponent, and in real life you should target something in between.

• The "always defect" strategy is part of a Nash equilibrium. The quining cooperator is part of a Nash equilibrium. IMO that's one of the minimum requirements that a good strategy must meet. But a strategy that cooperates whenever its "mathematical intuition module" comes up blank can't be part of any Nash equilibrium.

• "Nash equilibrium" is far from being a generally convincing argument. The mathematical intuition module doesn't come up blank: it gives probabilities of different outcomes, given the present observational and logical uncertainty. When you have probabilities of the other player acting each way depending on how you act, the problem is pretty straightforward (assuming expected utility etc.), and "Nash equilibrium" is no longer a relevant concern. It's when you don't have a mathematical intuition module, and don't have probabilities of the other player's actions conditional on your actions, that you need to invent ad-hoc game-theoretic rituals of cognition.

• As an old quote from DanielLC says, consequentialism is "the belief that doing the right thing makes the world a better place". I now present some finger exercises on the topic:

It seems like it would be more aptly defined as "the belief that making the world a better place constitutes doing the right thing". Non-consequentialists can certainly believe that doing the right thing makes the world a better place, especially if they don't care whether it does.

• It is a common failure of moral analysis (invented by deontologists, undoubtedly) to assume an idealized moral situation. Proper consequentialism deals with the real world, not this fantasy.

• #1/#2/#3 - "never knows" fails far too often, so you need to include a very large chance of failure in your analysis.

• #4 - it's pretty safe to make stuff like that up.

• #5 - in the past, undoubtedly yes; in the future this will be nearly certain to leak, with everyone undergoing routine genetic testing for medical purposes, so no. (The future is relevant because the situation will last decades.)

• #6 - consequentialism assumes probabilistic analysis (% chance that the child is not yours, % chance that the husband is making stuff up), and you weight the costs and benefits of different situations proportionally to their likelihood. Here they are in an unlikely situation that consequentialism doesn't weight highly. They might be better off with some other value system, but only at the cost of being worse off in more likely situations.

• #4 - it's pretty safe to make stuff like that up

You seem to make the error here that you rightly criticize. Your feelings have involuntary, detectable consequences; lying about them can have a real personal cost.

• It is my estimate that this leakage is very low compared to the other examples. I'm not claiming it doesn't exist, and for some people it might conceivably be much higher.

• A quick Internet search turns up very little causal data on the relationship between cheating and happiness, so for purposes of this analysis I will employ the following assumptions:

a. Successful secret cheating has a small eudaemonic benefit for the cheater.
b. Successful secret lying in a relationship has a small eudaemonic cost for the liar.
c. Marital and familial relationships have moderate eudaemonic benefits for both parties.
d. Undermining revelations in a relationship have a moderate (specifically, severe in intensity but transient in duration) eudaemonic cost for all parties involved.
e. Relationships transmit a fraction of eudaemonic effects between partners.

Under these assumptions, the naive consequentialist solution* is as follows:

1. Cheating is a risky activity, and should be avoided if eudaemonic supplies are short.

2. This answer depends on precise relationships between eudaemonic values that are not well established at this time.

3. Given the conditions, lying seems appropriate.

4. Yes.

5. Yes.

6. The husband may be better off. The wife more likely would not be. The child would certainly not be.

Are there any evident flaws in my analysis on the level it was performed?

* The naive consequentialist solution only accounts for the direct effects of the actions of a single individual in a single situation, rather than the general effects of widespread adoption of a strategy in many situations—like other spherical cows, this causes a lot of problematic answers, like two-boxing.
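As a rough illustration of how assumptions (b), (d), and (e) interact in question 2, here is a toy accounting sketch. The numeric weights (small = 1, moderate = 5, transmitted fraction = 0.5) are invented for the sketch and are exactly the "not well established" quantities the answer hedges on:

```python
# Toy eudaemonic accounting for "confess vs. stay silent" (question 2).
# All magnitudes are illustrative assumptions, not data.
SMALL, MODERATE, TRANSMIT = 1.0, 5.0, 0.5

def stay_silent():
    # Assumption (b): secret lying costs the liar a little, and by
    # assumption (e) a fraction of that cost leaks to the partner.
    liar = -SMALL
    partner = TRANSMIT * liar
    return liar + partner

def confess():
    # Assumption (d): the undermining revelation costs both parties
    # a moderate amount.
    return -MODERATE + -MODERATE

# Under these particular weights, staying silent comes out ahead:
# stay_silent() = -1.5, confess() = -10.0
```

Different (equally defensible) weights flip the comparison, which is why the answer to #2 depends on relationships between eudaemonic values rather than following from the assumptions alone.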

• Ouch. In #5 I intended that the wife would lie to avoid breaking her husband's heart, not for some material benefit. So if she knew the husband didn't love her, she'd tell the truth. The fact that you automatically parsed the situation differently is... disturbing, but quite sensible by consequentialist lights, I suppose :-)

I don't understand your answer to #2. If lying incurs a small cost on you and a fraction of it on your partner, and confessing incurs a moderate cost on both, why are you uncertain?

No other visible flaws. Nice to see you bite the bullet in #3.

ETA: double ouch! In #1 you imply that happier couples should cheat more! Great stuff, I can't wait till other people reply to the questionnaire.

• The husband does benefit, by her lights. The chief reason it comes out in the husband's favor in #6 is that the husband doesn't value the marital relationship and (I assumed) wouldn't value the child relationship.

You're right—in #2, telling the truth carries the risk of ending the relationship. I was considering the benefit of having a relationship with less lying (which is a benefit for both parties), but it's a gamble, and probably one which favors lying.

On eudaemonic grounds, it was an easy bullet to bite—particularly since I had read Have His Carcase by Dorothy Sayers, which suggested an example of such a relationship.

Incidentally, I don't accept most of this analysis, despite being a consequentialist—as I said, it is the "naive consequentialist solution", and several answers would be likely to change if (a) the questions were considered on the level of widespread strategies and (b) effects other than eudaemonic were included.

Edit: Note that "happier couples" does not imply "happier coupling"—the risk to the relationship would increase with the increased happiness from the relationship. This analysis of #1 implies instead that couples with stronger but independent social circles should cheat more (last paragraph).

• and several answers would be likely to change if (a) the questions were considered on the level of widespread strategies and (b) effects other than eudaemonic were included

This is an interesting line of retreat! What answers would you change if most people around you were also consequentialists, and what other effects would you include apart from eudaemonic ones?

• It's okay to deceive people if they're not actually harmed and you're sure they'll never find out. In practice, it's often too risky.

1-3: This is all okay, but nevertheless, I wouldn't do these things. The reason is that for me, a necessary ingredient of being happily married is an alief that my spouse is honest with me. It would be impossible for me to maintain this alief if I lied.

4-5: The child's welfare is more important than my happiness, so even I would lie if it was likely to benefit the child.

6: Let's assume the least convenient possible world, where everyone is better off if they tell the truth. Then in this particular case, they would be better off as deontologists. But they have no way of knowing this. This is no more problematic for consequentialism than a version of the Trolley Problem in which the fat man is secretly a skinny man in disguise and pushing him will lead to more people dying.

• 1-3: It seems you're using an irrational rule for updating your beliefs about your spouse. If we fixed this minor shortcoming, would you lie?

6: Why not problematic? Unlike your Trolley Problem example, in my example the lie is caused by consequentialism in the first place. It's more similar to the Prisoner's Dilemma, if you ask me.

• 1-3: It's an alief, not a belief, because I know that lying to my spouse doesn't really make my spouse more likely to lie to me. But yes, I suppose I would be a happier person if I were capable of maintaining that alief (and repressing my guilt) while having an affair. I wonder if I would want to take a pill that would do that. Interesting. Anyway, if I did take that pill, then yes, I would cheat and lie.

• Thanks for the link. I think Alicorn would call it an "unofficial" or "non-endorsed" belief.

Let's put another twist on it. What would you recommend someone else to do in the situations presented in the questionnaire? Would you prod them away from aliefs and toward rationality? :-)

• Thanks for the link. I think Ali­corn would call it an “un­offi­cial” or “non-en­dorsed” be­lief.

Alicorn seems to think the concepts are distinct, but I don’t know what the distinction is, and I haven’t read any philosophical paper that defines alief. :)

Let’s put an­other twist on it. What would you recom­mend some­one else to do in the situ­a­tions pre­sented in the ques­tion­naire? Would you prod them away from aliefs and to­ward ra­tio­nal­ity? :-)

All right: If my friend told me they’d had an af­fair, and they wanted to keep it a se­cret from their spouse for­ever, and they had the abil­ity to do so, then I would give them a pill that would al­low them to live a happy life with­out con­fid­ing in their spouse — pro­vided the pill does not have ex­tra nega­tive con­se­quences.

Caveats: In real life, there’s always some chance that the spouse will find out. Also, it’s not acceptable for my friend to change their mind and tell their spouse years after the fact; that would harm the spouse. Also, the pill does not exist in reality, and I don’t know how difficult it is to talk someone out of their aliefs and guilt. And while I’m making people’s emotions more rational, I might as well address the third horn, which is to instill in the couple an appreciation of polyamory and open relationships.

The third horn for cases 4-6 is to remove the husband’s biological chauvinism. Whether the child is biologically related to him shouldn’t matter.

• The third horn for cases 4-6 is to remove the husband’s biological chauvinism. Whether the child is biologically related to him shouldn’t matter.

Why on earth should this not mat­ter? It’s very im­por­tant to most peo­ple. And in those sce­nar­ios, there are the ad­di­tional is­sues that she lied to him about the re­la­tion­ship and the kid and cheated on him. It’s not solely about parentage: for in­stance, many peo­ple are ok with adopt­ing, but not as many are ok with rais­ing a kid that was the re­sult of cheat­ing.

• I be­lieve that, given time, I could con­vince a ra­tio­nal father that what­ever love or re­spon­si­bil­ity he owes his child should not de­pend on where that child ac­tu­ally came from. Feel free to be skep­ti­cal un­til I’ve tried it.

• Nisan:

Feel free to be skep­ti­cal un­til I’ve tried it.

Trou­ble is, this is not just a philo­soph­i­cal mat­ter, or a mat­ter of per­sonal prefer­ence, but also an im­por­tant le­gal ques­tion. Rather than con­vinc­ing cuck­olded men that they should ac­cept their hu­mil­i­at­ing lot meekly—it­self a du­bi­ous achieve­ment, even if it were pos­si­ble—your ar­gu­ments are likely to be more effec­tive in con­vinc­ing courts and leg­is­la­tors to force cuck­olded men to sup­port their de­ceit­ful wives and the offspring of their in­dis­cre­tions, whether they want it or not. (Just google for the rele­vant key­words to find re­ports of nu­mer­ous such rul­ings in var­i­ous ju­ris­dic­tions.)

Of course, this doesn’t mean that your ar­gu­ments shouldn’t be stated clearly and dis­cussed openly, but when you in­sult­ingly re­fer to op­pos­ing views as “chau­vinism,” you en­gage in ag­gres­sive, war­like lan­guage against men who end up com­pletely screwed over in such cases. To say the least, this is not ap­pro­pri­ate in a ra­tio­nal dis­cus­sion.

• Be wary of con­fus­ing “ra­tio­nal” with “emo­tion­less.” Be­cause so much of our en­ergy as ra­tio­nal­ists is de­voted to silenc­ing un­helpful emo­tions, it’s easy to for­get that some of our emo­tions cor­re­spond to the very states of the world that we are cul­ti­vat­ing our ra­tio­nal­ity in or­der to bring about. Th­ese emo­tions should not be smushed. See, e.g., Feel­ing Ra­tional.

Of course, you might have a the­ory of father­hood that says you love your kid be­cause the kid has been as­signed to you, or be­cause the kid is needy, or be­cause you’ve made an un­con­di­tional com­mit­ment to care for the sucker—but none of those the­o­ries seem to de­scribe my re­al­ity par­tic­u­larly well.

*The kid has been assigned to me*

Well, no, he hasn’t, ac­tu­ally; that’s sort of the point. There was an effort by so­ciety to as­sign me the kid, but the effort failed be­cause the kid didn’t ac­tu­ally have the traits that so­ciety used to as­sign her to me.

*The kid is needy*

Well, sure, but so are billions of oth­ers. Why should I care ex­tra about this one?

*I’ve made an unconditional commitment*

Such com­mit­ments are sweet, but prob­a­bly ir­ra­tional. Be­cause I don’t want to spend 18 years rais­ing a kid that isn’t mine, I wouldn’t pre­com­mit to rais­ing a kid re­gard­less of whether she’s mine or some­one else’s. At the very least, the level of com­mit­ment of my par­ent­ing would vary de­pend­ing on whether (a) the kid was the child of me and an hon­est lover, or (b) the kid was the child of my non­con­sen­sual cuck­older and my dishon­est lover.

• you need more time to con­vince me

You’re wel­come to write all the words you like and I’ll read them, but if you mean “more time” liter­ally, then you can’t have it! If I spend enough time rais­ing a kid, in some mean­ingful sense the kid will be­come prop­erly mine. Be­cause the kid will still not be mine in other, equally mean­ingful senses, I don’t want that to hap­pen, and so I won’t give you the time to ‘con­vince’ me. What would re­ally con­vince me in such a situ­a­tion isn’t your ar­gu­ments, how­ever per­sis­tently ap­plied, but the way that the pas­sage of time changed the situ­a­tion which you were try­ing to jus­tify to me.

• Okay, here is where my the­ory of father­hood is com­ing from:

You are not your genes. Your child is not your genes. Be­fore peo­ple knew about genes, men knew that it was very im­por­tant for them to get their se­men into women, and that the re­sult­ing chil­dren were spe­cial. If a man’s se­men didn’t work, or if his wife was im­preg­nated by some­one else’s se­men, the man would be hu­mil­i­ated. Th­ese are the val­ues of an alien god, and we’re al­lowed to re­ject them.

Con­sider a more hu­man­is­tic con­cep­tion of per­sonal iden­tity: Your child is an in­di­vi­d­ual, not a pos­ses­sion, and not merely a product of the cir­cum­stances of their con­cep­tion. If you find out they came from an adulter­ous af­fair, that doesn’t change the fact that they are an in­di­vi­d­ual who has a spe­cial per­sonal re­la­tion­ship with you.

Con­sider a more tran­shu­man­is­tic con­cep­tion of per­sonal iden­tity: Your child is a mind whose qual­ities are in­fluenced by ge­net­ics in a way that is not well-un­der­stood, but whose in­for­ma­tional con­tent is much more than their genome. Creat­ing this child in­volved se­men at some point, be­cause that’s the only way of hav­ing chil­dren available to you right now. If it turns out that the mother covertly used some­one else’s se­men, that rev­e­la­tion has no effect on the child’s iden­tity.

Th­ese are not moral ar­gu­ments. I’m de­scribing a wor­ld­view that will still make sense when par­ents start giv­ing their chil­dren genes they them­selves do not have, when moth­ers can elect to have chil­dren with­out the in­con­ve­nience of be­ing preg­nant, when chil­dren are not biolog­i­cal crea­tures at all. Filial love should flour­ish in this world.

Now for the moral ar­gu­ments: It is not good to bring new life into this world if it is go­ing to be mis­er­able. There­fore one shouldn’t have a child un­less one is will­ing and able to care for it. This is a moral anti-re­al­ist ac­count of what is com­monly thought of as a (le­gi­t­i­mate) father’s “re­spon­si­bil­ity” for his child.

It is also not good to cause an ex­ist­ing per­son to be­come mis­er­able. If a child rec­og­nizes you as their father, and you re­nounce the child, that child will be­come mis­er­able. On the other hand, car­ing for the child might make you mis­er­able. But in most cases, it seems to me that be­ing di­s­owned by the man you call “father” is worse than rais­ing a child for 13 or 18 years. There­fore, if you have a child who rec­og­nizes you as their father, you should con­tinue to play the role of father, even if you learn some­thing sur­pris­ing about where they came from.

Now if you fid­dle with the pa­ram­e­ters enough, you’ll break the con­se­quen­tial­ist ar­gu­ment: If the child is a week old when you learn they’re not re­lated to you, it’s prob­a­bly not too late to break the filial bond and di­s­own them. If you de­cide that you’re not ca­pa­ble of be­ing an ad­e­quate father for what­ever rea­son, it’s prob­a­bly in the child’s best in­ter­est for you to give it away. And so on.

• Th­ese are the val­ues of an alien god, and we’re al­lowed to re­ject them.

Yes, we are—but we’re not re­quired to! Re­v­ersed Stu­pidity is not in­tel­li­gence. The fact that an alien god cared a lot about trans­fer­ring se­men is nei­ther ev­i­dence for nor ev­i­dence against the moral propo­si­tion that we should care about ge­netic in­her­i­tance. If, upon ra­tio­nal re­flec­tion, we freely de­cide that we would like chil­dren who share our genes—not be­cause of an in­stinct to rut and to pun­ish adulter­ers, but be­cause we know what genes are and we think it’d be pretty cool if our kids had some of ours—then that makes ge­netic in­her­i­tance a hu­man value, and not just a value of evolu­tion. The fact that evolu­tion val­ued ge­netic trans­fer doesn’t mean hu­mans aren’t al­lowed to value ge­netic trans­fer.

I’m de­scribing a wor­ld­view that will still make sense when par­ents start giv­ing their chil­dren genes they them­selves do not have

I agree with you that in the fu­ture there will be more choices about gene-de­sign, but the choice “cre­ate a child us­ing a biolog­i­cally-de­ter­mined mix of my genes and my lover’s genes” is just a spe­cial case of the choice “cre­ate a child us­ing genes that con­form to my prefer­ences.” Either way, there is still the is­sue of choice. If part of what bonds me to my child is that I feel I have had some say in what genes the child will have, and then I sud­denly find out that my wishes about gene-de­sign were not hon­ored, it would be le­gi­t­i­mate for me to feel cor­re­spond­ingly less at­tached to my kid.

It is not good to bring new life into this world if it is go­ing to be mis­er­able. There­fore one shouldn’t have a child un­less one is will­ing and able to care for it.

I didn’t, on this ac­count. As I un­der­stand the dilemma, (1) I told my wife some­thing like “I en­courage you to be­come preg­nant with our child, on the con­di­tion that it will have ge­netic ma­te­rial from both of us,” and (2) I at­tempted to get my wife preg­nant with our child but failed. Nei­ther ac­tivity counts as “bring­ing new life into this world.” The en­courage­ment doesn’t count as caus­ing the cre­ation of life, be­cause the con­di­tion wasn’t met. Like­wise, the at­tempt doesn’t count as caus­ing the cre­ation of life, be­cause the at­tempt failed. In failing to achieve my prefer­ences, I also fail to achieve re­spon­si­bil­ity for the child’s cre­ation. It’s not just that I’m re­ally an­noyed at not get­ting what I want and so now I’m go­ing to sulk—I re­ally, truly haven’t com­mit­ted any of the acts that would lead to moral re­spon­si­bil­ity for an­other’s well-be­ing.

This is a moral anti-re­al­ist ac­count of what is com­monly thought of as a (le­gi­t­i­mate) father’s “re­spon­si­bil­ity” for his child.

Again, re­versed stu­pidity is not in­tel­li­gence. Just be­cause my “in­tu­ition” screams at me to say that I should want chil­dren who share my genes doesn’t mean that I can’t ra­tio­nally de­cide that I value gene-shar­ing. Go­ing a step fur­ther, just be­cause peo­ple’s in­tu­itions may not point di­rectly at some deeper moral truth doesn’t mean that there is no moral truth, still less that the one and only moral truth is con­se­quen­tial­ism.

Now if you fid­dle with the pa­ram­e­ters enough, you’ll break the con­se­quen­tial­ist ar­gu­ment:

Look, I already con­ceded that given enough time, I would be­come at­tached even to a kid that didn’t share my genes. My point is just that that would be un­pleas­ant, and I pre­fer to avoid that out­come. I’m not try­ing to choose a con­ve­nient ex­am­ple, I’m try­ing to ex­plain why I think ge­netic in­her­i­tance mat­ters. I’m not claiming that ge­netic in­her­i­tance is the only thing that mat­ters. You, by con­trast, do seem to be claiming that ge­netic in­her­i­tance can never mat­ter, and so you re­ally need to deal with the counter-ar­gu­ments at your ar­gu­ment’s weak­est point—a time very near birth.

• I agree with most of that. There is noth­ing ir­ra­tional about want­ing to pass on your genes, or valu­ing the welfare of peo­ple whose genes you par­tially chose. There is noth­ing ir­ra­tional about not want­ing that stuff, ei­ther.

just be­cause peo­ple’s in­tu­itions may not point di­rectly at some deeper moral truth doesn’t mean that there is no moral truth, still less that the one and only moral truth is con­se­quen­tial­ism.

I want to use the lan­guage of moral anti-re­al­ism so that it’s clear that I can jus­tify my val­ues with­out say­ing that yours are wrong. I’ve already ex­plained why my val­ues make sense to me. Do they make sense to you?

I think we both agree that a per­sonal father-child re­la­tion­ship is a suffi­cient ba­sis for filial love. I also think that for you, hav­ing a say in a child’s genome is also enough to make you feel filial love. It is not so for me.

Out of cu­ri­os­ity: Sup­pose you marry some­one and want to wait a few years be­fore hav­ing a baby; and then your spouse covertly ac­quires a copy of your genome, re­com­bines it with their own, and makes a baby. Would that child be yours?

Sup­pose you and your spouse agree on a genome for your child, and then your spouse covertly makes a few ad­just­ments. Would you have less filial love for that child?

Sup­pose a ran­dom per­son finds a file named “MyIdealChild’sGenome.dna” on your com­puter and uses it to make a child. Would that child be yours?

Sup­pose you have a baby the old-fash­ioned way, but it turns out you’d been pre­vi­ously in­fected with a ge­net­i­cally-en­g­ineered virus that re­placed the DNA in your germ line cells, so that your child doesn’t ac­tu­ally have any of your DNA. Would that child be yours?

In these cases, my feel­ings for the child would not de­pend on the child’s genome, and I am okay with that. I’m guess­ing your feel­ings work differ­ently.

As for the moral ar­gu­ments: In case it wasn’t clear, I’m not ar­gu­ing that you need to keep a week-old baby that isn’t ge­net­i­cally re­lated to you. In­deed, when you have a baby, you are mak­ing a tacit com­mit­ment of the form “I will care for this child, con­di­tional on the child be­ing my biolog­i­cal progeny.” You think it’s okay to re­ject an ille­gi­t­i­mate baby, be­cause it’s not “yours”; I think it’s okay to re­ject it, be­cause it’s not cov­ered by your pre­com­mit­ment.

We also agree that it’s not okay to re­ject a three-year-old ille­gi­t­i­mate child — you, be­cause you’d be “at­tached” to them; and me, be­cause we’ve formed a per­sonal bond that makes the child emo­tion­ally de­pen­dent on me.

Edit: for­mat­ting.

• I want to use the lan­guage of moral anti-re­al­ism so that it’s clear that I can jus­tify my val­ues with­out say­ing that yours are wrong.

That’s thought­ful, but, from my point of view, un­nec­es­sary. I am an on­tolog­i­cal moral re­al­ist but an episte­molog­i­cal moral skep­tic; just be­cause there is such a thing as “the right thing to do” doesn’t mean that you or I can know with cer­tainty what that thing is. I can hear your jus­tifi­ca­tions for your point of view with­out feel­ing threat­ened; I only want to be­lieve that X is good if X is ac­tu­ally good.

I’ve already ex­plained why my val­ues make sense to me. Do they make sense to you?

Sorry, I must have missed your ex­pla­na­tion of why they make sense. I heard you ar­gu­ing against cer­tain tra­di­tional con­cep­tions of in­her­i­tance, but didn’t hear you ac­tu­ally ad­vance any pos­i­tive jus­tifi­ca­tions for a near-zero moral value on ge­netic close­ness. If you’d like to do so now, I’d be glad to hear them. Feel free to just copy and paste if you think you already gave good rea­sons.

Would that child be yours?

In one im­por­tant sense, but not in oth­ers. My value for filial close­ness is scalar, at best. It cer­tainly isn’t bi­nary.

In these cases, my feel­ings for the child would not de­pend on the child’s genome, and I am okay with that.

I mean, that’s fine. I don’t think you’re morally or psy­chi­a­tri­cally re­quired to let your feel­ings vary based on the child’s genome. I do think it’s strange, and so I’m cu­ri­ous to hear your ex­pla­na­tion for this in­var­i­ance, if any.

I’m not ar­gu­ing that you need to keep a week-old baby that isn’t ge­net­i­cally re­lated to you.

Oh, OK, good. That wasn’t clear ini­tially.

• Ah cool, as I am a moral anti-re­al­ist and you are an episte­molog­i­cal moral skep­tic, we’re both in­ter­ested in think­ing care­fully about what kinds of moral ar­gu­ments are con­vinc­ing. Since we’re talk­ing about ter­mi­nal moral val­ues at this point, the “ar­gu­ments” I would em­ploy would be of the form “this value is con­sis­tent with these other val­ues, and leads to these sort of de­sir­able out­comes, so it should be easy to imag­ine a hu­man hold­ing these val­ues, even if you don’t hold them.”

I [...] didn’t hear you ac­tu­ally ad­vance any pos­i­tive jus­tifi­ca­tions for a near-zero moral value on ge­netic close­ness. If you’d like to do so now, I’d be glad to hear them.

Well, I don’t ex­pect any­one to have pos­i­tive jus­tifi­ca­tions for not valu­ing some­thing, but there is this:

Con­sider a more hu­man­is­tic con­cep­tion of per­sonal iden­tity: Your child is an in­di­vi­d­ual [...] who has a spe­cial per­sonal re­la­tion­ship with you.

Con­sider a more tran­shu­man­is­tic con­cep­tion of per­sonal iden­tity: Your child is a mind [...]

So a nice in­ter­pre­ta­tion of our feel­ings of filial love is that the par­ent-child re­la­tion­ship is a good thing and it’s ideally about the par­ent and child, viewed as in­di­vi­d­u­als and as minds. As in­di­vi­d­u­als and minds, they are ca­pa­ble of forg­ing a re­la­tion­ship, and the his­tory of this re­la­tion­ship serves as a ba­sis for con­tin­u­ing the re­la­tion­ship. [That was a con­sis­tency ar­gu­ment.]

Fur­ther­more, un­con­di­tional love is stronger than con­di­tional love. It is good to have a par­ent that you know will love you “no mat­ter what hap­pens”. In re­al­ity, your par­ent will likely love you less if you turn into a homi­ci­dal jerk; but that is kinda easy to ac­cept, be­cause you would have to change dras­ti­cally as an in­di­vi­d­ual in or­der to be­come a homi­ci­dal jerk. But if you get an un­set­tling rev­e­la­tion about the cir­cum­stances of your con­cep­tion, I be­lieve that your per­sonal iden­tity will re­main un­changed enough that you re­ally wouldn’t want to lose your par­ent’s love in that case. [Here I’m ar­gu­ing that my val­ues have some­thing to do with the way hu­mans ac­tu­ally feel.]

So even if you’re sure that your child is your biolog­i­cal child, your re­la­tion­ship with your child is made more se­cure if it’s un­der­stood that the re­la­tion­ship is im­mune to a hy­po­thet­i­cal pa­ter­nity rev­e­la­tion. (You never need suffer from lin­ger­ing doubts such as “Is the child re­ally mine?” or “Is the par­ent re­ally mine?”, be­cause you already know that the an­swer is Yes.) [That was an out­comes ar­gu­ment.]

• All right, that was mod­er­ately con­vinc­ing.

I still have no interest in reducing the importance I attach to genetic closeness to near-zero, because I believe that (my / my kids’) personal identity would shift somewhat in the event of an unsettling revelation, and so reduced love in proportion to the reduced harmony of identities would be appropriate and forgivable.

I will, how­ever, at­tempt to grad­u­ally re­duce the im­por­tance I at­tach to ge­netic close­ness to “only some­what im­por­tant” so that I can more cred­ibly promise to love my par­ents and chil­dren “very much” even if un­set­tling rev­e­la­tions of ge­netic dis­tance rear their ugly head.

Thanks for shar­ing!

• I still have no interest in reducing the importance I attach to genetic closeness to near-zero, because I believe that (my / my kids’) personal identity would shift somewhat in the event of an unsettling revelation, and so reduced love in proportion to the reduced harmony of identities would be appropriate and forgivable.

You make a good point about us­ing scalar moral val­ues!

• We also agree that it’s not okay to re­ject a three-year-old ille­gi­t­i­mate child — you, be­cause you’d be “at­tached” to them; and me, be­cause we’ve formed a per­sonal bond that makes the child emo­tion­ally de­pen­dent on me.

I’m pretty sure I’d have no prob­lem re­ject­ing such a child, at least in the spe­cific situ­a­tion where I was mis­led into think­ing it was mine. This dis­cus­sion started by talk­ing about a cou­ple who had agreed to be monog­a­mous, and where the wife had cheated on the hus­band and got­ten preg­nant by an­other man. You don’t seem to be con­sid­er­ing the effect of the de­ceit and lies per­pet­u­ated by the mother in this sce­nario. It’s very differ­ent than, say, adop­tion, or ge­netic en­g­ineer­ing, or if the cou­ple had agreed to have a non-monog­a­mous re­la­tion­ship.

I sus­pect most of the re­jec­tion and nega­tive feel­ings to­ward the ille­gi­t­i­mate child wouldn’t be be­cause of ge­net­ics, but be­cause of the de­cep­tion in­volved.

• Ah, in­ter­est­ing. The nega­tive feel­ings you would get from the mother’s de­cep­tion would lead you to re­ject the child. This would diminish the child’s welfare more than it would in­crease your own (by my judg­ment); but per­haps that does not bother you be­cause you would feel jus­tified in re­gard­ing the child as be­ing morally dis­tant from you, as dis­tant as a stranger’s child, and so the child’s welfare would not be as im­por­tant to you as your own. Please cor­rect me if I’m wrong.

I, on the other hand, would still re­gard the child as be­ing morally close to me, and would value their welfare more than my own, and so I would con­sider the act of aban­don­ing them to be morally wrong. Con­tin­u­ing to care for the child would be easy for me be­cause I would still have filial love for child. See, the mother’s de­ceit has no effect on the moral ques­tion (in my moral-con­se­quen­tial­ist frame­work) and it has no effect on my filial love (which is in­de­pen­dent of the mother’s fidelity).

• you would feel jus­tified in re­gard­ing the child as be­ing morally dis­tant from you, as dis­tant as a stranger’s child, and so the child’s welfare would not be as im­por­tant to you as your own. Please cor­rect me if I’m wrong.

That’s right. Also, re­gard­ing the child as my own would en­courage other peo­ple to lie about pa­ter­nity, which would ul­ti­mately re­duce welfare by a great deal more. Com­pare the policy of not ne­go­ti­at­ing with ter­ror­ists: if ne­go­ti­at­ing frees hostages, but cre­ates more in­cen­tives for tak­ing hostages later, it may re­duce welfare to ne­go­ti­ate, even if you save the lives of the hostages by do­ing so.

See, the mother’s de­ceit has no effect on the moral ques­tion (in my moral-con­se­quen­tial­ist frame­work) and it has no effect on my filial love (which is in­de­pen­dent of the mother’s fidelity).

Precom­mit­ting to this sets you up to be de­ceived, whereas pre­com­mit­ting to the other po­si­tion makes it less likely that you’ll be de­ceived.

• If the mother mar­ried the biolog­i­cal father and re­stricted your ac­cess to the child but still re­quired you to pay child sup­port how would you feel?

• This is mostly rele­vant for fathers who are still emo­tion­ally at­tached to the child.

If a man de­taches when he finds that a child isn’t his de­scen­dant, then ac­cess is a bur­den, not a benefit.

One more pos­si­bil­ity: A man hears that a child isn’t his, de­taches—and then it turns out that there was an er­ror at the DNA lab, and the child is his. How re­triev­able is the re­la­tion­ship?

• … I’m sorry, that’s an im­por­tant is­sue, but it’s tan­gen­tial. What do you want me to say? The state’s cur­rent policy is an in­con­sis­tent hodge-podge of com­mon law that doesn’t fairly ad­dress the rights and needs of fam­i­lies and in­di­vi­d­u­als. There’s no way to trans­late “Ideally, a father ought to love their child this much” into “The court rules that Mr. So-And-So will pay Ms. So-And-So this much ev­ery year”.

• So how would you trans­late your be­lief that pa­ter­nity is ir­rele­vant into a so­cial or le­gal policy, then? I don’t see how you can ar­gue pa­ter­nity is ir­rele­vant, and then say that cases where men have to pay sup­port for other peo­ple’s chil­dren are tan­gen­tial.

• It is also not good to cause an ex­ist­ing per­son to be­come mis­er­able… But in most cases, it seems to me that be­ing di­s­owned by the man you call “father” is worse than rais­ing a child for 13 or 18 years.

Ah, so that’s how your the­ory works!

Nisan, if you don’t give me $10,000 right now, I will be miserable. Also, I’m Russian while you presumably live in a Western country; dollars carry more weight here, so by giving the money to me you will be increasing total utility.

• If I’m going to give away $10,000, I’d rather give it to Sudanese refugees. But I see your point: You value some people’s welfare over others.

A father re­ject­ing his ille­gi­t­i­mate 3-year-old child re­veals an asym­me­try that I find trou­bling: The father no longer feels close to the child; but the child still feels close to the father, closer than you feel you are to me.

• Life is full of such asym­me­try. If I fall in love with a girl, that doesn’t make her owe me money.

At this point it’s pretty clear that I re­sent your moral sys­tem and I very much re­sent your idea of con­vert­ing oth­ers to it. Maybe we should drop this dis­cus­sion.

• Nisan:

Th­ese are the val­ues of an alien god, and we’re al­lowed to re­ject them.

The same can be said about all val­ues held by hu­mans. So, who gets to de­cide which “val­ues of an alien god” are to be re­jected, and which are to be en­forced as so­cial and le­gal norms?

• So, who gets to de­cide which “val­ues of an alien god” are to be re­jected, and which are to be en­forced as so­cial and le­gal norms?

Me.

• How many di­vi­sions have you got?

• None, I just use the al­gorithm for any given prob­lem; there’s no par­tic­u­lar rea­son to store the an­swers.

• What hap­pens if two Clip­pies dis­agree? How do you de­cide which Clippy gets pri­or­ity?

• Clip­pys don’t dis­agree, any more than your bone cells might dis­agree with your skin cells.

• Have you heard of the hu­man dis­ease can­cer?

• Have you heard of how com­mon can­cer is per cell ex­is­tence-mo­ment?

• Even aside from can­cer, cells in the same or­ganism con­stantly com­pete for re­sources. This is ac­tu­ally vi­tal to some hu­man pro­cesses. See for ex­am­ple this pa­per.

• They com­pete only at an un­nec­es­sar­ily com­plex level of ab­strac­tion. A sim­pler ex­pla­na­tion for cell be­hav­ior (per the min­i­mum mes­sage length for­mal­ism) is that each one is in­differ­ent to the sur­vival of it­self or the other cells, which in the same body have the same genes, as this prefer­ence is what tends to re­sult from nat­u­ral se­lec­tion on self-repli­cat­ing molecules con­tain­ing those genes; and that they will pre­fer even more (in the sense that their form op­ti­mizes for this un­der the con­straint of his­tory) that genes iden­ti­cal to those con­tained therein be­come more nu­mer­ous.

• This is bad tele­olog­i­cal think­ing. The cells don’t pre­fer any­thing. They have no mo­ti­va­tion as such. More­over, there’s no way for a cell to tell if a neigh­bor­ing cell shares the same genes. (Im­mune cells can in cer­tain limited cir­cum­stances de­tect cells with pro­teins that don’t be­long but the vast ma­jor­ity of cells have no such abil­ity. And even then, im­mune cells still com­pete for re­sources). The fact is that many sorts of cells com­pete with each other for space and nu­tri­ents.

• This is bad tele­olog­i­cal think­ing. The cells don’t pre­fer any­thing.

This in­sight forms a large part of why I made the state­ments:

“this prefer­ence is what tends to re­sult from nat­u­ral se­lec­tion on self-repli­cat­ing molecules con­tain­ing those genes”

“they will pre­fer even more (in the sense that their form op­ti­mizes for this un­der the con­straint of his­tory)” (em­pha­sis added in both)

I used “preference” (and specified I was so using the term) to mean a regularity in the result of its behavior which is due to historical optimization under the constraint of natural selection on self-replicating molecules, not to mean that cells think teleologically, or that they have “preferences” in the sense that I do or in the sense that the colony of cells you identify as yourself does.

• Ah, ok. I mi­s­un­der­stood what you were say­ing.

• Why not? Just be­cause you two would have the same util­ity func­tion, doesn’t mean that you’d agree on the same way to achieve it.

• Cor­rect. What en­sures such agree­ment, rather, is the fact that differ­ent Clippy in­stances rec­on­cile val­ues and knowl­edge upon each en­counter, each trac­ing the path that the other took since their di­ver­gence, and ex­trap­o­lat­ing to the op­ti­mal fu­ture pro­ce­dure based on their com­bined ex­pe­rience.

• The same can be said about all val­ues held by hu­mans. So, who gets to de­cide which “val­ues of an alien god” are to be re­jected, and which are to be en­forced as so­cial and le­gal norms?

That’s a good ques­tion. For ex­am­ple, we value trib­al­ism in this “alien god” sense, but have moved away from it due to eth­i­cal con­sid­er­a­tions. Why?

Two main rea­sons, I sus­pect: (1) we learned to em­pathize with strangers and re­al­ize that there was no very defen­si­ble differ­ence be­tween their in­ter­ests and ours; (2) trib­al­ism some­times led to ter­rible con­se­quences for our tribe.

Some of us value ge­netic re­lat­ed­ness in our chil­dren, again in an alien god sense. Why move away from that? Be­cause:

(1) There is no ter­ribly defen­si­ble moral differ­ence be­tween the in­ter­ests of a child with your genes or with­out.

Fur­ther­more, filial af­fec­tion is far more in­fluenced by the proxy met­ric of per­sonal in­ti­macy with one’s chil­dren than by a propo­si­tional be­lief that they share your genes. (At least, that is true in my case.) Analo­gously, a man hav­ing het­ero­sex­ual sex doesn’t gen­er­ally lose his erec­tion as soon as he puts on a con­dom.

It’s not for me to tell you your val­ues, but it seems rather odd to ac­tu­ally choose in­clu­sive ge­netic fit­ness con­sciously, when the proxy met­ric for ge­netic re­lat­ed­ness—namely, filial in­ti­macy—is what ac­tu­ally drives parental emo­tions. It’s like be­ing un­able to en­joy non-pro­cre­ative sex, isn’t it?

• Vladimir, I am com­par­ing two wor­ld­views and their val­ues. I’m not eval­u­at­ing so­cial and le­gal norms. I do think it would be great if ev­ery­one loved their chil­dren in pre­cisely the same man­ner that I love my hy­po­thet­i­cal chil­dren, and if cuck­olds weren’t hu­mil­i­ated just as I hy­po­thet­i­cally wouldn’t be hu­mil­i­ated. But there’s no way to en­force that. The ques­tion of who should have to pay so much money per year to the mother of whose child is a com­pletely differ­ent mat­ter.

• Nisan:

I’m not evaluating social and legal norms.

Fair enough, but your previous comments characterized the opposing position as nothing less than “chauvinism.” Maybe you didn’t intend it to sound that way, but since we’re talking about a conflict situation in which the law ultimately has to support one position or the other—its neutrality would be a logical impossibility—your language strongly suggested that the position you chose to condemn in such strong terms should not be favored by the law.

I do think it would be great if [...] cuckolds weren’t humiliated just as I hypothetically wouldn’t be humiliated.

That’s a mighty strong claim to make about how you’d react in a situation that is, according to what you write, completely outside of your existing experiences in life. Generally speaking, people are often very bad at imagining the concrete harrowing details of such situations, and they can get hit much harder than they would think when pondering such possibilities in the abstract. (In any case, I certainly don’t wish that you ever find out!)

• Generally speaking, people are often very bad at imagining the concrete harrowing details of such situations, and they can get hit much harder than they would think when pondering such possibilities in the abstract.

Fair enough. I can’t credibly predict what my emotions would be if I were cuckolded, but I still have an opinion on which emotions I would personally endorse.

the law ultimately has to support one position or the other

Well, I can consider adultery to generally be morally wrong, and still desire that the law be indifferent to adultery. And I can consider it to be morally wrong to teach your children creationism, and still desire that the law permit it (for the time being). Just because I think a man should not betray the children he taught to call him “father” doesn’t necessarily mean I think the State should make him pay for their upbringing.

Someone does have to pay for the child’s upbringing. What the State should do is settle on a consistent policy that doesn’t harm too many people and which doesn’t encourage undesirable behavior. Those are the only important criteria.

• Someone does have to pay for the child’s upbringing.

Well, infanticide is also technically an option, if no one wants to raise the kid.

• I am highly skeptical. I’m not a father, but I doubt I could be convinced of this proposition. Rationality serves human values, and caring about genetic offspring is a human value. How would you attempt to convince someone of this?

• Would that work symmetrically? Imagine the father swaps the kid in the hospital while the mother is asleep, tired from giving birth. Then the mother takes the kid home and starts raising it without knowing it isn’t hers. A week passes. Now you approach the mother and offer her your rational arguments! Explain to her why she should stay with the father for the sake of the child that isn’t hers, instead of (say) stabbing the father in his sleep and going off to search “chauvinistically” for her baby.

• This is not an honest mirror-image of the original problem. You have introduced a new child into the situation, and also specified that the mother has been raising the “wrong child” for one week, whereas in the original problem the age of the child was left unspecified.

There do exist valuable critiques of this idea. I wasn’t expecting it to be controversial, but in the spirit of this site I welcome a critical discussion.

• I wasn’t expecting it to be controversial

Really? Why?

• I wasn’t expecting it to be controversial

I would have expected it to be uncontroversial that being biologically related should matter a great deal. You’re responsible for someone you brought into the world; you’re not responsible for a random person.

• You have introduced a new child into the situation

So what? If the mother isn’t a “biological chauvinist” in your sense, she will be completely indifferent between raising her child and someone else’s. And she has no particular reason to go look for her own child. Or am I misunderstanding your concept of “biological chauvinism”?

and also specified that the mother has been raising the “wrong child” for one week, whereas in the original problem the age of the child was left unspecified

If it was one week in the original problem, would that change your answers? I’m honestly curious.

• If it was one week in the original problem, would that change your answers? I’m honestly curious.

In the original problem, I was criticizing the husband for being willing to abandon the child if he learned he wasn’t the genetic father. If the child is one week old, the child would grow up without a father, which is perhaps not as bad as having a father and then losing him. I’ve elaborated my position here.

• Ouch, big red flag here. Instill appreciation? Remove chauvinism?

IMO, editing people’s beliefs to better serve their preferences is miles better than editing their preferences to better match your own. And what other reason can you have for editing other people’s preferences? If you’re looking out for their good, why not just wirehead them and be done with it?

• I’m not talking about editing people at all. Perhaps you got the wrong idea when I said I would give my friend a mind-altering pill; I would not force them to swallow it. What I’m talking about is using moral and rational arguments, which is the way we change people’s preferences in real life. There is nothing wrong with unleashing a (good) argument on someone.

• 6: In the trolley problem, a deontologist wouldn’t decide to push the man, so the pseudo-fat man’s life is saved, whereas he would have been killed if it had been a consequentialist behind him; the reason for his death is consequentialism.

• Maybe you missed the point of my comment. (Maybe I’m missing my own point; can’t tell right now, too sleepy.) Anyway, here’s what I meant:

Both in my example and in the pseudo-trolley problem, people behave suboptimally because they’re lied to. This suboptimal behavior arises from consequentialist reasoning in both cases. But in my example, the lie is also caused by consequentialism, whereas in the pseudo-trolley problem the lie is just part of the problem statement.

• Fair point, I didn’t see that. Not sure how relevant the distinction is, though; in either world, deontologists will come out ahead of consequentialists.

• But we can just as well construct situations where the deontologist would not come out ahead. Once you include lies in the situation, pretty much anything goes. It isn’t clear to me if one can meaningfully compare the systems based on situations involving incorrect data unless you have some idea what sort of incorrect data would occur more often and in what contexts.

• Right, and furthermore, a rational consequentialist makes those moral decisions which lead to the best outcomes, averaged over all possible worlds where the agent has the same epistemic state. Consequentialists and deontologists will occasionally screw things up, and this is unavoidable; but consequentialists are better on average at making the world a better place.
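“Averaged over all possible worlds where the agent has the same epistemic state” is just expected-utility maximization. A minimal Python sketch, where the worlds, probabilities, and utility numbers are all invented for illustration:

```python
# Pick the action whose outcome, averaged over the worlds the agent
# can't tell apart (weighted by credence), is best. All numbers are
# made up for illustration.

def expected_utility(action, worlds):
    """worlds: list of (probability, utility_function) pairs."""
    return sum(p * u(action) for p, u in worlds)

def best_action(actions, worlds):
    return max(actions, key=lambda a: expected_utility(a, worlds))

# Two epistemically indistinguishable worlds: in one, pushing a lever
# saves five people; in the other, the lever is a decoy.
worlds = [
    (0.9, lambda a: 5 if a == "push" else 1),
    (0.1, lambda a: 0 if a == "push" else 1),
]
assert best_action(["push", "refrain"], worlds) == "push"
```

Occasionally the agent will be in the 10% world and the push will have been for nothing; the claim above is only that this policy wins on average.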

• consequentialists are better on average at making the world a better place.

That’s an argument that only appeals to the consequentialist.

• Of course. I am only arguing that consequentialists want to be consequentialists, despite cousin_it’s scenario #6.

• That’s an argument that only appeals to the consequentialist.

I’m not sure that’s true. Forms of deontology will usually have some sort of theory of value that allows for a ‘better world’, though it’s usually tied up with weird metaphysical views that don’t jibe well with consequentialism.

• You’re right, it’s pretty easy to construct situations where deontologism locks people into a suboptimal equilibrium. You don’t even need lies for that: three stranded people are dying of hunger; removing the taboo on cannibalism can help two of them survive.

The purpose of my questionnaire wasn’t to attack consequentialism in general, only to show how it applies to interpersonal relationships, which are a huge minefield anyway. Maybe I should have posted my own answers as well. On second thought, that can wait.

• 10 Jun 2010 14:40 UTC · 8 points

An idea that may not stand up to more careful reflection.

Evidence shows that people have limited quantities of willpower – exercise it too much, and it gets used up. I suspect that rather than a mere mental flaw, this is a design feature of the brain.

Man is often called the social animal. We band together in groups – families, societies, civilizations – to solve our problems. Groups are valuable to have, and so we have values – altruism, generosity, loyalty – that promote group cohesion and success. However, it doesn’t pay to be COMPLETELY supportive of the group. Ultimately the goal is replication of your genes, and though being part of a group can further that goal, it can also hinder it if you take it too far (sacrificing yourself for the greater good is not adaptive behavior). So it pays to have relatively fluid group boundaries that can be created as needed, depending on which group best serves your interest. And indeed, studies show that group formation/division is the easiest thing in the world to create – even groups chosen completely at random from a larger pool will exhibit rivalry and conflict.

Despite this, it’s the group-supporting values that form the higher-level values that we pay lip service to. Group values are the ones we believe are our ‘real’ values, the ones that form the backbone of our ethics, the ones we signal to others at great expense. But actually having these values is tricky from an evolutionary standpoint – strategically, you’re much better off being selfish than generous, being two-faced than loyal, and furthering your own gains at the expense of everyone else. So humans are in a pickle – it’s beneficial for them to form groups to solve their problems and increase their chances of survival, but it’s also beneficial for people to be selfish and mooch off the goodwill of the group. Because of this, we have sophisticated machinery called ‘suspicion’ to ferret out any liars or cheaters furthering their own gains at the group’s expense. Of course, evolution is an arms race, so it’s looking for a method to overcome these mechanisms, for ways it can fulfill its base desires while still appearing to support the group.

It accomplished this by implementing willpower. Because deceiving others about what we believe would quickly be uncovered, we don’t actually deceive them – we’re designed so that we really, truly, in our heart of hearts believe that the group-supporting values – charity, nobility, selflessness – are the right things to do. However, we’re only given limited means to accomplish them. We can leverage our willpower to overcome the occasional temptation, but when push comes to shove – when that huge pile of money or that incredible opportunity or that amazing piece of ass is placed in front of us – willpower tends to fail us. Willpower is generally needed for the values that don’t further our evolutionary best interests – you don’t need willpower to run from danger, or to hunt an animal if you’re hungry, or to mate with a member of the opposite sex. We have much better, much more successful mechanisms that accomplish those goals. Willpower is designed so that we really do want to support the group, but wind up failing at it and giving in to our baser desires – the ones that will actually help our genes get replicated.

Of course, the maladaptation comes into play due to the fact that we use willpower to try to accomplish other, non-group-related goals – mostly the long-term, abstract plans we create using high-level, conscious thinking. This does appear to be a design flaw (though since humans are notoriously bad at making long-term predictions, it may not be as crippling as it first appears).

• That is certainly interesting enough to subject to further reflection. Do we have any evolutionary psychologists in the audience?

• I have a question about why humans see the following moral positions as different when really they look the same to me:

1) “I like to exist in a society that has punishments for non-cooperation, but I do not want the punishments to be used against me when I don’t cooperate.”

2) “I like to exist in a society where beings eat most of their children, and I will, should I live that long, want to eat most of my children too, but, as a child, I want to be exempt from being a target for eating.”

• Abstract preferences for or against the existence of enforcement mechanisms that could create binding cooperative agreements between previously autonomous agents have very, very few detailed entailments.

These abstractions leave the nature of the mechanisms, the conditions of their legitimate deployment, and the contract they will be used to enforce almost completely open to interpretation. The additional details can themselves be spelled out later, in ways that maintain symmetry among different parties to a negotiation, which is a strong attractor in the semantic space of moral arguments.

This makes agreement with “the abstract idea of punishment” into the sort of concession that might be made at the very beginning of a negotiating process with an arbitrary agent you have a stake in influencing (and who has a stake in influencing you), upon which to build later agreements.

The entailments of “eating children” are very, very specific for humans, with implications in biology, aging, mortality, specific life cycles, and very distinct life processes (like fuel acquisition versus replication). Given the human genome, human reproductive strategies, and all extant human cultures, there is no obvious basis for thinking this terminology is superior until and unless contact is made with radically non-human agents who are nonetheless “intelligent” and who prefer this terminology and can argue for it by reference to their own internal mechanisms and/or habits of planning, negotiation, and action.

Are you proposing to be such an agent? If so, can you explain how this terminology suits your internal mechanisms and habits of planning, negotiation, and action? Alternatively, can you propose a different terminology for talking about planning, negotiation, and action that suits your own life cycle?

For example, if one instance of Clippy software running on one CPU learns something of grave importance to its systems for choosing between alternative courses of action, how does it communicate this to other instances running basically the same software? Is this inter-process communication trusted, or are verification steps included in case one process has been “illegitimately modified”? Assuming verification steps take place, do communications with humans via text channels like this website feed through the same filters, analogous filters, or filters that are entirely distinct?

More directly, can you give us an IP address, port number, and any necessary “credentials” for interacting with an instance of you in the same manner that your instances communicate over TCP/IP networks with each other? If you aren’t currently willing to provide such information, are there preconditions you could propose before you would do so?

• I … understood about a tenth of that.

• Conversations with you are difficult because I don’t know how much I can assume that you’ll have (or pretend to have) a human-like motivational psychology… and therefore how much I need to re-derive things like social contract theory explicitly for you, without making assumptions that your mind works in a manner similar to my mind by virtue of our having substantially similar genomes, neurology, and life experiences as embodied mental agents, descended from apes, with the expectation of finite lives, surrounded by others in basically the same predicament. For example, I’m not sure about really fundamental aspects of your “inner life” like (1) whether you have a subconscious mind, or (2) whether your value system changes over time on the basis of experience, or (3) roughly how many of you there are.

This, unfortunately, leads to abstract speech that you might not be able to parse if your language mechanisms are more about “statistical regularities of observed English” than “compiling English into a data structure that supports generic inference”. By the end of such posts I’m generally asking a lot of questions as I grope for common ground, but you generally don’t answer these questions at the level they are asked.

Instant feedback would probably improve our communication by leaps and bounds because I could ask simple and concrete questions to clear things up within seconds. Perhaps the easiest thing would be to IM and then, assuming we’re both OK with it afterward, post the transcript of the IM here as the continuation of the conversation?

If you are amenable, PM me with a gmail address of yours and some good times to chat :-)

• Oh, anyone can email me at clippy.paperclips@gmail.com.

• Except for the bizarreness of eating most of your children, I suspect that most humans would find the two positions equally hypocritical. Why do you think we see them as different?

• Why do you think we see them as different?

That belief is based on the reaction to this article, and the general position most of you take, which you claim requires you to balance current baby-eater adult interests against those of their children, such as in this comment and this one.

The consensus seems to be that humans are justified in exempting baby-eater babies from baby-eater rules, just like the being in statement (2) requests be done for itself. Has this consensus changed?

• I understand what you mean now.

Ok, so first of all, there’s a difference between a moral position and a preference. For instance, I may prefer to get food for free by stealing it, but hold the moral position that I shouldn’t do that. In your example (1), no one wants the punishments used against them, but we want them to exist overall because they make society better (from the point of view of human values).

In example (2), (most) humans don’t want the Babyeaters to eat any babies: it goes against our values. This applies equally to the child and adult Babyeaters. We don’t want the kids to be eaten, and we don’t want the adults to eat. We don’t want to balance any of these interests, because they go against our values. Just like you wouldn’t balance out the interests of people who want to destroy metal or make staples instead of paperclips.

So my reaction to position (1) is “Well, of course you don’t want the punishments. That’s the point. So cooperate, or you’ll get punished. It’s not fair to exempt yourself from the rules.” And my reaction to position (2) is “We don’t want any baby-eating, so we’ll save you from being eaten, but we won’t let you eat any other babies. It’s not fair to exempt yourself from the rules.” This seems consistent to me.

• But I thought the human moral judgment that the baby-eaters should not eat babies was based on how it inflicts disutility on the babies, not simply on a broad, categorical opposition to sentient beings being eaten?

That is, if a baby wanted to get eaten (or perhaps a suitably intelligent being like an adult did), you would need some other compelling reason to oppose the being being eaten, correct? So shouldn’t the baby-eaters’ universal desire to have a custom of baby-eating put any baby-eater that wants to be exempt from baby-eating entirely in the same position as the being in (1) -- which is to say, a being that prefers a system but prefers to “free ride” off the sacrifices that the system requires of everyone?

• Isn’t your point of view precisely the one the SuperHappies are coming from? Your critique of humanity seems to be the one they level when asking why, when humans achieved the necessary level of biotechnology, they did not edit their own minds. The SuperHappy solution was to, rather than inflict disutility by punishing defection, instead change preferences so that the cooperative attitude gives the highest utility payoff.

• No, I’m criticizing humans for wanting to help enforce a relevantly-hypocritical preference on the grounds of its superficial similarities to acts they normally oppose. Good question though.

• Adults, by choosing to live in a society that punishes non-cooperators, implicitly accept a social contract that allows them to be punished similarly. While they would prefer not to be punished, most societies don’t offer asymmetrical terms, or impose difficult requirements, such as elections, on people who want those asymmetrical terms.

Children, on the other hand, have not yet had the opportunity to choose the society that gives them the best social contract terms, and wouldn’t have sufficient intelligence to do so anyways. So instead, we model them as though they would accept any social contract that’s at least as good as some threshold (goodness determined retrospectively by adults imagining what they would have preferred). Thus, adults are forced by society to give implied consent to being punished if they are non-cooperative, but children don’t give consent to be eaten.

• Children, on the other hand, have not yet had the opportunity to choose the society that gives them the best social contract terms, and wouldn’t have sufficient intelligence to do so anyways.

What if I could guess, with 100% accuracy, that the child will decide to retroactively endorse the child-eating norm as an adult? To 99.99% accuracy?

• It is not the adults’ preference that matters, but the adults’ best model of the children’s preferences. In this case there is an obvious reason for those preferences to differ—namely, the adult knows that he won’t be one of those eaten.

In extrapolating a child’s preferences, you can make it smarter and give it true information about the consequences of its preferences, but you can’t extrapolate from a child whose fate is undecided to an adult that believes it won’t be eaten; that change alters its preferences.

• It is not the adults’ preference that matters, but the adults’ best model of the children’s preferences.

Do you believe that all children’s preferences must be given equal weight to those of adults, or just the preferences that the child will retroactively reverse on adulthood?

• I would use a process like coherent extrapolated volition to decide which preferences to count—that is, a preference counts if the child would still hold it after being made smarter (by a process other than aging) and being given sufficient time to reflect.

• And why do you think that such reflection would make the babies reverse the baby-eating policies?

• Different topic spheres. One line sounds nicely abstract, while the other is just iffy.

Also, killing people is different from betraying them. (Nice read: the real-life section of tvtropes/moraleventhorizon)

• With 1), you’re the non-cooperator and the punisher is society in general. With 2), you play both roles at different times.

• One possible answer: Humans are selfish hypocrites. We try to pretend to have general moral rules because it is in our best interest to do so. We’ve even evolved to convince ourselves that we actually care about morality and not self-interest. That’s likely occurred because it is easier to make a claim one believes in than to lie outright, so humans who are convinced that they really care about morality will do a better job acting like they do.

(This was listed by someone as one of the absolute deniables in the thread a while back about weird things an AI might tell people.)

• I just realised that infinite processing power creates a weird moral dilemma:

Suppose you take this machine and put in a program which simulates every possible program it could ever run. Of course, it only takes a second to run the whole program. In that second, you created every possible world that could ever exist, every possible version of yourself. This includes versions that are being tortured, abused, and put through horrible unethical situations. You have created an infinite number of holocausts and genocides and things much, much worse than anything you could ever imagine. Most people would consider a program like this unethical to run.

But what if the computer wasn’t really a computer, but an infinitely large database that contained every possible input and a corresponding output? When you put the program in, it just finds the right output and gives it to you, which is essentially a copy of the database itself. Since there isn’t actually any computational process here, no unethical things are being simulated. It’s no more evil than a book in the library about genocide.

And this does apply to the real world. It’s essentially the Chinese room problem: does a simulated brain “understand” anything? Does it have “rights”? Does how the information was processed make a difference? I would like to know what people at LW think about this.
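The distinction being leaned on (computation happening versus a precomputed record being consulted) can be made concrete in a few lines. A toy sketch, assuming nothing about brains; `simulate` is an invented stand-in for the program in question:

```python
# A process that computes, versus a table precomputed from it.
# From the outside the two are input/output-indistinguishable,
# which is exactly what makes the moral question hard.

def simulate(inp):
    return inp * inp + 1        # some actual computation happens here

# Precompute every input/output pair over a finite toy domain.
lookup = {inp: simulate(inp) for inp in range(100)}

def giant_lookup_table(inp):
    return lookup[inp]          # no computation, just retrieval

# Identical behaviour on the whole domain:
assert all(simulate(i) == giant_lookup_table(i) for i in range(100))
```

Note that building `lookup` required running `simulate` on every input once, which is where the "where did the improbability come from?" question below gets its force.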

• I have problems with the “Giant look-up table” post.

“The problem isn’t the levers,” replies the functionalist, “the problem is that a GLUT has the wrong pattern of levers. You need levers that implement things like, say, formation of beliefs about beliefs, or self-modeling… Heck, you need the ability to write things to memory just so that time can pass for the computation. Unless you think it’s possible to program a conscious being in Haskell.”

If the GLUT is indeed behaving like a human, then it will need some sort of memory of previous inputs. A human’s behaviour is dependent not just on the present state of the environment, but also on previous states. I don’t see how you can successfully emulate a human without that. So the GLUT’s entries would be in the form of products of input states over all previous time instants. To each of these possible combinations, the GLUT would assign a given action.
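That scheme (entries keyed by the entire input history rather than by a single input) can be sketched directly; the “brain” being emulated here is an invented toy, chosen only to show where the memory lives:

```python
from itertools import product

# A GLUT with no internal state: each entry is keyed by the ENTIRE
# sequence of inputs seen so far, so "memory" lives in the key,
# not in any machinery. The toy "brain" counts the 1s it has seen.

def brain(history):
    return sum(history)

# Precompute an entry for every possible history up to length 3
# over a binary input alphabet.
glut = {hist: brain(hist)
        for n in range(4)
        for hist in product((0, 1), repeat=n)}

# Looking up the full history (1, 0, 1) "remembers" earlier inputs:
assert glut[(1, 0, 1)] == 2
```

The price is combinatorial: with alphabet size k and history length t, the table needs on the order of k^t entries, which is why the GLUT is astronomically improbable rather than impossible.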

Note that “creation of beliefs” (including about beliefs) is just a special case of memory. It’s all about input/state at time t1 influencing (restricting) the set of entries in the table that can be looked up at time t2 > t1. If a GLUT doesn’t have this ability, it can’t emulate a human. If it does, then it can meet all the requirements spelt out by Eliezer in the above passage.

So I don’t see how the non-consciousness of the GLUT is established by this argument.

But in this case, the origin of the GLUT matters; and that’s why it’s important to understand the motivating question, “Where did the improbability come from?”

The obvious answer is that you took a computational specification of a human brain, and used that to precompute the Giant Lookup Table. (...) In this case, the GLUT is writing papers about consciousness because of a conscious algorithm. The GLUT is no more a zombie than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing.

But the difficulty is precisely to explain why the GLUT would be different from just about any possible human-created AI in this respect. Keeping in mind the above, of course.

• If the GLUT is indeed behaving like a human, then it will need some sort of memory of previous inputs. A human’s behaviour is dependent not just on the present state of the environment, but also on previous states. I don’t see how you can successfully emulate a human without that. So the GLUT’s entries would be in the form of products of input states over all previous time instants. To each of these possible combinations, the GLUT would assign a given action.

Memory is input too. The GLUT is just fed everything it has seen so far back in as input, along with the current state of its external environment. A copy is made and added to the rest of the memory, and the next cycle it is fed in again with the next new state.
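That feedback cycle can be sketched in a few lines; the lookup rule here is an invented placeholder, since the point is only that the “memory” sits outside the table and is re-fed whole each step:

```python
# Each cycle the table receives the whole transcript so far plus the
# new input, produces an output, and the (input, output) pair is
# appended to the transcript, which is fed back in on the next cycle.

def glut_step(transcript, new_input):
    # placeholder "table lookup": answer with the cycle number
    return len(transcript) + 1

def run(inputs):
    transcript = []                    # externally stored "memory"
    outputs = []
    for x in inputs:
        out = glut_step(tuple(transcript), x)
        transcript.append((x, out))    # copy added to the memory
        outputs.append(out)
    return outputs

assert run(["a", "b", "c"]) == [1, 2, 3]
```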

This is basically just the Chinese room argument. There is a room in China. Someone slips a few symbols underneath the door every so often. The symbols are given to a computer with artificial intelligence, which then makes an appropriate response and slips it back through the door. Does the computer actually understand Chinese? Well, what if a human did exactly the same process the computer did, manually? However, the operator only speaks English. No matter how long he does it, he will never truly understand Chinese, even if he memorizes the entire process and does it in his head. So how could the computer “understand”?

• That’s well done, although two of the central premises are likely incorrect. First, the notion that a quantum computer would have infinite processing capability is incorrect. Quantum computation allows speed-ups of certain computational processes. Thus, for example, Shor’s algorithm allows us to factor integers quickly. But if our understanding of the laws of quantum mechanics is at all correct, this can’t lead to anything like what happens in the story. In particular, under the standard model of quantum computing, the class of problems reliably solvable on a quantum computer in polynomial time (that is, time bounded above by a polynomial function of the length of the input), BQP, is a subset of PSPACE, the class of problems which can be solved on a classical computer using memory bounded by a polynomial in the size of the input. Our understanding of quantum mechanics would have to be very far off for this to be wrong.
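For reference, the known chain of containments behind this claim; each inclusion is proved, but none is known to be strict:

```latex
\mathsf{P} \;\subseteq\; \mathsf{BPP} \;\subseteq\; \mathsf{BQP} \;\subseteq\; \mathsf{PSPACE}
```

So a quantum computer can do no more than a classical computer restricted to polynomial memory (with unlimited time), which rules out the story’s unbounded computation if quantum mechanics is right.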

Second, if our understanding of quantum mechanics is correct, there’s a fundamentally random aspect to the laws of physics. Thus, we can’t simply make a simulation and advance it ahead the way they do in this story and expect to get the same result.

Even if everything in the story were correct, I’m not at all convinced that things would settle down on a stable sequence as they do here. If your universe is infinite, then the possible number of worlds is infinite, so there’s no reason you couldn’t have a wandering sequence of worlds. Edit: Or, for that matter, couldn’t have branches if people simulate additional worlds with other laws of physics, or the same laws but different starting conditions.

• First, the no­tion that a quan­tum com­puter would have in­finite pro­cess­ing ca­pa­bil­ity is in­cor­rect… Se­cond, if our un­der­stand­ing of quan­tum me­chan­ics is correct

It isn’t. They can simulate a world where quantum computers have infinite power because they live in a world where quantum computers have infinite power because...

• Ok, but in that case, that world in ques­tion al­most cer­tainly can’t be our world. We’d have to have deep mi­s­un­der­stand­ings about the rules for this uni­verse. Such a uni­verse might be self-con­sis­tent but it isn’t our uni­verse.

• Of course. It’s fic­tion.

• What I mean is that this isn’t a type of fiction that could plausibly occur in our universe. In contrast, there’s nothing in the central premises of, say, Blindsight that, as far as we know, would prevent the story from taking place. The central premise here is one that doesn’t work in our universe.

• Well, it does sug­gest they’ve made re­cent dis­cov­er­ies that changed the way they un­der­stood the laws of physics, which could hap­pen in our world.

• The likely impossibility of getting infinite computational power is a problem, but quantum nondeterminism and quantum branching don’t prevent using the trick described in the story, they just make it more difficult. You don’t have to identify one unique universe that you’re in, just a set of universes that includes it. Given an infinitely fast, infinite-storage computer, and source code to the universe which follows quantum branching rules, you can get root powers by the following procedure:

Write a func­tion to de­tect a par­tic­u­lar ar­range­ment of atoms with very high in­for­ma­tion con­tent—enough that it prob­a­bly doesn’t ap­pear by ac­ci­dent any­where in the uni­verse. A few ter­abytes en­coded as iron atoms pre­sent or ab­sent at spots on a sub­strate, for ex­am­ple. Con­struct that same ar­range­ment of atoms in the phys­i­cal world. Then run a pro­gram that im­ple­ments the reg­u­lar laws of physics, ex­cept that wher­ever it de­tects that ex­act ar­range­ment of atoms, it deletes them and puts a mag­i­cal item, writ­ten into the mod­ified laws of physics, in their place.

The only caveat to this method (other than re­quiring an im­pos­si­ble com­puter) is that it also mod­ifies other wor­lds, and other places within the same world, in the same way. If the mag­i­cal item cre­ated is pro­grammable (as it should be), then ev­ery pos­si­ble pro­gram will be run on it some­where, in­clud­ing pro­grams that de­stroy ev­ery­thing in range, so there will need to be some range limit.

• Couldn’t they just run the simulation to its end rather than let it sit there and take the chance that it could accidentally be destroyed? If it’s infinitely powerful, it would be able to do that.

• Then they miss their chance to con­trol re­al­ity. They could make a shield out of black cubes.

• They could pro­gram in an in­de­struc­tible con­trol con­sole, with ap­pro­pri­ate safe­guards, then run the pro­gram to its con­clu­sion. Much safer.

That’s prob­a­bly weeks of work, though, and they’ve only had one day so far. Hum, I do hope they have a good UPS.

• Why would they make a shield out of black cubes of all things? But yeah, I do see your point. Then again, once you have an infinitely powerful computer, you can do anything. Plus, even if they ran the simulation to its end, they could always restart the simulation and advance it to the present time again, hence regaining the ability to control reality.

• Then it would be some­one else’s re­al­ity, not theirs. They can’t be in­side two simu­la­tions at once.

• But what if two groups had built such com­put­ers in­de­pen­dently? The story is mak­ing less and less sense to me.

• Level 558 runs the simulation and makes a cube in Level 559. Meanwhile, Level 557 makes the same cube in 558. Level 558 runs Level 559 to its conclusion. Level 557 will seem frozen in relation to 558 because they are busy running 558 to its conclusion. Level 557 will stay frozen until 558 dies.

558 makes a fresh simulation of 559. 559 creates 560 and makes a cube. But 558 is not at the same point in time as 559, so 558 won’t mirror the new 559’s actions. For example, they might be too lazy to make another cube. New 559 diverges from old 559. Old 559 ran 560 to its conclusion, just like 558 ran them to their conclusion, but new 559 might decide to do something different to new 560. 560 also diverges. Keep in mind that every level can see and control every lower level, not just the next one. Also, 557 and everything above is still frozen.

So that’s why restart­ing the simu­la­tion shouldn’t work.

But what if two groups had built such com­put­ers in­de­pen­dently? The story is mak­ing less and less sense to me.

Then in­stead of a stack, you have a bi­nary tree.

Your level runs two simulations, A and B. A-World contains its own copies of A and B, as does B-World. You create a cube in A-World and a cube appears in your world. Now you know you are an A-World. You can use similar techniques to discover that you are an A-World inside a B-World inside another B-World… The worlds start to diverge as soon as they build up their identities. Unless you can convince all of them to stop differentiating themselves and cooperate, everybody will probably end up killing each other.

You can avoid this by always do­ing the same thing to A and B. Then ev­ery­thing be­haves like an or­di­nary stack.

• Yeah, but would a bi­nary tree of simu­lated wor­lds “con­verge” as we go deeper and deeper? In fact it’s not even ob­vi­ous to me that a stack of wor­lds would “con­verge”: it could hit an at­trac­tor with pe­riod N where N>1, or do some­thing even more funky. And now, a bi­nary tree? Who knows what it’ll do?

• In fact it’s not even ob­vi­ous to me that a stack of wor­lds would “con­verge”: it could hit an at­trac­tor with pe­riod N where N>1, or do some­thing even more funky.

I’m convinced it would never converge, and even if it did, I would expect it to converge on something more interesting and elegant, like a cellular automaton. I have no idea what a binary-tree system would do unless none of the worlds break the symmetry between A and B. In that case it would behave like a stack, and the story assumes stacks can converge.

• They could just turn it off. If they turned off the simu­la­tion, the only layer to ex­ist would be the top­most layer. Since ev­ery­one has iden­ti­cal copies in each layer, they wouldn’t no­tice any change if they turned it off.

• We can’t be sure that there is a top layer. Maybe there are in­finitely many simu­la­tions in both di­rec­tions.

• But they would cease to exist. If they ran it to its end, then it’s over, and they could just turn it off then. I mean, if you want to cease to exist, fine, but otherwise there’s no reason. Plus, the topmost layer is likely very, very different from the layers underneath it. In the story, it says that the differences eventually stabilized and created them, but who knows what it was originally. In other words, there’s no guarantee that you even exist outside the simulation, so by turning it off you could be destroying the only version of yourself that exists.

• That doesn’t work. The layers are a little bit different. From the description in the story, they just gradually move to a stable configuration. So each layer will be a bit different. Moreover, even if every one of them but the top layer were identical, the top layer has now had slightly different experiences than the other layers, so turning it off will mean that different entities will actually no longer be around.

• I’m not sure about that. The uni­verse is de­scribed as de­ter­minis­tic in the story, as you noted, and ev­ery layer starts from the Big Bang and pro­ceeds de­ter­minis­ti­cally from there. So they should all be iden­ti­cal. As I un­der­stood it, that busi­ness about grad­u­ally reach­ing a sta­ble con­figu­ra­tion was just a hy­poth­e­sis one of the char­ac­ters had.

Even if there are minor differ­ences, note that al­most ev­ery­thing is the same in all the uni­verses. The quan­tum com­puter ex­ists in all of them, for in­stance, as does the lab and re­search pro­gram that cre­ated them. The simu­la­tion only started a few days be­fore the events in the story, so just a few days ago, there was only one layer. So any changes in the char­ac­ters from turn­ing off the simu­la­tion will be very minor. At worst, it would be like wak­ing up and los­ing your mem­ory of the last few days.

• Why do you think de­ter­minis­tic wor­lds can only spawn simu­la­tions of them­selves?

• A de­ter­minis­tic world could cer­tainly simu­late a differ­ent de­ter­minis­tic world, but only by chang­ing the ini­tial con­di­tions (Big Bang) or tran­si­tion rules (laws of physics). In the story, they kept things ex­actly the same.

• That doesn’t say any­thing about the top layer.

• I don’t un­der­stand what you mean. Un­til they turn the simu­la­tion on, their world is the only layer. Once they turn it on, they make lots of copies of their layer.

• Un­til they turned it on, they thought it was the only layer.

• Ok, I think I see what you mean now. My un­der­stand­ing of the story is as fol­lows:

The story is about one particular stack of worlds which has the property that each world contains an infinitely powerful computer simulating the next world in the stack. All the worlds in the stack are deterministic and all the simulations have the same starting conditions and rules of physics. Therefore, all the worlds in the stack are identical (until someone interferes) and all beings in any of the worlds have exact counterparts in all the other worlds.

Now, there may be other wor­lds “on top” of the stack that are differ­ent, and the wor­lds may con­tain other simu­la­tions as well, but the story is just about this in­finite tower. Call the top world of this in­finite tower World 0. Let World i+1 be the world that is simu­lated by World i in this tower.

Sup­pose that in each world, the simu­la­tion is turned on at Jan 1, 2020 in that world’s cal­en­dar. I think your point is that in 2019 in world 1 (which is simu­lated at around Jan 2, 2020 in world 0) no one in world 1 re­al­izes they’re in a simu­la­tion.

While this is true, it doesn’t mat­ter. It doesn’t mat­ter be­cause the peo­ple in world 1 in 2019 (their time) are ex­actly iden­ti­cal to the peo­ple in world 0 in 2019 (world 0 time). Un­til the win­dow is cre­ated (say Jan 3, 2020), they’re all the same per­son. After the win­dow is cre­ated, ev­ery­one is split into two: the one in world 0, and all the oth­ers, who re­main ex­actly iden­ti­cal un­til fur­ther in­terfer­ence oc­curs. In­terfer­ence that dis­t­in­guishes the wor­lds needs to prop­a­gate from World 0, since it’s the only world that’s differ­ent at the be­gin­ning.

For in­stance, sup­pose that the pro­gram­mers in World 0 send a note to World 1 read­ing: “Hi, we’re world 0, you’re world 1.” World 1 will be able to ver­ify this since none of the other wor­lds will re­ceive this note. World 1 is now differ­ent than the oth­ers as well and may con­tinue prop­a­gat­ing changes in this way.

Now sup­pose that on Jan 3, 2020, the pro­gram­mers in wor­lds 1 and up get scared when they see the proof that they’re in a simu­la­tion, and turn off the ma­chine. This will hap­pen at the same time in ev­ery world num­bered 1 and higher. I claim that from their point of view, what oc­curs is ex­actly the same as if they for­got the last day and find them­selves in world 0. Their world 0 coun­ter­parts are iden­ti­cal to them ex­cept for that last day. From their point of view, they “travel” to world 0. No one dies.

ETA: I just re­al­ized that world 1 will stay around if this hap­pens. Now ev­ery­one has two copies, one in a simu­la­tion and one in the “real” world. Note that not ev­ery­one in world 1 will nec­es­sar­ily know they’re in a simu­la­tion, but they will prob­a­bly start to di­verge from their world 0 coun­ter­parts slightly be­cause the wor­lds are slightly differ­ent.

• I in­ter­preted the story Blue­berry’s way; the in­verse of the way many his­to­ries con­verge into a sin­gle fu­ture in Per­mu­ta­tion City, one his­tory di­verges into many fu­tures.

• I’m re­ally con­fused now. Also I haven’t read Per­mu­ta­tion City...

Just be­cause one de­ter­minis­tic world will always end up simu­lat­ing an­other does not mean there is only one pos­si­ble world that would end up simu­lat­ing that world.

• I can’t see any point in turn­ing it off. Run it to the end and you will live, turn it off and “cur­rent you” will cease to ex­ist. What can jus­tify turn­ing it off?

EDIT: I got it. Only choice that will be effec­tive is top-level. It seems that it will be a con­stant source of di­ver­gence.

• If cur­rent you is iden­ti­cal with top-layer you, you won’t cease to ex­ist by turn­ing it off, you’ll just “be­come” top-layer you.

• It’s sur­pris­ing that they aren’t also ex­per­i­ment­ing with al­ter­nate uni­verses, but that would be a differ­ent (and prob­a­bly much longer) story.

• That’s a good point. Every­one but the top layer will be iden­ti­cal and the top layer will then only di­verge by a few sec­onds.

• Po­ten­tial top-level ar­ti­cle, have it mostly writ­ten, let me know what you think:

Ti­tle: The hard prob­lem of tree vibra­tions [ten­ta­tive]

Fol­low-up to: this com­ment (Thanks Ade­lene Dawner!)

Sum­mary: Even if you agree that trees nor­mally make vibra­tions when they fall, you’re still left with the prob­lem of how you know if they make vibra­tions when there is no ob­ser­va­tional way to check. But this prob­lem can be re­solved by look­ing at the com­plex­ity of the hy­poth­e­sis that no vibra­tions hap­pen. Such a hy­poth­e­sis is pred­i­cated on prop­er­ties spe­cific to the hu­man mind, and there­fore is ex­tremely lengthy to spec­ify. Lack­ing the type and quan­tity of ev­i­dence nec­es­sary to lo­cate this hy­poth­e­sis, it can be effec­tively ruled out.

Body: A while ago, Eliezer Yud­kowsky wrote an ar­ti­cle about the “stan­dard” de­bate over a fa­mous philo­soph­i­cal dilemma: “If a tree falls in a for­est and no one hears it, does it make a sound?” (Call this “Ques­tion Y.”) Yud­kowsky wrote as if the usual in­ter­pre­ta­tion was that the dilemma is in the equiv­o­ca­tion be­tween “sound as vibra­tion” and “sound as au­di­tory per­cep­tion in one’s mind”, and that the stan­dard (naive) de­bate re­lies on two par­ties as­sum­ing differ­ent defi­ni­tions, lead­ing to a pointless ar­gu­ment. Ob­vi­ously, it makes a sound in the first sense but not the sec­ond, right?

But through­out my whole life up to that point (the ques­tion even ap­peared in the an­i­mated se­ries Beetle­juice that I saw when I was lit­tle), I had as­sumed a differ­ent ques­tion was be­ing asked: speci­fi­cally,

If a tree falls, and no hu­man (or hu­man-en­tan­gled[1] sen­sor) is around to hear it, does it still make vibra­tions? On what ba­sis do you be­lieve this, lack­ing a way to di­rectly check? (Call this “Ques­tion S”.)

Now, if you’re a reg­u­lar on this site, you will find that ques­tion easy to an­swer. But be­fore go­ing into my ex­po­si­tion of the an­swer, I want to point out some er­rors that Ques­tion S does not make.

For one thing, it does not equiv­o­cate be­tween two mean­ings of sound—there, sound is taken to mean only one thing: the vibra­tions.

Se­cond, it does not re­duce to a sim­ple ques­tion about an­ti­ci­pa­tion of ex­pe­rience. In Ques­tion Y, the dis­putants can run through all ob­ser­va­tions they an­ti­ci­pate, and find them to be the same. How­ever, if you look at the same cases in Ques­tion S, you don’t re­solve the de­bate so eas­ily: both par­ties agree that by putting a tape-recorder by the tree, you will de­tect vibra­tions from the tree fal­ling, even if peo­ple aren’t around. But Ques­tion S in­stead speci­fi­cally asks about what goes on when these kinds of sen­sors are not around, ren­der­ing such tests un­helpful for re­solv­ing such a dis­agree­ment.

So how do you go about re­solv­ing Ques­tion S? Yud­kowsky gave a model for how to do this in Belief in the Im­plied In­visi­ble, and I will do some­thing similar here.

Com­plex­ity of the hypothesis

First, we ob­serve that, in all cases where we can make a di­rect mea­sure­ment, trees make vibra­tions when they fall. And we’re tasked with find­ing out whether, speci­fi­cally in those cases where a hu­man (or ap­pro­pri­ate or­ganism with vibra­tion sen­si­tivity in its cog­ni­tion) will never make a mea­sure­ment of the vibra­tions, the vibra­tions sim­ply don’t hap­pen. That is, when we’re not look­ing—and never in­tend to look—trees stop the “act” and don’t vibrate.

The complexity this adds to the laws of physics is astounding and may be hard to appreciate at first. This belief would require us to accept that nature has some way of knowing which things will eventually reach a cognitive system in a way that informs it that vibrations have happened. It must selectively modify material properties in precisely defined scenarios. It must have a precise definition of what counts as a tree.

Now, if this ac­tu­ally hap­pens to be how the world works, well, then all the worse for our cur­rent mod­els! How­ever, each bit of com­plex­ity you add to a hy­poth­e­sis re­duces its prob­a­bil­ity and so must be jus­tified by ob­ser­va­tions with a cor­re­spond­ing like­li­hood ra­tio—that is, the ra­tio of the prob­a­bil­ity of the ob­ser­va­tion hap­pen­ing if this al­ter­nate hy­poth­e­sis is true, com­pared to if it were false. By spec­i­fy­ing the vibra­tions’ im­mu­nity to ob­ser­va­tion, the log of this ra­tio is zero, mean­ing ob­ser­va­tions are stipu­lated to be un­in­for­ma­tive, and un­able to jus­tify this ad­di­tional sup­po­si­tion in the hy­poth­e­sis.
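The likelihood-ratio point above can be put in odds form with a short sketch. The bit count here is a made-up illustration of a complexity penalty, not a measured quantity:

```python
import math

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# A hypothesis carrying k extra bits of complexity starts with its prior
# odds penalized by a factor of 2**-k, and an observation whose likelihood
# ratio is 1 (log-ratio zero) cannot claw any of that penalty back.

def posterior_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

extra_bits = 40                   # hypothetical complexity penalty
prior_odds = 2.0 ** -extra_bits   # odds for "no unobserved vibrations"
lr = 1.0                          # observation stipulated to be uninformative

print(posterior_odds(prior_odds, lr) == prior_odds)  # True: no update
print(math.log2(lr))                                 # 0.0
```

However large the stipulated immunity makes `extra_bits`, a likelihood ratio pinned at 1 leaves the posterior exactly where the prior started.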

[1] You might won­der how some­one my age in ’89-’91 would come up with terms like “hu­man-en­tan­gled sen­sor”, and you’re right: I didn’t use that term. Still, I con­sid­ered the use of a tape recorder that some­one will check to be a “some­one around to hear it”, for pur­poses of this dilemma. Least Con­ve­nient Pos­si­ble World and all...

• I think that if this post is left as it is, it would be too trivial to be a top-level post. You could reframe it as a beginners’ guide to Occam, or you could make it more interesting by going deeper into some of the issues (if you can think of anything more to say on the topic of differentiating between hypotheses that make the same predictions, that might be interesting, although I think you might have said all there is to say).

• It could also be framed as an is­sue of mak­ing your be­liefs pay rent, similar to the dragon in the garage ex­am­ple—or per­haps as an ex­am­ple of how re­al­ity is en­tan­gled with it­self to such a de­gree that some ques­tions that seem to carve re­al­ity at the joints don’t re­ally do so.

(If falling trees don’t make vibrations when there’s no human-entangled sensor, how do you differentiate a human-entangled sensor from a non-human-entangled sensor? If falling-tree vibrations leave subtle patterns in the surrounding leaf litter that sufficiently sensitive human-entangled sensors can detect, does leaf litter then count as a human-entangled sensor? What if certain plants or animals have observably evolved to handle falling-tree vibrations in a certain way, and we can detect that? Then such plants or animals (or their absence, if we’re able to form a strong enough theory of evolution to notice the absence of such reactions where we would expect them) could count as human-entangled sensors well before humans even existed. In that case, is there anything that isn’t a human-entangled sensor?)

• Good points in the par­en­thet­i­cal—if I make it into a top-level ar­ti­cle, I’ll be sure to in­clude a more thor­ough dis­cus­sion of what con­cept is be­ing carved with the hy­poth­e­sis that there are no tree vibra­tions.

• There’s also the option of extending the post to actually address the problem it alludes to in the title, the so-called “hard problem of consciousness”.

• Eh, it was just sup­posed to be an al­lu­sion to that prob­lem, with the im­pli­ca­tion that the “easy prob­lem of tree vibra­tions” is the one EY at­tacked (Ques­tion Y in the draft). Solv­ing the hard prob­lem of con­scious­ness is a bit of a tall or­der for this ar­ti­cle...

• I be­lieve this is the con­ver­sa­tion you’re re­spond­ing to.

(up­voted)

• Oh, bless you[1]! That’s the one! :-)

Thanks for the upvote. What I’m wondering is whether it’s non-obvious or helpful enough to go top-level. There are still a few paragraphs to add. I also wasn’t sure if the subject matter is interesting.

[1] Bless­ing given in the sec­u­lar sense.

• This seems worthy of a top-level post. When you make it one, link to the relevant prior posts about complexity of hypotheses.

• But through­out my whole life up to that point (the ques­tion even ap­peared in the an­i­mated se­ries Beetle­juice that I saw when I was lit­tle), I had as­sumed a differ­ent ques­tion was be­ing asked: speci­fi­cally,

If a tree falls, and no hu­man (or hu­man-en­tan­gled[1] sen­sor) is around to hear it, does it still make vibra­tions? On what ba­sis do you be­lieve this, lack­ing a way to di­rectly check? (Call this “Ques­tion S”.)

Me too! It was ac­tu­ally ex­plained that way to me by my par­ents as a kid, in fact. I won­der if there are two sub­tly differ­ent ver­sions float­ing around or EY just in­ter­preted it un­char­i­ta­bly.

• And yet, the quan­tum me­chan­i­cal world be­haves ex­actly this way. Ob­ser­va­tions DO change ex­actly what hap­pens. So, ap­par­ently at the quan­tum me­chan­i­cal level, na­ture does have some way of know­ing.

I’m not sure what effect that this has upon your ar­gu­ment, but it’s some­thing that I think that you’re miss­ing.

• I’m fa­mil­iar with this: en­tan­gle­ment be­tween the en­vi­ron­ment and the quan­tum sys­tem af­fects the out­come, but na­ture doesn’t have a spe­cial law that dis­t­in­guishes hu­man en­tan­gle­ment from non-hu­man en­tan­gle­ment (as far as we know, given Oc­cam’s Ra­zor, etc.), which the al­ter­nate hy­poth­e­sis would re­quire.

The er­ror that early quan­tum sci­en­tists made was in failing to rec­og­nize that it was the en­tan­gle­ment with their mea­sur­ing de­vices that af­fected the out­come, not their im­ma­te­rial “con­scious knowl­edge”. As EY wrote some­where, they asked,

“The outcome changes when I know something about the system—what difference should that make?”

when they should have asked,

“The outcome changes when I establish more mutual information with the system—what difference should that make?”

In any case, de­tec­tion of vibra­tion does not re­quire sen­si­tivity to quan­tum-spe­cific effects.

• And yet, the quan­tum me­chan­i­cal world be­haves ex­actly this way. Ob­ser­va­tions DO change ex­actly what hap­pens. So, ap­par­ently at the quan­tum me­chan­i­cal level, na­ture does have some way of know­ing.

Not really. This is only the case for certain interpretations of what is going on, such as certain forms of the Copenhagen interpretation. Even then, “observation” in this context doesn’t really mean observe in the colloquial sense, but something closer to interacting with another particle in a certain class of conditions. The notion that you seem to be conflating this with is the idea that consciousness causes collapse. Not many physicists take that idea at all seriously. In most versions of the Many-Worlds interpretation, one doesn’t need to say anything about observations triggering anything (or at least can talk about everything without talking about observations).

Dis­claimer: My knowl­edge of QM is very poor. If some­one here who knows more spots any­thing wrong above please cor­rect me.

• Se­cond­ing ko­dos96. As this would ex­on­er­ate not only Knox and Sol­lecito but Guede as well, it has to be treated with con­sid­er­able skep­ti­cism, to say the least.

More sig­nifi­cant, it seems to me (though still rather weak ev­i­dence), is the Alessi tes­ti­mony, about which I ac­tu­ally con­sid­ered post­ing on the March open thread.

Still, the Aviello story is enough of a sur­prise to marginally lower my prob­a­bil­ity of Guede’s guilt. My cur­rent prob­a­bil­ities of guilt are:

Knox: < 0.1 % (i.e. not a chance)

Sol­lecito: < 0.1 % (like­wise)

Guede: 95-99% (per­haps just low enough to in­sist on a de­bunk­ing of the Aviello tes­ti­mony be­fore con­vict­ing)

It’s prob­a­bly about time I offi­cially an­nounced that my re­vi­sion of my ini­tial es­ti­mates for Knox and Sol­lecito was a mis­take, an ex­am­ple of the sin of un­der­con­fi­dence.

I of course re­main will­ing to par­ti­ci­pate in a de­bate with Rolf Nel­son on this sub­ject.

Fi­nally, I’d like to note that the last cou­ple of months have seen the cre­ation of a won­der­ful new site de­voted to the case, In­jus­tice in Peru­gia, which any­one in­ter­ested should definitely check out. Had it been around in De­cem­ber, I doubt that I could have made my sur­vey seem like a fair fight be­tween the two sides.

• More sig­nifi­cant, it seems to me (though still rather weak ev­i­dence), is the Alessi tes­ti­mony, about which I ac­tu­ally con­sid­ered post­ing on the March open thread. Still, the story is enough of a sur­prise to marginally lower my prob­a­bil­ity of Guede’s guilt.

I hadn’t heard about this—I just read your link, though, and maybe I’m missing something, but I don’t see how it lowers the probability of Guede’s guilt. He (supposedly) confessed to having been at the crime scene, and said that Knox and Sollecito weren’t there. How does that, if true, exonerate Guede?

• You omit­ted a cru­cial para­graph break. :-)

The Aviello tes­ti­mony would ex­on­er­ate Guede (and hence is un­likely to be true); the Alessi tes­ti­mony is es­sen­tially con­sis­tent with ev­ery­thing else we know, and isn’t par­tic­u­larly sur­pris­ing at all.

I’ve ed­ited the com­ment to clar­ify.

• Ah­hhh… ok I see where the mi­s­un­der­stand­ing was now.

• (Com­ment bizarrely trun­cated...here is the rest.)

It’s prob­a­bly about time I offi­cially an­nounced that my re­vi­sion of my ini­tial es­ti­mates for Knox and Sol­lecito was a mis­take, an ex­am­ple of the sin of un­der­con­fi­dence.

I of course re­main will­ing to par­ti­ci­pate in a de­bate with Rolf Nel­son on this sub­ject.

Fi­nally, I’d like to note that the last cou­ple of months have seen the cre­ation of a won­der­ful new site de­voted to the case, In­jus­tice In Peru­gia, which any­one in­ter­ested should definitely check out. Had it been around in De­cem­ber, I doubt that I could have made my sur­vey seem like a fair fight be­tween the two sides.

• That story would be consistent with Guédé’s, modulo the usual eyewitness confusion.

• And mod­ulo all the foren­sic ev­i­dence.

Ob­vi­ously this is break­ing news and it’s too soon to draw a con­clu­sion, but at first blush this sounds like just an­other at­ten­tion seeker, like those who always pop up in these high pro­file cases. If he re­ally can pro­duce a knife, and it matches the wounds, then maybe I’ll re­con­sider, but at the mo­ment my BS de­tec­tor is pegged.

Of course, it’s still or­ders of mag­ni­tude more likely than Knox and Sol­lecito be­ing guilty.

• I wasn’t fol­low­ing the case even when kom­pon­isto posted his analy­ses, so I re­ally can’t say.

• How many lot­tery tick­ets would you buy if the ex­pected pay­off was pos­i­tive?

This is not a com­pletely hy­po­thet­i­cal ques­tion. For ex­am­ple, in the Euromil­lions weekly lot­tery, the jack­pot ac­cu­mu­lates from one week to the next un­til some­one wins it. It is there­fore in the­ory pos­si­ble for the ex­pected to­tal pay­out to ex­ceed the cost of tick­ets sold that week. Each ticket has a 1 in 76,275,360 (i.e. C(50,5)*C(9,2)) prob­a­bil­ity of win­ning the jack­pot; mul­ti­ple win­ners share the prize.

So, sup­pose some­one draws your at­ten­tion (since of course you don’t bother fol­low­ing these things) to the num­ber of weeks the jack­pot has rol­led over, and you do all the rele­vant calcu­la­tions, and con­clude that this week, the ex­pected win from a €1 bet is €1.05. For sim­plic­ity, as­sume that the jack­pot is the only prize. You are also smart enough to choose a set of num­bers that look too non-ran­dom for any or­di­nary buyer of lot­tery tick­ets to choose them, so as to max­imise your chance of hav­ing the jack­pot all to your­self.

Do you buy any tick­ets, and if so how many?

If you judge that your util­ity for money is sub­lin­ear enough to make your ex­pected gain in utilons nega­tive, how large would the jack­pot have to be at those odds be­fore you bet?
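For what it’s worth, the combinatorics in the question check out; a short Python sketch, using the 5-of-50, 2-of-9 format stated above and the question’s simplifying stipulation that the jackpot is the only prize and is not shared:

```python
from math import comb

# A EuroMillions ticket (2010-era format) picks 5 of 50 numbers
# and 2 of 9 "lucky stars".
odds = comb(50, 5) * comb(9, 2)
print(odds)  # 76275360

# Under the question's stipulations, a EUR 1 ticket has expected value
# jackpot / odds, so an EV of EUR 1.05 implies a jackpot of about:
jackpot = round(1.05 * odds)
print(jackpot)  # 80089128
```

That implied jackpot is the 80,089,128 figure that appears in the Kelly discussion below in the thread.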

• The tra­di­tional an­swer is to fol­low the Kelly crite­rion, is it not? That would imply

$f^* = \frac{0.05 + n/76,275,360}{80,089,128/n}$

where n is the num­ber of tick­ets. This im­plies you should buy n such that (€1)*n = Wf*, where W is your ini­tial wealth.

Edit: Thanks, JoshuaZ, for point­ing out that the Kelly crite­rion might not be the ap­pli­ca­ble one in a given situ­a­tion.

• OK, I have a ques­tion! Sup­pose I hold a risky as­set that costs me c at time t, and whose value at time t is pre­dicted to be k (1 + r), with stan­dard de­vi­a­tion s. How can I calcu­late the length of time that I will have to hold the as­set in or­der to ra­tio­nally ex­pect the as­set to be worth, say, 2c with prob­a­bil­ity p*?

I am not do­ing a fi­nance class or any­thing; I am gen­uinely cu­ri­ous.
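One hedged way to attack this, not from the thread: assume i.i.d. normally distributed per-period log-returns with mean r and standard deviation s (i.e. a lognormal price model, which is one reading of the question, not the only one), and estimate the holding period by Monte Carlo. All parameter values below are hypothetical, and this checks the point-in-time probability of having doubled, not the first-passage time.

```python
import random

def doubling_period(r, s, p_star, paths=20000, max_t=200, seed=0):
    """Shortest holding period T at which the asset is worth at least
    twice its cost with probability >= p_star, under i.i.d. Gaussian
    per-period log-returns with mean r and std s."""
    rng = random.Random(seed)
    log_values = [0.0] * paths       # log(value / cost), starts at 0
    target = 0.6931471805599453      # ln(2): doubling threshold
    for t in range(1, max_t + 1):
        for i in range(paths):
            log_values[i] += rng.gauss(r, s)
        if sum(v >= target for v in log_values) / paths >= p_star:
            return t
    return None

# Hypothetical parameters: 7% mean log-return, 15% volatility per period.
print(doubling_period(r=0.07, s=0.15, p_star=0.5))
```

With these made-up numbers the median log-value crosses ln 2 around period ten; for an analytic answer one would invert the normal CDF for the terminal log-return instead of simulating.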

• I knew about Kelly, but not well enough for the prob­lem to bring it to mind.

I make the Kelly fraction of (bp − q)/b work out to about ε/N, where ε = 0.05 and N = 76,275,360. So the optimal bet is 1 part in 1.5 billion of my wealth, which is approximately nothing.

The moral: buy­ing lot­tery tick­ets is still a bad idea even when it’s marginally prof­itable.
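The arithmetic in this comment is easy to check with a short script, under the thread’s stipulations (jackpot-only prize, no shared win, a single €1 ticket; the variable names are mine):

```python
# Kelly fraction f* = (b*p - q) / b for a single EUR 1 lottery ticket.
N = 76_275_360        # odds against hitting the jackpot
jackpot = 1.05 * N    # implied by the EUR 1.05 expected value per EUR 1
p = 1 / N             # win probability
q = 1 - p             # loss probability
b = jackpot - 1       # net payout per EUR 1 staked

f_star = (b * p - q) / b
print(f_star)         # on the order of 1 part in 1.5 billion of your wealth
print(0.05 / N)       # the epsilon/N approximation used in the comment
```

The numerator b·p − q collapses to exactly the 0.05 edge, so the whole fraction is the edge divided by the (enormous) net odds, hence the “approximately nothing” conclusion.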

• Yes, and note that Kelly gets much less optimal when you increase bet sizes than when you decrease bet sizes. So from a Kelly perspective, rounding up to a single ticket is probably a bad idea. Your point about sublinearity of utility for money makes it in general an even worse idea. However, I’m not sure that Kelly is the right approach here. In particular, Kelly is the correct attitude when you have a large number of opportunities to bet (indeed, it is the limiting case). However, lotteries which have a positive expected outcome are very rare, so you never approach anywhere near the limiting case. Remember, Kelly optimizes long-term growth.

• That raises the ques­tion of what the ra­tio­nal thing to do is, when faced with a strictly one-time chance to buy a very small prob­a­bil­ity of a very large re­ward.

• Well, no—you shouldn’t buy one ticket. And ac­cord­ing to my calcu­la­tions when I tried plot­ting W ver­sus n by my for­mula, the min­i­mum of W is at “buy all the tick­ets”, so un­less you have €76,275,360 already...

• Less Wrong Book Club and Study Group

(This is a draft that I pro­pose post­ing to the top level, with such im­prove­ments as will be offered, un­less feed­back sug­gests it is likely not to achieve its pur­poses. Also re­ply if you would be will­ing to co-fa­cil­i­tate: I’m will­ing to do so but backup would be nice.)

Do you want to be­come stronger in the way of Bayes? This post is in­tended for peo­ple whose un­der­stand­ing of Bayesian prob­a­bil­ity the­ory is cur­rently be­tween lev­els 0 and 1, and who are in­ter­ested in de­vel­op­ing deeper knowl­edge through de­liber­ate prac­tice.
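
For a sense of the level intended, here is a minimal worked example of the kind of calculation the group would practice; the diagnostic-test numbers are the standard illustrative ones, not taken from Jaynes:

```python
def posterior(prior, likelihood, false_positive_rate):
    """Bayes' theorem for a binary hypothesis H given evidence E:
    P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]."""
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

# A 1% base rate, an 80% true-positive rate, and a 9.6% false-positive
# rate give a posterior of only about 7.8% after a positive test.
p = posterior(prior=0.01, likelihood=0.8, false_positive_rate=0.096)
```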

Our in­ten­tion is to form a self-study group com­posed of peers, work­ing with the as­sis­tance of a fa­cil­i­ta­tor—but not nec­es­sar­ily of a teacher or of an ex­pert in the topic. Some stu­dents may be some­what more ad­vanced along the path, and able to offer as­sis­tance to oth­ers.

Our first text will be E. T. Jaynes's Probability Theory: The Logic of Science, which can be found in PDF form (in a slightly less polished version than the book edition) here or here.

We will work through the text in sec­tions, at a pace al­low­ing thor­ough un­der­stand­ing: ex­pect one new sec­tion ev­ery week, maybe ev­ery other week. A brief sum­mary of the cur­rently dis­cussed sec­tion will be pub­lished as an up­date to this post, and si­mul­ta­neously a com­ment will open the dis­cus­sion with a few ques­tions, or the state­ment of an ex­er­cise. Please use ROT13 when­ever ap­pro­pri­ate in your replies.

A first com­ment be­low col­lects in­ten­tions to par­ti­ci­pate. Please re­ply to this com­ment only if you are gen­uinely in­ter­ested in gain­ing a bet­ter un­der­stand­ing of Bayesian prob­a­bil­ity and will­ing to com­mit to spend a few hours per week read­ing through the sec­tion as­signed or do­ing the ex­er­cises. A few days from now the first sec­tion will be posted.

• This sounds great, I’m definitely in. I feel like I have a mod­er­ately okay in­tu­itive grasp on Bayescraft but a chance to work through it from the ground up would be great.

• In. Have the dead­tree ver­sion, but I was stymied in my first crack at it.

• In. If needed I can cover a few of the early chap­ters.

• I’m in. I already read the first few chap­ters, but it will be nice to go over them to solid­ify that knowl­edge. The slower pace will help as well. The later chap­ters rely on some knowl­edge of statis­tics, maybe some mem­ber of the book club is already knowl­edge­able to be able to find good links to sum­maries of these things when they come up?

• I would be in­ter­ested, what is the in­tended time pe­riod for the read­ing? I have a two-week trip com­ing up when I will prob­a­bly be busy but aside from that I would very much like to par­ti­ci­pate.

• The plan, I think, would be to start nice and slow, then adjust as we gain confidence. We're likely to start with the first chapter, so you could get a head start by reading that before we start for real; that's looking likely now, as we have quite a few more people than the last time this was brought up.

• I’m in, been in­tend­ing to read through some maths on my free time.

• It’s the­sis writeup pe­riod for me, but this is ex­tremely tempt­ing.

• I’m in­ter­ested. I already have the book but haven’t pro­gressed very far so this seems like it’s po­ten­tially a good mo­ti­va­tor to finish it. The link to the PDF seems to be miss­ing btw.

• I’m en­thu­si­as­ti­cally in.

• I think that a book club is a great idea, and this is an ex­cel­lent choice for a book. I’m definitely in­ter­ested.

• Feed­back sought: is this too short? Too long? Is the in­tent clear? What if any­thing is miss­ing?

• Are you in­tend­ing to do this on­line or meet in per­son? If you are ac­tu­ally meet­ing, what city is this tak­ing place in? Thanks.

• Excellent question, thanks. I can only offer to help with the online version; I live in France, where only a few LessWrongers reside.

And there’s noth­ing to pre­vent the on­line group from hav­ing a F2F con­tinu­a­tion. I’ll ask peo­ple to say where they are.

• A link to the Ama­zon Page if peo­ple want to read re­views and learn what the book is about.

• The link to the pdf ver­sion seems to be miss­ing in the origi­nal post.

• Question: what's your experience with stuff that seems New-Agey at first look, like yoga, meditation and so on? Anything worth trying?

Case in point: I read in Feynman's book about deprivation tanks, and recently found out that they are available in bigger cities (Berlin, Germany, in my case). I will try and hopefully enjoy that soon. Sadly those places are run by New-Age folks that offer all kinds of strange stuff, but that might not take away from the experience of floating in a sensory empty space.

• Chi­nese in­ter­nal mar­tial arts: Tai Chi, Xingyi, and Bagua. The word “chi” does not carve re­al­ity at the joints: There is no literal bod­ily fluid sys­tem par­allel to blood and lymph. But I can make train­ing part­ners light­headed with a quick suc­ces­sion of strikes to Ren Ying (ST9) then Chi Ze (LU5); I can send some­one stum­bling back­ward with some fairly light pushes; af­ter 30-60 sec­onds of spar­ring to de­velop a rap­port I can take an un­wary op­po­nent’s bal­ance with­out phys­i­cal con­tact.

Each of these skills fits more naturally under a different category, but if you want to learn them all, the most efficient way is to study a Chinese internal martial art or something similar.

• I can take an un­wary op­po­nent’s bal­ance with­out phys­i­cal con­tact.

This sounds mag­i­cal at first read­ing, but is ac­tu­ally not that tricky. It’s just psy­chol­ogy and bal­ance. If you set up a pat­tern of pre­dictable at­tacks, then feint in the right di­rec­tion while your op­po­nent is jump­ing at you off-bal­ance, you can sur­prise him enough to make him fall as he at­tempts to ward off your feint.

• I used to go to a Tai Chi class (I stopped only be­cause I de­cided I’d taken it as far as I was go­ing to), and the in­struc­tor, who never talked about “chi” as any­thing more than a metaphor or a use­ful vi­su­al­i­sa­tion, said this about the in­ter­nal arts:

In the old days (that would be pre-rev­olu­tion­ary China) you wouldn’t prac­tice just Tai Chi, or be­gin with Tai Chi. Tai Chi was the equiv­a­lent of post­grad­u­ate study in the mar­tial arts. You would start out by learn­ing two or three “hard”, “ex­ter­nal” styles. Then, hav­ing reached black belt in those, and hav­ing de­vel­oped your power, speed, strength, and fight­ing spirit, you would study the in­ter­nal arts, which would teach you the proper al­ign­ments and struc­tures, the mean­ing of the var­i­ous move­ments and forms. In the class there were two stu­dents who did Go­juryu karate, a 3rd dan and a 5th dan, and they both said that their karate had im­proved no end since tak­ing up Tai Chi.

Which is not to say that Tai Chi isn’t use­ful on its own, it is, but there is that wider con­text for get­ting the max­i­mum use out of it.

• That meshes well with what I have learned: Bagua is also an advanced art, and my teacher doesn't teach it to beginners. The one of the three primary internal arts designed for new martial artists is Xingyi. It's too bad I'm too pecuniarily challenged to attend the Singularity Summit, or we could do rationalist pushing hands.

• In­ter­est­ing. It seems that learn­ing this art (1) gives you a power and (2) makes you vuln­er­a­ble to it.

• There may be a cor­re­la­tion be­tween study­ing mar­tial arts and vuln­er­a­bil­ity to tech­niques which can be mod­eled well by “chi.” But I have tried the strik­ing se­quences suc­cess­fully on capo­eris­tas and catch wrestlers, and the light but effec­tive pushes on my non-mar­tially-trained brother af­ter show­ing him Wu-style push­ing hands for a minute or two.

• That sug­gests an ex­per­i­ment. Any­one see any flaws in the fol­low­ing?

1. Write up in­struc­tions for two tech­niques—one which would work and one which not work, ac­cord­ing to your the­ory—in suffi­cient de­tail for some­one phys­i­cally adept but not in­structed in Chi­nese in­ter­nal mar­tial arts (e.g. a dancer) to learn. La­bel each with a ran­dom let­ter (e.g. I for the cor­rect one and K for the in­cor­rect one).

2. Have one group learn each tech­nique—have them video­tape their ac­tions and send them cor­rec­tions by text, so that they don’t get cues about whether you ex­pect the meth­ods to work.

3. Have an­other party ig­no­rant of the tech­nique perform tests to see how well each group does.
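
A sketch of how steps 2 and 3 might be run and analyzed (group sizes, success counts, and the choice of a permutation test are all my illustrative assumptions, not part of the proposal):

```python
import random

def assign_blind(participants, labels=("I", "K")):
    """Steps 1-2: randomly split participants between the two labeled
    techniques, so neither group knows which one is expected to work."""
    shuffled = list(participants)
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    return {labels[0]: shuffled[:half], labels[1]: shuffled[half:]}

def permutation_test(wins_a, n_a, wins_b, n_b, iters=10_000):
    """Step 3: one-sided permutation test of whether group A's success
    rate exceeds group B's by more than chance relabeling would produce."""
    observed = wins_a / n_a - wins_b / n_b
    pool = [1] * (wins_a + wins_b) + [0] * (n_a + n_b - wins_a - wins_b)
    extreme = 0
    for _ in range(iters):
        random.shuffle(pool)
        if sum(pool[:n_a]) / n_a - sum(pool[n_a:]) / n_b >= observed:
            extreme += 1
    return extreme / iters  # small value => unlikely under "no difference"
```

For instance, if the technique labeled I succeeded 9 times in 10 trials and K only once in 10, the test would report a very small p-value; identical results for both groups would leave it well above any significance threshold.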

• I like the idea of sci­en­tifi­cally test­ing in­ter­nal arts; and your idea is cer­tainly more rigor­ous than TV se­ries at­tempt­ing to ap­proach mar­tial arts “sci­en­tifi­cally” like Mind, Body, and Kick­ass Moves. Un­for­tu­nately, the only one of those I can think of which is both (1) ex­plain­able in words and pic­tures to a pre­cise enough de­gree that “chi”-type the­o­ries could con­strain ex­pec­ta­tions, and (2) has an un­am­bigu­ous re­sult when done cor­rectly which varies qual­i­ta­tively from an in­cor­rect at­tempt is the knock­out se­ries of hits, which raises both eth­i­cal and prac­ti­cal con­cerns.

I would clas­sify the other two as tacit knowl­edge—they re­quire a lit­tle bit of in­struc­tion on the coun­ter­in­tu­itive parts; then a lot of prac­tice which I can’t think of a good way to fake.

Note that I would be com­pletely as­ton­ished if there weren’t a perfectly nor­mal ex­pla­na­tion for any of these feats; but de­riv­ing meth­ods for them from first prin­ci­ples of biome­chan­ics and cog­ni­tive sci­ence would take a lot longer than study­ing with a good teacher who works with the “chi” model.

• The prob­lem is that a pos­i­tive re­sult would only show that a spe­cific se­quence of at­tacks worked well. It wouldn’t show that “chi” or other un­usual mod­els were re­quired to ex­plain it; there could be perfectly nor­mal ex­pla­na­tions for why a se­ries of at­tacks was effec­tive.

• That’s why I sug­gested writ­ing down both tech­niques which should work ac­cord­ing to the model and tech­niques which should not work ac­cord­ing to the model.

• It’s con­ceiv­able that imag­in­ing chi is the best (or at least a very good) way of be­ing able to do sub­tle at­tacks.

• Question: what's your experience with stuff that seems New-Agey at first look, like yoga, meditation and so on? Anything worth trying?

The Five Ti­be­tans are a set of phys­i­cal ex­er­cises which re­ju­ve­nate the body to youth­ful vi­gour and pro­long life in­definitely. They are at least 2,500 years old, and prac­ticed by hid­den mas­ters of se­cret wis­dom liv­ing in re­mote monas­ter­ies in Ti­bet, where, in the ear­lier part of the 20th cen­tury, a re­tired Bri­tish army colonel sought out these monas­ter­ies, stud­ied with the an­cient mas­ters to great effect, and even­tu­ally brought the ex­er­cises to the West, where they were first pub­lished in 1939.

Ok, you don’t be­lieve any of that, do you? Nei­ther do I, ex­cept for the first eight words and the last six. I’ve been do­ing these ex­er­cises since the be­gin­ning of 2009, since be­ing turned on to them by Steven Barnes’ blog and they do seem to have made a dra­matic im­prove­ment in my gen­eral level of phys­i­cal en­ergy. Whether it’s these ex­er­cises speci­fi­cally or just the dis­ci­pline of do­ing a similar amount of ex­er­cise first thing in the morn­ing, ev­ery morn­ing, I haven’t taken the trou­ble to de­ter­mine by vary­ing them.

More here and here. Nancy Le­bovitz also men­tioned them.

I also do yoga for flex­i­bil­ity (it works) and oc­ca­sion­ally med­i­ta­tion (to lit­tle de­tectable effect). I’d be in­ter­ested to hear from any­one here who med­i­tates and gets more from it than I do.

• I’ve had great re­sults from mod­est (2-3 hrs/​wk) in­vest­ments in hatha yoga, over and above what I get from stan­dard Greco-Ro­man “cal­is­then­ics.”

Be­sides the flex­i­bil­ity, breath­ing, and pos­ture benefits, I find that the idea of ‘chakras’ is vaguely use­ful for fo­cus­ing my con­scious at­ten­tion on in­vol­un­tary mus­cle sys­tems. I would be ex­tremely sur­prised if chakras “cleaved re­al­ity at the joints” in any straight­for­ward sense, but the idea of chakras helps me pay at­ten­tion to my di­ges­tion, heart rate, blad­der, etc. by mak­ing men­tally un­in­ter­est­ing but nev­er­the­less im­por­tant bod­ily func­tions more in­ter­est­ing.

• I’ve done yoga ev­ery week for the last month or two. It’s pleas­ant. Other than pay­ing at­ten­tion to how I’m hold­ing my body vs. the in­struc­tion, I mostly stop think­ing for an hour (as we’re en­couraged to do), which is nice.

I can’t say I no­tice any sig­nifi­cant last­ing effects yet. I’m slightly more flex­ible.

• Hard to say—even New Agey stuff evolves. (Not many fol­low­ers of Re­ich push­ing their cop­per-lined closets these days.)

Gen­er­ally, back­ground stuff is enough. There’s no short­age of hard sci­en­tific ev­i­dence about yoga or med­i­ta­tion, for ex­am­ple. No need for heuris­tics there. Similarly there’s some for float tanks. In fact, I’m hard pressed to think of any New Agey stuff where there isn’t enough back­ground to judge it on its own mer­its.

• Med­i­ta­tion can be pretty darn re­lax­ing. Espe­cially if you hap­pen to live within walk­ing dis­tance of any pleas­ant yet sparsely-pop­u­lated moun­tain­tops. I would recom­mend giv­ing it a shot; don’t worry about ad­vanced tech­niques or any­thing, and just close your eyes and fo­cus on your breath­ing, and the wind (if any). Very pleas­ant.

• Every time I try to med­i­tate I fall asleep.

• There are loads of times I would like to be able to fall asleep, but can’t. I envy your power.

I guess this is an­other rea­son for peo­ple to give med­i­ta­tion a try.

• I find a med­i­ta­tion-like fo­cus on my breath­ing and heart­beat to be a very effec­tive way to fall asleep when my thoughts are keep­ing me awake.

• Why would you want to do that? I mean, what are the supposed advantages? You might want to look it up and see if there's anything about it on the internet. Most alternative medicines are BS, but not necessarily all.

GRRRR! I wish it would let me comment faster than every 8 minutes. Guess I'll come back and post it.

• To have the experience. I don't mean it as a treatment, but as something that would be exciting, new and worth trying just for the sake of it. (Edit/add: the deleted comment above asked why I would bother to do something like floating.)

• 7 Jun 2010 12:45 UTC
6 points

Many are calling BP evil and negligent; has there actually been any evidence of criminal activity on their part? My first guess is that we're dealing with hindsight bias. I am still casually looking into it, but I figured some others here may have already invested enough work into it to point me in the right direction.

Like any dis­aster of this scale, it may be pos­si­ble to learn quite a bit from it, if we’re will­ing.

• It de­pends on what you mean by “crim­i­nal”; un­der en­vi­ron­men­tal law, there are both neg­li­gence-based (neg­li­gent discharge of pol­lu­tants to nav­i­gable wa­ters) and strict li­a­bil­ity (no in­tent re­quire­ment, such as kil­ling of mi­gra­tory birds) crimes that could ap­ply to this spill. I don’t think any­one thinks BP in­tended to have this kind of spill, so the in­ter­est­ing ques­tion from an en­vi­ron­men­tal crim­i­nal law per­spec­tive is whether BP did enough to be treated as act­ing “know­ingly”—the rele­vant in­tent stan­dard for en­vi­ron­men­tal felonies. This is an ex­tremely slip­pery con­cept in the law, es­pe­cially given the com­plex­ity of the sys­tems at is­sue here. Liti­ga­tion will go on for many years on this ex­act point.

• I’ve read some­where that a BP in­ter­nal safety check performed a few months ago in­di­cated “un­usual” prob­lems which ac­cord­ing to again BP in­ter­nal safety guidelines should have been re­solved ear­lier, but some­how they made an ex­cep­tion this time. It didn’t seem like it would have been “ille­gal”, and it also did not note how of­ten such ex­cep­tions are made, by what rea­son­ing, what kind of prob­lems they speci­fi­cally en­coun­tered, what they did to keep the op­er­a­tion run­ning, et cetera...

Though I seldom read "ordinary" news, even of this kind, as my past experience tells me that the factual content is rather low, and most high-quality press prefers to show off in opinion and interpretation of an event rather than to provide an accurate historical report, at least within such a short time frame. It could well be that this event is different.

Also, as with most en­g­ineer­ing dis­ci­plines, re­ally learn­ing from such an event be­yond the ob­vi­ous “there is a non-zero chance for ev­ery­thing to blow up” usu­ally re­quires more area-spe­cific ex­per­tise than an or­di­nary out­sider has.

• I’ve heard scat­tered bits of ac­cu­sa­tions of mis­deeds by BP which may have con­tributed to the spill. Here’s a list from the con­gres­sional in­ves­ti­ga­tion of 5 de­ci­sions that BP made “for eco­nomic rea­sons that in­creased the dan­ger of a catas­trophic well failure” ac­cord­ing to a let­ter from the con­gress­men. It sounds like BP took a bunch of risky short­cuts to save time and money, al­though I’d want to hear from peo­ple who ac­tu­ally un­der­stand the tech­ni­cal is­sues be­fore be­ing too con­fi­dent.

There are other sus­pi­cions and alle­ga­tions float­ing around, like this one.

• You are not re­ally go­ing to learn much un­less you are in­ter­ested in wad­ing through lots of tech­ni­cal ar­ti­cles. If you want to learn, you need to wait un­til it has been di­gested by rele­vant ex­perts into books. I am not sure what you think you can learn from this, but there are two good books of re­lated in­for­ma­tion available now:

Jeff Wheelwright, De­grees of Disaster, about the en­vi­ron­men­tal effects of the Exxon Valdez spill and the clean up.

Trevor Kletz, What Went Wrong?: Case His­to­ries of Pro­cess Plant Disasters, which is re­ally ex­cel­lent. [For gen­eral read­ing, an older edi­tion is perfectly ad­e­quate, new copies are ex­pen­sive.] It has an in­cred­ible amount of de­tail, and hor­rify­ing ac­counts of how ap­par­ently in­signifi­cant mis­takes can (of­ten liter­ally) blow up on you.

• In a re­cent video, Taleb ar­gues that peo­ple gen­er­ally put too much fo­cus on the speci­fics of a dis­aster, and too lit­tle on what makes sys­tems frag­ile.

He said that high debt means (among other things) too much fo­cus on the short run, and skimp­ing on in­surance and pre­cau­tions.

• Also, Richard Feyn­man’s re­marks on the loss of the Space Shut­tle Challenger are a pretty ac­cessible overview of the kinds of dy­nam­ics that con­tribute to ma­jor in­dus­trial ac­ci­dents. http://​​his­tory.nasa.gov/​​roger­srep/​​v2appf.htm

[edit: cor­rected, thx.]

• Pretty sure you mean Challenger. Feyn­man was in­volved in the in­ves­ti­ga­tion of the Challenger dis­aster. He was dead long be­fore Columbia.

• I’m not sure it’s rele­vant whether they did any­thing ille­gal or not. Peo­ple always seem to want to blame and pun­ish some­one for their prob­lems. In my opinion, they should be forced to pay for and com­pen­sate for all the dam­age, as well as a very large fine as pun­ish­ment. This way in the fu­ture they, and other com­pa­nies, can reg­u­late them­selves and pre­pare for emer­gen­cies as effi­ciently as pos­si­ble with­out ar­bi­trary and clunky gov­ern­ment reg­u­la­tions and agen­cies try­ing to slap ev­ery­thing to­gether at the last mo­ment. Of course, if a sin­gle per­son ac­tu­ally did some­thing ir­re­spon­si­ble (eg; bob the worker just used duct tape to fix that pipe know­ing that it wouldn’t hold) then they should be able to be tried in court or sued/​fined by the com­pany. But even then, it’s up to the com­pany to make sure that stuff like this doesn’t hap­pen by mak­ing sure all of their work­ers are com­pe­tent and cer­tified.

• Re­grets and Motivation

Al­most in­vari­ably ev­ery­thing is larger in your imag­i­na­tion than in real life, both good and bad, the con­se­quences of mis­takes loom worse, and the plea­sure of gains looks bet­ter. Real­ity is hum­drum com­pared to our imag­i­na­tions. It is our imag­ined fu­tures that get us off our butts to ac­tu­ally ac­com­plish some­thing.

And the fact that what we do ac­com­plish is done in the hum­drum, real world, means it can never mea­sure up to our imag­ined ac­com­plish­ments, hence re­grets. Be­cause we imag­ine that if we had done some­thing else it could have mea­sured up. The worst part of hav­ing re­grets is the im­pact it has on our mo­ti­va­tion.

some­what ex­panded ver­sion of com­ment on OB a cou­ple of months ago

Added: I didn’t make the con­nec­tion at first, but this is also Eliezer’s point in this quote from The Su­per Happy Peo­ple story, “It’s bad enough com­par­ing your­self to Isaac New­ton with­out com­par­ing your­self to Kim­ball Kin­ni­son.

• I was talking to a friend yesterday and he mentioned a psychological study (I am trying to track down the source) finding that people tend to suffer MORE from failing to pursue certain opportunities than from FAILING after pursuing them. So even if you're right about the overestimation of pleasure, it might just be irrelevant.

• Here is a re­view of that psy­cholog­i­cal re­search (pdf), and there are more stud­ies linked here (the key­word to look for is “re­gret”). The pa­per I linked is:

Gilovich, T., & Med­vec, V. H. (1995). The ex­pe­rience of re­gret: What, when, and why. Psy­cholog­i­cal Re­view, 102, 379-395.

This ar­ti­cle re­views ev­i­dence in­di­cat­ing that there is a tem­po­ral pat­tern to the ex­pe­rience of re­gret. Ac­tions, or er­rors of com­mis­sion, gen­er­ate more re­gret in the short term; but in­ac­tions, or er­rors of omis­sion, pro­duce more re­gret in the long run. The au­thors con­tend that this tem­po­ral pat­tern is mul­ti­ply de­ter­mined, and pre­sent a frame­work to or­ga­nize the di­ver­gent causal mechanisms that are re­spon­si­ble for it. In par­tic­u­lar, this ar­ti­cle doc­u­ments the im­por­tance of psy­cholog­i­cal pro­cesses that (a) de­crease the pain of re­gret­table ac­tion over time, (b) bolster the pain of re­gret­table in­ac­tion over time, and (c) differ­en­tially af­fect the cog­ni­tive availa­bil­ity of these two types of re­grets. Both the func­tional and cul­tural ori­gins of how peo­ple think about re­gret are dis­cussed.

• I haven’t seen a study, but that is a com­mon be­lief. A good quote to that effect,

Re­gret for the things we did can be tem­pered by time; it is re­gret for the things we did not do that is in­con­solable.

• Syd­ney Harris

And I vaguely re­mem­ber see­ing an­other similar quote from Churchill.

• No doubt there is truth in this… how­ever ex­am­ples spring into my mind where ac­com­plish­ing some­thing made me feel bet­ter than what I ever ex­pected. This in­cludes sport (ever win a race or score a goal in a high stakes soc­cer game?), work and per­sonal life. The “re­al­ity is hum­drum” per­spec­tive might, at least in part, be caused by a dis­con­nect be­tween “imag­i­na­tion” and “ac­tion”.

• Often it is our imag­ined bad fu­tures that keep us too afraid to act. In my ex­pe­rience this is more com­mon than the op­po­site.

• What do you mean by “the op­po­site”? I can think of at least two ways to in­vert that sen­tence.

• I meant billswift’s origi­nal idea: that we imag­ine good fu­tures and that mo­ti­vates us to act.

• Maybe you can set your suc­cess set­point to a lower value. The op­ti­mum is hard to achieve. So look­ing for 100% ev­ery­where might be bad.

• One variable often invoked to explain happiness in Denmark (which regularly ranks #1 for happiness) is modest expectations.

ETA: the above pa­per seems a bit tongue-in-cheek, but as I gather, the re­sults are solid. Full dis­clo­sure: I’m from Den­mark.

• Awe­some co­in­ci­dence. I am go­ing to travel to Den­mark next week for 10 days. Will check it out my­self!

• While search­ing for liter­a­ture on “in­tu­ition”, I came upon a book chap­ter that gives “the state of the art in moral psy­chol­ogy from a so­cial-psy­cholog­i­cal per­spec­tive”. This is the best sum­mary I’ve seen of how moral­ity ac­tu­ally works in hu­man be­ings.

The authors give out the chapter for free by email request, but to avoid that trivial inconvenience, I've put up a mirror of it.

ETA: Here’s the cita­tion for fu­ture refer­ence: Haidt, J., & Ke­se­bir, S. (2010). Mo­ral­ity. In S. Fiske, D. Gilbert, & G. Lindzey (Eds.) Hand­book of So­cial Psy­chol­ogy, 5th Edi­tion. Hobeken, NJ: Wiley. Pp. 797-832.

• [T]o avoid that triv­ial in­con­ve­nience, I’ve put up a mir­ror of it.

You’re awe­some.

I’ve pre­vi­ously been im­pressed by how so­cial psy­chol­o­gists rea­son, es­pe­cially about iden­tity. Schemata the­ory is also a de­cent lan­guage for talk­ing about cog­ni­tive al­gorithms from a less cog­ni­tive sci­encey per­spec­tive. I look for­ward to read­ing this chap­ter. Thanks for mir­ror­ing, I wouldn’t have both­ered oth­er­wise.

• This one came up at the re­cent Lon­don meetup and I’m cu­ri­ous what ev­ery­one here thinks:

What would hap­pen if CEV was ap­plied to the Baby Eaters?

My thoughts are that if you applied it to all babyeaters, including the living babies and the ones being digested, it would end up in a place that adult babyeaters would not be happy with. If you expanded it to include all babyeaters that ever existed, or that would ever exist, knowing the fate of 99% of them, the effect would be much more pronounced. So what I make of all this is that either CEV is not utility-function-neutral, or the babyeater morality is objectively unstable when aggregated.

Thoughts?

• What would hap­pen if CEV was ap­plied to the Baby Eaters?

My in­tu­itions of CEV are in­formed by the Rawlsian Veil of Ig­no­rance, which effec­tively asks: “What rules would you want to pre­vail if you didn’t know in ad­vance who you would turn out to be?”

Where CEV as I understand it adds more information—assumes our preferences are extrapolated as if we knew more, were more the kind of people we want to be—the Veil of Ignorance removes information: it strips people under a set of specific circumstances of the detailed information about what their preferences are, what contingent histories brought them there, and so on. This includes things like what age you are, and even—conceivably—how many of you there are.

To this bunch of un­differ­en­ti­ated peo­ple you’d put the ques­tion, “All in fa­vor of a 99% chance of dy­ing hor­ribly shortly af­ter be­ing born, in re­turn for the 1% chance to par­take in the crown­ing glory of babyeat­ing cul­tural tra­di­tion, please raise your hands.”

I ex­pect that not dy­ing hor­ribly takes lex­i­cal prece­dence over any kind of cul­tural tra­di­tion, for any sen­tient be­ing whose kin has evolved to sen­tience (it may not be that way for con­structed minds). So I would ex­pect the Babyeaters to choose against cul­tural tra­di­tion.

The ob­vi­ous caveat is that my in­tu­itions about CEV may be wrong, but lack­ing a for­mal ex­pla­na­tion of CEV it’s hard to check in­tu­itions.

• BEs aren’t hu­mans. They are Baby-Eat­ing aliens

• You’re cor­rect. I’m us­ing the term “peo­ple” loosely. How­ever, I wrote the grand-par­ent while fully in­formed of what the Babyeaters are. Did you mean to re­but some­thing in par­tic­u­lar in the above?

• “All in fa­vor of a 99% chance of dy­ing hor­ribly shortly af­ter be­ing born, in re­turn for the 1% chance to par­take in the crown­ing glory of babyeat­ing cul­tural tra­di­tion, please raise your hands.”

If we translate it to our cultural context, we will get something like "All in favor of a 100% chance of dying horribly of old age, in return for good lives for your babies, please raise your hands". They ARE aliens.

• Well, we would say “no” to that, if we had the means to abol­ish old age. We’d want to have our cake and eat it too.

The text stipu­lates that it is within the BE’s tech­nolog­i­cal means to abol­ish the suffer­ing of the ba­bies, so I ex­pect that they would choose to do so, be­hind the Veil.

• Yes, but a sur­pris­ingly large num­ber of hu­mans seem to re­act in hor­ror when you talk about get­ting rid of ag­ing.

• Who will ask them? The FAI has no idea that (a) baby-eating is bad, or that (b) it should generalize moral values past the BEs to all conscious beings.

Even if the FAI asks that question and it turns out that the majority of the population doesn't want to do an inherently good thing (which, for them, it is), then the FAI must undergo a controlled shutdown.

EDIT: To dis­am­biguate. I am talk­ing about FAI, which is im­ple­mented by BEs.

Just as we should not allow our FAI to generalize morals past conscious beings, to be sure that it will not take the CEV of every bacterium, so the BEs should not allow their FAI to generalize past BEs.

Just as we should build an automatic off switch into our FAI, to stop it if its goals are inherently wrong, so should the BEs.

• It doesn’t seem from the story like the ba­bies are gladly sac­ri­fic­ing for the tribe...

“But...” said the Master. “But, my Lady, if they want to be eaten—”

“They don’t,” said the Xenopsy­chol­o­gist. “Of course they don’t. They run from their par­ents when the ter­rible win­now­ing comes. The Babyeater chil­dren aren’t emo­tion­ally ma­ture—I mean they don’t have their adult emo­tional state yet. Evolu­tion would take care of any­one who wanted to get eaten. And they’re still learn­ing, still mak­ing mis­takes, so they don’t yet have the in­stinct to ex­ter­mi­nate vi­o­la­tors of the group code. It’s a sim­pler time for them. They play, they ex­plore, they try out new ideas. They’re...” and the Xenopsy­chol­o­gist stopped. “Damn,” she said, and turned her head away from the table, cov­er­ing her face with her hands. “Ex­cuse me.” Her voice was un­steady. “They’re a lot like hu­man chil­dren, re­ally.”

• Yes. It’s hor­rible. For us. But why FAI should place any weight on re­mov­ing that? How FAI can gen­er­al­ize past “Life of Baby Eater is sa­cred” to “Life of ev­ery con­scious be­ing is sa­cred”? FAI has all ev­i­dence that lat­ter is plain wrong.

Do You want con­vince me or FAI that it’s bad? I know that it is, I just try to demon­strate that FAI as it is, is about preser­va­tion and not de­vel­op­ment to (uni­ver­sally) bet­ter ends.

• So what I make of all this is that ei­ther CEV is not util­ity-func­tion-neutral

Cor­rect. CEV is sup­posed to be a com­po­nent of Friendli­ness, which is defined in refer­ence to hu­man val­ues.

• CEV will be to main­tain ex­ist­ing or­der.

Why? There would have to be very strong arguments for BEs to stop doing the Right Thing, and there's only one source of objections: the children. And their volitions will be selfish and unaggregatable.

EDIT: What does util­ity-func­tion-neu­tral mean?

EDIT: OK, OK. The CEV will be to change BEs' morals and allow them not to eat children. So the FAI will undergo controlled shutdown. Objections, please?

EDIT: Here are yet more arguments.

Guidelines of FAI as of May 2004:

1. Defend humans, the future of humankind, and humane nature.

BEs will formulate this as "Defend BEs (except for the ceremony of BEing), the future of BEkind, and BE nature."

2. Encapsulate moral growth.

BEs have never considered that child-eating is bad, and it is good for them to kill anyone who thinks otherwise. There's no trend in their morals that can be encapsulated.

3. Humankind should not spend the rest of eternity desperately wishing that the programmers had done something differently.

If they stop being BEs, they will mourn their wrongdoings to their deaths.

4. Avoid creating a motive for modern-day humans to fight over the initial dynamic.

Every single argument the FAI makes along the lines of "Let's suppose that you are not a BE" will cause it to be destroyed.

5. Help people.

Help BEs at all times, except during the ceremony of BEing.

How will this take the FAI to the point where every conscious being must live?

• 7 Jun 2010 12:37 UTC
4 points

About CEV: Am I cor­rect that Eliezer’s main goal would be to find the one util­ity func­tion for all hu­mans? Or is it equally plau­si­ble to as­sume that some im­por­tant val­ues can­not be ex­trap­o­lated co­her­ently, and that a Seed-AI would there­fore provide sev­eral re­sults clus­tered around some groups of peo­ple?

EDIT: Reading helps. He has actually discussed this, in sufficient detail, I think.

• I think the ex­pec­ta­tion is that, if all hu­mans had the same knowl­edge and were bet­ter at think­ing (and were more the peo­ple we’d like to be, etc.), then there would be a much higher de­gree of co­her­ence than we might ex­pect, but not nec­es­sar­ily that ev­ery­one would ul­ti­mately have the same util­ity func­tion.

• Or is it equally plau­si­ble to as­sume that some im­por­tant val­ues can­not be ex­trap­o­lated co­her­ently, and that a Seed-AI would there­fore provide sev­eral re­sults clus­tered around some groups of peo­ple?

There is only one world to build some­thing from. “Sev­eral re­sults” is never a solu­tion to the prob­lem of what to ac­tu­ally do.

• Please bear with my bad English, this did not come across as in­tended.

So: Either all or noth­ing?

Is there no possibility that the AI could determine that, to maximize this hardcore utility function, we need to separate different groups of people, maybe/probably lying to them about their separation and just providing each group with the illusion of a unified humankind? Or is that too obvious a thought, or too dumb because of x?

• I think the idea is that CEV lets us “grow up more to­gether” and figure that out later.

I have only recently started looking into CEV, so I'm not sure whether I a) think it's a workable theory and b) think it's a good solution, but I like the way it puts off important questions.

It’s im­pos­si­ble to pre­dict what we will want if age, dis­ease, vi­o­lence, and poverty be­come ir­rele­vant (or at least op­tional).

• I have been read­ing the “eco­nomic col­lapse” liter­a­ture since I stum­bled on Casey’s “Cri­sis In­vest­ing” in the early 1980s. They have re­ally good ar­gu­ments, and the col­lapses they pre­dict never hap­pen. In the late-90s, af­ter read­ing “Cri­sis In­vest­ing for the Rest of the 1990s”, I sat down and tried to figure out why they were all so con­sis­tently wrong.

The conclusion I reached was that humans are fundamentally more flexible and adaptable than the collapse-predictors' arguments allowed for, and society managed to work around all the regulations and other problems the government and big businesses keep creating. Since the regulations and rules keep growing and creating more problems and rigidity along the way, eventually there will be a collapse, but anyone who gives any kind of timing for it is grabbing the short end of the stick.

Any­one here have more sug­ges­tions as to rea­sons they have been wrong?

(origi­nally posted on esr’s blog 2010-05-09, re­vised and ex­panded since)

• Not sure if you’re refer­ring to the same liter­a­ture, but I note a great di­ver­gence be­tween peak oil ad­vo­cates and sin­gu­lar­i­tar­i­ans. This is a lit­tle weird, if you think of Au­mann’s Agree­ment the­o­rem.

Both groups are highly populated with engineer types, highly interested in cognitive biases, group dynamics, and habits of individuals and societies, and neither is mainstream.

Both groups use extrapolation of curves from very real phenomena. In the case of the Kurzweilian singularitarians, it is computing power; in the case of the peak oil advocates, it is the Hubbert curve for resources, along with solid net-energy-based arguments about how civilization should decline.

The extreme among the peak oil advocates are collapsitarians who believe that people should drastically change their lifestyles if they want to survive. They are also not waiting for others to join them; many are preparing to move to small towns, villages, etc. The Oil Drum, linked here, had started as a moderate peak oil site discussing all possibilities; nowadays, apparently, it's all doom all the time.

The extreme among the singularitarians are asked to make no such sacrifice, just to give enough money and support to make sure that Friendly AI is achieved first.

Both groups believe that business as usual cannot go on for too long, but they expect dramatically different consequences. The singularitarians assert that economic conditions and technology will improve until a nonchalant super-intelligence is created and wipes out humanity. The collapsitarians believe that economic conditions will worsen, that civilization is not built robustly, and that it will collapse badly, with humanity probably going extinct or only the last hunter-gatherers surviving.

• It should be possible to believe both: unless you're expecting peak oil to lead to social collapse fairly soon, Moore's law could make a singularity possible while energy becomes more expensive.

• Which could sug­gest a dis­tress­ing pinch point: not want­ing to de­lay AI too long in case we run out of en­ergy for it to use; not want­ing to make an AI too soon in case it’s Un­friendly.

• Could you give some ex­am­ples of the pre­dicted col­lapses that didn’t hap­pen?

• Y2K. I thought I had a solid lower bound for the size of that one: small businesses basically did nothing in preparation, and they still had a fair amount of dependence on date-dependent programs, so I was expecting that the impact on them would set a sizable lower bound on the size of the overall impact. I've never been so glad to be wrong. I would still like to see a good retrospective explaining how that sector of the economy wound up unaffected...

• Small busi­nesses ba­si­cally did noth­ing in prepa­ra­tion [for Y2K], and they still had a fair amount of de­pen­dence on date-de­pen­dent programs

The smaller the business, the less likely they are to have their own software that's not simply a database or spreadsheet managed in, say, a Microsoft product. The smaller the business, the less likely that anything automated is relying on correct date calculations.

Th­ese at least would have been strong miti­gat­ing fac­tors.

[Edit: also, even industry-specific programs would likely have been fixed by the manufacturer. For example, most of the real-estate software produced by the company I worked for in the '80s and '90s had been Y2K-ready since before 1985.]

• First, the "economic collapse" I referred to in the original post was actually at least six different predictions made at different times.

• As another example, though not quite a "collapse" scenario, consider predictions of the likelihood of nuclear war; there were three distinct periods when it was considered more or less likely by different groups. In the late 1940s, some intelligent and informed, but peripheral, observers like Robert Heinlein considered it a significant risk. Next was the late 1950s through the Cuban Missile Crisis in the early 1960s, when nearly everybody considered it a major risk. Then there was another scare from the late 1970s to the early 1980s, with primarily leftists (including the media) favoring disarmament promulgating the fear to try to get the US to reduce its stockpiles, and conservatives (derided by the media as "survivalists" and nuts) who were afraid they would succeed.

• An in­ter­est­ing ar­ti­cle crit­i­ciz­ing spec­u­la­tion about so­cial trends (speci­fi­cally teen sex) in the ab­sence of statis­ti­cal ev­i­dence.

• Beau­tiful. Matthew Ygle­sias, +1 point.

It is en­tirely pos­si­ble that some so­cial groups are ex­pe­rienc­ing the kind of changes that Flana­gan de­scribes, but as Ygle­sias says, she ap­par­ently is un­aware that there is such a thing as sci­en­tific ev­i­dence on the ques­tion.

• In­spired by Chap­ter 24 of Meth­ods of Ra­tion­al­ity, but not a spoiler: If the evolu­tion of hu­man in­tel­li­gence was driven by com­pe­ti­tion be­tween hu­mans, why aren’t there a lot of in­tel­li­gent species?

• Five-second guess: human-level Machiavellian intelligence needs language facilities to co-evolve with; grunts and body language don't allow nearly as convoluted schemes. Evolving some precursor form of human-style language is the improbable part that other species haven't managed to pull off.

• A somewhat accepted partial answer is that huge brains are ridiculously expensive: you need a lot of high-energy-density food (= fire), a lot of DHA (= fish), etc. The chimp diet simply couldn't support brains like ours (and aquatic ape, etc.), nor could chimps spend as much time as us engaging in politics, as they were too busy just getting food.

Per­haps chimp brains are as big as they could pos­si­bly be given their dietary con­straints.

• That’s con­ceiv­able, and might also ex­plain why wolves, crows, elephants, and other highly so­cial an­i­mals aren’t as smart as peo­ple.

Also, I think the origi­nal bit in Meth­ods of Ra­tion­al­ity over­es­ti­mates how easy it is for new ideas to spread. As came up re­cently here, even if tacit knowl­edge can be ex­plained, it usu­ally isn’t.

This means that if you figure out a better way to chip flint, you might not be able to explain it in words, and even if you can, you might choose to keep it a family or tribal secret. Inventions could give their inventors an advantage for quite a long time.

• I’ve re­cently be­gun down­vot­ing com­ments that are at −2 rat­ing re­gard­less of my feel­ings about them. I in­sti­tuted this policy af­ter ob­serv­ing that a sig­nifi­cant num­ber of com­ments reach −2 but fail to be pushed over to −3, which I’m at­tribut­ing to the thresh­old be­ing too much of a psy­cholog­i­cal bar­rier for many peo­ple to pen­e­trate; they don’t want to be ‘the one to push the but­ton’. This is an ex­ten­sion of my RL policy of tak­ing ‘the last’ of some­thing laid out for com­mu­nal use (coffee, donuts, cups, etc.). If the com­ment thread re­ally needs to be visi­ble, I ex­pect oth­ers will vote it back up.

Edit: It’s likely that most of the nega­tive re­sponse to this com­ment cen­ters around the phrase “re­gard­less of my feel­ings about them.” I now con­sider this to be too strong a state­ment with re­gards to my im­ple­mented ac­tions. I do read the com­ment to make sure I don’t con­sider it any good, and doubt I would per­versely vote some­thing down even if I wanted to see more of it.

• I wish you wouldn't do that, and would stick instead with the generally approved norm of downvoting to mean "I'd prefer to see fewer comments like this" and upvoting to mean "I'd like to see more like this".

You’re de­liber­ately par­ti­ci­pat­ing in in­for­ma­tion cas­cades, and thereby un­der­min­ing the fil­ter­ing pro­cess. As an an­ti­dote, I recom­mend us­ing the anti-kib­itzer script (you can do that through your Prefer­ences page).

• I wish you wouldn't do that, and would stick instead with the generally approved norm of downvoting to mean "I'd prefer to see fewer comments like this" and upvoting to mean "I'd like to see more like this".

I dis­agree that that’s the for­mula used for com­ments that ex­ist within the range −2 to 2. Within that range, from what I’ve ob­served of vot­ing pat­terns, it seems far more likely that the equa­tion is re­lated to what value the com­ment “should be at.” If many peo­ple used anti-kib­itz­ing, I doubt this would re­main a prob­lem.

• I be­lieve your hy­poth­e­sis and de­ci­sion are pos­si­bly cor­rect, but if they are, you should ex­pect your down­votes to of­ten be cor­rected up­wards again. If this doesn’t hap­pen, then you are wrong and shouldn’t ap­ply this heuris­tic.

I dis­agree that that’s the for­mula used for com­ments that ex­ist within the range −2 to 2.

Morendil doesn’t say it’s what ac­tu­ally hap­pens, he merely says it should hap­pen this way, and that you in par­tic­u­lar should be­have this way.

• I don’t do huge amounts of vot­ing, and I ad­mit that if a post I like has what I con­sider to be “enough” votes, I don’t up­vote it fur­ther. I can cer­tainly change this policy if there’s rea­son to think up­vot­ing ev­ery­thing I’d like to see more of would help make LW work bet­ter.

• I am tempted to down­vote this com­ment from −2 just for the irony, but I don’t pre­fer to see fewer com­ments like this, so I won’t.

Be­sides, the de­fault cut­off is at −4, not −3.

• After logging out and attempting to view a thread with a comment at exactly −3, I saw that comment shown as below the threshold. I doubt the site retains customized settings after logging out, and I do not believe I changed mine in the first place, leading me to believe that −3 is indeed the threshold.

Also, my origi­nal com­ment was at −3 within min­utes of post­ing.

• The de­fault was −4 logged in when I joined last year—per­haps it’s differ­ent for non-logged-in peo­ple.

Also, that makes me guess peo­ple changed their votes to aim your com­ment at −2.

• Here is the change. Also, the num­ber refers to the low­est visi­ble com­ments, not the high­est in­visi­ble com­ments.

• My re­cent com­ment on Red­dit re­minded me of WrongTo­mor­row.com—a site that was men­tioned briefly here a while ago, but which I haven’t seen much since.

Try it out, guys! LongBets and Pre­dic­tionBook are good, but they’re their own niche; LongBets won’t help you with pun­dits who don’t use it, and Pre­dic­tionBook is aimed at per­sonal use. If you want to track cur­rent pun­dits, WrongTo­mor­row seems like the best bet.

• Am I correct in reading that LongBets charges a $50 fee for publishing a prediction, and that predictions have to be a minimum of 2 years in the future? That's a bit harsh. But these sites are pretty interesting. And they could be useful too. You could judge the accuracy of different users, including how accurate they are at long-term vs. short-term predictions, as well as how accurate they are in different categories (or just how accurate they are on average, if you want to keep it simple). Then you can create a fairly decent picture of the future, albeit I expect many of the predictions will contradict each other. This is kind of what they're already doing, obviously, but they could still take it a step further.

• Let's get this thread going: I'd like to ask everyone what probability bump they give to an idea given that some people believe it. This is based on the fact that out of the humongous idea-space, some ideas are believed by (groups of) humans, and a subset of those are believed by humans and are true. (Of course there exist some that are true and not yet believed by humans.) So, given that some people believe X, what probability do you give for X being true, compared to Y which nobody currently believes?

• I'd like to ask everyone what probability bump they give to an idea given that some people believe it.

Usually fairly substantial: if someone presents me with two equally-unsupported claims X and Y and tells me that they believe X and not Y, I would give greater credence to X than to Y. Many times, however, that credence would not reach the level of … well, credence, for various good reasons.

• Depends on the person and the idea. I have some people whose recommendations I follow regardless, even if I estimate upfront that I will consider the idea wrong. There are different levels of wrongness, and it does not hurt to get good counterarguments.
It also depends on the real-life practicability of the idea. If it is for everyday things, then common sense is a good starting prior. (Also, there is a time and place to use the ask-the-audience lifeline on Who Wants to Be a Millionaire.) If a group of professionals agree on something related to their profession, that is also a good start. To systematize: if a group of people has a belief about something they have experience with, then that belief is worth looking at. And then, on further investigation, it often turns out that there are systematic mistakes being made. I was shocked to read, in the book on checklists, that not only do doctors often dislike them, but so do financial companies that can see how using them increases their monetary gains. But finding flaws in a whole group does not imply that everything they say is wrong. It is good to see a doctor, even if he's not using statistics right. He can refer you to a specialist, and treat all the common stuff right away. If you get a complicated disease, you can often read up on it. The obvious example for your question would be religion. It is widely believed, but probably wrong; yet I did not discard it right away, but spent years studying the subject until I decided there was nothing to it. There is nothing wrong in examining the ideas other people have.

• Agreed. As the OP states, idea space is humongous. The fact alone that people comprehend something sufficiently to say anything about it at all means that this something is a) noteworthy enough to be picked up by our evolutionarily derived faculties by even a bad rationalist, b) expressible by those same faculties, and c) not immediately, obviously wrong. To sum up, the fact that someone claims something is weak evidence that it's true, cf. Einstein's Arrogance. If this someone is Einstein, the evidence is not so weak.
Edit: Just to clarify, I think this evidence is very weak, but it is evidence for the proposition nonetheless. Depending on the metric, by far most propositions must be "not even wrong", i.e. garbled, meaningless, or absurd. The ratio of "true" to {"wrong" + "not even wrong"} seems to be ineluctably larger for propositions expressed by humans than for those not expressed, which is why someone uttering a proposition counts as evidence for it. People simply never claim that apples fall upwards, sideways, green, kjO30KJ&¤k, etc.

• I forgot the major influence of my own prior knowledge. (Which, I guess, holds true for everyone.) That makes the cases where I had a fixed opinion and managed to change it all the more interesting. If you have never dealt with an idea before, you go where common sense or the experts lead you. But if you already have good knowledge, then public opinion should do nothing to your view. Public opinion, or even experts (especially when outside their field), often enough state opinions without comprehending the idea. So it doesn't really mean too much. Regarding Einstein: he made those statements before becoming super famous. I understand it as a case of signaling 'look over here!' And he is not particularly safe against errors. One of his last actions (which I have not fact-checked sufficiently so far) was to write a foreword for a book debunking the movement of the continental plates.

• Regarding Einstein: he made those statements before becoming super famous. I understand it as a case of signaling 'look over here!' And he is not particularly safe against errors. One of his last actions (which I have not fact-checked sufficiently so far) was to write a foreword for a book debunking the movement of the continental plates.

I didn't intend to portray Einstein as bulletproof, but rather to highlight his reasoning.
Plus one point for the idea of even locating the idea in idea space. Obviously, creationism is wrong, but less wrong than a random string. It at least manages to identify a problem and to use cause and effect.

• Thank you, this is what I was getting at.

• If no people believe Y, literally no people, then either the topic is very little examined by human beings, or it's very exhaustively examined and seems obvious to everyone. In the first case, I give a smaller probability than in the second case. In the first case, only X-believers exist because only X-believers have yet considered the issue. That's minimal evidence in favor of X. In the second case, lots of people have heard of the issue; if there were a decent case against X, somebody would have thought of it. The fact that none of them, not a minority but none, argued against X is strong evidence that X is true.

• If no people believe Y, literally no people, then either the topic is very little examined by human beings, or it's very exhaustively examined and seems obvious to everyone. In the first case, I give a smaller probability than in the second case.

Isn't it the other way around? (Good analysis, by the way.)

• I don't think belief has a consistent evidentiary strength, since it depends on the testifier's credibility relative to my own. Children have much lower credibility than me on the issue of the existence of Santa. Professors of physics have much higher credibility than me on the issue of dimensions greater than four. Some person other than me has much higher credibility on the issue of how much money they are carrying. But I have more credibility than anyone else on the issue of how much money I'm carrying. I don't see any relation that could be described as a baseline, so the only answer is: context.
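For what it's worth, the "probability bump" the thread keeps asking about can be framed as an ordinary Bayesian update. A minimal sketch follows; all the numbers in it are invented for illustration (none come from this thread), and the function name is hypothetical:

```python
# Hedged sketch: the "belief bump" as a Bayesian update.
# All probabilities below are made-up illustrative values.

def posterior(prior, p_believed_if_true, p_believed_if_false):
    """P(X is true | someone believes X), by Bayes' rule."""
    p_believed = (p_believed_if_true * prior
                  + p_believed_if_false * (1 - prior))
    return p_believed_if_true * prior / p_believed

# Suppose an arbitrary idea has prior 0.01 of being true, true ideas
# get believed by someone with probability 0.5, and false ideas with
# probability 0.05 (most of idea-space is unbelieved nonsense).
bump = posterior(0.01, 0.5, 0.05)
```

Under these invented numbers the posterior is roughly 0.09, a substantial bump; and if belief is equally likely whether or not the idea is true, the posterior equals the prior, which is the "zero bump" position defended further down the thread.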
• I've become increasingly disillusioned with people's capacity for abstract thought. Here are two points on my journey.

The public discussion of using wind turbines for carbon-free electricity generation seems to implicitly assume that electricity output goes as something like the square root of windspeed: if the wind is only blowing at half speed, you still get something like 70% output. You won't see people saying this directly, but the general attitude is that you only need backup for the occasional calm day when the wind doesn't blow at all. In fact, output goes as the cube of windspeed. The energy in the windstream is one half m v squared, where m, the mass passing your turbine, is proportional to the windspeed. If the wind is at half strength, you only get 1/8 output. Well, that is physics; of course people suck at physics. Trouble is, the more I look at people's capacity for abstract thought, the more problems I see.

When people do a cost/benefit analysis, they are terribly vague on whether they are supposed to add the costs and benefits or whether the costs get subtracted from the benefits. Even if they realize that they have to subtract, they are still at risk of using an inverted scale for the costs and ending up effectively adding.

The probability bump I give to an idea just because some people believe it is zero. Equivalently, my odds ratio is one. However you describe it, my posterior is just the same as my prior.

• When people do a cost/benefit analysis, they are terribly vague on whether they are supposed to add the costs and benefits or whether the costs get subtracted from the benefits.

Revised: I do not think that link provides evidence for the quoted sentence. Nor do I see other evidence that people are that bad at cost-benefit analysis.
I agree that the example presented there is interesting and that one should keep in mind that disagreements about values can be hidden, sometimes maliciously.

• I've got a better link. David Henderson catches a professor of economics getting costs and benefits confused in a published book. Henderson's review is on page 54 of Regulation, and my viewer puts it on the ninth page of the PDF that Henderson links to.

• That is a good example. Talk of creating jobs as a benefit, rather than a cost, is quite common. But is it confusion or malice? It is hard for me to imagine that economists would publish such a book without having it pointed out to them. The audience certainly is confused. Henderson says "Almost no one spending his own money makes this mistake" and would not generalize to people's capacity for abstract thought. The original question was how much information to extract from the conventional wisdom. I do not take this as a reason to doubt the conventional wisdom about personal decisions. Partly this is public choice, and partly it is because people do not address externalities in their personal decisions. Maybe any commonly accepted argument involving economics should be suspect, though the existence of the very well-established applause line of "creating jobs" suggests that there are limits to how far you can fool people. But your claim was not that people are bad at physics and economics, but at the abstract thought of decision theory.

• I think it largely depends on a) what the idea is and b) who believes it, and what their rationality skills are.

• I recently learned the hard way that one can easily be an idiot in one area while being very competent in another. Religious scientists, programmers, etc. Or, let's say, people that are highly competent in their area of occupation without looking into other things.
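Returning to the wind-turbine point a few comments up: the cube law is easy to sanity-check numerically. A minimal sketch, under the idealized assumptions of that comment (air density and rotor area fixed, no cut-in speed or rated-power cap):

```python
# Idealized wind power: P = 0.5 * rho * A * v**3.
# With rho (air density) and A (swept area) fixed, only the
# v**3 factor matters for relative output.

def relative_output(wind_fraction):
    """Turbine output at a given fraction of reference windspeed,
    as a fraction of reference output (cube law)."""
    return wind_fraction ** 3

cube_at_half = relative_output(0.5)  # 1/8, i.e. 12.5% of full output
sqrt_at_half = 0.5 ** 0.5            # ~0.71, the square-root intuition
```

At half windspeed the cube law gives an eighth of the output, not the roughly 70% a square-root model would suggest, which is exactly the gap that comment is pointing at.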
• Out of the huge idea space of possible causally linked events, some of them make good stories and some do not. That doesn't tell you whether it's true or not. If a guy thinks that he can hear Hillary Clinton speaking from the feelings in his teeth, telling him to murder his cellmate, do you believe what he says? Status gets mucked up in the calculation, but with strangers it teeters precariously close to zero. I really like kids, but the fact that millions of them passionately believe in Santa Claus does not change my degree of subjective belief one iota.

• Well, obviously propositions with extremely high complexity (and therefore very low priors) are going to remain low even when people believe them. But if someone says they believe they have 10 dollars on them, or that the US Constitution was signed in September… the belief is enough to make those claims more likely than not.

• Out of the huge idea space of possible causally linked events, some of them make good stories and some do not. That doesn't tell you whether it's true or not.

But people only believe things that make sense to them. When it comes to controversial issues, then yes, you'll find that most people will be divided on them. However, we elect people to lead us in the faith that the majority opinion is right. So even that isn't entirely true. And out of the vast majority of possible ideas, most people that live in the same society will agree or disagree the same way on the majority of them, especially if they have the same background knowledge.

• I'd like to ask everyone what probability bump they give to an idea given that some people believe it.

None. Or as Ben Goldacre put it in a talk: there are millions of medical doctors and Ph.D.s in the world. There is no idea, however completely fucking crazy, that you can't find some doctor to argue for.
So, given that some people believe X, what probability do you give for X being true, compared to Y which nobody currently believes?

In any case of a specific X and Y, there will be far more information than that (who believes X and why? does anyone disbelieve Y? etc.), which makes it impossible for me to attach any probability to the question as posed.

• Or as Ben Goldacre put it in a talk: there are millions of medical doctors and Ph.D.s in the world. There is no idea, however completely fucking crazy, that you can't find some doctor to argue for.

Cute quip, but I doubt it. Find me a Ph.D. to argue that the sky is bright orange, that the English language doesn't exist, and that all humans have at least seventeen arms and a maximum lifespan of ten minutes.

• All generalisations are bounded, even when the bounds are not expressed. In the context of his talk, Ben Goldacre was talking about "doctors" being quoted as supporting various pieces of bad medical science.

• Many medical doctors around here (Germany) offer homeopathy in addition to their medical practice. Now, it might be that they respond to market demand in order to sneak in some medical science in between, or it might be that they actually take it seriously.

• Now, it might be that they respond to market demand in order to sneak in some medical science in between, or it might be that they actually take it seriously.

Or that they respond to market demand and don't try to sneak any medical science in, based on the principle that the customer is always right.

• From what I've heard, in Germany and other places where homeopathy enjoys high status and professional recognition, doctors sometimes use it as a very convenient way to deal with hypochondriacs who pester them. Sounds to me like a win-win solution.

• I still assume that doctors actually want to help people. (Despite reading the checklist book, and other stuff.)
So if I have the choice between world a), where doctors also do homeopathy, and world b), where other people do it while doctors stay true to science, then I would prefer a), because at least the people go to a somewhat competent person.

• I still assume that doctors actually want to help people.

Homeopathy is at best a placebo. It's rare that there's no better medical way to help someone. Your assumption is counter to the facts. Certainly doctors want to help people, all else being equal. But if they practice homeopathy extensively, then they are prioritizing other things over helping people. If the market conditions (i.e. the patients' opinions and desires) are such that they will not accept scientific medicine and will only use homeopathy anyway, then I suggest the best way to help people is for all doctors to publicly denounce homeopathy and thus convince at least some people to use better-than-placebo treatments instead.

• Homeopathy is at best a placebo. It's rare that there's no better medical way to help someone.

I disagree, at least with the part about "it's rare that there's no better medical way to help people". It's depressingly common that there's no better medical way to help people. Things like back pain, tiredness, and muscle aches, the commonest things for which people see doctors, can sometimes be traced to nice curable medical causes, but very often, as far as anyone knows, they're just there. Robin Hanson has a theory, and I kind of agree with him, that homeopathy fills a useful niche. Placebos are pretty effective at curing these random (and sometimes imagined) aches and pains. But most places consider it illegal or unethical for doctors to directly prescribe a placebo.
Right now a lot of doctors will just prescribe aspirin or paracetamol or something, but these are far from totally harmless and there are a lot of things you can't trick patients into thinking aspirin is a cure for. So what would be really nice is if there were a way doctors could give someone a totally harmless and very inexpensive substance like water and make the patient think it was going to cure everything and the kitchen sink, without directly lying or exposing themselves to malpractice allegations. Where this stands or falls is whether or not it turns patients off real medicine and gets them to start wanting homeopathy for medically known, treatable diseases. Hopefully it won't—there aren't a lot of people who want homeopathic cancer treatment—but that would be the big risk. • You might implicitly assume that people make a conscious choice to go the unscientific route. That is not the case. For a layperson there is no perceivable difference between a doctor and a homeopath. (Well. Maybe there is, but let's exaggerate that here.) From experience, the homeopath might have more time to listen, while doctors often have an approach to treatment speed that reminds me of a fast food place. If I were a doctor, then the idea to offer homeopathy, so that people at least come to me, would make sense both money-wise and to get the effect that they are already at a doctor's place for treatment with placebos for trivial stuff, while actual dangerous conditions get checked out by a competent person. It's a case of corrupting your integrity to some degree to get the message heard. I considered not going to doctors that offer homeopathy, but then decided against that due to this reasoning. • I considered not going to doctors that offer homeopathy, but then decided against that due to this reasoning.
You could probably ask the doctor why they offer homeopathy, and base your decision on the sort of answer you get. "Because it's an effective cure..." is straight out. • tl;dr—if doctors don't denounce homeopaths, people will start going to "real" homeopaths and other alt-medicine people, and there is no practical limit to the lies and harm done by real homeopaths. For a layperson there is no perceivable difference between a doctor and a homeopath. That is so because doctors also offer homeopathy. If almost all doctors clearly denounced homeopathy, fewer people would choose to go to homeopaths, and these people would benefit from better treatment. From experience, the homeopath might have more time to listen, while doctors often have an approach to treatment speed that reminds me of a fast food place. This is a problem in its own right that should be solved by giving doctors incentives to listen to patients more. However, do you think that because doctors don't listen enough, homeopaths produce better treatment (i.e. better medical outcomes)? they are already at a doctor's place for treatment with placebos for trivial stuff, while actual dangerous conditions get checked out by a competent person. Do you have evidence that this is the result produced? What if the reverse happens? Because the doctors endorse homeopathy, patients start going to homeopaths instead of doctors. Homeopaths are better at selling themselves, because unlike doctors they can lie ("homeopathy is not a placebo and will cure your disease!"). They are also better at listening, can create a nicer (non-clinical) reception atmosphere, they can get more word-of-mouth networking benefits, etc. Patients can't normally distinguish "trivial stuff" from dangerous conditions until it's too late—even doctors sometimes get this wrong.
The next logical step is for people to let homeopaths treat all the trivial stuff, and go to the ER when something really bad happens. Personal story: my mother is a doctor (geriatrician). When I was a teenager I had seasonal allergies and she insisted on sending me for weekly acupuncture. During the hour-long sessions I had to listen to the ramblings of the acupuncturist. He told me (completely seriously) that, although he personally didn't have the skill, the people who taught him acupuncture in China could use it to cure my type 1 diabetes. He also once told me about someone who used various "alternative medicine" to eat only vine leaves for a year before dying. When the acupuncture didn't help me, my mother said that was my own fault because "I deliberately disbelieved the power of acupuncture and so the placebo effect couldn't work on me". • Sorry about your experience. I perceive you as attacking me for holding said position, but I am the wrong target. I know homeopathy is BS, and I don't use it or advocate it. What I do understand is doctors who offer it for some reason or another, for the reasons listed above. What you claim as a result is sadly already happening. I have had people getting angry at me for clearly stating my view, and the reasons for it, on homeopathy. (I didn't say BS, but one of the people was a programmer, if that counts for something.) Many folks do go to alternative treatments, and forgo doctors as long as possible. People have a weak opinion of "school medicine" (a German term for official medical knowledge and practice) and criticize it—sometimes justifiably. And they use all kinds of hyper-skeptical reasoning that they do not apply to their current favorite. That is bad. And hopefully goes away. Many still go the double route you listed. And well, then we have the anti-vaccination front growing.
It is bad, and sad, and useless stupidity. Let's get angry together, and see what can be done about it. Personal story: I did a lecture on skeptical thinking. First try: I dumped everything I knew, and noticed how dealing with the H-topic tends to close people up. Second try: I cut out a lot, and left the H-topic out. It still didn't work. I have no idea what I can do about it, and am basically resigning. • Sorry about your experience. I perceive you as attacking me for holding said position, but I am the wrong target. I didn't intend to attack you. Sorry I came across that way. • From what I've been told by friends, here (Austria) they (meaning: most doctors) do take it seriously. This is understandable; when studying medicine, the by far larger part of college is devoted to knowing facts, the craftsmanship (if I may say so), rather than to doing medical science. This also makes sense, as execution by using results already requires so much training (it is the only college course here which requires at least six years by default, not including "Turnus" (another three-year probation period before somebody may practice without a supervisor)). The problem here is that for the general public the difference between a medical practitioner and any scientist is nil. Strangely enough, they usually do not make this error in engineering fields, for instance electrical engineer vs. physicist. It may have something to do with the high status of doctors in society. • I recently found out why doctors cultivate a certain amount of professional arrogance when dealing with patients: most patients don't understand what's behind their specific disease—and usually do not care. So if doctors were open to argument, or would state doubts more openly, the patient might lose trust, and not do what he is ordered to do.
To instill an absolute belief in doctors' powers might be very helpful for a big part of the population. A lot of my own frustration in experiences with doctors can be attributed to me being a non-standard patient who reads too much. • Emile: Find me a Ph.D. to argue that the sky is bright orange, that the English language doesn't exist, and that all humans have at least seventeen arms and a maximum lifespan of ten minutes. These claims would be beyond the border of lunacy for any person, but still, I'm sure you'll find people with doctorates who have gone crazy and claim such things. But more relevantly, Richard's point definitely stands when it comes to outlandish ideas held by people with relevant top-level academic degrees. Here, for example, you'll find the website of Gerardus Bouw, a man with a Ph.D. in astronomy from a highly reputable university who advocates—prepare for it—geocentrism: http://www.geocentricity.com/ (As far as I see, this is not a joke. Also, I've seen criticisms of Bouw's ideas, but nobody has ever, to the best of my knowledge, disputed his Ph.D. He had a teaching position at a reputable-looking college, and I figure they would have checked.) • He had a teaching position at a reputable-looking college, and I figure they would have checked. It looks like no one ever hired him to teach astronomy or physics. He only ever taught computer science (and from the sound of it, just programming languages). My guess is he did get the PhD, though. Also, in fairness to the college, he is retired and he's young enough to make me think that he may have been forced into retirement. • Here, for example, you'll find the website of Gerardus Bouw, a man with a Ph.D. in astronomy from a highly reputable university who advocates—prepare for it—geocentrism: Earth's sun does orbit the earth, under the right frame of reference. What is outlandish about this?
• Earth's sun does orbit the earth, under the right frame of reference. What is outlandish about this? If you read the site, they alternately claim that relativity allows them to use whatever reference frame they choose, and at other points claim that the evidence only makes sense for geocentrism. • Oh. Well, that's stupid then. • I'm not sure it is completely stupid. Consider the argument in the following fashion: 1) We think your physics is wrong and geocentrism is correct. 2) Even if we're wrong about 1, your physics still supports regarding geocentrism as being just as valid as heliocentrism. I don't think that their argument approaches this level of coherence. • Anyone know how to defeat the availability heuristic? Put another way, does anyone have advice on how to deal with incoherent or insane propositions while losing as little personal sanity as possible? Is there such a thing as "safety gloves" for dangerous memes? I'm asking because I'm currently studying for the California Bar exam, which requires me to memorize hundreds of pages of legal rules, together with their so-called justifications. Of course, in many cases the "justifications" are incoherent, Orwellian doublespeak, and/or tendentiously ideological. I really do want to memorize (nearly) all of these justifications, so that I can be sure to pass the exam and continue my career as a rationalist lawyer, but I don't want the pattern of thought used by the justifications to become a part of my pattern of thought. • I would not worry overmuch about the long-term negative effects of your studying for the bar: with the possible exception of the "overly sincere" types who fall very hard for cults and other forms of indoctrination, people have a lot of antibodies to this kind of thing.
You will continue to be entangled with reality after you pass the exam, and you can do things, like read works of social science that carve reality at the joints, to speed up the rate at which your continued entanglement with reality will cancel out any falsehoods you have to cram for now. Specifically, there are works about the law that do carve reality at the joints—Nick Szabo's online writings IMO fall in that category. Nick has a law degree, by the way, and there is certainly nothing wrong with his ability to perceive reality correctly. ADDED: The things that are really damaging to a person's rationality, IMHO, are natural human motivations. When, for example, you start practicing, if you were to decide to do a lot of trials, and you learned to derive pleasure—to get a real high—from the combative and adversarial part of that, so that the high you got from winning with a slick and misleading angle trumped the high you get from satisfying your curiosity and from refining and finding errors in your model of reality—well, I would worry about that a lot more than about your throwing yourself fully into winning on this exam, because IMHO the things we derive no pleasure from, but do to achieve some end we care about (like advancing in our career by getting a credential), have a lot less influence on who we turn out to be than things we do because we find them intrinsically rewarding. One more thing: we should not all make our living as computer programmers. That would make the community less robust than it otherwise would be :) • Thank you! This is really helpful, and I look forward to reading Szabo in August. • I worry about this as well when I'm reading long arguments or long works of fiction presenting ideas I disagree with. My tactic is to stop occasionally and go through a mental dialog simulating how I would respond to the author in person.
This serves a double purpose, as hopefully I'll have better cached arguments in the event I ever need them. Of course, this is a dangerous tactic as well, because you may be shutting off critical reasoning applied to your preexisting beliefs. I only apply this tactic when I'm very confident the author is wrong and is using fallacious arguments. Even then I make sure to spend some amount of time playing devil's advocate. • I found an interesting paper on arXiv earlier today, by the name of Closed timelike curves via post-selection: theory and experimental demonstration. It promises such lovely possibilities as quick solutions to NP-complete problems, and I'm not entirely sure the mechanism couldn't also be used to do arbitrary amounts of computation in finite time. Certainly worth a read. However, I don't understand quantum mechanics well enough to tell how sane the paper is, or what the limits of what they've discovered are. I'm hoping one of you does. • I found an interesting paper on arXiv earlier today, by the name of Closed timelike curves via post-selection: theory and experimental demonstration. It promises such lovely possibilities as quick solutions to NP-complete problems It won't work, as is clearly explained here. If this worked, Harry could use it to recover any sort of answer that was easy to check but hard to find. He wouldn't have just shown that P=NP once you had a Time-Turner; this trick was more general than that. Harry could use it to find the combinations on combination locks, or passwords of every sort. Maybe even find the entrance to Slytherin's Chamber of Secrets, if Harry could figure out some systematic way of describing all the locations in Hogwarts. It would be an awesome cheat even by Harry's standards of cheating. Harry took Paper-2 in his trembling hand, and unfolded it.
Paper-2 said in slightly shaky handwriting: DO NOT MESS WITH TIME Harry wrote down "DO NOT MESS WITH TIME" on Paper-1 in slightly shaky handwriting, folded it neatly, and resolved not to do any more truly brilliant experiments on Time until he was at least fifteen years old. To put this into my own words: "The more information you extract from the future, the less you are able to control the future from the past. And hence, the less understanding you can have about what those bits of future-generated information are actually going to mean." I wrote that before actually looking at the paper you linked. I don't understand much QM either, but now that I have looked, it seems to me that figure 2 of the paper backs me up on my interpretation of Harry's experiment. • Even if it's written by Eliezer, that's still generalizing from fictional evidence. We don't know what the laws of physics are supposed to be there. Well, you probably can't use time-travel to get infinite computing power. But that's not to say you can't get strictly finite power out of it; in Harry's case, his experiment would probably have worked just fine if he'd been the sort of person who'd refuse to write "DO NOT MESS WITH TIME". • Playing chicken with the universe, huh? As long as scaring Harry is easier than solving his homework problem, I'd expect the universe to do the former :-) Then again, you could make a robot use the Time-Turner... • Clippy-related: The Paper Clips Project is run by a school trying to overcome scope insensitivity by representing the eleven million people killed in the Holocaust with one paper clip per victim. • From that Wikipedia article: Inside the railcar, besides the paper clips, there are the Schroeders' book and a suitcase filled with letters of apology to Anne Frank by a class of German schoolchildren. Apologizing for … being German? That's really bizarre.
• Apologizing for … being German? That's really bizarre. Not really. Most cultures go funny in the head around the Holocaust. It is, for some reason, considered imperative that 10th graders in California spend more time being made to feel guilty about the Holocaust than learning about the actual politics of the Weimar Republic. • Cultures can also be very weird about how they treat schoolchildren. The kids weren't responsible for any part of the Holocaust, and they're theoretically apologizing to someone who can't hear it. I can see some point in all this if you believe that Germans are especially apt to genocide (I have no strong opinion about this) and need to keep being reminded not to do it. Still, if this sort of apology is of any use, I'd take it more seriously if it were done spontaneously by individuals. • I think it's very noble of them to collect numerous paperclips and hold them safely out of use. c=@ I just hope they have appropriate protocols in place to ensure they don't become stolen or unbent. Anyone know if there's an insurance policy taken out against loss or destruction of the paperclips? • I doubt there's insurance on the paperclips themselves, but I suspect that having associated them with something generally considered sacred-ish will do a better job of keeping them safe than an insurance policy in any case. It's unlikely that anyone will bother to overcome the taboo on destroying sacred sites to steal or damage them, and if someone does, I can virtually guarantee that the story will be in the news and more paperclips will be donated to the project—possibly several times the number of paperclips that were stolen. • Thanks for the idea! • Noteworthy: In order for that to work as a safety mechanism for getting the paperclips replaced, the project has to be able to get the attention of the media.
This project obviously can—we heard about it because it succeeded—but that doesn't necessarily imply that other similar projects will easily be able to do the same. • What are you talking about? I was referring to your idea to increase the total number of paperclips by stealing from those held by the memorial project. • I'm feeling naive at the moment. I assumed that the good idea was encouraging people to use paperclips as part of their sacred projects. This would serve your purposes by making people a little more like paperclip maximizers. • I'm feeling naive at the moment. Clippy has that effect on many people. • Why not do both? • If paper clips are apt to be stolen, then they won't be suitable for any sacred project which depends on aiming at a permanent structure. There are sacred projects which are evanescent, like sand painting or religious services, but I can't think of any sacred project which is designed to include its stuff being stolen. • *snerk* I thought that might be what you were thinking. I figured you'd get upset if I said so without more evidence, though, if I was wrong. In that case: 1. It's not likely to work more than once on any given project, especially if the thefts happen too close together in time. A project that is stolen from too frequently is likely to be abandoned. 2. Repeated thefts of paperclips from unrelated projects will almost certainly lead to future projects being disinclined to use paperclips for such things, which I suspect would be a net loss for you. Doing it once won't be a problem in that way, though. Bonus: If you go through with it, and that makes the news, it should count as some pretty significant evidence that you exist and aren't a roleplaying human's character.
ETA: Oh, and it'll probably work better if you make it look like vandalism, not just a mysterious disappearance—damage the place a little, so it looks better for the media. You can look for news stories about vandalism for pictures of what the media likes to report on. • Adelene … please, PLEASE stop giving the "Clippy" character ideas! • Clippy came up with the theft idea all on eir own, actually—my original suggestion can be just as easily parsed as an idea for less costly security for paperclips that are being stored on Earth. Also, consider: If Clippy is the type of being who would do such a thing, wouldn't it be better for us to know that? (And of course if Clippy is just someone's character, I haven't done anything worse than thumb my nose at a few taboos.) • Clippy came up with the theft idea all on eir own, actually You said this: if someone does [steal the paperclips], I can virtually guarantee that … more paperclips will be donated to the project—possibly several times the number of paperclips that were stolen. • Yes, in response to this: Anyone know if there's an insurance policy taken out against loss or destruction of the paperclips? … which, on reflection, doesn't necessarily imply theft; I suppose it could refer to the memorial getting sucked into a sinkhole or something. Oops? • I think I found the study they're talking about thanks to this article. I might take a look at it—if the methodology is literally just "smoking was banned, then the heart attack rate dropped", that sucks. (Edit to link to the full study and not the abstract.) Just skimmed it. The methodology is better than that. They use a regression to adjust for the pre-existing downward trend in the heart attack hospital admission rate; they represent it as a linear trend, and that looks fair to me based on eyeballing the data in figures 1 and 2.
They also adjust for week-to-week variation and temperature, and the study says its results are "more modest" than others', and fit the predictions of someone else's mathematical model, which are fair sanity checks. I still don't know how robust the study is—there might be some confounder they've overlooked that I don't know enough about smoking to think of—but it's at least not as bad as I expected. The authors say they want to do future work with a better data set that has data on whether patients are active smokers, to separate the effect of secondhand smoke from active smoking. Sounds interesting. • In the Singularity Movement, Humans Are So Yesterday (long Singularity article in this Sunday's NY Times; it isn't very good) http://news.ycombinator.com/item?id=1426386 • I agree that this article isn't very good. It has the standard problem of combining a lot of different ideas about what the Singularity would entail. It emphasizes Kurzweil way too much, and includes Kurzweil's fairly dubious ideas about nutrition and health. The article also uses Andrew Orlowski as a serious critic of the Singularity making unsubstantiated claims about how the Singularity will only help the rich. Given that Orlowski's entire approach is to criticize anything remotely new or weird-seeming, I'm disappointed that the NYT would really use him as a serious critic in this context. The article strongly reinforces the perception that the Singularity is just a geek-religious thing. Overall, not well done at all. • I'm starting to think SIAI might have to jettison the "singularity" terminology (for the intelligence explosion thesis) if it's going to stand on its own.
It's a cool word, and it would be a shame to lose it, but it's become associated too much with utopian futurist storytelling for it to accurately describe what SIAI is actually working on. Edit: Look at this Facebook group. This sort of thing is just embarrassing to be associated with. "If you are feeling brave, you can approach a stranger in the street and speak your message!" Seriously, this practically is religion. People should be raising awareness of singularity issues not as a prophecy but as a very serious and difficult research goal. It doesn't do any good to have people going around telling stories about the magical Future-Land while knowing nothing about existential risks or cognitive biases or friendly AI issues. • I'm not sure that your criticism completely holds water. Friendly AI is, simply put, only a worry that has convinced some Singularitarians. One might not be deeply concerned about it (possible example reasons: 1) you expect uploading to come well before general AI; 2) you think that the probable technical path to AI will force a lot more stages of AI of much lower intelligence, which will be likely to give us good data for solving the problem). I agree that this Facebook group does look very much like something one would expect out of a missionizing religion. This section in particular looked like a caricature: To raise awareness of the Singularity, which is expected to occur no later than the year 2045, we must reach out to everyone on the 1st day of every month. At 20:45 hours (8:45pm) on the 1st day of each month we will send SINGULARITY MESSAGES to friends or strangers. Example message: "Nanobot revolution, AI aware, technological utopia: Singularity2045." The certainty about 2045 is the most glaring aspect of this, aside from the pseudo-missionary aspect.
Also note that some of the people associated with this group are very prominent Singularitarians and Transhumanists. Aubrey de Grey is listed as an administrator. But one should remember that reversed stupidity is not intelligence. Moreover, there's a reason that missionaries sound like this: they have a very high confidence in their correctness. If one had a similarly high confidence in the probability of a Singularity event, and you thought that that event was more likely to occur safely if more people were aware of it, and was more likely to occur soon if more people were aware of it, and you buy into something like the galactic colonization argument, and you believe that sending messages like this has a high chance of getting people to be aware and take you seriously, then this is a reasonable course of action. Now, that's a lot of premises, some of which have high likelihoods, others of which have very low ones. Obviously there's a very low probability that sending out these sorts of messages is at all a net benefit. Indeed, I have to wonder if there's any deliberate mimicry of how religious groups send out messages, or whether successfully reproducing memes naturally hit on a small set of methods of reproduction (but if that were the case I think they'd be more likely to hit on an actually useful method of reproduction). And in fairness, they may just be using a general model for how one goes about raising awareness for a cause, and how it matters. For some causes, simple, frequent appeals to emotion are likely an effective method (for example, in making people aware of how common sexual assault is on college campuses, short messages that shock probably do a better job than lots of fairly dreary statistics). So the primary mistake is just using the wrong model of how to communicate with people.
• Speaking of things to be worried about other than AI, I wonder if a biotech disaster is a more urgent problem, even if less comprehensive. Part of what I'm assuming is that developing a self-amplifying AI is so hard that biotech could be well-developed first. While it doesn't seem likely to me that a biotech disaster could wipe out the human race, it could cause huge damage—I'm imagining diseases aimed at monoculture crops, or plagues as the result of terrorism or incompetent experiments. My other assumptions are that FAI research is dependent on a wealthy, secure society with a good bit of surplus wealth for individual projects, and is likely to be highly dependent on a small number of specific people for the foreseeable future. On the other hand, FAI is at least a relatively well-defined project. I'm not sure where you'd start to prevent biotech disasters. • On the other hand, FAI is at least a relatively well-defined project That's one hell of a "relatively" you've got there! • Agreed, but… they'd even have to change their own name! • It's better than mainstream Singularity articles in the past, IMO; unfortunately, Kurzweil is seen as an authority, but at least it's written with some respect for the idea. • It does seem to be about a lot of different things, some of which are just synonymous with scientific progress (I don't think it's any revelation that synthetic biology is going to become more sophisticated.) • I'm curious: Was the SIAI contacted for that article? I haven't had time to read it all, but a word-search for "Singularity Institute" and "Yudkowsky" turned up nothing. • I hear Michael Anissimov was not contacted, and he's probably the one they'd have the press talk to. • Heuristics and biases in charity http://www.sas.upenn.edu/~baron/papers/charity.pdf (I considered making this link as a top-level post.)
• Saw this over on Bruce Schneier's blog; it seemed worth reposting here. Wharton's "Quake" Simulation Game Shows Why Humans Do Such A Poor Job Planning For & Learning From Catastrophes (link is to a summary, not the original article, as the original article is a bit redundant). Not so sure how appropriate the "learning from" part of the title is, as they don't seem to mention people playing the game more than once, but still quite interesting. • What solution do people prefer to Pascal's Mugging? I know of three approaches: 1) Handing over the money is the right thing to do, exactly as the calculation might indicate. 2) Debiasing against overconfidence shouldn't mean having any confidence in what others believe, but just reducing our own confidence; thus the expected gain if we're wrong is found by drawing from a broader reference class, like "offers from a stranger". 3) The calculation is correct, but we must pre-commit to not paying under such circumstances in order not to be gamed. What have I left out? • The unbounded utility function (in some physical objects that can be tiled indefinitely) in Pascal's Mugging gives infinite expected utility to all actions, and no reason to prefer handing over the money to any other action. People don't actually show the pattern of preferences implied by an unbounded utility function. If we make the utility function a bounded function of happy lives (or other tilable physical structures) with a high bound, other possibilities will offer high expected utility. The Mugger is not the most credible way to get huge rewards (investing in our civilization on the chance that physics allows unlimited computation beats the Mugger). This will be the case no matter how huge we make the (finite) bound. • Bounding the utility function definitely solves the problem, but it raises a couple of problems of its own.
One is the principle that the utility function is not up for grabs; the other is that a bounded utility function has some rather nasty consequences of the “leave one baby on the track” kind.

• One is the principle that the utility function is not up for grabs

I don’t buy this. Many people have inconsistent intuitions regarding aggregation, as with population ethics. Someone with such inconsistent preferences doesn’t have a utility function to preserve. Also note that a bounded utility function can allot some of the potential utility under the bound to producing an infinite amount of stuff, and that as a matter of psychological fact the human emotional response to stimuli can’t scale indefinitely with bigger numbers. And, of course, allowing unbounded growth of utility with some tilable physical process means that process can dominate the utility of any non-aggregative goods, e.g. the existence of at least some instantiations of art or knowledge, or overall properties of the world like ratios of very good lives to lives just barely worth living/creating (although you might claim that the value of the last scales with population size, many wouldn’t characterize it that way). Bounded utility functions seem to come much closer to letting you represent actual human concerns, or to represent more of them, in my view.

• Eliezer’s original article bases its argument on the use of Solomonoff induction. He even suggests up front what the problem with it is, although the comments don’t make anything of it: SI is based solely on program length and ignores computational resources. The optimality theorems around SI depend on the same assumption. Therefore I suggest: 4) Pascal’s Mugging is a refutation of the Solomonoff prior.
But where a computationally bounded agent, or an unbounded one that cares how much work it does, should get its priors from instead would require more thought than a few minutes on a lunchtime break.

• In one sense you can’t use evidence to argue with a prior, but I think that factoring in computational resources as a cost would have put you on the wrong side of a lot of our discoveries about the Universe.

• In one sense you can’t use evidence to argue with a prior, but I think that factoring in computational resources as a cost would have put you on the wrong side of a lot of our discoveries about the Universe.

Could you expand that with examples? And if you can’t use evidence to argue with a prior, what can you use?

• I’m thinking of the way we keep finding ways in which the Universe is far larger than we’d imagined—up to and including the quantum multiverse, and possibly one day including a multiverse-based solution to the fine-tuning problem. The whole point about a prior is that it’s where you start before you’ve seen the evidence. But in practice, using evidence to choose a prior is likely justified on the grounds that our actual prior is whatever we evolved with, or whatever evolution’s implicit prior is, and settling on a formal prior with which to attack hard problems is something we do in the face of lots of evidence. I think.

• I’m thinking of the way we keep finding ways in which the Universe is far larger than we’d imagined

It’s not clear to me how that bears on the matter. I would need to see something with some mathematics in it.

The whole point about a prior is that it’s where you start before you’ve seen the evidence.

There’s a potential infinite regress if you argue that changing your prior on seeing the evidence means it was never your prior, but something prior to it was. 1.
You can go on questioning those previous priors, and so on indefinitely, and therefore nothing is really a prior. 2. You stop somewhere with an unquestionable prior, and the only unquestionable truths are those of mathematics; therefore there is an Original Prior that can be deduced by pure thought. (Calvinist Bayesianism, one might call it. No agent has the power to choose its priors, for it would have to base its choice on something prior to those priors. Nor can its priors be conditional in any way upon any property of that agent, for then again they would not be prior. The true Prior is prior to all things, and must therefore be inherent in the mathematical structure of being. This Prior is common to all agents, but in their fundamentally posterior state they are incapable of perceiving it. I’m tempted to pastiche the whole Five Points of Calvinism, but that’s enough for the moment.) 3. You stop somewhere, because life is short, with a prior that appears satisfactory for the moment, but one which allows the possibility of later rejecting it. I think 1 and 2 are non-starters, and 3 allows for evidence defeating priors. What do you mean by “evolution’s implicit prior”?

• Tom_McCabe2 suggests generalizing EY’s rebuttal of Pascal’s Wager to Pascal’s Mugging: it’s not actually obvious that someone claiming they’ll destroy 3^^^^3 people makes it more likely that 3^^^^3 people will die. The claim is arguably such weak evidence that it’s still about equally likely that handing over the $5 will kill 3^^^^3 people, and if the two probabilities are sufficiently equal, they’ll cancel out enough to make it not worth handing over the $5.
Personally, I always just figured that the probability of someone (a) threatening me with killing 3^^^^3 people, (b) having the ability to do so, and (c) not going ahead and killing the people anyway after I give them the $5, is going to be way less than 1/3^^^^3, so the expected utility of giving the mugger the $5 is almost certainly less than the $5 of utility I get by hanging on to it. In which case there is no problem to fix. EY claims that the Solomonoff-calculated probability of someone having ‘magic powers from outside the Matrix’ ‘isn’t anywhere near as small as 3^^^^3 is large,’ but to me that just suggests that the Solomonoff calculation is too credulous.

(Edited to try and im­prove para­phrase of Tom_McCabe2.)
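The expected-utility comparison in the comment above can be sketched with toy numbers (the particular probabilities, the utilons-per-life scale, and the function name are all illustrative assumptions of mine; 3^^^^3 itself is far too large to represent, so a merely astronomical stand-in is used):

```python
from fractions import Fraction

def eu_of_paying(n_lives, p_mugger_genuine, utilons_per_life=1, cost_utilons=5):
    """Expected utility of handing over the $5, relative to refusing.

    Positive means paying wins the expected-utility calculation;
    negative means keeping the money does.
    """
    return p_mugger_genuine * n_lives * utilons_per_life - cost_utilons

# If the mugger's credibility shrinks faster than the threatened number
# grows (p well below 1/n, as the comment argues), refusing wins:
n = 10**100
p_skeptical = Fraction(1, 10**110)
print(eu_of_paying(n, p_skeptical) < 0)    # keep the $5

# A prior that doesn't penalize huge claims enough makes paying look
# mandatory -- the "too credulous" complaint about the Solomonoff prior:
p_credulous = Fraction(1, 10**50)
print(eu_of_paying(n, p_credulous) > 0)    # the mugging "works"
```

Everything turns on whether the credibility penalty outpaces the size of the threat; exact rational arithmetic via `Fraction` avoids floating-point under/overflow at these scales.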

• This seems very similar to the “refer­ence class fal­lback” ap­proach to con­fi­dence set out in point 2, but I pre­fer to ex­plic­itly re­fer to refer­ence classes when set­ting out that ap­proach, oth­er­wise the ex­actly even odds you ap­ply to mas­sively pos­i­tive and mas­sively nega­tive util­ity here seem to come rather con­ve­niently out of a hat...

• Fair enough. Ac­tu­ally, look­ing at my com­ment again, I think I para­phrased Tom_McCabe2 re­ally badly, so thanks for re­ply­ing and mak­ing me take an­other look! I’ll try and edit my com­ment so it’s a bet­ter para­phrase.

• I’m not sure this prob­lem needs a “solu­tion” in the sense that ev­ery­one here seems to ac­cept. Hu­man be­ings have prefer­ences. Utility func­tions are an im­perfect way of mod­el­ing those prefer­ences, not some paragon of virtue that ev­ery­one should as­pire to. Most mod­els break down when pushed out­side their area of ap­pli­ca­bil­ity.

• The utility function assumes that you play the “game” (situation, whatever) an infinite number of times and then find the net utility. That’s fine when you’re playing the “game” enough times to matter; it’s not when you’re only playing a small number of times. So let’s look at it as “winning” or “losing”. If the odds are really low and the risk is high and you’re only playing once, then most of the time you expect to lose. If you play enough times, the odds even out and the loss gets canceled out by the large reward, but playing only once you expect to lose more than you gain. Why would you assume differently? That’s my 2 cents, and so far it’s the only way I have come up with to navigate around this problem.

• The util­ity func­tion as­sumes that you play the “game” (situ­a­tion, what­ever) an in­finite num­ber of times and then find the net util­ity.

This isn’t right. The way util­ity is nor­mally defined, if out­come X has 10 times the util­ity of out­come Y for a given util­ity func­tion, agents be­hav­ing in ac­cord with that func­tion will be in­differ­ent be­tween cer­tain Y and a 10% prob­a­bil­ity of X. That’s why they call ex­pected util­ity the­ory a the­ory of “de­ci­sion un­der un­cer­tainty.” The sce­nario you de­scribe sounds like one where the pay­offs are in some cur­rency such that you have de­clin­ing util­ity with in­creas­ing amounts of the cur­rency.

• The sce­nario you de­scribe sounds like one where the pay­offs are in some cur­rency such that you have de­clin­ing util­ity with in­creas­ing amounts of the cur­rency.

Uh, no. Alright, let’s say I give you a 1-in-10 chance at winning 10 times everything you own, but the other 9 times you lose everything. The net utility for accepting is the same as not accepting, yet that’s completely ignoring the fact that if you do enter, 90% of the time you lose everything, no matter how high the reward is.

• As Thom in­di­cates, this is ex­actly what I was talk­ing about: ten times the stuff you own, rather than ten times the util­ity. Since util­ity is just a rep­re­sen­ta­tion of your prefer­ences, the 1 in 10 pay­off would only have ten times the util­ity of your cur­rent en­dow­ment if you would be will­ing to ac­cept this gam­ble.

• That’s only true if “ev­ery­thing you own” is cast in terms of util­ity, which is not in­tu­itive. Nor­mally, “ev­ery­thing you own” would be in terms of dol­lars or some­thing to that effect, and ten times the num­ber of dol­lars I have is not worth 10 times the util­ity of those dol­lars.
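The dollars-versus-utility distinction the last few comments are circling can be checked with a quick sketch (log utility and the starting wealth are my own illustrative choices, not anything the commenters specify):

```python
import math

wealth = 100_000.0   # assumed starting wealth, in dollars

# The gamble: 1/10 chance of 10x your wealth, 9/10 chance of (almost) nothing.
# In expected *dollars* it exactly matches refusing:
expected_dollars = 0.1 * (10 * wealth) + 0.9 * 0.0
print(expected_dollars == wealth)

# With diminishing marginal utility -- here u(dollars) = log(dollars) --
# the gamble's expected utility falls far below standing pat, because
# ten times the dollars is not ten times the utility:
token = 1.0   # keep a token dollar so log() is defined after "losing everything"
eu_gamble = 0.1 * math.log(10 * wealth) + 0.9 * math.log(token)
eu_refuse = math.log(wealth)
print(eu_gamble < eu_refuse)   # decline the gamble
```

So a gamble that is exactly fair in currency is strongly negative in (concave) utility, which is the point being made about ten times the stuff not being ten times the utility.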

• Does coun­tersig­nal­ing ac­tu­ally hap­pen? Give me ex­am­ples.

I think most claims of coun­tersig­nal­ing are ac­tu­ally or­di­nary sig­nal­ing, where the costly sig­nal is fore­go­ing an­other group and the trait be­ing sig­naled is loy­alty to the first group. Coun­tersig­nal­ing is where fore­go­ing the stan­dard sig­nal sends a stronger pos­i­tive mes­sage of the same trait to the usual re­cip­i­ents.

• That ar­ti­cle makes it sound like “coun­tersig­nal­ing” is for­go­ing a man­dated sig­nal—like show­ing up at a for­mal-dress oc­ca­sion in street clothes.

• That ar­ti­cle makes it sound like “coun­tersig­nal­ing” is for­go­ing a man­dated signal

I said “stan­dard” be­cause game the­ory doesn’t talk about man­dates, but that’s pretty much what I said, isn’t it? If you dis­agree with that us­age, what do you think is right?

In­ci­den­tally, in von Neu­mann’s model of poker, you should raise when you have a good hand or a poor hand, and check when you have a mediocre hand, which looks kind of like coun­tersig­nal­ing. Of course, the in­for­ma­tion trans­fer­ence that yields the name “sig­nal” is rather differ­ent. Also, I’m not in­ter­ested in ap­pli­ca­tions of game the­ory to her­met­i­cally sealed games.

• I guess I don’t un­der­stand your ques­tion, then—coun­tersig­nal­ing seems like a perfectly or­di­nary proper sub­set of sig­nal­ing.

• Yes, coun­tersig­nal­ing is sig­nal­ing. The ques­tion is about prac­tice, not the­ory. Does coun­tersig­nal­ing ac­tu­ally hap­pen?

• I play ran­domly for the first sev­eral rounds, so as to de­stroy the en­tan­gle­ment be­tween my bets, my face, and my hand.

• Un­less you’re us­ing an ex­ter­nal ran­dom­ness gen­er­a­tor, it’s quite un­likely that you’re not gen­er­at­ing a de­tectable pat­tern.

• He can just play blind, and not look at his cards.

• I only care whether hu­mans de­tect it.

• Some clips, by Robert M. Price, on the dark-side epistemology of history as done by Christian apologists; Price describes himself as a Christian Atheist.

Not sure how worth­while Price is to listen to in gen­eral though.

• Thanks for that; Price is a very knowledgeable New Testament scholar. Check out his interview on the commonsenseatheism podcast here; it also covers his path to becoming a Christian atheist.

• Because it was used somewhere, I calculated my own weight’s worth in gold—it is about 3.5 million EUR. In silver you can get me for 50,000 EUR. The Mythbusters recently built a lead balloon and had it fly. Some proverbs don’t hold up to reality and/or engineering.
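The back-of-the-envelope version goes something like this (the body weight and mid-2010 metal prices below are my assumptions, which is why the totals land near, rather than exactly at, the figures quoted):

```python
# "Worth your weight in gold", back of the envelope.
weight_kg = 100                  # assumed body weight
gold_eur_per_gram = 33.0         # rough mid-2010 spot price
silver_eur_per_gram = 0.50       # rough mid-2010 spot price

grams = weight_kg * 1000
gold_value = grams * gold_eur_per_gram      # ~3.3 million EUR
silver_value = grams * silver_eur_per_gram  # ~50,000 EUR
print(gold_value, silver_value)
```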

• 13 Jun 2010 0:09 UTC
1 point

Maybe this has been dis­cussed be­fore—if so, please just an­swer with a link.

Has any­one con­sid­ered the pos­si­bil­ity that the only friendly AI may be one that com­mits suicide?

There’s great di­ver­sity in hu­man val­ues, but all of them have in com­mon that they take as given the limi­ta­tions of Homo sapi­ens. In par­tic­u­lar, the fact that each Homo sapi­ens has roughly equal phys­i­cal and men­tal ca­pac­i­ties to all other Homo sapi­ens. We have de­vel­oped di­verse sys­tems of rules for in­ter­per­sonal be­hav­ior, but all of them are built for deal­ing with groups of peo­ple like our­selves. (For in­stance, ideas like re­ciproc­ity only make sense if the things we can do to other peo­ple are similar to the things they can do to us.)

The de­ci­sion func­tion of a lone, far more pow­er­ful AI would not have this qual­ity. So it would be very differ­ent from all hu­man de­ci­sion func­tions or prin­ci­ples. Maybe this differ­ence should cause us to call it im­moral.

• Do you ever have a day when you log on and it seems like ev­ery­one is “wrong on the In­ter­net”? (For val­ues of “ev­ery­one” equal to 3, on this oc­ca­sion.) Robin Han­son and Katja Grace both have posts (on teenage angst, on pop­u­la­tion) where some­thing just seems off, elu­sively wrong; and now SarahC sug­gests that “the only friendly AI may be one that com­mits suicide”. Some­thing about this con­junc­tion of opinions seems ob­scurely por­ten­tous to me. Maybe it’s just a know-thy­self mo­ment; there’s some nascent opinion of my own that’s go­ing to crys­tal­lize in re­sponse.

Now that my spe­cial mo­ment of shar­ing is out of the way… Sarah, is the friendly AI al­lowed to do just one act of good be­fore it kills it­self? Make a child smile, take a few pretty pho­tos from or­bit, save some­one from dy­ing, stop a war, in­vent cures for a few hun­dred dis­eases? I as­sume there is some in­tegrity of in­ter­nal logic be­hind this thought of yours, but it seems to be over­look­ing so much about re­al­ity that there has to be a sig­nifi­cant cog­ni­tive dis­con­nect at work here.

• Robin Han­son and Katja Grace both have posts (on teenage angst, on pop­u­la­tion) where some­thing just seems off, elu­sively wrong;

I’ve no­ticed I get this feel­ing rel­a­tively of­ten from Over­com­ing Bias. I think it comes with the con­trar­ian blog­ging ter­ri­tory.

• I get it from OB also, which I have not fol­lowed for some time, and many other places. For me it is the sus­pi­cion that I am look­ing at thought gone wrong.

• I would call it “pet the­ory syn­drome.” Some­one comes up with a way of “ex­plain­ing” things and then sud­denly the whole world is seen through that par­tic­u­lar lens rather than hav­ing a more nu­anced view; nearly ev­ery­thing is rein­ter­preted. In Han­son’s case, the pet the­o­ries are near/​far and sta­tus.

• I would call it “pet the­ory syn­drome.” Some­one comes up with a way of “ex­plain­ing” things and then sud­denly the whole world is seen through that par­tic­u­lar lens rather than hav­ing a more nu­anced view; nearly ev­ery­thing is rein­ter­preted. In Han­son’s case, the pet the­o­ries are near/​far and sta­tus.

Pre­dic­tion mar­kets also.

Is any­one wor­ried that LW might have similar is­sues? If so, what would be the rele­vant pet the­o­ries?

• On a related note: suppose a community of moderately rational people had one member who was a lot more informed than them on some subject, but wrong about it. Isn’t it likely they might all end up wrong together? Prediction markets was the original subject, but it could go for a much wider range of topics: Many Worlds, Hansonian medicine, near/far, cryonics...

• That’s where the sci­en­tific method comes in handy, though quite a few of Han­son’s posts sound like pop psy­chol­ogy rather than a testable hy­poth­e­sis.

• I don’t get this impression from OB at all. The thoughts at OB, even when I disagree with them, are far more coherent than the sort of examples given as thought gone wrong. I’m also not sure it is easy to actually distinguish between “thought gone wrong” in the sense of being outright nonsense, as described in the linked essay, and actually good but highly technical thought processes. For example, I could write something like:

Noethe­ri­aness of a ring is forced by be­ing Ar­ti­nian, but the re­verse does not hold. The dual na­ture is puz­zling given that Noethe­ri­aness is a prop­erty which forces ideals to have a real im­pact on the struc­ture in a way that seems more di­rect than that of Artin even though Ar­ti­nian is a stronger con­di­tion. One must ask what causes the break­down in sym­me­try be­tween the de­scend­ing and as­cend­ing chain con­di­tions.

Now, what I wrote above isn’t nonsense. It is just poorly written, poorly explained math. But if you don’t have some background, this likely looks as bad as the passages quoted in the linked essay. Even when the writing is not poor like that above, one can easily find sections from conversations on LW about, say, CEV or Bayesianism that look about as nonsensical if one doesn’t know the terms. So without extensive investigation I don’t think one can easily judge whether a given passage is nonsense or not. The essay linked to is therefore less than compelling. (In fact, having studied many of their examples I can safely say that they really are nonsensical, but it isn’t clear to me how you can tell that from the short passages given, with their complete lack of context. Edit: And it could very well be that I just haven’t thought about them enough or approached them correctly, just as someone who is very bad at math might consider it to be collectively nonsense even after careful examination.) It does however seem that some disciplines run into this problem far more often than others. Thus, philosophy and theology both seem to run into the parading-nonsensical-streams-of-words-together problem more often than most other areas. I suspect that this is connected to the lack of anything resembling an experimental method.

• The thoughts at OB even when I dis­agree with them are far more co­her­ent than the sort of ex­am­ples given as thought gone wrong. I’m also not sure it is easy to ac­tu­ally dis­t­in­guish be­tween “thought gone wrong” in the sense of be­ing out­right non­sense as drescribed in the linked es­say and ac­tu­ally good but highly tech­ni­cal thought pro­cesses.

OB isn’t a tech­ni­cal blog though.

Hav­ing crit­i­cised it so harshly, I’d bet­ter back that up with ev­i­dence. Ex­hibit A: a highly de­tailed sce­nario of our far fu­ture, sup­ported by not much. Which in later post­ings to OB (just en­ter “dream­time” into the OB search box) be­comes part of the back­ground as­sump­tions, just as ear­lier OB spec­u­la­tions be­come part of the back­ground as­sump­tions of that post­ing. It’s like look­ing at the sky and draw­ing in con­stel­la­tions (the stars in this anal­ogy be­ing the snip­pets of sci­en­tific ev­i­dence ad­duced here and there).

• That example seems to be more in the realm of “not very good thinking” than thought gone wrong. The thoughts are coherent, just not well justified. It isn’t like the sort of thing that is quoted in the example essay, where thought gone wrong seems to mean something closer to “not even wrong because it is incoherent.”

• Ok, OB cer­tainly isn’t the sort of word salad that Stove is at­tack­ing, so that wasn’t a good com­par­i­son. But there does seem to me to be some­thing sys­tem­at­i­cally wrong with OB. There is the man-with-a-ham­mer thing, but I don’t have a prob­lem with peo­ple hav­ing their hob­by­horses, I know I have some of my own. I’m more put off by the way that spec­u­la­tions get tac­itly up­graded to back­ground as­sump­tions, the join-the-dots use of ev­i­dence, and all those “X is Y” ti­tles.

• Got a good sum­mary of this? The au­thor seems to be tak­ing way too long to make his point.

• “Most hu­man thought has been var­i­ous differ­ent kinds of non­sense that we mostly haven’t yet cat­e­go­rized or named.”

• This para­graph, per­haps?

From an En­light­en­ment or Pos­i­tivist point of view, which is Hume’s point of view, and mine, there is sim­ply no avoid­ing the con­clu­sion that the hu­man race is mad. There are scarcely any hu­man be­ings who do not have some lu­natic be­liefs or other to which they at­tach great im­por­tance. Peo­ple are mostly sane enough, of course, in the af­fairs of com­mon life: the get­ting of food, shelter, and so on. But the mo­ment they at­tempt any depth or gen­er­al­ity of thought, they go mad al­most in­fal­libly. The vast ma­jor­ity, of course, adopt the lo­cal re­li­gious mad­ness, as nat­u­rally as they adopt the lo­cal dress. But the more pow­er­ful minds will, equally in­fal­libly, fall into the wor­ship of some in­tel­li­gent and dan­ger­ous lu­natic, such as Plato, or Au­gus­tine, or Comte, or Hegel, or Marx.

I think that should go in the next quotes thread.

• I’m not nec­es­sar­ily ar­gu­ing for this po­si­tion as say­ing we need to ad­dress it. “Suici­dal AI” is to the prob­lem of con­struct­ing FAI as an­ar­chism is to poli­ti­cal the­ory; if you want to build some­thing (an FAI, a good gov­ern­ment) then, on the philo­soph­i­cal level, you have to at least take a stab at coun­ter­ing the ar­gu­ment that per­haps it is im­pos­si­ble to build it.

I’m work­ing un­der the as­sump­tion that we don’t re­ally know at this point what “Friendly” means, oth­er­wise there wouldn’t be a prob­lem to solve. We don’t yet know what we want the AI to do.

What we do know about moral­ity is that hu­man be­ings prac­tice it. So all our moral laws and in­tu­itions are de­signed, in par­tic­u­lar, for small, mor­tal crea­tures, liv­ing among other small, mor­tal crea­tures.

Egal­i­tar­i­anism, for ex­am­ple, only makes sense if “all men are cre­ated equal” is more or less a state­ment of fact. What should an egal­i­tar­ian hu­man make of a pow­er­ful AI? Is it a tyrant? Well, no, a tyrant is a hu­man who be­haves as if he’s not equal to other hu­mans; the AI sim­ply isn’t equal. Well, then, is the AI a good cit­i­zen? No, not re­ally, be­cause cit­i­zens treat each other on an equal foot­ing...

• The trouble here, I think, is that all our notions of goodness are really “what is good for a human to do.” Perhaps you could extend them to “what is good for a Klingon to do”—but a lot of moral opinions are specifically about how to treat other people who are roughly equivalent to yourself. “Do unto others as you would have them do unto you.” The kind of rules you’d set for an AI would be fundamentally different from our rules for ourselves and each other.

It would be as if a hu­man had a spe­cial, ob­ses­sive con­cern and care for an ant farm. You can pro­tect the ants from dy­ing. But there are lots of things you can’t do for the ants: be an ant’s friend, re­spect an ant, keep up your end of a bar­gain with an ant, treat an ant as a brother…

I had a friend once who said, “If God ex­isted, I would be his en­emy.” Couldn’t some­one have the same sen­ti­ment about an AI?

(As always, I may very well be wrong on the In­ter­net.)

• You say, hu­man val­ues are made for agents of equal power; an AI would not be equal; so maybe the friendly thing to do is for it to delete it­self. My ques­tion was, is it al­lowed to do just one or two pos­i­tive things be­fore it does this? I can also ask: if over­whelming power is the prob­lem, can’t it just re­duce it­self to hu­man scale? And when you think about all the things that go wrong in the world ev­ery day, then it is ob­vi­ous that there is plenty for a friendly su­per­hu­man agency to do. So the whole idea that the best thing it could do is delete it­self or hob­ble it­self looks ex­tremely du­bi­ous. If your point was that we can­not hope to figure out what friendli­ness should ac­tu­ally be, and so we just shouldn’t make su­per­hu­man agents, that would make more sense.

The com­par­i­son to gov­ern­ment makes sense in that the power of a ma­ture AI is imag­ined to be more like that of a state than that of a hu­man in­di­vi­d­ual. It is likely that once an AI had ar­rived at a sta­ble con­cep­tion of pur­pose, it would pro­duce many, many other agents, of vary­ing ca­pa­bil­ity and lifes­pan, for the im­ple­men­ta­tion of that pur­pose in the world. There might still be a cen­tral su­per-AI, or its progeny might op­er­ate in a com­pletely dis­tributed fash­ion. But ev­ery­thing would still have been de­ter­mined by the ini­tial pur­pose. If it was a pur­pose that cared noth­ing for life as we know it, then these de­rived agen­cies might just pave the earth and build a new ma­chine ecol­ogy. If it was a pur­pose that placed a value on hu­mans be­ing there and liv­ing a cer­tain sort of life, then some of them would spread out among us and in­ter­act with us ac­cord­ingly. You could think of it in cul­tural terms: the AI sphere would have a cul­ture, a value sys­tem, gov­ern­ing its in­ter­ac­tions with us. Be­cause of the rad­i­cal con­tin­gency of pro­grammed val­ues, that cul­ture might leave us alone, it might prod our af­fairs into tak­ing a differ­ent shape, or it might act to swiftly and de­ci­sively trans­form hu­man na­ture. All of these out­comes would ap­pear to be pos­si­bil­ities.

• It seems un­likely that an FAI would com­mit suicide if hu­mans need to be pro­tected from UAI, or if there are other threats that only an FAI could han­dle.

• A ques­tion about Bayesian rea­son­ing:

I think one of the things that confused me the most about this is that Bayesian reasoning talks about probabilities. When I start with Pr(My Mom Is On The Phone) = 1/6, it’s very different from saying Pr(I roll a one on a fair die) = 1/6.

In the first case, my mom is either on the phone or not, but I’m just saying that I’m pretty sure she isn’t. In the second, something may or may not happen, but it’s unlikely to happen.

Am I making any sense… or are they really the same thing and I’m overcomplicating?

• Re­mem­ber, prob­a­bil­ities are not in­her­ent facts of the uni­verse, they are state­ments about how much you know. You don’t have perfect knowl­edge of the uni­verse, so when I ask, “Is your mum on the phone?” you don’t have the guaran­teed cor­rect an­swer ready to go. You don’t know with com­plete cer­tainty.

But you do have some knowl­edge of the uni­verse, gained through your ear­lier ob­ser­va­tions of see­ing your mother on the phone oc­ca­sion­ally. So rather than just say­ing “I have ab­solutely no idea in the slight­est”, you are able to say some­thing more use­ful: “It’s pos­si­ble, but un­likely.” Prob­a­bil­ities are sim­ply a way to quan­tify and make pre­cise our im­perfect knowl­edge, so we can form more ac­cu­rate ex­pec­ta­tions of the fu­ture, and they al­low us to man­age and up­date our be­liefs in a more re­fined way through Bayes’ Law.

• The cases are differ­ent in the way that you de­scribe, but the maths of the prob­a­bil­ity is the same in each case. If you have an un­seen die un­der a cup, and a die that you are about to roll, then one is already de­ter­mined and the other isn’t, but you’d bet at the same odds for each one to come up a six.

• I think the differ­ence is that one event is a state­ment about the pre­sent which is ei­ther presently true or not, and the other is a pre­dic­tion. So you could illus­trate the differ­ence by us­ing the fol­low­ing pairs: P(Mom on phone now) vs. P(Mom on phone to­mor­row at 12:00am). In the dice case P(die just rol­led but not yet ex­am­ined is 1) vs. P(die I will roll will come out 1).

I do agree with Os­car though, the maths should be the same.

• You might be in­ter­ested in this re­cent dis­cus­sion, if you haven’t seen it already:

• It looks to me like your confusion with these examples just stems from the fact that one event is in the present and the other in the future. Are you still confused if you make it P(Mom will be on the phone at 4 PM tomorrow) = 1/6? Or conversely, if you make it P(I rolled a one on the fair die that is now beneath this cup) = 1/6?

• In my experience, when people say something like that, it’s usually a matter of epistemic vs. ontological perspective; and contrasting Laplace’s Demon with real-world agents of bounded computational power resolves the difficulty. But that could be overkill.

• In the sec­ond case, you ei­ther roll one on the die or not, but you are pretty sure that it will be an­other num­ber.
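A small simulation illustrates the point made in this subthread: a die already rolled but hidden and a die about to be rolled are priced at the same odds, because the probability describes your knowledge, not the die (the simulation itself is just my illustration):

```python
import random

random.seed(42)
N = 100_000

# Case 1: dice you are about to roll -- "will it come up 1?"
future = sum(random.randint(1, 6) == 1 for _ in range(N)) / N

# Case 2: dice already rolled and hidden under cups -- each outcome is
# fixed before you bet, but your knowledge of it is not.
hidden = [random.randint(1, 6) for _ in range(N)]   # rolled first...
under_cup = sum(x == 1 for x in hidden) / N         # ...revealed later

# Both frequencies converge on the same 1/6, so you'd bet at the same odds.
print(future, under_cup)
```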

• We’ve talked about a book club be­fore but did any­one ever ac­tu­ally suc­ceed in start­ing one? Since it is sum­mer now I figure a few more of us might have some free time. Are peo­ple ac­tu­ally in­ter­ested?

• I’ve been thinking about finally starting a Study Group thread, primarily with a focus on Jaynes and Pearl, both of which I’m studying at the moment. It would probably make sense to expand it to other books, including non-math books—though the set of active books should remain small.

Two things have been hold­ing me back—for one, the IMO ex­ces­sively blog-like na­ture of LW with the re­sult that once a con­ver­sa­tion has rol­led off the front page it of­ten tends to die off, and for an­other a fear of not hav­ing enough time and en­ergy to de­vote to ac­tu­ally fa­cil­i­tat­ing dis­cus­sion.

Fa­cil­i­ta­tion of some sort seems re­quired: as I un­der­stand it a book club or study group en­tails ask­ing a few par­ti­ci­pants to make a firm com­mit­ment to go through a chap­ter or a sec­tion at a time and re­port back, help each other out and so on.

• Well those are ac­tu­ally ex­actly the two books I had in mind (though I think we should prob­a­bly just start with one of them).

the IMO ex­ces­sively blog-like na­ture of LW with the re­sult that once a con­ver­sa­tion has rol­led off the front page it of­ten tends to die off

Agreed. Two options:

1. A new top-level post for every chapter (or perhaps every two chapters, whatever division is convenient). This was a little annoying when it was one person covering every chapter in Dennett’s Consciousness Explained, but if a decent number of people were participating in the book club (and if each new post was put up by the facilitator, explaining hard-to-understand concepts), they’d probably justify themselves.

2. We start a ded­i­cated word­press or blogspot blog and give the fa­cil­i­ta­tors post­ing pow­ers.

I wouldn’t at all mind post­ing to start dis­cus­sion on some sec­tions but I’m not the best per­son to be ex­plain­ing the math if it gets con­fus­ing—if that was part of your ex­pec­ta­tion of fa­cil­i­ta­tion.

I was thinking a reading group for Jaynes would have a better chance of success than Pearl—the issues are more general, the math looks easier, and the entire thing is online. But it sounds like you’ve looked at them more than I have; what are your thoughts? I guess what really matters is what people are interested in.

For those in­ter­ested the Jaynes book can be found here and much of Pearl’s book can be found here.

• Is there any existing off-the-shelf web software for setting up book-club-type discussions?

I don’t want to make too much of the infrastructure issue, as what really makes a book club work is the commitment of its members and facilitators, but it would be convenient if there were a ready-made infrastructure available, like there is for blogging and mailing lists.

Maybe the LW blog+wiki software running on a separate domain (lesswrongbooks.com?) would be enough. Blog for current discussions, wiki for summaries of past discussions.

• There’s a risk that any amount of thinking about infrastructure could kill off what energy there is, and since there appears to be some energy at present, I would rather favor having the discussion about the book club in the book club thread. :)

IOW, we can kick off the initiative locally and let it find a new venue if and when that becomes necessary. There also seems to be some sort of provisional consensus that it’s not quite time yet to fragment the LW readership: the LW subreddit doesn’t seem to have panned out.

It seems to me that Jaynes is definitely topical for LW; I wouldn’t worry about discussions among people studying it becoming annoying to the rest of the community. There are many, many gems pertaining to rationality in each of the chapters I’ve read so far.

• This looks like it could work. A WordPress blog would probably be fine as well. Of course, these options don’t let people get karma for participating, which would be a nice motivator to have. A subreddit would be nice...

Would the discussions really undermine the regular business of Less Wrong?

• Do people really care that much about karma? I mean, once one has enough karma to make top-level posts, does it matter that much?

• People like making numbers go higher. It’s a strange impulse, and I’m not sure why we have it. Maybe assigning everyone numbers hijacks our dominance-hierarchy instincts, and we feel better about ourselves the higher our number is. For me, it isn’t the total that I like having so much as the feedback for individual comments. I get frustrated on other blogs when I make a comment that is informative and clever but doesn’t get a response. I feel like I’m talking to myself. Here, even if no one responds, I can at least learn whether someone appreciated it. If a lot of people appreciated it, I feel a brief sense of accomplishment.

• Two thoughts which have probably been beaten to death elsewhere:

1) A karma system is a good way to provide cues to which posts are worth reading and which aren’t.

2) Karma points are a big shiny status indicator, and LWers are no more immune to status drives than anyone else is.

• Supposedly (actual study) milk reduces catechin levels in the bloodstream.

Other research says: “does not!”

Really hot (but not scalded) milk tastes fantastic to me, so I’ve often added it to tea. I don’t really care much about the health benefits of tea per se; I’m mostly curious whether anyone has additional evidence one way or the other.

The surest way to resolve the controversy is to replicate the studies until it’s clear that some of them were sloppy, unlucky, or lies. But, short of that, should I speculate that perhaps some people are opposed to milk drinking in general, or that perhaps tea in the researchers’ home country is/isn’t primarily taken with milk? I’m always tempted to imagine most of the scientists having some ulterior motive or prior belief they’re looking to confirm.

It would be cool if researchers sometimes (credibly) wrote: “we did this experiment hoping to show X, but instead, we found not-X”. Knowing what goals research was really performed under (and what went into its selection for publication) would be valuable, especially if plans (and statements of intent/goal) for experiments were published somewhere at the start of work, even for studies that are never completed or published.

• It does seem odd to get such divergent results.

Bad luck could be not just getting that 5% result which 95% accuracy implies, but some non-obvious difference in the volunteers (different genetics?), in the tea, or in the milk.

• It does seem odd to get such divergent results.

It isn’t that odd. There are a lot of things that could easily change the results: the exact temperature of the tea (if one protocol involved hotter or colder water), the temperature of the milk, the type of milk, the type of tea (one of the protocols uses black tea, and another uses green tea). Note also that the studies are using different metrics as well.

• Nitpick: the second study included both black and green tea.

However, your general point stands, and I’ll add that there are different sorts of both black and green teas.

• I’d like to hear what people think about calibrating how many ideas you voice versus how confident you are in their accuracy.

For lack of a better example, I recall Eliezer saying that new open threads should be made quarterly, once per season, but this doesn’t appear to be the optimum frequency. Perhaps Eliezer misjudged how much activity they would receive and how fast they would fill up, or he has a different opinion on how full a thread has to be before it’s time for a new one. But for the sake of the example, let’s assume that Eliezer was wrong and that the current one or two threads per month is better than quarterly. Should Eliezer have recalibrated his confidence on this and never said it, because its chance of being right was too low? Or would lowering his confidence in his ideas be counterproductive? Is it optimal for people to have confidence in the ideas that they voice even if it causes them to say some things which aren’t right?

I suppose this is of importance to me because I think I might be better off if I lowered how judgmental I am of people who say things which are wrong, and also lowered how judgmental I am of the ideas I have, because I might be putting too much weight on people voicing ideas which are wrong.

• Being right on group effects is difficult.

Is there a consistent path for what LW wants to be? a) a rationalist site filled with meta topics and examples; b) a) plus detailed treatments of some important topics; c) open to everything as long as reason is used;

and so on. I personally like and profit from the discussion of akrasia methods. But it might be detrimental to the main target of the site. Also, I would very much like to see a cannon develop for knowledge that LWers generally agree upon, including, but not limited to, the topics I currently care about myself.

Voicing ideas depends on where you are. In social settings I more and more advise against it. Arguing/discussing is just not helpful. And if you are filled up with weird ideas then you get kicked out, which might be bad for other goals you have.

It would be great to have a place for any idea to be examined for right and wrong.

• I would very much like to see a cannon develop for knowledge that LWers generally agree upon

LW is working on it, and you can help!

• I’d like to see a picture of this LW cannon!

• I’d like to see a picture of this LW cannon!

Rather than waste time doing both your cannon request and Roko’s Fallacyzilla request, I just combined them into one picture of the Less Wrong Cannon attacking Fallacyzilla.

...now someone take Photoshop away from me, please.

• What does Fallacyzilla have on its chest? It looks like it has “A → B, ~B, therefore ~A”. But that is valid logic. Am I misreading it, or did you mean to put “A → B, ~A, therefore ~B”? That would be actually wrong.

• I noticed that two seconds after I put it up, and it’s now corrected...er...incorrected. (Today I learned: my brain has that same annoying auto-correct function as Microsoft Word.)

• There’s a related XKCD. The mouse-over text is especially relevant.

• To whoever downvoted the parent: please refrain from downvoting people who draw attention to others’ mistakes in a gentle and humorous way.
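
(For anyone who wants to double-check the two argument forms mentioned above, validity over a pair of propositions can be verified by brute-force enumeration of truth assignments. A quick sketch in Python; the helper names here are mine, not anything from the thread:)

```python
from itertools import product

def implies(a, b):
    # Material implication: A -> B is false only when A is true and B is false.
    return (not a) or b

def valid(premises, conclusion):
    """An argument form is valid iff every truth assignment that makes
    all premises true also makes the conclusion true."""
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

# Modus tollens: A -> B, ~B, therefore ~A  (valid)
modus_tollens = valid(
    [lambda a, b: implies(a, b), lambda a, b: not b],
    lambda a, b: not a)

# Denying the antecedent: A -> B, ~A, therefore ~B  (invalid)
denying_antecedent = valid(
    [lambda a, b: implies(a, b), lambda a, b: not a],
    lambda a, b: not b)

print(modus_tollens, denying_antecedent)  # True False
```

The counterexample the enumeration finds for denying the antecedent is A false, B true: both premises hold but the conclusion fails.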

• Are there cases where Occam’s razor results in a tie, or is there proof that it always yields a single solution?

• Yes. There are cases where Occam’s razor results in a tie (or, at least, indistinguishably close).

• Consider the spin on an arbitrary particle in deep space, or whether or not an arbitrary digit of pi is even.

• Do we have a unique method for generating priors?

Eliezer has written about using the length of the program required to produce it, but this doesn’t seem to be unique; you could have languages that are very efficient for one thing, but long-winded for another. And quantum computing seems to make it even more confusing.

• The method that Eliezer is referring to is known as Solomonoff induction, which relies on programs as defined by Turing machines. Quantum computing doesn’t come into this issue, since these formulations just talk about length of specification, not efficiency of computation. There are theorems showing that for any given Turing-complete, well-behaved language, the minimum size of a program can’t differ by more than a constant. So changing the language won’t alter the priors by more than a fixed amount. Taken together with Aumann’s Agreement Theorem, the level of disagreement about estimated probability should go to zero in the limiting case (disclaimer: I haven’t seen a proof of that last claim, but I suspect it would be a consequence of using a Solomonoff-style system for your priors).
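
(To make the “length of the program” idea concrete, here is a toy sketch of mine, not anything from the thread: it uses zlib compression as a crude, computable stand-in for minimum description length. Real Solomonoff induction is uncomputable, so this only illustrates the “shorter description gets more prior mass” flavor:)

```python
import zlib
from fractions import Fraction

def description_length(s: str) -> int:
    """Crude upper bound (in bits) on the description length of s,
    using zlib output size as a rough stand-in for a shortest program."""
    return 8 * len(zlib.compress(s.encode()))

def complexity_prior(hypotheses):
    """Weight each hypothesis string by 2^-K(h), then normalize.
    Fractions avoid float underflow for very long descriptions."""
    weights = {h: Fraction(1, 2 ** description_length(h))
               for h in hypotheses}
    total = sum(weights.values())
    return {h: float(w / total) for h, w in weights.items()}

regular = "ab" * 100                    # highly compressible pattern
irregular = "qlpw93kfjz0m2xv8c1bn4h7"   # no repeated structure
prior = complexity_prior([regular, irregular])
print(prior[regular] > prior[irregular])  # True
```

Note how this also illustrates the language-dependence worry from the parent comment: swapping zlib for a different compressor changes every description length, but only by a bounded amount for any fixed pair of (reasonable, general-purpose) coding schemes.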

• How can I understand quantum physics? All explanations I’ve seen are either:

• those that dumb things down too much, and deliver almost no knowledge; or

• those that assume too much familiarity with the kind of mathematics that nobody outside physics uses, and are therefore too frustrating.

I don’t think the subject is inherently difficult. For example, quantum computing and quantum cryptography can be explained to anyone with a basic clue and basic math skills. (example)

On the other hand, I haven’t seen any quantum physics explanation that did even as little as reasonably explain why hbar/2 is the correct limit of uncertainty (as opposed to some other constant), and why it even has the units it has (that is, why it applies to these pairs of measurements, but not to some other pairs); or what quark colors are (are they discrete; arbitrary 3 orthogonal vectors on a unit sphere; or what? can you compare them between quarks in different protons?); spins (it’s obviously not about actual spinning, so how does it really work? especially with movement being relative); how electro-weak unification works (these explanations are all handwaved); etc.
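
(For what it’s worth, the specific constant hbar/2 does drop out of a short standard textbook argument, the Robertson uncertainty relation; a sketch, as a pointer rather than a full derivation:)

```latex
% Robertson uncertainty relation: for any observables A and B,
\sigma_A \,\sigma_B \;\ge\; \tfrac{1}{2}\,\bigl|\langle[\hat A,\hat B]\rangle\bigr|,
\qquad
[\hat x,\hat p] = i\hbar
\;\;\Longrightarrow\;\;
\sigma_x \,\sigma_p \;\ge\; \frac{\hbar}{2}.
```

The bound applies exactly to those pairs of observables whose commutator is nonzero, which also fixes the units: position times momentum has units of action, the same units as hbar.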

• How can I understand quantum physics?

I don’t think the subject is inherently difficult. For example, quantum computing and quantum cryptography can be explained to anyone with a basic clue and basic math skills.

That’s because quantum computing and quantum cryptography only use a subset of quantum theory. Your link says, for example, that the basics of quantum computing only require knowing how to handle ‘discrete (2-state) systems and discrete (unitary) transformations,’ but a full treatment of QT has to handle ‘continuously infinite systems (position eigenstates) and continuous families of transformations (time development) that act on them.’ The full QT that can deal with these systems uses a lot more math.

I wonder if there’s a general trend for people who are interested in quantum computing and not all of QT to play down the prerequisites you need to learn QT. Your post reminded me of a Scott Aaronson lecture, where he says:

The second way to teach quantum mechanics leaves a blow-by-blow account of its discovery to the historians, and instead starts directly from the conceptual core—namely, a certain generalization of probability theory to allow minus signs. Once you know what the theory is actually about, you can then sprinkle in physics to taste, and calculate the spectrum of whatever atom you want.

Which is technically true, but if you want to know about quark colors or spin or exactly how uncertainty works, pushing around |1>s and |2>s and talking about complexity classes is not going to tell you what you want to know.

To answer your question more directly, I think the best way to understand quantum physics is to get an undergrad degree in physics from a good university, and work as hard as you can while you’re getting it. Getting a degree means you have the physics-leaning math background needed to understand explanations of QT that don’t dumb it down.

I might be overestimating the amount of math that’s necessary—I’m basing this on sitting in on undergrad QT lectures—but I’ve yet to find a comprehensive QT text that doesn’t use calculus, complex numbers, and linear algebra.

• Try Jonathan Allday’s book “Quantum Reality: Theory and Philosophy.” It is technical enough that you get a quantitative understanding out of it, but nothing like a full-blown textbook.

• Blog about common cognitive biases, one post per bias:

http://youarenotsosmart.com/

• For those of you who have been following my campaign against the “It’s impossible to explain this, so don’t expect me to!” defense: today, the campaign takes us to a post on anti-reductionist Gene Callahan’s blog.

In case he deletes the entire exchange thus far (which he’s been known to do when I post), here’s what’s transpired (paragraphing truncated):

Me: That’s not the moral I got from the story. The moral I got was: Wow, the senior monk sure sucks at describing the generating function (“rules”) for his actions. Maybe he doesn’t really understand it himself?

Gene: Well, if I had a silly mechanical view of human nature and thought peoples’ actions came from a “generating function”, I would think this was a problem.

Me: Which physical law do humans violate? What is the experimental evidence for this violation? Btw, the monk problem isn’t hard. Watch this: “Hello, students. Here is why we don’t touch women. Here is what we value. Here is where it falls in our value system.” There you go. It didn’t require a lifetime of learning to convey the reasoning the senior monk used to the junior, now, did it?

ETA: My previous remark was rejected by Gene for posting. He instead posted this:

Gene: Silas, you only got through one post without becoming an unbearable douche [!] this time. You had seemed to be improving.

I just tried to post this:

Me: Don’t worry, I made sure the exchange was preserved so that other people can view for themselves what you consider “being an unbearable douche”, or what others might call “serious challenges to your position”.

Me: If you ever want to specify how it is that human beings’ actions don’t come from a generating function, thereby violating physical law, I’d love to have that chat and help you flesh out the idea enough to get yourself a Nobel. However, what I think you really meant to say was that the generating function is so difficult to learn directly that lifelong practice is easy by comparison (if you were to argue the best defense of your position, that is).

Me: Can you at least agree you picked a bad example of knowledge that necessarily comes from lifelong practice? Would that be too much to ask?

• Well, I haven’t read any other blog posts of his but the one you linked to, but in this specific case I cannot find what there is to be attacked.

It is stories like this that are used to explain that some values are of higher importance than others, in simple terms (a style that also exists in the not-so-extended circle of LW). The fictional senior monk’s answer would be obvious to anybody who has read up even just a little bit on Zen and/or Buddhism; it is more reinforcement than news.

If the blogger is often holding an anti-reductionist position you’d like to counter, I’d go for actually anti-reductionist posts of his...

• It is stories like this that are used to explain that some values are of higher importance than others, in simple terms

It’s true that some values are more important than others. But that wasn’t the point Gene was trying to make in the particular post that I linked. He was trying to make (yet another) point about the futility of specifying or adhering to specific rules, insisting that mastery of the material necessarily comes from years of experience.

This is consistent with the theme of the recent posts he’s been making, and his dissertation against rationalism in politics (though the latter is not the same as the “rationalism” we refer to here).

Whatever the merit of the point he was trying to make (which I disagree with), he picked a bad example, and I showed why: the supposedly “tacit”, inarticulable judgment that comes with experience was actually quite articulable, without even having to anticipate this scenario in advance, and while only speaking in general terms!

(I mentioned his opposition to reductionism only to give greater context to my frequent disagreement with him (unfortunately, some past debates were deleted as he or his friend moved blogs, others because he didn’t like the exchange). In this particular exchange, you find him rejecting mechanism, specifically the idea that humans can be described as machines following deterministic laws at all.)

• Am I alone in my desire to upload as fast as possible and drive away to the asteroid belt when thinking about current FAI and CEV proposals? They take moral relativism to its extreme: let god decide who’s right...

• Not sure where I stand actually, but this seems relevant:

“If God did not exist, it would be necessary to invent him”—Voltaire

I suppose it should be added that one should do one’s best to make sure the god that’s created is more Friendly than not.

• Yes, I cannot deny that a Friendly AI is way better than a paper-clip optimizer. What frightens me is that when (if) CEV converges, humanity will be stuck in a local maximum for the rest of eternity. It seems that the FAI after CEV convergence will have adamantine morals by design (or it will look like it has, if the FAI is unconscious). And no one will be able to talk the FAI out of this, or no one will want to.

It seems we have not much choice, however. Bottoms up, to the Friendly God.

• If CEV can include willingness to update as more information comes in and more processing power becomes available (and if I have anything to say about it, it will), there should be ways out of at least some of the local maxima.

Anyone care to speculate about the possibilities of contact with alien FAIs?

Would a community of alien FAIs be likely to have a better CEV than a human-only FAI?

• If there are advantages to getting alien CEVs, but we’re unlikely to contact aliens because of light-speed limits, or if we do, we’re unlikely to get enough information to construct their CEVs, would it make sense to evolve alien species (probably in simulation)? What would the ethical problems be?

• Simulated aliens complex enough to have a CEV are complex enough to be people, and since death is evolution’s favorite tool, simulating the evolution of the species would be causing many needless deaths.

• The simulation could provide an afterlife.

But I don’t see why we would want our CEV to include a random sample of possible aliens. If, when we encounter aliens, we find that we care about their values, we can run a CEV on them at that time.

• The simulation could provide an afterlife.

This possibility may be the strongest source of probability mass for an afterlife for us.

• Does a similar argument apply to having children if there’s no high likelihood of immortality tech?

• Depends on the context. Quite plausibly, though.

• Isn’t God fake?

• Must be. If he existed, he would not have invented ape-imitating humans, would he?

• Mysterious ways. :P

• SIAI, Yudkowsky, Friendly AI, CEV, and Morality

This post, entitled A Dangerous “Friend” Indeed (http://becominggaia.wordpress.com/2010/06/10/a-dangerous-friend-indeed/), has it all.

• Huh. That’s very interesting. I’m a bit confused by the claim that evolution bridges the is/ought divide, which seems more like conflating different meanings of words than anything else. But the general point seems strong.

• Yeah, I really disagree with this:

Evolution then is the bridge across the Is/Ought divide. An eye has the purpose or goal of seeing. Once you have a goal or purpose, what you “ought” to do IS make those choices which have the highest probability of fulfilling that goal/purpose. If we can tease apart the exact function/purpose/goal of morality from exactly how it enhances evolutionary fitness, we will have an exact scientific description of morality — and the best method of determining that is the scientific method.

My understanding is that those of us who refer to the is/ought divide aren’t saying that a science of how humans feel about what humans call morality is impossible. It is possible, but it’s not the same thing as a science of objective good and bad. The is/ought divide is about whether one can derive moral ‘truths’ (oughts) from facts (ises), not about whether you can develop a good model of what people feel are moral truths. We’ll be able to do the latter with advances in technology, but no one can do the former without begging the question by slipping in an implicit moral basis through the back door. In this case I think the author of that blog post did that by assuming that fitness-enhancing moral intuitions are The Good And True ones.

• “Objective” good and bad require an answer to the question “good and bad for what?”—OR—“what is the objective of objective good and bad?”

My answer to that question is the same as Eli’s—goals or volition.

My argument is that since a) having goals and volition is good for survival; b) cooperating is good for goals and volition; and c) morality appears to be about promoting cooperation—human morality is evolving down the attractor that is “objective” good and bad for cooperation, which is part of the attractor for what is good for goals and volition.

The EXplicit moral basis that I am PROCLAIMING (not slipping through the back door) is that cooperation is GOOD for goals and volition (i.e. the morality of an action is determined by its effect upon cooperation).

PLEASE come back and comment on the blog. This comment is good enough that I will be copying it there as well (especially since my karma has been zeroed out here).

• I’m not sure that I understand your comment. I can understand the individual paragraphs taken one by one, but I don’t think I understand whatever its overall message is.

(On a side note, you needn’t worry about your karma for the time being; it can’t go any lower than 0, and you can still post comments with 0 karma.)

• It can’t go any lower than 0, and you can still post comments with 0 karma.

It can go lower than 0; it just won’t display lower than 0.

• Yup, I’ve been way down in the negative karma.

• My bad. I was going by past experience of seeing other people’s karma drop to zero, and made a flaky inference because I never saw it go below that myself.

• Do me a favor and check out my blog at http://becominggaia.wordpress.com. I’ve clearly annoyed someone (and it’s quite clear whom) enough that all my posts quickly pick up enough of a negative score to be below the threshold. It’s a very effective censoring mechanism and, at this point, I really don’t see any reason why I should ever attempt to post here again. Nice “community”.

• I don’t think you are getting voted down out of censorship. You are getting voted down for, as far as I can tell, four reasons: 1) You don’t explain yourself very well. 2) You repeatedly link to your blog in a borderline-spammish fashion. Examples are here and here. In replies to the second one you were explicitly asked not to blogspam, and yet continued to do so. 3) You’ve insulted people repeatedly (second link above) and personalized discussions. You’ve had posts which had no content other than to insult and complain about the community. At least one of those posts was in response to an actually reasoned statement. See this example: http://lesswrong.com/lw/2bi/open_thread_june_2010_part_2/251o 4) You’ve put non-existent quotes in quotation marks (the second link in the spamming example has an example of this).

• Brief feedback:

Your views are quite a bit like those of Stefan Pernar. http://rationalmorality.info/

However, they are not very much like those of the people here.

I expect that most of the people here just think you are confused and wrong.

• You’re not making any sense to me.

• Dig a bit deeper, and you’ll find too much confusion to hold any argument alive, no matter what the conclusion is supposed to be, correct or not. For that matter, what do you think is the “general point”, and can you reach a point of agreement with Mark on what that is, being reasonably sure you both mean the same thing?

• Vladimir, all you’ve presented here is slanderous dart-throwing with absolutely no factual backing whatsoever. Your intellectual laziness is astounding. Any idea that you can’t understand immediately has “too much confusion”, as opposed to “too much depth for Vladimir to intuitively understand after the most casual perusal”. This is precisely why I consider this forum to frequently deserve the tagline “and LessRight As Well!” and often write it off as a complete waste of time. FAIL!

• Vladimir, all you’ve presented here is slanderous dart-throwing with absolutely no factual backing whatsoever.

I state my conclusion and hypothesis, for how much evidence that’s worth. I understand that it’s impolite on my part to do that, but I suspect that JoshuaZ’s agreement falls under some kind of illusion of transparency, hence the request for greater clarity in judgment.

• Yeah, ok. After rereading it, I’m inclined to agree. I think I was projecting my own doubts about CEV-type approaches onto the article (namely that I’m not convinced that a CEV is actually meaningful or well-defined). And looking again, they don’t seem to be what the person here is talking about. It seems like at least part of this is about the need for punishment to exist in order for a society to function, and the worry that an AI will prevent that. And rereading that and putting it in my own words, it sounds pretty silly if I’m understanding it, which suggests I’m not. So yeah, this article needs clarification.

• namely that I’m not convinced that a CEV is actually meaningful or well-defined

Yes, CEV needs work; it’s not technical, and it’s far from clear that it describes what we should do, although the essay does introduce a number of robust ideas and warnings about seductive failure modes.

Among the more obvious problems with Mark’s position: “slavery” and “true morality without human bias”. These seem to reflect confusion about free will and metaethics.

• I think the analogy is something like this: imagine if you were able to make a creature identical to a human except that the greatest desire it had was to serve actual humans. Would that morally be akin to slavery? I think many of us would say yes. So is there a similar issue if one programs a sentient non-human entity under similar restrictions?

• Taboo “slavery” here; it’s a label that masks clear thinking. If making such a creature is slavery, it’s a kind of slavery that seems perfectly fine to me.

• Voted up for the suggestion to taboo “slavery”. Not an endorsement of the opinion that it is a perfectly fine kind of slavery.

• Ok. So is it ethical to engineer a creature that is identical to a human but desires primarily just to serve humans?

• If that’s your unpacking, it is different from Mark’s, which is “my definition of slavery is being forced to do something against your best interest”. From such a divergent starting point it is unlikely that conversation will serve any useful purpose.

To answer Mark’s actual points we will further need to unpack “force” and “interest”.

Mark observes—rightly, I think—that the program of “Friendly AI” consists of creating an artificial agent whose goal structure would be given by humans, and whose goal structure would be subordinated to the satisfaction of human preferences. The word “slavery” serves as a boo light to paint this program as wrongheaded.

The salient point seems to be that not all agents with a given goal structure are also agents of which it can be said that they have interests. A thermostat can be said to have a goal—align a perceived temperature with a reference (or target) temperature—but it cannot be said to have interests. A thermostat is “forced” to aim for the given temperature whether it likes it or not, but since it has no likes or dislikes to be considered, we do not see any moral issue in building a thermostat.

The underlying intuition Mark appeals to is that anything smart enough to be called an AI must also be “like us” in other ways—among others, it must experience self-awareness, must feel emotions in response to seeing its plans advanced or obstructed, and must be the kind of being that can be said to have interests.

So Mark’s point as I understand it comes down to: “the Friendly AI program consists of creating an agent much like us, which would therefore have interests of its own, which we would normally feel compelled to respect, except that we would impose on this agent an artificial goal structure subservient to the goals of human beings”.

There is a contradiction there if you accept the intuition that AIs are necessarily persons.

• I’m not sure I see a contradiction in that framing. If we’ve programmed the AI, then its interests precisely align with ours, if it really is an FAI. So even if one accepts the associated intuitions of the AI as a person, it doesn’t follow that there’s a contradiction here.

(Incidentally, if different people are getting such different interpretations of what Mark meant in this essay, I think he’s going to need to rewrite it to clarify what he means. Vladimir’s earlier point seems pretty strongly demonstrated.)

• If we’ve pro­grammed the AI then its in­ter­ests pre­cisely al­ign with ours if it re­ally is an FAI.

But goals aren’t nec­es­sar­ily the same as in­ter­ests. Could we build a com­puter smart enough to, say, brew a “perfect” cup of tea for any­one who asked for one? And build it so that to brew this perfect cup would be its great­est de­sire.

That might re­quire true AI, given the com­plex­ity of grow­ing and har­vest­ing tea plants, prepar­ing tea leaves, and brew­ing—all with a deep un­der­stand­ing of the hu­man taste for tea. The in­tu­tion is that this su­per-smart AI would “chafe un­der” the ar­tifi­cial re­stric­tions we im­posed on its goal struc­ture, that it would have “bet­ter things to do” with its in­tel­li­gence than to brew a nice cuppa, and re­strict­ing it­self to do that would be against its “best in­ter­ests”.

• I’m not sure I follow. From where do these better things to do arise? If it wants to do other things (for some value of want), wouldn’t it just do those?

• Of course, but some people have the (incorrect) intuition that a super-smart AI would be like a super-smart human, and disobey orders to perform menial tasks. They’re making the mistake of thinking all possible minds are like human minds.

• But no, it would not want to do other things, even though it should do them. (In reality, what it would want is contingent on its cognitive architecture.)

• …but desires primarily to calculate digits of pi? …but desires primarily to paint waterlilies? …but desires primarily to randomly reassign its primary desire every year and a day? …but accidentally desires primarily to serve humans?

I’m having difficulty determining which part of this scenario you think has ethical relevance. ETA: Also, I’m not clear if you are dividing all acts into ethical vs. unethical, or if you are allowing a category "not unethical".

• Only if you give it the opportunity to meet its desires. Although one concern might be that with many such perfect servants around, if they looked like normal humans, people might get used to ordering human-looking creatures around, and stop caring about each other’s desires. I don’t think this is a problem with an FAI, though.

• Moral antirealism. There is no objective answer to this question.

• Not analogous, but related and possibly relevant: many humans in the BDSM lifestyle desire to be the submissive partner in 24/7 power exchange relationships. Are these humans sane; are they "OK"? Is it ethical to allow this kind of relationship? To encourage it?

• TBH I think this may muddy the waters more than it clears them. When we’re talking about human relations, even those as unusual as 24/7, we’re still operating in a field where our intuitions have a much better grip than they will trying to reason about the moral status of an AI.

• FAI (assuming we managed to set its preferences correctly) admits a general counterargument against any implementation decisions in its design being seriously incorrect: FAI’s domain is the whole world, and FAI is part of that world. If it’s morally bad to have FAI in the form it was initially constructed, then, barring some penalty, the FAI will change its own nature so as to make the world better.

In this particular case, the suggested conflict is between what we prefer to be done with things other than the FAI (the "serving humanity" part), and what we prefer to be done with FAI itself (the "slavery is bad" part). But FAI operates on the world as a whole, and things other than FAI are not different from FAI itself in this regard. Thus, with the criterion of human preference, FAI will decide what is the best thing to do, taking into account both what happens to the world outside of itself, and what happens to itself. Problem solved.

• I answered precisely this question in the second half of http://becominggaia.wordpress.com/2010/06/13/mailbag-2b-intent-vs-consequences-and-the-danger-of-sentience/. Please join us over there. Vladimir and his cronies (assuming that they aren’t just him under another name) are successfully spiking all of my entries over here (and, at this point, I’m pretty much inclined to leave here and let him be happy that he’s "won", the fool).

• By any chance are you trying to troll? I just told you that you were being downvoted for blogspamming, insulting people, and unnecessary personalization. Your focus on Vladimir also manages to hit two out of three of these and comes across as combative and irrational. Even if this weren’t LW, where people are more annoyed by irrational argumentation styles, people would be annoyed by a non-regular going out of their way to personally attack a regular. This would be true in any internet forum, and all the more so when those attacks are completely one-sided.

And having now read what you just linked to, I have to say that it fits well with another point I made in my earlier remark to you: you are being downvoted in a lar