# Zetetic

Karma: 535
• I see your point here, although I will say that decision science is ideally a major component in the skill set for any person in a management position. That being said, what's being proposed in the article here seems to be distinct from what you're driving at.

Managing cognitive biases within an institution doesn't necessarily overlap with the sort of measures being discussed. A wide array of statistical tools and metrics isn't directly relevant to, e.g., battling the sunk-cost fallacy or NIH. More relevant to that problem set would be a strong knowledge of known biases and good training in decision science and psychology in general.

That isn't to say that these two approaches can't overlap; they likely could. For example, stronger statistical analysis does seem relevant, in a very straightforward way, to the issue of over-optimistic projections you bring up.

From what I gather, you'd want a CRO with a complementary knowledge base in relevant areas of psychology alongside more standard risk analysis tools. I definitely agree with that.

• This is part of why I tend to think that, for the most part, these works aren't (or if they are, they shouldn't be) aimed at de-converting the faithful (who have already built up a strong memeplex to fall back on), but rather at interception and prevention for young potential converts and people who are on the fence, particularly college kids who have left home and are questioning their belief structure.

The side effect is that something that is marketed well towards this group (imo, this is the case with "The God Delusion") comes across as shocking and abrasive to the older converts (and this also plays into its marketability to a younger audience). So there's definitely a trade-off, but getting the numbers right to determine the actual payoff is difficult.

I think a more effective way to increase secular influence is through lobbying. In the U.S. there is a great need for a well-funded secular lobby to keep things in check. I found one such lobby, but I haven't had the chance to look into it yet.

• I've met both sorts: people turned off by "The God Delusion" who really would have benefited from something like "The Greatest Show on Earth", and people who really seemed to come around because of it (both irl and in a wide range of fora). The unfortunate side effect of successful conversion, in my experience, has been that people who are converted by rhetoric frequently begin to spam similar rhetoric, ineptly, resulting mostly in increased polarization among their friends and family.

It seems pretty hard to control for enough factors to see what kind of impact popular atheist intellectuals actually have on de-conversion rates and belief polarization (much less the specific subset of abrasive works), and I can't find any clear numbers on it. Seems like opinion mining Facebook could potentially be useful here.

• First, I do have a couple of nitpicks:

> Why evolve a disposition to punish? That makes no sense.

That depends. See here, for instance.

> Does it make sense to punish somebody for having the wrong genes?

This depends on what you mean by "punish". If by "punish" you mean socially ostracize and disallow mating privileges, I can think of situations in which it could make evolutionary sense, although as we no longer live in our ancestral environment and have since developed a complex array of cultural norms, it no longer makes moral sense.

In any event, what you've written is pretty much orthogonal to what I've said; I'm not defending what you're calling evolutionary ethics (nor am I aware of indicating that I hold that view; if anything, I took it to be a bit of a strawman). Descriptive evolutionary ethics is potentially useful, but normative evolutionary ethics commits the naturalistic fallacy (as you've pointed out), and I think the Euthyphro argument is fairly weak in comparison to that point.

The view you're attacking doesn't seem to take into account the interplay between genetic, epigenetic and cultural/memetic factors in how moral intuitions are shaped and can be shaped. It sounds like a pretty flimsy position, and I'm a bit surprised that any ethicist actually holds it. I would be interested if you're willing to cite some people who currently hold the viewpoint you're addressing.

> The reason that the Euthyphro argument works against evolutionary ethics because—regardless of what evolution can teach us about what we do value, it teaches us that our values are not fixed.

Well, really it's more neuroscience that tells us that our values aren't fixed (along with how the valuation works). It also has the potential to tell us to what degree our values are fixed at any given stage of development, and how to take advantage of the present degree of malleability.

> Because values are not genetically determined, there is a realm in which it is sensible to ask about what we should value, which is a question that evolutionary ethics cannot answer.

Of course; under your usage of evolutionary ethics this is clearly the case. I'm not sure how this relates to my comment, however.

> Praise and condemnation are central to our moral life precisely because these are the tools for shaping learned desires

I agree that it's pretty obvious that social reinforcement is important because it shapes moral behavior, but I'm not sure if you're trying to make a central point to me, or just airing your own position regardless of the content of my post.

• I'm not sure if it's elementary, but I do have a couple of questions first. You say:

> what each of us values to themselves may be relevant to morality

This seems to suggest that you're a moral realist. Is that correct? I think that most forms of moral realism tend to stem from some variant of the mind projection fallacy; in this case, because we value something, we treat it as though it has some objective value. Similarly, because we almost universally hold something to be immoral, we hold its immorality to be objective, or mind-independent, when in fact it is not. The morality or immorality of an action has less to do with the action itself than with how our brains react to hearing about or seeing the action.

Taking this route, I would say that not only are our values relevant to morality, but the dynamic system comprising all of our individual value systems is an upper bound on what can be in the extensional definition of "morality", if "morality" is to make any sense as a term. That is, if something is outside of what any of us can ascribe value to, then it is not moral subject matter; and furthermore, what we can and do ascribe value to is dictated by neurology.

Not only that, but there is a well-known phenomenon that complicates naive (without input from neuroscience) moral decision making: the distinction between liking and wanting. This distinction crops up in part because the way we evaluate possible alternatives is lossy: we can only use a very finite amount of computational power to try to predict the effects of a decision or of obtaining a goal, and we have to use heuristics to do so. In addition, human valuation is multi-layered: we have at least three valuation mechanisms, and their interaction isn't yet fully understood. Also see Glimcher et al., "Neuroeconomics and the Study of Valuation". From that article:

> 10 years of work (that) established the existence of at least three interrelated subsystems in these brain areas that employ distinct mechanisms for learning and representing value and that interact to produce the valuations that guide choice (Dayan & Balleine, 2002; Balleine, Daw, & O'Doherty, 2008; Niv & Montague, 2008).

The mechanisms for choice valuation are complicated, and so are the constraints on human ability in decision making. In evaluating whether an action was moral, it's imperative to avoid making the criterion "too high for humanity".

One last thing I'd point out has to do with the argument you link to, because you do seem to be inconsistent when you say:

> What we intuitively value for others is not.

Relevant to morality, that is. The reason is that the argument cited rests entirely on intuition about what others value. The hypothetical species in the example is not a human species, but a slightly different one.

I can easily imagine an individual from a species described along the lines of the author's hypothetical reading the following:

> If it is good because it is loved by our genes, then anything that comes to be loved by the genes can become good. If humans, like lions, had a disposition to not eat their babies, or to behead their mates and eat them, or to attack neighboring tribes and tear their members to bits (all of which occurs in the natural kingdom), then these things would not be good. We could not brag that humans evolved a disposition to be moral because morality would be whatever humans evolved a disposition to do.

and being horrified at the thought of such a bizarre and morally bankrupt group. I strongly recommend you read the sequence I linked to in the quote if you haven't. It's quite an interesting (and relevant) short story.

So, I have a bit more to write, but I'm short on time at the moment. I'd be interested to hear if there is anything you find particularly objectionable here, though.

• I initially wrote up a bit of a rant, but I just want to ask a question for clarification:

Do you think that evolutionary ethics is irrelevant because the neuroscience of ethics and neuroeconomics are much better candidates for understanding what humans value (and therefore for guiding our moral decisions)?

I'm worried that you don't, because the argument you supplied can be augmented to apply there as well: just replace "genes" with "brains". If your answer is a resounding 'no', I have a lengthy response. :)

• As I understand it, because T proves in n symbols that "T can't prove a falsehood in f(n) symbols", taking the specification of R (program length) we could do a formal verification proof that R will not find any proofs, as R only finds a proof if T can prove a falsehood within g(n) < exp(g(n)) << f(n) symbols. So I'm guessing that the slightly-more-than-n-symbols-long is on the order of:

n + Length(proof in T that R won't print, given the starting true statement that "T can't prove a falsehood in f(n) symbols")

This would vary somewhat with the length of R and with the choice of T.

• Typically you make a "sink" post with these sorts of polls.

ETA: BTW, I went for the paper. I tend to skim blogs and then skip to the comments. I think the comments make the information content on blogs much more powerful, however.

• You can donate it to my startup instead; our board of directors has just unanimously decided to adopt this name. PayPal is fine. Our mission is developing heuristics for personal income optimization.

• > Bob's definition contains my definition

Well, here's what gets me. The idea is that you have to create Bob as well, and you had to hypothesize his existence in at least some detail to recognize the issue. If you do not need to contain Bob's complete definition, then it isn't any more transparent to me. In this case, we could include worlds with any sufficiently-Bob-like entities that can create you and so play a role in the deal. Should you pre-commit to make a deal with every sufficiently-Bob-like entity? If not, are there sorts of Bob-agents that make the deal favorable? Limiting to these sub-classes, is a world that contains your definition more likely than one that contains a favorable Bob-agent? I'm not sure.

So the root of the issue that I see is this: your definition is already totally fixed, and if you completely specify Bob, the converse of your statement holds, and the worlds seem to have roughly equal K-complexity. Otherwise, Bob's definition potentially includes quite a bit of stuff, especially if the only parameters are that Bob is an arbitrary agent that fits the stipulated conditions. The less complete your definition of Bob is, the more general your decision becomes; the more complete your definition of Bob is, the more the complexity balances out.

EDIT: Also, we could extend the problem some more if we consider Bob as an agent that will take into account an anti-You that will create Bob and torture it for all eternity if Bob creates you. If we adjust to that new set of circumstances, the issue I'm raising still seems to hold.

• I'm not sure I completely understand this, so instead of trying to think about it directly I'm going to try to formalize it and hope that (right or wrong) my attempt helps with clarification. Here goes:

Agent A generates a hypothesis about an agent, B, which is analogous to Bob. B will generate a copy of A in any universe that B occupies iff A isn't there already and A would do the same. Agent B lowers the daily expected utility for agent A by X. Agent A learns that it has the option to make agent B; should A have pre-committed to B's deal?

Let Y be the daily expected utility without B. Then Y − X = EU post-B. The utility to agent A in a non-B-containing world is

$\sum_{i=0}^{t} Y d(i)$

where d(i) is a time-dependent discount factor (possibly equal to 1) and t is the lifespan of the agent in days. Obviously, if $X \geq Y$ the agent should not have pre-committed (and if X is negative or 0 the agent should/might-as-well pre-commit, but then B would not be a jerk).

Otherwise, pre-commitment seems to depend on multiple factors. A wants to maximize its summed utility over possible worlds, but I'm not clear on how this calculation would actually be made.

Just off the top of my head: if A pre-commits, every world in which A exists and B does not, but A has the ability to generate B, drops from a daily utility of Y to one of Y − X. On the other hand, every world in which B exists but A does not, but B can create A, goes from 0 to Y − X utility. Let's assume a finite and equal number of both sorts of worlds for simplicity. Then, pairing up each type of world, we go from an average daily utility of Y/2 to Y − X. So we would probably at least want it to be the case that $Y/2 \leq Y - X$, i.e. $Y \geq 2X$.

So then the tentative answer would be "it depends on how much of a jerk Bob really is". The rule of thumb from this would indicate that you should only pre-commit if Bob reduces your daily expected utility by less than half. This was under the assumption that we could just "average out" the worlds where the roles are reversed. Maybe this could be refined with some sort of K-complexity consideration, but I can't think of any obvious way to do that (that actually leads to a concrete calculation, anyway).
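The "pair up the worlds" estimate can be sketched numerically. This is just my own toy illustration of the arithmetic under the simplifying assumptions stated above (equal numbers of both sorts of worlds, flat discount factor); the function names are mine, not anything standard:

```python
# Toy sketch of the pairing argument: one world where A exists and could
# create B (baseline daily utility Y for A), paired with one world where
# B exists and could create A (baseline 0 for A, who doesn't exist there).

def avg_daily_utility(Y, X, precommit):
    """Average daily utility for A over one matched pair of worlds."""
    if precommit:
        # Both worlds end up containing A and B: A gets Y - X in each.
        return ((Y - X) + (Y - X)) / 2
    # No pre-commitment: A keeps Y in its own world, never exists in B's.
    return (Y + 0) / 2

def should_precommit(Y, X):
    # Favorable exactly when Y - X >= Y/2, i.e. X <= Y/2.
    return avg_daily_utility(Y, X, True) >= avg_daily_utility(Y, X, False)
```

So with Y = 10, a Bob costing X = 4 is worth pre-committing to, while X = 6 is not, matching the "less than half" rule of thumb.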

Also, this isn't quite like the Prometheus situation, since Bob is not always your creator. Presumably you're in a world where Bob doesn't exist; otherwise you wouldn't have any obligation to use the Bob-maker Omega dropped off even if you did pre-commit. So I don't think the same reasoning applies here.

> An essential part of who Bob the Jerk is is that he was created by you, with some help from Omega. He can't exist in a universe where you don't, so the hypothetical bargain he offered you isn't logically coherent.

I don't see how this can hold. Since we're reasoning over all possible computable universes in UDT, if Bob can be partially simulated by your brain, a more fleshed-out version (fitting the stipulated parameters) should exist in some possible worlds.

Alright, well, that's what I've thought of so far.

• > SPARC for undergrads is in planning, if we can raise the funding.

Awesome, glad to hear it!

> See here.

Alright, I think I'll sign up for that.

• Anything for undergrads? It might be feasible to do a camp at the undergraduate level. Long term, doing an REU-style program might be worth considering. NSF grants are available to non-profits, and it may be worth at least looking into how SIAI might get a program funded. This would likely require some research, someone who is knowledgeable about grant writing, and possibly some academic contacts. Other than that I'm not sure.

In addition, it might be beneficial to identify skill sets that are likely to be useful for SI research, for the benefit of those who might be interested. What skills/specialized knowledge could SI use more of?

• My bigger worry is more along the lines of "What if I am useless to the society in which I find myself and have no means to make myself useful?" Not a problem in a society that will retrofit you with the appropriate augmentations/upload you etc., and I tend to think that is more likely than not. But what if, say, the Alcor trust gets us through a half-century-long freeze and we are revived, but things have moved more slowly than one might hope, yet fast enough to make any skill sets I have obsolete? Well, if the expected utility of living is sufficiently negative, I could kill myself and it would be as if I hadn't signed up for cryonics in the first place, so we can chalk that up as a (roughly) zero-utility situation. So in order to really be an issue, I would have to be in a scenario where I am not allowed to kill myself or be re-frozen etc. Now, if I am not allowed to kill myself in a net-negative-utility situation (I Have No Mouth and I Must Scream), that is a worst-case scenario, and it seems exceedingly unlikely (though I'm not sure how you could get decent bounds on that).

So my quick calculation would be something like: P("expected utility of living is sufficiently negative upon waking up") × P("I can't kill myself" | "expected utility of living is sufficiently negative upon waking up") = P("cryonics is not worth it" | "cryonics is successful")

It's difficult to justify not signing up for cryonics if you accept that it is likely to work in an acceptable form (this is a separate calculation). AFAICT there are many more foreseeable net-positive or (roughly) zero-utility outcomes than foreseeable net-negative-utility outcomes.
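For concreteness, the quick calculation above can be sketched with made-up numbers. Both probabilities here are purely hypothetical placeholders of my own choosing, not estimates from the comment:

```python
# Hypothetical illustration of the product above; the inputs are
# placeholder values, not actual estimates of anything.

def p_cryonics_not_worth_it(p_negative_on_waking, p_cant_exit_given_negative):
    """P(cryonics is not worth it | cryonics is successful), as the
    product of the two conditional terms in the comment above."""
    return p_negative_on_waking * p_cant_exit_given_negative

# E.g., a 5% chance of waking into a strongly net-negative situation,
# and a 1% chance of being unable to opt out given that, multiply out
# to a small overall probability (on the order of 5 in 10,000):
p = p_cryonics_not_worth_it(0.05, 0.01)
```

The point of writing it this way is that the second factor does the heavy lifting: unless you think opting out would be forbidden with substantial probability, the product stays tiny.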

• > If I like and want to hug everyone at a gathering except one person, and that one person asks for a hug after I've hugged all the other people and deliberately not hugged them, that's gonna be awkward no matter what norms we have unless I have a reason like "you have sprouted venomous spines".

Out of curiosity, are there any particular behaviors you have encountered at a gathering (or worry you may encounter) that you find off-putting enough to make the hug an issue?

• > essentially erasing the distinction of map and territory

This idea has been implied before, and I don't think it holds water. That this has come up more than once makes me think that there is some tendency to conflate the map/territory distinction with some kind of more general philosophical statement, though I'm not sure what. In any event, the Tegmark level 4 hypothesis is orthogonal to the map/territory distinction. The map/territory distinction just provides a nice way of framing a problem we already know exists.

In more detail:

First, even if you take some sort of Platonic view where we have access to all the math, you still have to properly calibrate your map to figure out what part of the territory you're in. In this case you could think of calibrating your map as applying an appropriate automorphism, so the map/territory distinction is not dissolved.

Second, the first view is wrong, because human brains do not contain or have access to anything approaching a complete mathematical description of the level 4 multiverse. At best a brain will contain a mapping of a very small part of the territory in pretty good detail, and also a relatively vague mapping that is much broader. Brains are not logically omniscient; even given a complete mathematical description of the universe, the derivations are not all going to be accessible to us.

So the map/territory distinction is not dissolved, and in particular you don't somehow overcome the mind projection fallacy, which is a practical (rather than philosophical) issue that cannot be explained away by adopting a shiny new ontological perspective.