This thread is for the dis­cus­sion of Less Wrong top­ics that have not ap­peared in re­cent posts. If a dis­cus­sion gets un­wieldy, cel­e­brate by turn­ing it into a top-level post.

• I’m in­trigued by the idea of try­ing to start some­thing like a PUA com­mu­nity that is ex­plic­itly NOT fo­cussed on se­cur­ing ro­man­tic part­ners, but rather the de­liber­ate prac­tice of gen­eral so­cial skills.

It seems like there’s a fair bit of real knowl­edge in the PUA world, that some of it is quite a good ex­am­ple of ap­plied ra­tio­nal­ity, and that much of it could be ex­tremely use­ful for pur­poses un­re­lated to mat­ing.

I’m won­der­ing:

• if this is an in­ter­est­ing idea to LWers?

• if this is the right venue to talk about it?

• does some­thing similar already ex­ist?

I’m aware that there was some pre­vi­ous con­ver­sa­tion around similar top­ics and their ap­pro­pri­ate­ness to LW, but if there was fi­nal con­sen­sus I missed it. Please let me know if these mat­ters have been deemed in­ap­pro­pri­ate.

• If you want non-PC approaches, there are two communities you could look into: salespeople and con artists. The second one actually has most of the how-to-hack-people’s-minds material. If you want a kinder version, look for it under the title “social engineering”.

• Toast­mas­ters?

Gen­eral so­cial skills are needed in busi­ness, a lot of places teach them and they seem to be quite suc­cess­ful.

• From my limited experience with Toastmasters, it’s very PC and targeted at people of median intelligence—not the thing people here would be looking for. “PUA-like” implies XFrequentist is considering something willing to teach the harsh, condemned truths.

• I went to a Toast­mas­ters ses­sion, and was… un­der­whelmed. Even for pub­lic speak­ing skills, the pro­gram seemed kind of trite. It was more geared to­ward learn­ing the for­mal­ities of meet­ings. You’d prob­a­bly be a bet­ter com­mit­tee chair af­ter fol­low­ing their pro­gram, but I’m not sure you could give a great TED talk or wow po­ten­tial in­vestors.

Carnegie’s pro­gram seems closer to what I had in mind, but I want to repli­cate both the com­mu­nity as­pect and the fo­cus on “field” prac­tice of the PUAs, which I sus­pect is a big part of what makes them so formidable.

• The clubs vary in their stan­dard. I recom­mend you try a few in your area (big cities should have a bunch). For 2 years I used to com­mute 1 hour each way to at­tend Vic­to­ria Quay Toast­mas­ters in Fre­man­tle, it was that good. It was the 3rd club I tried af­ter mov­ing.

• I’ve heard smart peo­ple speak well of Toast­mas­ters. It may be a mat­ter of lo­cal vari­a­tion, or it may be that Toast­mas­ters is very use­ful for get­ting past fear of pub­lic speak­ing and ac­quiring ad­e­quate skills.

• My im­pres­sion could eas­ily be off; I only went to one open house.

It wasn’t all nega­tive. They seemed to have a log­i­cal pro­gres­sion of speech com­plex­ity, and quite a stan­dard­ized pro­cess for giv­ing feed­back. Some of the speak­ers were ex­cel­lent. It was fully bil­in­gual (English/​French), which was nice.

I don’t think it’s what I’m look­ing for, but it’s prob­a­bly okay for some other goals.

• I be­longed to TM for many years and I would still if there was a club near me. I found it great for many rea­sons. But I have to say that you get what you put in. And you get what you want to get. If you want friends and so­cial graces—OK get them. If you want to lose fear of speak­ing—get that. Ig­nore what you don’t want and take what you do.

• I’ve heard smart peo­ple speak well of Toast­mas­ters.

I’ve mostly heard them damn it with faint praises, as be­ing great for pol­ish­ing pre­sen­ta­tion skills, but not be­ing par­tic­u­larly use­ful for any­thing else.

Interestingly enough, of the people I know who are actually professional speakers (in the sense of being paid to talk, either at their own events or other people’s), exactly none of them recommend it. (Even amongst ones who do not sell any sort of speaker training of their own.)

OTOH, I have heard a cou­ple of shout-outs for the Carnegie speak­ing course, but again, this is all just in the con­text of speak­ing… which has lit­tle re­la­tion­ship to gen­eral so­cial skills AFAICT.

• In­ter­est­ing, that jibes* pretty well with my im­pres­sions of Toast­mas­ters.

There are other Carnegie courses than the speak­ing one. This is the one I was think­ing of.

*See com­ment be­low for the dis­tinc­tion be­tween “jives” and “jibes”. It ain’t cool beein’ no jive turkey!

• Nit­pick: “jibes” means “is con­sis­tent with”.

“Jives” means “is talk­ing non­sense” or (ar­chaic) “dances”.

{Tries look­ing it up} Wikipe­dia says “jives” can be a term for Afri­can Amer­i­can Ver­nac­u­lar English. The Ur­ban Dic­tionary gives it a bunch of defi­ni­tions, in­clud­ing both of mine, “jibe”, and forms of Afri­can Amer­i­can speech which in­clude a lot of slang, but not any sort of Afri­can Amer­i­can speech in gen­eral.

On the other hand, the lan­guage may have moved on—I keep see­ing that mis­take (the Ur­ban Dic­tionary im­plies it isn’t a mis­take), and maybe I should give up.

I still re­tain a fond­ness for peo­ple who get it right.

• haha… thanks!

• a lot of places teach them

I’d be in­ter­ested in speci­fics...

• Would such “prac­tice” re­quire a phys­i­cal venue? -- or would an on­line set­ting—maybe even Skype—be suffi­cient?

• That’s a good ques­tion. I don’t know, but I sus­pect a purely on­line set­ting would be ad­e­quate for be­gin­ners, but in­suffi­cient for mas­tery.

What do you think?

• I don’t think you’d have much success mastering non-verbal communication through Skype.

• Gen­er­ally, I agree. There’s a time and a place for both on­line and offline venues.

Ideally, you’d want a very large number of participants such that, during sessions, most of your peers are new and the situation is somewhat anonymous/random. If your sessions are with the same old people, these people will become well known—perhaps friends—and the social simulation won’t be very meaningful. Who knows... maybe there’s a way to piggyback on the Chatroulette concept?!

• I don’t know.

• Extremely; yes; not to my knowledge.

• A lot of com­pa­nies con­duct anony­mous “360 re­view” pro­cesses which veer into this ter­ri­tory to some de­gree.

Also, sev­eral busi­ness schools con­duct lead­er­ship labs. In fact, a large chunk of the busi­ness school ex­pe­rience is re­ally about so­cial groom­ing /​ learn­ing how to net­work /​ and so forth.

So do we have any trac­tion for this idea? How about a meetup?

• Thanks, those are use­ful leads. I’ve done the 360 re­view thing but hadn’t con­nected it to this idea.

It seems to have got­ten a good amount of in­ter­est. I’ve got a draft post go­ing that still needs some pol­ish, but I should hope­fully be able to get it finished this week­end. If all goes to plan some sort of meetup should fol­low.

Any sug­ges­tions on lo­gis­tics? I’m not at all sure what the best way to or­ga­nize this is, I’d ap­pre­ci­ate any thoughts.

• I think you’re prob­a­bly cor­rect in your pre­sump­tions. I find it an in­ter­est­ing idea and would cer­tainly fol­low any fur­ther dis­cus­sion.

• 2 Aug 2010 12:44 UTC
20 points

The game of Mo­ral High Ground (re­pro­duced com­pletely be­low):

At last it is time to re­veal to an un­wit­ting world the great game of Mo­ral High Ground. Mo­ral High Ground is a long-play­ing game for two play­ers. The fol­low­ing origi­nal rules are for one M and one F, but feel free to mod­ify them to suit your player setup:

1. The ob­ject of Mo­ral High Ground is to win.

2. Play­ers pro­ceed to­wards vic­tory by scor­ing MHGPs (Mo­ral High Ground Points). MHGPs are scored by tak­ing the con­spicu­ously and/​or pas­sive-ag­gres­sively vir­tu­ous course of ac­tion in any situ­a­tion where cul­pa­bil­ity is in dis­pute.

(For ex­am­ple, if player M ar­rives late for a date with player F and player F sweetly ac­cepts player M’s apol­ogy and says no more about it, player F re­ceives the MHGPs. If player F gets an­gry and player M bears it humbly, player M re­ceives the MHGPs.)

3. Point values are not fixed, vary from situation to situation, and are usually set by the person claiming them. So, in the above example, forgiving player F might collect +20 MHGPs, whereas penitent player M might collect only +10.

4. Men’s MHG scores reset every night at midnight; women’s roll over every day for all time. Therefore, it is statistically highly improbable that a man can ever beat a woman at MHG, as the game ends only when the relationship does.

5. Having a baby gives a woman +10,000 MHG points over the man involved and both parents +5,000 MHG points over anyone without children.
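In the spirit of over-engineering the joke, the scoring rules above can be sketched in a few lines of Python (a toy model; the class and method names are my own invention):

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    resets_at_midnight: bool  # men's scores reset nightly; women's roll over
    points: int = 0

    def claim(self, mhgp: int) -> None:
        # Point values are set by the person claiming them.
        self.points += mhgp

    def midnight(self) -> None:
        if self.resets_at_midnight:
            self.points = 0

m = Player("M", resets_at_midnight=True)
f = Player("F", resets_at_midnight=False)
f.claim(20)  # sweetly accepting an apology
m.claim(10)  # bearing the anger humbly
for p in (m, f):
    p.midnight()
print(m.points, f.points)  # 0 20
```

As the rules predict, M is back to zero by morning while F’s total persists indefinitely.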

My ex-bf and I de­vel­oped Mo­ral High Ground dur­ing our re­la­tion­ship, and it has given us years of hilar­ity. Straight cou­ple­dom in­volves so much petty point-scor­ing any­way that we both found we were already ex­perts.

By mak­ing a pri­vate joke out of in­cred­ibly de­struc­tive gen­der pro­gram­ming, MHG re­leases a great deal of re­la­tion­ship stress and en­courages good be­hav­ior in oth­er­wise try­ing situ­a­tions, as when he once cy­cled all the way home and back to re­trieve some for­got­ten con­cert tick­ets “be­cause I couldn’t let you have the Mo­ral High Ground points”. We are still the best of friends.

Play and en­joy!

From Metafilter

• One of the first com­ments is some­thing I’ve been say­ing for a while, about how to ad­mit that you were wrong about some­thing, in­stead of cling­ing to a bro­ken opinion out of stub­born pride:

Try to make it a per­sonal policy to prove your­self WRONG on oc­ca­sion. And get ex­cited about it. Real­iz­ing you’ve been wrong about some­thing is a sure sign of growth, and growth is ex­cit­ing.

The key is to ac­tu­ally en­joy be­com­ing less wrong, and to take pride in ad­mit­ting mis­takes. That way it doesn’t take willpower, which makes ev­ery­thing so much eas­ier.

• But ap­par­ently it still wasn’t enough to keep them to­gether...

• Not all re­la­tion­ships need to last for­ever, and it’s not nec­es­sar­ily a failure if one doesn’t.

• But ap­par­ently it still wasn’t enough to keep them to­gether...

Yoreth may sub­tract 50 MHG points from hege­mon­i­con but also loses 15 him­self.

I was wondering if it would be a good idea to offer a download of LW, or at least the Sequences and the Wiki, in the manner that Wikipedia provides one.

The idea be­hind it is to have a re­dun­dant backup in case of some catas­tro­phe, for ex­am­ple if the same hap­pens to EY that hap­pened to John C. Wright. It could also provide the op­tion to read LW offline.

Every so often, people derisively say to me “Oh, and you assume you’d never convert to religion then?” I always reply “I absolutely do not assume that; it might happen to me; no-one is immune to mental illness.”

• Tri­cy­cle has the data. Also if an event of JCW mag­ni­tude hap­pened to me I’m pretty sure I could beat it. I know at least one ra­tio­nal­ist with in­tense re­li­gious ex­pe­riences who suc­cess­fully man­aged to ask ques­tions like “So how come the di­v­ine spirit can’t tell me the twen­tieth digit of pi?” and dis­count them.

• Ac­tu­ally, you have to be sure that you wouldn’t con­vert if you had John Wright’s ex­pe­riences, oth­er­wise Au­mann’s agree­ment the­o­rem should cause you to con­vert already, sim­ply be­cause John Wright had the ex­pe­riences him­self—as­sum­ing you wouldn’t say he’s ly­ing. I ac­tu­ally know some­one who con­verted to re­li­gion on ac­count of a sup­posed mir­a­cle, who said af­ter­ward that since they in fact knew be­fore con­vert­ing that other peo­ple had seen such things hap­pen, they should have con­verted in the first place.

Although I have to ad­mit I don’t see why the di­v­ine spirit would want to tell you the 20th digit of pi any­way, so hope­fully there would be a bet­ter ar­gu­ment than that.

• What if you sustained hypoxic brain injury, as JCW may well have done during his cardiac event? (This might also explain why he thinks it’s cool to write BDSM scenes featuring a 16-year-old schoolgirl as part of an ostensibly respectable work of SF, so it’s a pet suspicion of mine.)

• This might also explain why he thinks it’s cool to write BDSM scenes featuring a 16-year-old schoolgirl as part of an ostensibly respectable work of SF, so it’s a pet suspicion of mine.

It would seem he is just writ­ing for Ma­ture Au­di­ences. In this case ma­tu­rity means not just ‘the age at which we let peo­ple read porno­graphic text’ but the kind of ma­tu­rity that al­lows peo­ple to look be­yond their own cul­tural prej­u­dices.

16 is old. Not old enough according to our culture, but there is no reason we should expect a fictional, time-distant culture to have our particular moral or legal prescriptions. It wouldn’t be all that surprising if someone from an actual future time were, when reading the work, to scoff at how prudish a culture would have to be to consider sexualised portrayals of women that age taboo!

Mind you I do see how a hy­poxic brain in­jury could al­ter some­one’s moral in­hi­bi­tions and sen­si­bil­ities in the kind of way you sug­gest. I just don’t in­clude loaded lan­guage in the spec­u­la­tion.

• 16 is old. Not old enough according to our culture, but there is no reason we should expect a fictional, time-distant culture to have our particular moral or legal prescriptions. It wouldn’t be all that surprising if someone from an actual future time were, when reading the work, to scoff at how prudish a culture would have to be to consider sexualised portrayals of women that age taboo!

In­ter­est­ingly, if the book in ques­tion is the one I think it is, it takes place in Bri­tain, where the age of con­sent is, in fact, six­teen.

• Come to think of it, 16 is the age of con­sent here (Aus­tralia—most states) too. I should have used ‘your’ in­stead of ‘our’ in the para­graph you quote! It seems I was just run­ning with the as­sump­tion.

• Although “18 years old” does seem to be a hard-and-fast rule for when you can legally ap­pear in porn ev­ery­where, as far as I know...

• (This might also explain why he thinks it’s cool to write BDSM scenes featuring a 16-year-old schoolgirl as part of an ostensibly respectable work of SF, so it’s a pet suspicion of mine.)

Point of cu­ri­os­ity: Does any­one else still no­tice this sort of thing? I don’t think my gen­er­a­tion does any­more.

• I’ve only read his Golden Age tril­ogy, so if it’s there, then no, to this 50-some­thing it didn’t stand out from ev­ery­thing else that hap­pened. If it’s in some­thing else, I doubt it would. I mean, I’ve read Richard Mor­gan’s ul­tra-vi­o­lent stuff, in­clud­ing the gay mediæ­val-style fan­tasy one, and, well, no.

[ETA: from Google the book in ques­tion ap­pears to be Or­phans of Chaos.]

I could be an out­lier though.

• Well, I’m fe­male. Could be women tend to be more sen­si­tive to that kind of thing.

That said, I wasn’t re­ally plan­ning to start a dis­cus­sion about sex­u­ally ex­plicit por­tray­als of sub-18 teenagers and whether they’re ok, and I doubt I’ll par­ti­ci­pate fur­ther in one. Un­for­tu­nately I don’t own the book, so if any­one is cu­ri­ous about the de­tails of what I was refer­ring to, they’ll have to read Or­phans of Chaos (not that I recom­mend it on its mer­its). I wouldn’t haz­ard a guess as to how much a per­son can be oblivi­ous to (prob­a­bly a lot), but I’d be sur­prised if most peo­ple’s con­scious, ex­am­ined re­ac­tion to the sex­ual con­tent (which is abun­dant and spread through­out the book, though not hard­core) was closer to “That is nor­mal/​A nat­u­ral­is­tic por­trayal of a 16-year-old girl’s sex­ual feel­ings/​Liter­ary en­velope-push­ing” than to “That is weird/​creepy.”

• Eh, you see peo­ple try­ing to “push bound­aries” in “re­spectable” liter­a­ture all the time any­way.

• Cer­tainly there are other ex­pla­na­tions. If you can show me that JCW openly wrote highly sex­u­al­ized por­tray­als of peo­ple be­low the age of con­sent be­fore his re­li­gious ex­pe­rience/​heart at­tack, I will be happy to re­tract.

• Iron Sunrise by Charles Stross and Cowl by Neal Asher feature sex scenes with 16-year-old girls. I don’t remember to what detail, though.

BDSM scenes featuring a 16-year-old schoolgirl

That sounds suspicious indeed, and I would oppose it in most circumstances. That is, if it isn’t just a 16-year-old body or a simulation of a body (yeah, no difference?), and if it isn’t just a description of how bad someone is... within SF you can naturally create exceptional circumstances.

Have you read books by Richard Morgan? The torture scenes in the Takeshi Kovacs novels are some of the most detailed. Virtual reality allows them to load you into the body of a pregnant woman, being raped and having a soldering iron slid up your vagina. And if you die after hours of torture, they just restart the simulation. That’s just one of the scenes from the first book.

• How­ever, if EY con­verted to re­li­gion, he would (in that con­di­tion) as­sert that he had had good rea­sons for do­ing it, i.e. that it was ra­tio­nal. So he would have no rea­son to take down this web­site any­way.

• You can use the wget pro­gram like this: ‘wget -m less­wrong.com’. A database down­load would be eas­ier on the servers though.
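Building on the bare `wget -m` above, a slightly fuller invocation makes the copy actually readable offline (a sketch only; exact flag behavior varies across wget versions, and a full mirror is heavy on the servers):

```shell
# Build the mirror command rather than firing it off immediately.
#   --mirror           long form of -m (recursion + timestamping)
#   --convert-links    rewrite links so the saved pages browse offline
#   --page-requisites  also fetch the CSS/images each page needs
#   --wait=1           pause between requests to go easy on the server
CMD="wget --mirror --convert-links --page-requisites --wait=1 lesswrong.com"
echo "$CMD"  # review it, then run it when ready
```

As the original comment notes, a database download would still be far easier on the servers than any crawl.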

• I sup­port this idea.

But what about copyright issues? What if posts and comments are owned by their writers?

• I would ar­gue that one can­not own the in­for­ma­tion stored on the com­put­ers of other, un­re­lated peo­ple.

I support this idea also. I actually intend to make a service for uploading the content of a forum/blog to an alternate server for backup, but who knows when it will happen.

• WebOffline can grab the whole thing to an iPhone or iPad, formatting preserved. There are similar programs for PC/Mac.

• In his bio over at Over­com­ing Bias, Robin Han­son writes:

I am ad­dicted to “viewquakes”, in­sights which dra­mat­i­cally change my world view.

So am I. I sus­pect you are too, dear reader. I asked Robin how many viewquakes he had and what caused them, but haven’t got­ten a re­sponse yet. But I must know! I need more viewquakes. So I pro­pose we share our own viewquakes with each other so that we all know where to look for more.

I’ll start. I’ve had four ma­jor viewquakes, in roughly chronolog­i­cal or­der:

• (micro)Eco­nomics—Start­ing with a sim­ple ap­prox­i­ma­tion of how hu­mans be­have yields a startlingly effec­tive the­ory in a wide range of con­texts.

• Bayesi­anism—I learned how to think

• Yud­kowskyan/​Humean Me­taethics—Mak­ing the move from Ob­jec­tive the­o­ries of moral­ity to Sub­jec­tively Ob­jec­tive the­o­ries of moral­ity cleared up a large de­gree of con­fu­sion in my map.

• Evolu­tion—This is a two part quake: evolu­tion­ary biol­ogy and evolu­tion­ary psy­chol­ogy. The lat­ter is ex­tremely helpful for ex­plain­ing some of the be­hav­ior that eco­nomic the­ory misses and for un­der­stand­ing the in­puts into eco­nomic the­ory (i.e., prefer­ences).

• I’ve had some dozens of viewquakes, most of them minor, although it’s hard to evaluate them in hindsight now that I take them for granted.

Some are some­what com­mon­place here: Bayesi­anism, map–ter­ri­tory re­la­tions, evolu­tion etc.

One that I always feel should have people shouting Eureka—and when they are not impressed I assume that it is old news to them (though it often isn’t, as I don’t see it reflected in their actions)—is the Curse of Knowledge: it’s hard to be a tapper. I feel that being aware of it dramatically improved my perceptions in conversation. I also feel that if more people were aware of it, misunderstandings would be far less common.

Maybe worth a post some­day.

• I can see how the Curse of Knowledge could be a powerful idea. I will dwell on it for a while—especially the example given about JFK, as an example of a type of application that would be useful in my own life. (To remember to describe things using broad strokes that are universally clear, rather than technical and accurate, in contexts where persuasion and fueling interest are most important.)

For me, one of the main viewquakes of my life was a line I read in a little book of Kahlil Gibran poems:

Your pain is the break­ing of the shell that en­closes your un­der­stand­ing.

It seemed to be a hammer that could be applied to everything. Whenever I was unhappy about something, I thought about the problem a while until I identified a misconception. I fixed the misconception (“I’m not the smartest person in graduate school”; “I’m not as kind as I thought I was”; “That person won’t be there for me when I need them”) by assimilating the truth the pain pointed me towards, and the pain would dissipate. (Why should I expect graduate school to be easy? I’ll just work harder. Kindness is what you actually do, not how you expect you’ll feel. That person is fun to hang out with, but I’ll need to find some closer friends.) After each disappointment, I felt stronger and the problem just bounced off me, without my being in denial about anything.

The “technique” failed me when a good friend of mine died. There was a lot of pain, and I tried to identify the truth that was cutting through, but I couldn’t find one. Where did my friend go? There is a part of my brain, I realized, that simply cannot accept on an emotional level that people are material. I believe that they are (I don’t believe in a soul or an afterlife), but I simply couldn’t connect the essence of my friend with ‘gone’. If there was a truth there, it couldn’t find a place in my mind.

This seems like a tangent... but just to demonstrate it’s not all-powerful.

• Re­mark­able quote, thank you.

Re­minded me of the Anorexic Her­mit Crab Syn­drome:

The key to pur­su­ing ex­cel­lence is to em­brace an or­ganic, long-term learn­ing pro­cess, and not to live in a shell of static, safe medi­ocrity. Usu­ally, growth comes at the ex­pense of pre­vi­ous com­fort or safety. The her­mit crab is a col­or­ful ex­am­ple of a crea­ture that lives by this as­pect of the growth pro­cess (albeit with­out our psy­cholog­i­cal bag­gage). As the crab gets big­ger, it needs to find a more spa­cious shell. So the slow, lum­ber­ing crea­ture goes on a quest for a new home. If an ap­pro­pri­ate new shell is not found quickly, a ter­ribly del­i­cate mo­ment of truth arises. A soft crea­ture that is used to the pro­tec­tion of built-in ar­mor must now go out into the world, ex­posed to preda­tors in all its mushy vuln­er­a­bil­ity. That learn­ing phase in be­tween shells is where our growth can spring from. Some­one stuck with an en­tity the­ory of in­tel­li­gence is like an anorexic her­mit crab, starv­ing it­self so it doesn’t grow to have to find a new shell. —Josh Waitzkin, The Art of Learning

• Sounds like the illu­sion of trans­parency. We’ve got that post around. ;)

On the other hand, the tap­per/​listener game is a very evoca­tive in­stance.

• Cryonics Lottery.

Would it be easier to sign up for cryonics if there were a lottery system? A winner of the lottery could say “Well, I’m not a die-hard cryo-head, but I thought it was interesting so I bought a ticket (which was only $X) and I happened to win, and it’s pretty valuable, so I might as well use it.” It’s a sort of “plausible deniability” that might reduce the social barriers to cryo. The lottery structure might also be able to reduce the conscientiousness barriers—once you’ve won, the lottery administrators (possibly volunteers, possibly funded by a fraction of the lottery) walk you through a “greased path”.

• On a completely serious, if not totally related, note: it would be a lot easier to convince people to sign up for cryonics if the Cryonics Institute’s and/or KrioRus’s websites looked more professional.

• I’m not sure if it would help get uninterested people interested; but I think it would help get interested people signed up if there were a really clear set of individually actionable instructions—perhaps a flowchart, so it can depend on individual circumstances—all found in one place.

• And Rudi Hoffman’s page.

• I doubt it. Signing up for a lottery for cryonics is still suspicious. There is only one payoff, and that is the suspicious thing. No one objects to the ends of lotteries, because we all like money; what is objected to is the lottery as an efficient means of obtaining money (or entertainment). Suppose that the prize were something you and I regard with the same revulsion many feel toward cryonics—child molestation, perhaps. Would you really regard someone buying a ticket as not being quite evil, as not condoning and supporting the eventual rape?

• Who regards cryonics as evil like child molestation? The general public sees cryonics as fraud—something like buying real estate on the moon or waiting for the mothership—and someone paying for it as a gullible fool. For example, look at the discussions when Britney Spears (http://www.freerepublic.com/focus/f-chat/2520762/posts) wanted to be frozen. Lots of derision, no hatred.

• Bad example. People want to make fun of celebrities (especially a community as caustic and “anti-elitist” as the Freepers). She could have announced that she was enrolling in college, or something else similarly common-sensible, and you would still have got a threadful of nothing but cheap jokes. A discussion about “My neighbour / brother-in-law / old friend from high school told me he has decided to get frozen” would be more enlightening.

• Does the fact that my specific example may not be perfect refute my point that mere indirection & chance does not eliminate all criticism, and that this can be understood by merely introspecting one’s intuitions?

• Rather than using an undiluted negative as an example, suppose that there was something more arguable, that might have some positive aspects—sex segregation of schools, for example. Assuming that my overall judgement of sex segregation is negative, if someone pursued sex segregation fiercely and dedicatedly, then my overall negative valuation of their goal would color my judgement of them. If they can plausibly claim to have supported it momentarily on a whim, while thinking about the positive aspects, then there is some insulation between my judgement of the goal and my judgement of the person.

• Letting Go, by Atul Gawande, is a description of typical end-of-life care in the US, and how it can and should be done better. Typical care defaults to taking drastic measures to extend life, even if the odds of success are low and the process is painful. Hospice care, which focuses on quality of life, not only results in more comfort but also either no loss of lifespan or a somewhat longer life, depending on the disease. And it’s a lot cheaper.

The article also describes the long, careful process needed to find out what people really want for the end of their life—in particular, what the bottom line is for them to want to go on living.

This is of interest for Less Wrong not just because Gawande is a solidly rationalist writer, but because a lot of the utilitarian talk here goes in the direction of restraining empathic impulses. Here we have a case where empathy leads to big utilitarian wins, and where treating people as having unified consciousness, if you give it a chance to operate, works out well.

As good as hospices sound, I’m concerned that if they get a better reputation, less competent organizations calling themselves hospices will spring up.

From a utilitarian angle, I wonder if those drastic methods of treatment sometimes lead to effective methods, and if so, whether the information could be gotten more humanely.

• End-of-life regulation is one reason cryonics is suffering as well: without the ability to ensure preservation while the brain is still relatively healthy, the chances diminish significantly. I think it’d be interesting to see cryonics organizations put field offices in countries or states with legal suicide laws.
Here’s a Frontline spe­cial on suicide tourists. • The fram­ing of the end of life is­sue as a gain or a loss as in the mon­key to­ken ex­change prob­a­bly makes a gi­gan­tic differ­ence in the choices made. http://​​less­wrong.com/​​lw/​​2d9/​​open_thread_june_2010_part_4/​​2cnn?c=1 When you feel you are in a des­per­ate situ­a­tion, you will do des­per­ate things and clutch at straws, even when you know those choices are ir­ra­tional. I think this is the mind­set be­hind the clutch­ing at straws that quacks ex­ploit with CAM, as in the Gon­za­lez Pro­to­col for pan­cre­atic can­cer. http://​​www.sci­ence­basedmedicine.org/​​?p=1545 It is ac­tu­ally worse than do­ing noth­ing, worse than do­ing what main stream medicine recom­mends, but be­cause there is the promise of com­plete re­cov­ery (even if it is a false promise), that is what peo­ple choose based on their ir­ra­tional aver­sion to risk. • An ex-English Pro­fes­sor and ex-Cop, Ge­orge Thomp­son, who now teaches a method he calls “Ver­bal Judo”. Very rem­i­nis­cent of Eliezer’s Bayesian Dojo, this is a primer on ra­tio­nal­ist com­mu­ni­ca­tions tech­niques, fo­cus­ing on defen­sive & redi­rec­tion tac­tics. http://​​fora.tv/​​2009/​​04/​​10/​​Ver­bal_Judo_Diffus­ing_Con­flict_Through_Conversation • I wrote up some notes on this, be­cause there’s no tran­script and it’s good in­for­ma­tion. Let’s see if I can get the com­ment syn­tax to co­op­er­ate here. How to win in con­ver­sa­tions, in gen­eral. Never get an­gry. Stay calm, and use com­mu­ni­ca­tion tac­ti­cally to achieve your goals. Don’t com­mu­ni­cate nat­u­rally; com­mu­ni­cate tac­ti­cally. If you get up­set, you are weak­ened. How to deflect. To get past an un­pro­duc­tive and pos­si­bly an­gry con­ver­sa­tion, you need to deflect the un­pro­duc­tive bluster and get down to the heart of things: goals, and how to achieve them. 
Use a sen­tence of the form: “[Ac­knowl­edge what the other guy said], but/​how­ever/​and [in­sert po­lite, goal-cen­tered lan­guage here].” You spring past what the other per­son said, and then re­cast the con­ver­sa­tion in your own terms. Did he say some­thing an­gry, meant to up­set you? Let it run off you like wa­ter, and move on to what you want the con­ver­sa­tion to be about. This dis­em­pow­ers him and puts you in charge. How to mo­ti­vate peo­ple. There’s a se­cret to mo­ti­vat­ing peo­ple, whether they’re stu­dents, co-work­ers, what­ever. To mo­ti­vate some­one, raise his ex­pec­ta­tions of him­self. Don’t put peo­ple down; raise them up. When you want to rep­ri­mand some­one for not liv­ing up to your ex­pec­ta­tions, men­tion the pos­i­tive first. Raise his ex­pec­ta­tions of him­self. Empathy To calm some­body down, or get him to do what you want, em­pa­thy is the key. Em­pa­thy, the abil­ity to see through the eyes of an­other, is one of the great­est pow­ers that hu­mans have. It gives you power over peo­ple, of a kind that they won’t get mad about. Un­der­stand the other guy, and then think for him as he ought to think. The speaker worked as a po­lice officer, so most of the peo­ple he dealt with were un­der the in­fluence of some­thing. Maybe they were drugged, or drunk; maybe they were fright­ened, or out­raged. What­ever it is, it clouds their judge­ment; be the lev­el­headed one and help them think clearly. Em­pa­thy is what you need for this. How to in­ter­rupt some­one. Use the most pow­er­ful sen­tence in the English lan­guage: “Let me see if I un­der­stand what you just said.” It shuts any­body up, with­out piss­ing them off, and they’ll listen. Even if they’re hop­ping mad and were scream­ing their lungs out at you a minute ago, they’ll listen. Use this sen­tence, and then para­phrase what you un­der­stand them as say­ing. When you para­phrase, that lets you con­trol the con­ver­sa­tion. 
You get to put their point of view in your own words, and in doing so, you calm them down and seize control of the conversation.

How to be a good boss. This was a talk at Columbia University’s business school; people came to learn how to be good bosses. And the secret is that if you’re a boss, don’t focus directly on your own career; focus on lifting up the people under you. Do this, and they will lift you up with them. To be powerful in a group setting, you must disappear. Put your own ego aside, don’t worry about who gets the credit, and focus on your goals.

How to discipline effectively. This is his biggest point. The secret of good discipline is to use language disinterestedly. You can show anger, condescension, irritation, etc., OR you can discipline somebody. You can’t do both at the same time. If you show anger when disciplining someone, you give them an excuse to be angry, and you destroy your own effectiveness. Conversely, if you want to express anger, then don’t let punishment even enter the conversation. Keep these separate.

How to deal with someone who says no. There are five stages to this. Try the first one; if it fails, go to the next one, and so on. Usually you won’t have to go past the first one or two.

1. Ask. Be polite. Interrogative tone. “Sir, will you please step out of the car?” This usually works, and the conversation ends here.

2. Tell him why. Declarative tone. This gives you authority, it’s a sign of respect, and it gives the other guy a way of saving face. It builds a context for what you’re asking. If asking failed, explaining usually works. “I see an open liquor bottle in your cup-holder, and I’m required by law to search your vehicle. For our safety, I need you to step out of the car.”

3. Create and present options. There are four secrets for this:

• Voice: friendly and respectful.
• Always list good options first (“You can go home tonight, have dinner with your family, sleep in your own bed.”). Then the bad options (“If you don’t get out of this car, the law says you’re going to jail overnight, and you’ll get your car towed, and they’ll charge you like 300 bucks.”). Then remind him of the good options, to get the conversation back to what you want him to do. (“I just need you to get out of your car, let me have a look around, and we’ll be done in a few minutes.”)

• Be specific. Paint a mental picture for people. Vivid imagery.

• WIIFM: What’s In It For Me? Appeal to the other guy’s self-interest. It’s not about you; it’s about him.

4. Confirm noncompliance. “Is there anything I can say to get you to cooperate, and step out of the car for me, so you don’t go to jail?” Give them a way to save face.

5. Act: disengage or escalate. This is the part where you either give up or get serious. In the “get out of the car” example, this is the part where you arrest him. Very seldom does it get to this stage, if you did the previous stages right.

If you want more on verbal judo, watch the video; he’s a good speaker.

• Thank you for writing this up. The one thing I wondered about was whether the techniques for getting compliance interfere with getting information. For example, what if someone who isn’t consenting to a search is actually right about the law?

• The thing that bothers me about the talk is that most of it makes the assumption that you’re being calm and rational, that you’re right, and that whoever you’re talking to is irrational and needs to be verbally judo’d into compliance. Sometimes that’s the case, but most of the techniques don’t really apply to situations where you’re dealing with another calm, sane person as an equal.

• Thompson is actually ambiguous on the point. Sometimes he’s really clear that what you’re aiming for is compliance.
• Does the talk provide any evidence for the efficacy of the tactics?

• The speaker has a whole career of experience dealing with people who are irrational because they’re drunk, angry, frightened, or some combination of the above. He says this stuff is what he does, and that it works great. That’s anecdotal, but it’s about the strongest kind of anecdotal evidence it’s possible to get. It would be nice if someone did a properly controlled study on this.

• This is good; you should float it as a top-level post.

• Thanks. That was a compact and helpful 90 minutes. The first 30 minutes were OK, but the second 30 were better, and the third was the best. Towards the end I got the impression that he was explaining lessons of the kind that people spend 5 years learning the hard way, and that lots of people never learn for various reasons.

• That sounds really interesting. I wish there were a transcript available!

• There’s an mp3 version available, which sounds just as good at 1.4x speed. And it cuts the 90 minutes down to about an hour.

• Was Kant implicitly using UDT? Consider Kant’s categorical imperative. It says, roughly, that you should act such that you could will your action as a universal law without undermining the intent of the action. For example, suppose you want to obtain a loan for a new car and never pay it back: you want to break a promise. In a world where everyone broke promises, the social practice of promise-keeping wouldn’t exist, and thus neither would the practice of giving out loans. So you would undermine your own ends, and thus, according to the categorical imperative, you shouldn’t get a loan without the intent to pay it back. Another way to put Kant’s position would be that you should choose such that you are choosing for all other rational agents. What does UDT tell you to do?
It says (among other things) that you should choose such that you are choosing for every agent running the same decision algorithm as yourself. It wouldn’t be a stretch to call UDT agents rational. So Kant thinks we should be using UDT! Of course, Kant can’t draw the conclusions he wants to draw, because no human is actually using UDT. But that doesn’t change the decision algorithm Kant is endorsing. Except… Kant isn’t a consequentialist. If the categorical imperative demands something, it demands it no matter the circumstances. Kant famously argued that lying is wrong, period. Even if the fate of the world depends on it. So Kant isn’t really endorsing UDT, but I thought the surface similarity was pretty funny.

• “Kant famously argued that lying is wrong, period. Even if the fate of the world depends on it.” I remember Eliezer saying something similar, though I can’t find it right now (the closest I could find was this). It was something about the benefits of being the kind of person who doesn’t lie, even if the fate of the world is at stake. Because if you aren’t, the minute the fate of the world is at stake is the minute your word becomes worthless.
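The loan example above can be turned into a toy calculation. This is only an illustrative sketch, not an implementation of UDT proper: the two-action “promise game” and its payoffs are invented for illustration, and the “updateless-style” agent is just one that evaluates each action as if every agent running the same algorithm takes it too.

```python
# Toy model of the loan/promise example: a causal reasoner holds everyone
# else's behavior fixed, while a UDT-flavored reasoner treats its choice as
# the choice of all agents running the same decision algorithm.

def payoff(my_action, everyone_action):
    # Hypothetical payoffs: loans only exist if promise-keeping is the norm.
    if everyone_action == "break":
        return 0  # the practice of lending has collapsed; no loan to exploit
    # If loans exist, defaulting (3) beats dutifully repaying (2).
    return 3 if my_action == "break" else 2

def causal_choice():
    # Causal reasoning: treat everyone else as fixed promise-keepers.
    return max(["keep", "break"], key=lambda a: payoff(a, "keep"))

def updateless_style_choice():
    # Updateless-style reasoning: my output IS the output of all copies,
    # so evaluate each action as if it were willed as a universal law.
    return max(["keep", "break"], key=lambda a: payoff(a, a))

print(causal_choice())            # the loan-defaulter's reasoning
print(updateless_style_choice())  # the categorical-imperative answer
```

The point of the sketch is just that the two reasoning styles diverge on exactly the case Kant discusses: holding others fixed makes promise-breaking look profitable, while choosing “for all rational agents” makes it self-undermining.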
So a consequentialist system implies basically deontological rules once human nature is taken into account. Once again, this is all from my memory, so I could be wrong.

• Although Eliezer didn’t put it precisely in these terms, he was sort of suggesting that if one could self-modify in such a way that it became impossible to break a certain sort of absolutely binding promise, it would be good to modify oneself in that way, even though it would mean that if the situation actually came up where you had to break the promise or let the world perish, you would have to let the world perish.

• Drescher has some important things to say about this distinction in Good and Real. What I got out of it is that the CI is justifiable on consequentialist or self-serving grounds, so long as you relax the constraint that you can only consider the causal consequences (or “means-end links”) of your decisions, i.e., things that happen “futureward” of your decision. Drescher argues that specifically ethical behavior is distinguished by its recognition of these “acausal means-end links”, in which you act for the sake of what would be the case if-counterfactually you would make that decision, even though you may already know the result. (Though I may be butchering it; it’s tough to get my head around the arguments.) And I saw a parallel between Drescher’s reasoning and UDT, as the former argues that your decisions set the output of all similar processes to the extent that they are similar.

• I thought Kant sounded a lot more like TDT than UDT. Or was that what you meant?

• I’m not familiar enough with Pearl’s formalism to really understand TDT; or at least that’s why I haven’t really dived into TDT yet. I’d love to hear why you think Kant sounds more like TDT, though. I suspect it has something to do with considering counterfactuals.
• I’m not familiar at all with Pearl’s formalism. But from what I see on this site, I gather that the key insight of updateless decision theory is to maximize utility without conditioning on information about what world you’re in, and the key insight of timeless decision theory is what you’re describing (Eliezer summarizes it as “Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.”)

• I think Eliezer’s summary is also a fair description of UDT. The difference between UDT and TDT appears to be subtle, and I don’t completely understand it. From what I can tell, UDT simply chooses in the way Eliezer describes, completely ignoring any updating process. TDT chooses this way as a result of how it reasons about counterfactuals. Somehow, TDT’s counterfactual reasoning causes it to choose slightly differently from UDT, but I’m not sure why at this point.

• I found TobyBartels’s recent explanation of why he doesn’t want to sign up for cryonics a useful lesson in how different people’s goals in living a long time (or not) can be from mine. Now I am wondering if maybe it would be a good idea to state some of the reasons people would want to wake up 100 years later if hit by a bus.
Can’t say I’ve been around here very long, but it seems to me this has been assumed as some sort of “common sense”. Is that accurate? I was wondering if other people’s reasons for signing up / intending to sign up (I am not currently signed up and probably will not get around to it for several years) also differed interestingly from mine. Or is this too off-topic? As for me, I would think the obvious reason is what Hilbert said: “If I were to awaken after having slept for a thousand years, my first question would be: Has the Riemann hypothesis been proven?” Finding yourself in the future means you now have the answers to a lot of previously open problems! As well as getting to learn the history of what happened after you were frozen. I have long found not getting to learn the future history of the world to be the most troubling aspect of dying. (Posting this here as it seems a bit off-topic under The Threat of Cryonics.)

• It sure seems like a lot of people could feed their will to live by reading just the first half of an exciting fiction book.

• One thought is that it’s tempting to think of yourself as being the only one (presumably with help from natives) trying to deal with the changed world. Actually I think it’s more likely that there will be many people from your era, and there will be immigrants’ clubs, with people who’ve been in the future for a while helping the greenhorns. I find this makes the future seem more comfortable. The two major reasons I can think of for wanting to be in the future are that I rather like being me, and that the future should be interesting.

• The single largest motivation for me is just that a future which is powerful enough, and rich enough, and benevolent enough to revive cryonicists is likely to be a very pleasant place to be in.
If nothing else, lots of their everyday devices are likely to look like marvelous toys from my point of view. Combine that with the likelihood that, if they can repair me at all, they would use a youthful body (physical or simulated) as a model, and it’s quite an attractive prospect.

• I’ve been on a Wikipedia binge, reading about people pushing various New Age silliness. The tragic part is that a lot of these guys actually do sound fairly smart, and they don’t seem to be afflicted with biological forms of mental illness. They just happen to be memetically crazy in a profound and crippling way. Take Ervin Laszlo, for instance. He has a theory of everything, which involves saying the word “quantum” a lot and talking about a mystical “Akashic Field”, which I would describe in more detail except that none of the explanations of it really say much. Here’s a representative snippet from Wikipedia: “László describes how such an informational field can explain why our universe appears to be fine-tuned as to form galaxies and conscious lifeforms; and why evolution is an informed, not random, process. He believes that the hypothesis solves several problems that emerge from quantum physics, especially nonlocality and quantum entanglement.” Then we have pages like this one, talking more about the Akashic Records (because apparently it’s a quantum field thingy and also an infinite library or something). The very first sentence sums it up: “The Akashic Records refer to the frequency grid programs that create our reality.” Okay, actually that didn’t sum up crap; but it sounded cool, didn’t it? That page is full of references to the works of various people, cited very nicely, and the spelling and grammar suggest someone with education. There are a lot of pages like this floating around.
The thing they all have in common is that they don’t seem to consider evidence to be important. It’s not even on their radar. Scholarly writing from New Age people is a pretty breathtaking example of dark-side epistemology, if anybody wants a case study in exactly what not to do. It’s pretty intense.

• Interesting SF by Robert Charles Wilson! I normally stay away from posting news to lesswrong.com (although I think an Open Thread for relevant news items would be a good idea), but this one sounds especially good and might be of interest for people visiting this site... Many-Worlds in Fiction: “Divided by Infinity”. “In the year after Lorraine’s death I contemplated suicide six times. Contemplated it seriously, I mean: six times sat with the fat bottle of Clonazepam within reaching distance, six times failed to reach for it, betrayed by some instinct for life or disgusted by my own weakness. I can’t say I wish I had succeeded, because in all likelihood I did succeed, on each and every occasion. Six deaths. No, not just six. An infinite number. Times six. There are greater and lesser infinities. But I didn’t know that then.”

• Thank you. The idea reminded me of Moravec’s thoughts on death: When we die, the rules surely change. As our brains and bodies cease to function in the normal way, it takes greater and greater contrivances and coincidences to explain continuing consciousness by their operation. We lose our ties to physical reality, but, in the space of all possible worlds, that cannot be the end. Our consciousness continues to exist in some of those, and we will always find ourselves in worlds where we exist and never in ones where we don’t. The nature of the next simplest world that can host us, after we abandon physical law, I cannot guess. Does physical reality simply loosen just enough to allow our consciousness to continue?
Do we find ourselves in a new body, or no body? It probably depends more on the details of our own consciousness than did the original physical life. Perhaps we are most likely to find ourselves reconstituted in the minds of superintelligent successors, or perhaps in dreamlike worlds (or AI programs) where psychological rather than physical rules dominate. Our mind children will probably be able to navigate the alternatives with increasing facility. For us, now, barely conscious, it remains a leap in the dark.

• I already wrote this fic (“The Grand Finale of the Ultimate Meta Mega Crossover”).

• I wouldn’t be surprised to find out that many people who know about you and the SIAI are oblivious of your fiction. At least I myself only found out about it some time after learning about you and SIAI. It is generally awesome stuff and would be reason enough in itself to donate to SIAI. Spreading such fiction might actually attract more people to dig deeper and find out about SIAI than would being thrown in at the deep end. Edit: I myself came to know about SIAI through SF, especially Orion’s Arm.

• I thought I’d pose an informal poll, possibly to become a top-level post, in preparation for my article about How to Explain. The question: on all the topics you consider yourself an “expert” or “very knowledgeable” about, do you believe you understand them at least at Level 2? That is, do you believe you are aware of the inferential connections between your expertise and layperson-level knowledge? Or, to put it another way, do you think that, given enough time, but using only your present knowledge, you could teach a reasonably intelligent layperson, one-on-one, to understand complex topics in your expertise, teaching them every intermediate topic necessary for grounding the hardest level?
Edit: Per DanArmak’s query, anything you can re-derive or infer from your present knowledge counts as part of your present knowledge for purposes of answering this question. I’ll save my answer for later, though I suspect many of you already know it!

• I have a (I suspect unusual) tendency to look at basic concepts and try to see them in as many ways as possible. For example, here are seven equations, all of which could be referred to as Bayes’ Theorem:

1. P(H|E) = P(E|H)·P(H) / P(E)
2. P(H|E) = [P(E|H) / P(E)] · P(H)
3. P(H|E) = P(E|H)·P(H) / [P(E|H)·P(H) + P(E|¬H)·P(¬H)]
4. P(H|E) = 1 / (1 + [P(E|¬H)·P(¬H)] / [P(E|H)·P(H)])
5. P(H|E) = P(E|H)·P(H) / Σᵢ P(E|Hᵢ)·P(Hᵢ)
6. odds(H|E) = [P(E|H) / P(E|¬H)] · odds(H)
7. logodds(H|E) = log[P(E|H) / P(E|¬H)] + logodds(H)

However, each one is different, and forces a different intuitive understanding of Bayes’ Theorem. The fourth one down is my favourite, as it makes obvious that the update depends only on the ratio of likelihoods. It also gives us our motivation for taking odds, since this clears up the 1/(1+x)-ness of the equation. Because of this way of understanding things, I find explanations easy, because if one method isn’t working, another one will. ETA: I’d love to see more versions of Bayes’ Theorem, if anyone has any more to post.

• P(H|E) = P(H and E) / P(E), which tends to be how conditional probability is defined, and is actually the first version of Bayes that I recall seeing.

• Very well said, and doubles as a reply to the last part of my comment here. (When I read your comment in my inbox, I thought it was actually a reply to that one! Needless to say, my favorite versions of the theorem are the last two you listed.)
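For anyone who wants to convince themselves that the probability, odds, and log-odds forms above really are the same update, here is a quick numerical sanity check (the prior and likelihoods are toy numbers chosen only for illustration):

```python
# Check that three forms of Bayes' theorem give the same posterior.
import math

p_h = 0.3     # prior P(H)
p_e_h = 0.8   # likelihood P(E|H)
p_e_nh = 0.2  # likelihood P(E|~H)

# Standard form, expanding P(E) by the law of total probability (form 3).
p_e = p_e_h * p_h + p_e_nh * (1 - p_h)
posterior = p_e_h * p_h / p_e

# Odds form (form 6): posterior odds = likelihood ratio * prior odds.
odds_prior = p_h / (1 - p_h)
odds_post = (p_e_h / p_e_nh) * odds_prior
posterior_from_odds = odds_post / (1 + odds_post)

# Log-odds form (form 7): the update is an *addition* of log-evidence.
log_odds_post = math.log(p_e_h / p_e_nh) + math.log(odds_prior)
posterior_from_log = 1 / (1 + math.exp(-log_odds_post))

# All three agree (about 0.6316 with these numbers).
print(posterior, posterior_from_odds, posterior_from_log)
```

Note how the odds and log-odds computations never touch P(E) at all: only the likelihood ratio P(E|H)/P(E|¬H) matters, which is exactly the point made above about the fourth form.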
• “…using only your present knowledge.” This strikes me as an un-lifelike assumption. If I had to explain things in this way, I would expect to encounter some things that I don’t explicitly know (and others that I knew and have forgotten), and to have to (re)derive them. But I expect that I would be able to rederive almost all of them. Refining my own understanding is a natural part of building a complex explanation-story to tell to others, and will happen unless I’ve already built this precise story before and remember it.

• For purposes of this question, things you can rederive from your present knowledge count as part of your present knowledge.

• I think I have Level 2 understanding of many areas of biology, but of course not all of it. It is too large a field. But there are gray areas around my high points of understanding where I am not sure how deep my understanding would go unless it was put to the test. And around the gray areas surrounding the Level 2 areas there is a sea of superficial understanding. I have some small areas of computer science at Level 2, but they are fewer and smaller; ditto chemistry and geology. I think your question overlooks the nature of teaching skills. I am pretty good at teaching (verbally, and one- or few-to-one) and did it often for years. There is a real knack in finding the right place to start and the right analogies to use with a particular person. Someone could have more understanding than me and not be able to transfer that understanding to someone else. And others could have less understanding and transfer it better. Finally, I like your use of the word ‘understanding’ rather than ‘knowledge’. It implies the connectedness with other areas required to relate to lay people.
• Perhaps the reason experts aren’t always good teachers is that their thought processes / problem-solving algorithms operate at a level of abstraction that is inaccessible to a beginner.

• I will reply to this in the sense of “do you believe you are aware of the inferential connections between your expertise and layperson-level knowledge?”, since I am not so familiar with the formalism of a “Level 2” understanding. My uninteresting, simple answer is: yes. My philosophical answer is that I find the entire question very interesting and strange. That is, the relationship between teaching and understanding is quite strange, IMO. There are many people who are poor teachers but who excel in their discipline. It seems to be a contradiction, because high-level teaching skill seems to be a sufficient, and possibly necessary, condition for masterful understanding. Personally, I resolve this contradiction in the following way. I feel like my own limitations force me to learn a subject by progressing through it in very simplistic strokes. By the time I have reached mastery, I feel very capable of teaching it to others, since I have been forced to understand it myself in the most simplistic way possible. Other people, who are possibly quite brilliant, are able to master some subjects without having to transmute the information into a simpler form. Consequently, they are unable to make the sort of connections that you describe as being necessary for teaching. Personally I feel that the latter category of people must be missing something, but I am unable to make a convincing argument for this point.

• A lot of the questions you pose, including the definition of the Level 2 formalism, are addressed in the article I linked (and wrote).
I classify those who can do something well, but cannot explain or understand the connections from the inputs and outputs to the rest of the world, as having a Level 1 understanding. It’s certainly an accomplishment, but I agree with you that it’s missing something: the ability to recognize where it fits in with the rest of reality (Level 2), and command of a reliable truth-detecting procedure that can “repair” gaps in knowledge as they arise (Level 3). “Level 1 savants” are certainly doing something very well, but that something is not a deep understanding. Rather, they are in the position of a computer that can transform inputs into the right outputs, but do nothing more with them. Or a cat, which can fall from great heights without injury, but doesn’t know why its method works. (Yes, this comment seems a bit internally repetitive.)

• Ah, OK, I read your article. I think it’s an admirable task to try to classify or identify the levels of understanding. However, I’m not sure I am convinced by your categorization. It seems to me that many of these “Level 1 savants”, as you call them, are quite capable of fitting their understanding in with the rest of reality. Actually, it seems like the claim of “Level 1 understanding” basically trivializes that understanding. Yet many of these people who are bad teachers have a very nontrivial understanding; otherwise I don’t think this would be such a common phenomenon, for example, in academia. I would argue that these people have some further complications or issues which are not recognized in the 1-2-3 hierarchy. That being said, you have to start somewhere, and the 0-1-2-3 hierarchy looks like a good place to start. I’d definitely be interested in hearing more about this analysis.

• Thanks for reading it and giving me feedback.
I’m interested in your claim: “It seems to me that many of these ‘Level 1 savants’ as you call them are quite capable of fitting their understanding with the rest of reality.” Well, they can fit it in the sense that they (over a typical problem set) can match inputs with (what reality deems) the right outputs. But, as I’ve defined the level, they don’t know how those inputs and outputs relate to more distantly connected aspects of reality. As for “many of these people who are bad teachers have a very nontrivial understanding”: I had a discussion with others about this point recently. My take is basically: if their understanding is so deep, why exactly is their teaching skill so brittle that no one can follow the inferential paths they trace out? Why can’t they switch to the infinitely many other paths that a Level 2 understanding enables them to see? If they can’t, that suggests a lack of depth in their understanding. And regarding the archetypal “deep understanding, poor teacher” you have in mind: do you envision that they could, say, trace out all the assumptions that could account for an anomalous result, starting with the most tenuous, and continuing outside their subfield? If not, I would call that falling short of Level 2.

• “If their understanding is so deep, why exactly is their teaching skill so brittle that no one can follow the inferential paths they trace out?” I would LOVE to agree with this, as it justifies my criticism of poor teachers who IMO are (not usually maliciously) putting their students through hell.
However, I don’t think it’s obvious; maybe you just have to take it as an axiom of your system. It seems there is some notion of individualism or personal difference which is missing from the system. If someone is just terrible at learning, can you really expect to succeed in explaining, for example? Realistically, I think it’s probably impossible to classify the massive concept of understanding with merely three levels, and these problems are a symptom of that fact. As another example: in order to understand something, it’s clearly necessary to be able to explain it to yourself. In your system, you are additionally requiring that understanding means being able to explain things to other people. In order to explain things to others, you have to understand them, as has been discussed. Therefore you have to be able to explain other people to yourself. Why should an explanation of other individuals’ behavior be necessary for understanding some random area of expertise, say, mathematics? It’s not clear to me. As for whether the archetypal “deep understanding, poor teacher” could trace out all the assumptions that could account for an anomalous result: it certainly seems like someone with a deep understanding of their subject should be able to identify the validity or uncertainty of their assumptions about the subject. If they are a poor teacher, I would still believe this to be true.

• I’ve thought about this some, and I think I see your point now. I would phrase it this way: it’s possible for a “Level 3 savant” to exist.
A Level 3 savant, let’s posit, has a very deeply connected model of reality, and their excellent truth-detecting procedure allows them to internally repair loss of knowledge (perhaps below the level of their conscious awareness). Like an expert (under the popular definition), and like a Level 1 savant, they perform well within their field. But this person differs in that they can also perform well in tracing out where the field’s grounding assumptions go wrong, except that they “just have all the answers” but can’t explain, and don’t know, where the answers came from. So here’s what it would look like: any problem you pose in the field (like an anomalous result), they immediately say, “look at factor X”, and it’s usually correct. They even tell you to check critical aspects of sensors, or identify circularity in the literature that grounds the field (i.e. sources which generate false knowledge by excessively citing each other), even though most in the field might not even think about or know how all those sensors work. All they can tell you is, “I don’t know; you told me X, and I immediately figured it had to be a problem with Y misinterpreting Z. I don’t know how Z relates to W, or whether W directly relates to X; I just know that Y and Z were the problem.” I would agree that there’s no contradiction in the existence of such a person. I would just say that in order to get this level of skill you have to accomplish so many subgoals that it’s very unlikely, just as it’s hard to make something act and look like a human without also making it conscious. (Obvious disclaimer: I don’t think my case is as solid as the one against p-zombies.)

• I think I know a fair amount about doing calligraphy, but I’m dubious that someone could get a comparable level of knowledge without doing a good bit of calligraphy themselves.
If I were doing a serious job of teaching, I would be learning more about how to teach as I was doing it. I consider myself to be a good but not expert explainer.

Possibly of interest: The 10-Minute Rejuvenation Plan: T5T: The Revolutionary Exercise Program That Restores Your Body and Mind, a book about an exercise system which involves 5 yoga moves. It’s by a woman who has taught 700 people how to do the system, and it shows an extensive knowledge of the possible mistakes students can make and the adaptations needed to make the moves feasible for a wide variety of people. My point is that explanation isn’t an abstract perfectible process existing simply in the mind of a teacher.

• But in some limited areas explanation is completely adequate. I taught a co-worker how to do sudoku puzzles. After teaching him the human-accessible algorithms and allowing time for practice, I was still consistently beating his time. I knew why, and he didn’t. After I explained the difference in the mental state I was using, he began beating my time on a regular basis. (Instead of checking the list of 1-9 for each box or line, allow your brain to subconsciously spot the missing number and then verify its absence.) He is more motivated and has more focus, while I do puzzles to kill time when waiting.

In another job, where I believe I had a thorough understanding of the subject, I was never able to teach any of my (~20) trainees to produce vector graphic maps with the speed and accuracy I obtained, because I was unable to impart a mathematical intuition for the approximation of curves. I let them go home with full pay when they completed their work, so they definitely had motivation. But they also had editors who were highly detail-oriented. I mean to suggest that there is a continuum of subjective ability comparing different skills.
Sudoku is highly procedural; once familiar, all that is required is concentration. Yoga, in the sense mentioned above, is also procedural, proscriptive; the joints allow a limited number of degrees of freedom. Calligraphy strives for an ideal, but depending on the tradition, there is a degree of interpretation allowed for aesthetic considerations. Mapping, particularly in vector graphics, has many ways to be adequate and no way to be perfect. The number of acceptable outcomes and the degree of variation in useful paths determines the teachability of a skillset. Procedural skills can be taught more easily than subjective ones, and practice is useful for accomplishing mastery of procedural skills. Deeper understanding of a field allows more of the skill’s domain to be expressed procedurally rather than subjectively.

• I’m in general agreement, but I think you’re underestimating yoga: a big piece of it is improving access to your body’s ability to self-organize. I like “many ways to be adequate and no way to be perfect”. I think most of life is like that, though I’ll add “many ways to be excellent”.

• No slight to yoga intended. I only wanted to address the starting point of yoga. I know it is a quite comprehensive field.

• I think that the “teaching” benchmark you claim here is actually a bit weaker than a Level 2 understanding. To successfully teach a topic, you don’t need to know lots of connections between your topic and everything else; you only need to know enough such connections to convey the idea. I really think this lies somewhere between Level 1 and Level 2. I’ll claim to have Level 2 understanding on the core topics of my graduate research, some mathematics, and some core algorithmic reasoning.
I’m sure I don’t have all of the connections between these things and the rest of my world model, but I do have many, and they pervade my understanding.

• “I think that the ‘teaching’ benchmark you claim here is actually a bit weaker than a Level 2 understanding. To successfully teach a topic, you don’t need to know lots of connections between your topic and everything else; you only need to know enough such connections to convey the idea. I really think this lies somewhere between Level 1 and Level 2.”

I agree in the sense that full completion of Level 2 isn’t necessary to do what I’ve described, as that implies a very deeply-connected set of models, truly pervading everything you know about. But at the same time, I don’t think you appreciate some of the hurdles to the teaching task I described: remember, the only assumption is that the student has lay knowledge and is reasonably intelligent. Therefore, you do not get to assume that they find any particular chain of inference easy, or that they already know any particular domain above the lay level. This means you would have to be able to generate alternate inferential paths, and fall back to more basic levels “on the fly”, which requires healthy progress into Level 2 in order to achieve; enough that it’s fair to say you “round to” Level 2.

“I’ll claim to have Level 2 understanding on the core topics of my graduate research, some mathematics, and some core algorithmic reasoning. I’m sure I don’t have all of the connections between these things and the rest of my world model, but I do have many, and they pervade my understanding.”

If so, I deeply respect you and find that you are the exception and not the rule. Do you find yourself critical of how people in the field (i.e.
through textbooks, for example) present it to newcomers (who have undergrad prerequisites), present it to laypeople, and use excessive or unintuitive jargon?

• “Therefore, you do not get to assume that they find any particular chain of inference easy, or that they already know any particular domain above the lay level. This means you would have to be able to generate alternate inferential paths, and fall back to more basic levels ‘on the fly’, which requires healthy progress into Level 2 in order to achieve; enough that it’s fair to say you ‘round to’ Level 2.”

I agree that the teaching task does require a thick bundle of connections, and not just a single chain of inferences. So much so, actually, that I’ve found that teaching, and preparing to teach, is a pretty good way to learn new connections between my Level 1 knowledge and my world model. Whether this “rounds” to Level 2 depends, I suppose, on how intelligent you assume the student is.

“If so, I deeply respect you and find that you are the exception and not the rule. Do you find yourself critical of how people in the field (i.e. through textbooks) present it to newcomers (who have undergrad prerequisites), present it to laypeople, and use excessive or unintuitive jargon?”

Yes, constantly. Frequently, I’m frustrated by such presentations to the point of anger at the author’s apparent disregard for the reader, even when I understand what they’re saying.

• I have some trouble answering your question, chiefly because my definition of “expert” is approximately synonymous with your definition of “Level 2”.
Or, to put it another way: do you think that, given enough time, but using only your present knowledge, you could teach a reasonably intelligent layperson, one-on-one, to understand complex topics in your expertise, teaching them every intermediate topic necessary for grounding the hardest level? “Enough time” would be quite a long period of time. One problem is that there are a lot of textbook results I would have to use in intermediate steps that would take me a long time to derive. Another is that there are a lot of experimental parameters that I haven’t memorized and would have to look up. But I think I could teach arithmetic, algebra, geometry, calculus, differential equations, and Newtonian physics well enough that I could then teach them proper engineering analysis.

• Criminal law: Yes to Level 2. Yes to teaching a layperson. It would take a while, for sure, but it’s doable. Some of the work requires an understanding of a different lifestyle; if you can’t see the potential issues with prosecuting a robbery by a prostitute and her armed male friend, or you can’t predict that a domestic violence victim will have a non-credible recantation, you’ll need some other education. I’ve done a lot of instruction in this field. It is common for instruction not to take until there’s other experience in the field which helps things join up.

Bridge: Yes to Level 2. Possibly to teaching a layperson. The ability to play bridge well is heavily correlated with intelligence, but it also correlates with a certain zeal for winning. I have taught one person to play very well indeed, but that may not be replicable, and it took years. (As an aside, I am very likely the world’s foremost expert on online bridge cheating; teaching cheating prevention would require teaching bridge first.)
Teaching requires more than reasonable intelligence on the part of the teachee. Some people who are very intelligent are ineducable. (Many of these are violators of my 40% rule: you are allowed to think you are 40% smarter/faster/stronger/better than you are. After that, it’s obnoxious.) Some people are not interested in learning a given subject. Some people will not overcome preset biases. Some people have high aptitudes in some areas and little aptitude in others (though intelligence strongly tends to spill over).

Anyway, I’m interested in the article. My penultimate effort to explain something to many people (Bayes’ Theorem to lawyers) was a moderate failure; my last effort to explain something less mathy to a crowd was a substantial success. (My last experience in explaining something, with assistance, to 12 people was a complete failure.)

--JRM

• I’m curious, why did you choose 40% for your “40% rule”?

• It’s non-arbitrary, but neither is it precise. 100% is clearly too high, and 10% is clearly too low. And since I started calling it The 40% Rule fifteen years ago or thereabouts, a number of my friends and acquaintances have embraced the rule in this incarnation. Obviously, some things are unquantifiable, and the specific number has rather limited application. But people like it at this number. That counts for something, and it gets the message across in a way that other formulations don’t. Some are nonplussed by the rule, but the vigor of support from some supporters gives me some reason to think I picked a number people like. Since I never tried another number, I could be wrong, but I don’t think I am.
--JRM

• “Some of the work requires an understanding of a different lifestyle; if you can’t see the potential issues with prosecuting a robbery by a prostitute and her armed male friend, or you can’t predict that a domestic violence victim will have a non-credible recantation, you’ll need some other education.”

• “The people who buy the services of a prostitute generally don’t want to go on record saying so, which they would have to do at some point to prosecute such a robbery, either because they’re married or because of the shame associated with using one.”

• “Victims of domestic violence have a lot invested in the relationship, and, no matter how much they feel hurt by the abuse, they will not want to tear apart the family and cripple their spouse with a felony conviction. This inner conflict will be present when the victim tries to recant their testimony.”

Did that really require passing the learner off for some other education? Or did I get the explanation wrong?

“Anyway, I’m interested in the article. My penultimate effort to explain something to many people (Bayes’ Theorem to lawyers) was a moderate failure; my last effort to explain something less mathy to a crowd was a substantial success. (My last experience in explaining something, with assistance, to 12 people was a complete failure.)”

I actually tried teaching information theory to my mom a week ago, which involved starting with Bayes’ Theorem (my preferred phrasing [1]). She’s a professional engineer, and found it very interesting (to the point where she kept prodding me for the next lesson), saying that it made much more sense of statistics. In about 1.5-2 hours total, I covered the Theorem, its application to a car-alarm situation, aggregating independent pieces of evidence, the use of log-odds, and some material on Bayes nets and using dependent pieces of evidence.
[1] O(H|E) = O(H) * L(E|H) = O(H) * P(E|H) / P(E|~H) = “On observing evidence, amplify the odds you assign to a belief by the probability of seeing the evidence if the belief were true, relative to if it were false.”

• Expansion on the explanation about domestic violence victims: the victim may also be afraid that the government will not protect them from the abuser, and that the abuser will be angrier because of the attempt at prosecution.

• “That is, do you believe you are aware of the inferential connections between your expertise and layperson-level knowledge?”

This is related to an idea that has been brewing at the back of my mind for a while now: experts aren’t always good teachers, because their problem-solving algorithms may operate at a level of abstraction that is inaccessible to a beginner.

• Hmm… I’m not sure I think of myself as an expert at anything, other than when people ask. But I’m pretty sure I have about the best understanding of logic I can hope to have, and could explain virtually all of it to an attentive small child given sufficient time. And I might be an expert at some sort of computer programming, though I can think of people much better at any bit of it than I am; at any rate, I am also confident I could teach that to anyone, or at least anyone who passes a basic test.

• Computer programming: I’m not sure whether I am at Level 2 on this. In favor of being at Level 2: I regularly think about non-computer-related topics with a CS-like approach (e.g. using information theory ideas when playing the inference game Zendo). Also, I strongly associate my knowledge of “folk psychology” and “folk science” with computer science ideas, and these insights work in both directions.
For example, the “learned helplessness” phenomenon, where inexperienced users become so uncomfortable with a system that they prefer to cling to their inexperienced status rather than risk failure in an attempt to understand the system better, appears in many areas of life having nothing directly to do with computers.

Evidence against being at Level 2: I do not have the necessary computer engineering knowledge to connect my understanding of computer programming to my understanding of physics. And, although I have not tried this very often, my experiments in attempting to teach computer programming to laypeople have been middling at best. My assessment at this point is that I am probably near to Level 2 in computer programming, but not quite there yet.

• Can you teach a talented, untrained person a skill so that they exceed your own ability? Can you then identify why they are superior? If you have deep-level knowledge of your area of expertise that you can impart to others, you ought to be able to evaluate and train a replacement based on “raw talent.” Considering that intellectual or artistic endeavors may have a variety of details hidden even from the expert, perhaps a clearer example may be found in sports coaches.

• “Perhaps a clearer example may be found in sports coaches.”

The main reason that coaches are important (not just in sports) is blind spots, i.e. things that are outside of a person’s direct perceptual awareness. Think of the Dunning-Kruger effect: if you can’t perceive it, you can’t improve it. (This is also why publications have editors; if a writer could perceive the errors in their work, they could fix them themselves.)

• 30 Aug 2010 23:41 UTC 5 points

PZ Myers’ comments on Kurzweil generated some controversy here recently on LW; see here.
Apparently PZ doesn’t agree with some of Kurzweil’s assumptions about the human mind. But that’s beside the point; what I want to discuss is this: according to another blog, Kurzweil has been selling bogus nutritional supplements. What does everyone think of this?

• I would like a better source than a blog comment for the claim that Kurzweil has been selling bogus nutritional supplements. The obvious alternative possibility is that someone else, with less of a reputation to worry about, attached Kurzweil’s name to their product without his knowledge.

• Ok, I’ve found some better sources. See the first three links.

• I would have preferred a more specific link than that, to save me the time of doing a detailed investigation of Kurzweil’s company myself. But I ended up doing one anyway, so here are the results.

That “Ray and Terry’s Longevity Products” company’s front page screams low credibility. It displays three things: an ad for a book, which I can’t judge as I don’t have a copy; an ad for snack bars; and a news box. Neutral, silly, and, ah, something amenable to a quality test! The current top headline in their Healthy Headlines box looked to me like an obvious falsehood (“Dirty Electricity May Cause Type 3 Diabetes”), and on a topic important to me, so I followed it up. It links to a blog I don’t recognize, which dug it out of a two-year-old study, which I found on PubMed. And I personally verified that the study was wrong: by the most generous interpretation, assuming no placebo effect or publication bias (both of which were obviously present), the study contains exactly 4 bits of evidence (4 case studies in which the observed outcome had a 50% chance of happening assuming the null hypothesis, and a 100% chance of happening assuming the conclusion). A review article confirmed that it was flawed.
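The “4 bits of evidence” arithmetic can be checked directly. A minimal sketch, using only the numbers stated above (four case studies, each with a 50% chance of the observed outcome under the null hypothesis and a 100% chance under the study’s conclusion):

```python
from math import log2

# Each case study's observed outcome: probability 0.5 under the null
# hypothesis, probability 1.0 under the study's conclusion.
p_under_conclusion = 1.0
p_under_null = 0.5
n_cases = 4

# The combined likelihood ratio is the per-case ratio raised to the
# number of (assumed independent) cases.
likelihood_ratio = (p_under_conclusion / p_under_null) ** n_cases
bits = log2(likelihood_ratio)

print(likelihood_ratio)  # 16.0: the most the study can multiply your odds by
print(bits)              # 4.0 bits, before any correction for placebo
                         # effects or publication bias
```

Sixteen-to-one is a generous ceiling; any allowance for the placebo effect or publication bias only shrinks it.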
That said, he probably just figured the news box was unimportant and delegated the job to someone who wasn’t smart enough to keep the lies out. But it means I can’t take anything else on the site seriously without a very time-consuming investigation, which is bad enough.

The bit about Kurzweil taking 250 nutritional supplements per day jumps out, too, since it’s an obviously wrong thing to do; the risks associated with taking a supplement (adverse reaction, contamination, mislabeling) scale linearly with the number taken, while the upside has diminishing returns. You take the most valuable thing first, then the second-most; by the time you get to the 250th thing, it’s a duplicate or worthless. Which leads me to believe that he just fudged the number, by counting things that are properly considered duplicates, like split doses of the same thing.

• Kurzweil should be concerned that his name is associated with junk science, and with the overall result, but I think it’s a little far-fetched to think the man is actually selling nutritional supplements that he thinks are bogus. The state of medicine and nutrition today is such that we know there is so much we don’t know. The human body is supremely complex, to make an understatement. The evidence is pretty strong that most supplements, and even most multivitamins, don’t do much or even do harm. However, that is certainly not true in every case, and there are particular supplements where we have strong evidence for a net positive effect (vitamin D and fish oil have very strong evidence for net benefit at this point; everyone should be on them). But if you are someone like Kurzweil, and you want to make it to the Singularity, you probably will do the research and believe you have some inside knowledge on optimizing the human body. I find it more likely that he actually does take a boatload of supplements.
• I’m sure he does take a lot of them himself, but the problem is that Kurzweil taking supplements will still make people think he is delusional (because most people are instantly suspicious of people who do so, generally for good reasons). On a related note, Ben Best also sells supplements on his website, and many of them look pretty questionable.

• So I’m curious: do you believe that typical supplements have a net negative effect, versus just neutral? It was my understanding that the weight of evidence points to most having a neutral overall effect, which to me wouldn’t justify instant suspicion. I mean, you may be wasting money, but you probably aren’t hurting yourself. And if you really do the research, you probably are going to get some net positive gain, statistically speaking. Don’t you think? I know of at least 2 cases (vitamin D and fish oil) where the evidence for net benefit is strong, mainly due to deficiency in the modern diet.

• I think it is a mixed bag: some supplements are potentially dangerous, but others (like the ones you mention) can be very helpful. The majority, however, probably have little to no effect whatsoever. As a result, I don’t think people should mess around with what they eat without its being subjected to rigorous clinical trials first; though there might be a net positive gain, one dose of something bad can kill you. In any case, believing that something is helpful when it has not yet been tested is clearly irrational. (This is more what I am concerned about with Best and Kurzweil.) Selling or promoting something that isn’t tested is even worse; it borders on fraud and charlatanry.

Edit: No, let me amend that: it is charlatanry.

• If you want to eliminate hindsight bias, write down some reasons that you think justify your judgment.
“Those who consider the likelihood of an event after it has occurred exaggerate their likelihood of having been able to predict that event in advance. We attempted to eliminate this hindsight bias among 194 neuropsychologists. Foresight subjects read a case history and were asked to estimate the probability of three different diagnoses. Subjects in each of the three hindsight groups were told that one of the three diagnoses was correct and were asked to state what probability they would have assigned to each diagnosis if they were making the original diagnosis. Foresight-reasons and hindsight-reasons subjects performed the same task as their foresight and hindsight counterparts, except they had to list one reason why each of the possible diagnoses might be correct. The frequency of subjects succumbing to the hindsight bias was lower in the hindsight-reasons groups than in the hindsight groups not asked to list reasons.”

Arkes, H. R., et al. (1988). Eliminating the hindsight bias. Journal of Applied Psychology.

• I’ve been wanting to change my username for a while, and have heard from a few other people who want to as well, but I can see how this could be a bit confusing if someone with a well-established identity changes their username. (Furthermore, at LW meetups, when I’ve told people my username, a couple of people have said that they didn’t remember specific things I’ve posted here, but had some generally positive affect associated with the name “ata”. I would not want to lose that affect!)

So I propose the following: add a “Display name” field to the Preferences page on LW; if you put something in there, then this name would be shown on your user page and your posts and comments, next to your username. (Perhaps something like “ata (Adam Atlas)”, or the other way around? Comments and suggestions are welcome.)
I’m willing to code this if there’s support for it and if the administrators deem it acceptable.

• “The differences are dramatic. After tracking thousands of civil servants for decades, Marmot was able to demonstrate that between the ages of 40 and 64, workers at the bottom of the hierarchy had a mortality rate four times higher than that of people at the top. Even after accounting for genetic risks and behaviors like smoking and binge drinking, civil servants at the bottom of the pecking order still had nearly double the mortality rate of those at the top.”

“Under Pressure: The Search for a Stress Vaccine” http://www.wired.com/magazine/2010/07/ff_stress_cure/all/1

• It was interesting that most of the commenters were opposed to the idea of a stress vaccine, though their reasons didn’t seem very good. I’m wondering whether the vaccine would mean that people would be more inclined to accept low status (it’s less painful) or less inclined to accept low status (more energy, less pessimism). I also wonder how much of the stress from low status is from objectively worse conditions (less benign stimulus, worse schedules, more noise, etc.) as distinct from less control, and whether there’s a physical basis for the inclination to crank up stress on subordinates.

• “their reasons didn’t seem very good.”

Wired has unusually crappy commentators; YouTube quality. I wouldn’t put much stock in their reactions.

“I’m wondering whether the vaccine would mean that people would be more inclined to accept low status (it’s less painful) or less inclined to accept low status (more energy, less pessimism).”

Blatant speculation: the stress response evolved for fight-or-flight (baboons and chimps fight nasty), not for thinking or health. Reduce it, and, like mindfulness meditation, one can think better and solve one’s problems better.
“is from objectively worse conditions”

IIRC, the description made it sound like the study controlled for conditions: comparing clerical work with controlling bosses to clerical work sans controlling bosses.

• “Wired has unusually crappy commentators; YouTube quality.”

Oh come on, they’re bad, but they’re not YouTube bad.

• One mention is of unsupportive bosses and the other is of mean bosses. I think we need more detail to find out what is actually meant.

• What simple rationality techniques give the most bang for the buck? I’m talking about techniques you might be able to explain to a reasonably smart person in five minutes or less: really the basics. If part of the goal here is to raise the sanity waterline in the general populace, not just among scientists, then it would be nice to have some rationality techniques that someone can use without much study.

Carl Sagan had a slogan: “Extraordinary claims require extraordinary evidence.” He would say this phrase and then explain how, when someone claims something extraordinary (i.e. something for which we have a very low probability estimate), they need correspondingly stronger evidence than if they’d made a higher-likelihood claim, like “I had a sandwich for lunch.” Now, I’m sure everybody here can talk about this very precisely, in terms of Bayesian updating and odds ratios, but Sagan was able to get a lot of this across to random laypeople in about a minute. Maybe two minutes.

What techniques for rationality can be explained to a normal person in under five minutes? I’m looking for small and simple memes that will make people more rational, on average. I’ll try a few candidates, to get the discussion started.

Candidate 1: Carl Sagan’s concise explanation of how evidence works, as mentioned above.
Candidate 2: Everything that has an effect in the real world is part of the domain of science (and, more broadly, rationality). A lot of people have the truly bizarre idea that some theories are special, immune to whatever standards of evidence they may apply to any other theory. My favorite example is people who believe that prayers for healing actually make people who are prayed for more likely to recover, but that this cannot be scientifically tested. This is an obvious contradiction: they’re claiming a measurable effect on the world and then pretending that it can’t possibly be measured. I think that if you pointed out a few examples of this kind of special pleading to people, they might start to realize when they’re doing it.

Candidate 3: Admitting that you were wrong is a way of winning an argument. There’s a saying that “it takes a big man to admit he’s wrong,” and when people say this, they don’t seem to realize that it’s a huge problem! It shouldn’t be hard to admit that you were wrong about something! It shouldn’t feel like defeat; it should feel like victory. When you lose an argument with someone, it should be time for high fives and mutual jubilation, not shame and anger. I know that it’s possible to retrain yourself to feel this way, because I’ve done it. This wasn’t even too difficult; it was more a matter of just realizing that feeling good about conceding an argument was even an option.

Anti-candidate: “Just because something feels good doesn’t make it true.” I call this an anti-candidate because, while it’s true, it’s seldom helpful. People trot out this line as an argument against other people’s ideas, but rarely apply it to their own. I want memes that will make people actually be more rational, instead of just feeling that way.

Any ideas?
I know that the main goal of this community is to strive for rationality far beyond such low-hanging fruit, but if we can come up with simple and easy techniques that actually help people be more rational, there’s a lot of value in that. You could use it as rationalist propaganda, or something.

EDIT: I’ve expanded this into a top-level post.

• I think some of the statistical fallacies that most people fall for are quite high up the list. One such is the “what a coincidence!” fallacy. People notice that some unlikely event has occurred, and wonder at the millions-to-one odds against it, and yet it actually happened! Surely this means that my life is influenced by some supernatural influence!

The typical mistake is to calculate only the likelihood of the particular event that occurred. Nothing wrong with that, but one should also compare that number against the whole basket of other possible unlikely events that you would have noticed if they’d happened (of which there are surely millions), and all the possible occasions on which all these unlikely events could also have occurred. When you do that, you discover that the likelihood of some unlikely thing happening is quite high, which is in accordance with our experience that unlikely events do actually happen.

Another way of looking at it is that non-notable unlikely events happen all the time. Look, that particular car just passed me at exactly 2pm! Most are not noticeable. But sometimes we notice that a particular unlikely event just occurred, and of course it causes us to sit up and take notice. The question is how many other unlikely events you would also have noticed. The key rational skill here is noticing the actual size of the set of unlikely things that might have happened, and would have caught our attention if they had.
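The “size of the set” point can be made concrete with a short calculation. The counts below are invented for illustration: each individual coincidence is a million-to-one shot, but suppose there are a million distinguishable coincidences you would have noticed, over a thousand noticing-occasions:

```python
# One specific coincidence: a million-to-one shot on a single occasion.
p_single = 1e-6

# The whole basket: a million noticeable coincidences, a thousand
# occasions on which any of them could have occurred (invented numbers).
n_possible_events = 1_000_000
n_occasions = 1_000
n_trials = n_possible_events * n_occasions

# Probability that at least one such coincidence happens to you.
p_at_least_one = 1 - (1 - p_single) ** n_trials

print(p_at_least_one)  # ~1.0: witnessing SOME million-to-one event is
                       # all but certain
```

The particular event you noticed was wildly improbable; the experience of noticing some such event was nearly inevitable.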
• The concept of inferential distance is good. You wouldn't want to introduce it in the context of explaining something complicated—you'd just sound self-serving—but it'd be a good thing to crack out when people complain about how they just can't understand how anyone could believe $CLAIM.

Edit: It’s also a use­ful con­cept when you are think­ing about teach­ing.

• I'm going to be running a series of Rationality & AI seminars with Alex Flint in the autumn, where we'll introduce aspiring rationalists to new concepts in both fields: standard cognitive biases, a bit of Bayesianism, some of the basic problems with both AI and Friendliness. As such, this could be a very helpful thread.

We were thinking of introducing Overconfidence Bias: ask people to give 90% confidence intervals, and then reveal (surprise surprise!) that they're wrong half the time.
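Scoring such an exercise takes only a few lines. This is a minimal sketch with made-up quiz questions and answers (the function name and all the numbers are mine, purely for illustration):

```python
# Score a 90%-confidence-interval calibration quiz.
# Each answer is an (low, high) interval for a known true value.
# A well-calibrated person's intervals contain the truth ~90% of the time.

def hit_rate(intervals, truths):
    """Fraction of intervals that actually contain the true value."""
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
    return hits / len(truths)

# Hypothetical quiz: length of the Nile (km), Newton's birth year, height of Everest (m).
truths = [6650, 1643, 8848]
answers = [(5000, 7000), (1700, 1750), (8000, 9000)]  # second interval misses

print(f"hit rate: {hit_rate(answers, truths):.0%}")  # well below 90% -> overconfident
```

The seminar "reveal" is then just comparing the group's hit rate against the 90% they claimed.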

• Since it seemed like this could be helpful, I ex­panded this into a top-level post.

That 90% con­fi­dence in­ter­val thing sounds like one hell of a dirty trick. A good one, though.

• #3 is a fa­vorite of mine, but I like #1 too.

How about “Your in­tu­itions are not magic”? Grant­ing in­tu­itions the force of au­thor­ity seems to be a com­mon failure mode of philos­o­phy.

• That’s a good les­son to in­ter­nal­ize, but how do you get some­one to in­ter­nal­ize it? How do you ex­plain it (in five min­utes or less) in such a way that some­one can ac­tu­ally use it?

I’m not say­ing that there’s no easy way to ex­plain it; I just don’t know what that way would be. When I ar­gue with some­one who acts like their in­tu­itions are magic, I usu­ally go back to ba­sic episte­mol­ogy: define con­cisely what it means to be right about what­ever we’re dis­cussing, and show that their in­tu­itions here aren’t magic. If there’s a sim­ple way to ex­plain in gen­eral that in­tu­ition isn’t magic, I’d re­ally love to hear it. Any ideas?

• Given that we haven’t con­structed a de­cent AI, and don’t know how those in­tu­itions ac­tu­ally work, we only re­ally be­lieve they’re not magic on the grounds that we don’t be­lieve in magic gen­er­ally, and don’t see any rea­son why in­tu­itions should be an ex­cep­tion to the rule that all things can be ex­plained.

Perhaps an easier lesson is that intuitions can sometimes be wrong, and it's useful to know when that happens so we can correct for it. For example, most people are intuitively much more afraid of dying in dramatic and unusual ways (like air crashes or psychotic killers) than in more mundane ways like driving the car or eating unhealthy foods. Once it's established that intuitions are sometimes wrong, the fact that we don't exactly know how they work isn't so dangerous to one's thinking.

• Well, I thought Kaj_Sotala's explanation was good, but the five-minute constraint makes things very difficult. I tend to be so long-winded that I'm not sure I could get across any insight in five minutes, honestly, but you're right that "Your intuitions are not magic" is likely to be harder than many.

• Re­lated—ver­bal over­shad­ow­ing, where de­scribing some­thing ver­bally blocks re­triev­ing per­cep­tual mem­o­ries of it. Crit­i­cally, ver­bal over­shad­ow­ing doesn’t always oc­cur - some­times ver­bal de­scrip­tions im­prove rea­son­ing.

Doesn’t re­fute Lehrer’s main point ex­actly, but does com­pli­cate it some­what.

• One lit­tle anti-akra­sia thing I’m try­ing is edit­ing my crontab to pe­ri­od­i­cally pop up an xmes­sage with a me­mento mori phrase. It checks that my lap­top lid is open, gets a ran­dom in­te­ger and oc­ca­sion­ally pops up the # of sec­onds to my ac­tu­ar­ial death (got­ten from Death Clock; ac­cu­rate enough, I figure):

 1,16,31,46 * * * * if grep open /proc/acpi/button/lid/LID0/state; then if [ $(($(date +\%s) % 6)) = 1 ]; then xmessage "$((($(date --date="9 August 2074" +\%s) - $(date +\%s)) / 60)) minutes left to live. Is what you are doing important?"; fi; fi


(I figure it's stupid enough a tactic and cheap enough to be worth trying. This shell stuff works in both bash and dash/sh; however, you probably want to edit the first conditional, since I'm not sure Linux puts the lid data at the same place in /proc/acpi on every system.)

• OK, I can't seem to get the escaping to work right with crontab no matter how I fiddle, so I've replaced the one-liner with a regular script, with meaningful variable names and all:

 1,14,32,26 * * * * ~/bin/bin/memento-mori


The script it­self be­ing (with the 32-bit hack men­tioned be­low):

#!/bin/sh
set -e

if grep open /proc/acpi/button/lid/LID?/state > /dev/null
then
    CURRENT=$(date +%s)
    if [ $(($CURRENT % 8)) = 1 ]
    then
        # DEATH_DATE=$(date --date='9 August 2074' +%s)
        DEATH_DATE="3300998400"
        REMAINING=$(($DEATH_DATE - $CURRENT))
        REMAINING_MINUTES=$(($REMAINING / 60))
        REMAINING_MINUTES_FMT=$(env printf "%'d" $REMAINING_MINUTES)
        (sleep 10m && killall xmessage &)
        xmessage "$REMAINING_MINUTES_FMT minutes left to live. Is what you are doing important?"
    fi
fi

• Dates that far into the future don't seem to work with date on 32-bit Linux. Fun idea otherwise. You should report back in a month or so if you're still using it.

• I had to reinstall with 32-bit to use a document scanner, so this became a problem for me. What I did was punch my 2074 date into an online converter, and use that generated date:

 - DEATH_DATE=$(date --date='9 August 2074' +%s)
 + # DEATH_DATE=$(date --date='9 August 2074' +%s)
 + DEATH_DATE="3300998400"

• It might have an opposite effect to what is intended since the number would simply be too large.

• People still use 32-bit OSs? But seriously, you could probably shell out to something else. Or you could change the output—it doesn't have to be in seconds or minutes. For example, you could call date to get the current year, and subtract that against 2074 or whatever.
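As an aside, the overflow problem disappears entirely in a language with arbitrary-precision integers. A minimal sketch using the thread's dates (2 August 2010 is hard-coded as "now" purely so the example is reproducible; a real script would use the current time):

```python
# Compute "minutes left to live" to a far-future date without worrying
# about 32-bit time_t overflow: Python ints have arbitrary precision.
from datetime import datetime, timezone

death = datetime(2074, 8, 9, tzinfo=timezone.utc)
now = datetime(2010, 8, 2, tzinfo=timezone.utc)  # example "now", not time.time()

remaining_minutes = int((death - now).total_seconds() // 60)
print(f"{remaining_minutes:,} minutes left to live. Is what you are doing important?")
```

Calling out to a small script like this from cron would sidestep both the 32-bit date limitation and the crontab escaping headaches.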
There might be something to this for people who reject this metaphysic, even beyond unconsciously carrying it around. If you're going to come back, you don't get the secular heaven of "being fondly remembered after you die." In a long retirement or vacation, the book hasn't been shut on you. Perhaps there's something important many people find in the book being shut—of others, afterwards, being able to evaluate a life as a completed story. Someone frozen is maybe a "completed story" and maybe not.

• 2 Aug 2010 10:32 UTC 4 points

I'm not yet good enough at writing posts to actually properly post something, but I hoped that if I wrote something here people might be able to help me improve. So obviously people can comment however they normally would, but it would be great if people would be willing to give me the sort of advice that would help me to write a better post next time. I know that normal comments do this to some extent, but I'm also just looking for the basics: is this a good enough topic to write a post on but not well enough executed (therefore, I should work on my writing)? Is it not a good enough topic? Why not? Is it not in depth enough? And so on.

Is your graph complete?

The red gnomes are known to be the best arguers in the world. If you asked them whether the only creature that lived in the Graph Mountains was a Dwongle, they would say, "No, because Dwongles never live in mountains." And this is true, Dwongles never live in mountains. But if you want to know the truth, you don't talk to the red gnomes, you talk to the green gnomes, who are the second best arguers in the world. And they would say, "No, because Dwongles never live in mountains." But then they would say, "Both we and the red gnomes are so good at arguing that we can convince people that false things are true. Even worse, though, we're so good that we can convince ourselves that false things are true.
So we always ask if we can argue for the opposite side just as convincingly." And then, after thinking, they would say, "We were wrong, they must be Dwongles, for only Dwongles ever live in places where no other creatures live. So we have a paradox, and paradoxes can never be resolved by giving counterexamples to one or the other claim. Instead of countering, you must invalidate one of the arguments." Eventually, they would say, "Ah. My magical fairy mushroom has informed me that Graph Mountain is in fact a hill, ironically named, and Dwongles often live in hills. So yes, the creature is a Dwongle."

The point of all of that is best discussed after introducing a method of diagramming the reasoning made by the green gnomes. The following series of diagrams should be reasonably self-explanatory. A is a proposition that we want to know the truth of (the creature in the Graph Mountains is a Dwongle) and not-A is its negation (the creature in the Graph Mountains is not a Dwongle). If a path is drawn between a proposition and the "Truth" box, then the proposition is true. Paths are not direct but go through a proof (in this case P1 stands in for "Dwongles never live in mountains" and P2 stands in for "Only Dwongles live in a place where no other creatures live").

The diagrams connect to the argument made above by the green gnome. First, we have the argument that it mustn't be a Dwongle because of P1. The second diagram shows the green gnome realising that they have an argument that it must be a Dwongle too, due to P2. This middle type of diagram could be called a "Paradox Diagram."

Figure 1. The green gnomes' process of argument.
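The gnomes' procedure is mechanical enough to sketch in code. This is only a toy illustration (the class and argument names are mine, not the post's): a proposition is "proven" when at least one argument links it to Truth, a paradox is a graph proving both A and not-A, and invalidation removes a vertex along with its edges:

```python
# Toy model of the post's "belief graphs": propositions are connected to
# Truth through named arguments; a paradox is both A and not-A proven.

class BeliefGraph:
    def __init__(self):
        self.proofs = {}  # proposition -> set of argument names supporting it

    def add_argument(self, name, proposition):
        self.proofs.setdefault(proposition, set()).add(name)

    def invalidate(self, name):
        # Removing a vertex removes all the lines connected to it.
        for args in self.proofs.values():
            args.discard(name)

    def proven(self, proposition):
        return bool(self.proofs.get(proposition))

    def is_paradox(self, a, not_a):
        return self.proven(a) and self.proven(not_a)

g = BeliefGraph()
g.add_argument("P1: Dwongles never live in mountains", "not-A")
g.add_argument("P2: only Dwongles live where nothing else lives", "A")
assert g.is_paradox("A", "not-A")  # both sides "proven": an incomplete graph

g.invalidate("P1: Dwongles never live in mountains")  # Graph Mountain is a hill
assert g.proven("A") and not g.proven("not-A")  # paradox resolved by invalidation
```

Asking "is my graph complete?" then amounts to checking whether you have searched for arguments on the other side before declaring a proposition proven.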
In his book Good and Real, Gary Drescher notes that paradoxes can't be resolved by making more counterarguments (which relates to the approach shown in figure 2, which when considered graphically is obviously not helpful: we still have both propositions being shown to be true) but rather by invalidating one of the arguments. That's what the green gnomes did when they realised that Graph Mountain was actually a hill, and that's what the final diagram in figure 1 shows the result of (when you remove a vertex, like P1, you remove all the lines connected to it as well).

Figure 2. Attempting to resolve a paradox via counterarguments rather than invalidation.

The interesting thing in all of this is that the first and third diagrams in figure 1 look very similar. In fact, they're the same, but simply with different propositions proven. And this raises something: it can be very difficult to tell the difference between an incomplete paradox diagram and a completed proof diagram. The difference between the two is whether you've tried to find an argument for the opposite of the proposition proven and, if you do find one, whether you've managed to invalidate that argument. What this means is, if you're not confident that your proof for a proposition is true, you can't be sure that you've taken all of the appropriate steps to establish its truth until you've asked: Is my graph complete?

• So my presumption is that 4 points means this article isn't hopeless—it hasn't attracted criticism, some people have upvoted it—but isn't of a LW standard—it hasn't been voted highly enough, there is only 1 comment engaging with the topic. Is anyone able to give me a sense as to why it isn't good enough? Should the topic necessarily be backed up by peer-reviewed literature? Is it just not a big enough insight? Is it the writing?
Is it the lack of specific examples noted by Gwern? Is it too similar to other ideas? And so on. I hope I'm not bugging people by trying to figure it out, but I'm trying to get better at writing posts without filling the main bit of Less Wrong with uninteresting stuff, and this seemed like a less intrusive way to do it. I also feel like the best way to improve isn't simply reading the posts but involves actually trying to write posts and (hopefully) getting feedback. Thanks

• I tried composing a response a day or two ago, but had difficulty finding the words. In a nutshell, I thought you should start with the last two paragraphs and boil that down to a coherent and specific claim. Then write an entirely new essay that puts that claim at the top, in an introductory/summary paragraph. The rest of the post should be spent justifying and elaborating on the claim directly and clearly, without talking about gnomes or deploying the fallacy of equivocation on the sly, but hopefully with citation to peer-reviewed evidence and/or more generally accessible works about reasoning (like from a book).

• Thanks for the comment. That's really helpful. So I should basically start with the idea, present it more clearly (no gnomes) and try to provide peer-reviewed evidence or at least some support.

• I like this, but in Good and Real, Drescher's paradigm works because he then supplies a few examples where he invalidates a paradox-causing argument, and then goes on to apply this general approach. Aside from your gnome hypothetical example, where do you actually check that your graph is complete?

• 3 Aug 2010 7:43 UTC 0 points Parent

I think that you're asking when you would check that your graph is complete in a real-world case; sorry if I misunderstood. If so, take the question of whether global warming is anthropogenic.
There are people who claim to have evidence that it is and people who claim to have evidence that it isn't, so the basic diagram that we have for this case is a paradox diagram similar to that in figure 2 of the article above. Now there are a number of possible responses to this: some people could be stuck on the paradox diagram and be unsure as to the right answer, some people may have invalidated one or the other side of the argument and may have decided one or the other claim is true, and some may be adding more and more proofs to one side or the other—countering rather than invalidating.

I think there's also a fourth group whose belief graph will look the same as those who have invalidated one side and have hence reached a conclusion. However, these will be people who, while they may technically know that arguments exist for the negation of their belief, have not taken opposing notions into account in their belief graph. So to them, it will look like a graph demonstrating the truth of their belief but, in fact, it's simply an incomplete paradox graph, and they have some distance to go to figure out the truth of the matter.

So to summarise: I think there are people on both sides of the anthropogenic global warming debate who know on one level that purported proofs against their beliefs exist but who don't factor these into their belief graphs. I think they could benefit from asking themselves whether their graph is complete.

I should mention that this particular case isn't what motivated the post—in some ways I worry that by providing specific examples people stop judging an idea on its merit and start judging it based on their beliefs regarding the example mentioned and how they feel this is meant to tie in with the idea. Regardless, I could be mistaken.
Is it considered a good idea to always provide real-world examples in LW posts on rationality techniques?

Or if you meant a more personal example: at my work there's currently a debate over whether a proposed electronic system will work. I'm one of the few people who thinks it won't (and I have some arguments to support that), but I haven't invalidated any arguments that show it will work—I simply haven't come across any such arguments. But it's a circumstance where I might benefit from asking: is my graph complete?

As a side note, I think the technique can also be extended to other circumstances. For example, some aspects of Eliezer's Guessing the Teacher's Password could be modelled by a "Password Graph," a graph like those above but where the truth of both A and not-A go through the same proof (say P1, for example). If you have a proof for A, then you could ask if you have an incomplete Password Graph because, if so, you could be in trouble. So you could extend the circumstances where the question applies by asking if you have completed any of a number of graphs. Of course, doing so comes at the cost of simplicity.

• Suppose you know from good sources that there is going to be a huge catastrophe in the very near future, which will result in the near-extermination of humanity (but the natural environment will recover more easily). You and a small group of ordinary men and women will have to restart from scratch. You have a limited time to compile a compendium of knowledge to preserve for the new era. What is the most important knowledge to preserve? I am humbled by how poorly my own personal knowledge would fare.

• I suspect that people are overestimating in their replies how much could be done with Wikipedia.
People in general underestimate a) how much technology requires bootstrapping (metallurgy is a great example of this), and b) how much many technologies, even primitive ones, require large populations so that specialization, locational advantages, and comparative advantage can kick in. (People even in not very technologically advanced cultures have had tech levels regress when they settle large islands or when their locations get cut off from the mainland. Tasmania is the classical example of this: the inability to trade with the mainland caused large drops in tech level.)

So while Wikipedia makes sense, it would also be helpful to have a lot of details on do-it-yourself projects that could use pre-existing remnants of existing technology. There are a lot of websites and books devoted to that topic, so that shouldn't be too hard. If we are reduced to a small population, we may also need to focus on getting through the first one or two generations with an intact population. That means that a handful of practical books on field surgery, midwifing, and similar basic medical issues may become very necessary.

Also, when you specify "ordinary men and women," do you mean people who all speak the same language? And do you mean by "ordinary" roughly developed-world countries? That's what many people seem to mean when questions like this are proposed. They could alter things considerably. For example, if it really is a random sample, then inter-language dictionaries will be very important.
But, if the sample involves some people from the developing world, they are more likely to have some of the knowledge base for working in a less technologically advanced situation that people in the developed world will lack (even this may only be true to a very limited extent, because the tech level of the developing world is in many respects very high compared to the tech level of humans for most of human history. Many countries described as developing world are in better shape than, for example, much of Europe in the Middle Ages).

• "how much technology requires bootstrapping (metallurgy is a great example of this)"

I would love to see a reality TV show about a metallurgy expert making a knife or other metal tool from scratch. The expert would be provided food and shelter but would have no equipment or materials for making metal, and so would have to find and dig up the ore themselves, build their own oven, and do whatever else you would have to do to make metal if you were transported to the stone age.

• One problem you would face with such a show is if the easily-available ore is gone.

• Yes, this is in fact connected to a general problem that Nick Bostrom has pointed out: each time you try to go back from stone age tech to modern tech, you use up resources that you won't have the next time. However, for purposes of actually getting back to high levels of technology rather than having a fun reality show, we've got a few advantages. One can use the remaining metal that is in all the leftover objects from modern civilization (cars being one common easy source of a number of metals). Some metals are actually very difficult to extract from ore (aluminum is the primary example of this.
Until the technologies for extraction were developed, it was expensive and had almost no uses), whereas the ruins of civilization will have those metals in near-pure forms if one knows where to look.

• The argument that no one person on the face of the Earth knows how to build a mouse from scratch is plausible. Matt Ridley

• He [Pratchett] has to hide it from the authorities.

• Maps. Locations of pre-disaster settlements to be used as supply caches. Locations of structures to be used for defense. Locations of physical resources for ongoing exploitation: water, fisheries, quarries. Locations of no-travel zones to avoid pathogens.

• In rough order of addition to the corpus of knowledge:

1. Basic survival skills (e.g. navigation).
2. Edit: Basic agriculture (e.g. animal husbandry, crop cultivation).
3. Calculus.
4. Classical mechanics.
5. Basic chemistry.
6. Basic medicine.
7. Basic political science.

• Basic sanitation!

• Yes! Insert sanitation between 3 and 4, and insert construction (e.g. whittling, carpentry, metal casting) between sanitation and 3.

• For survival skills, I'd suggest buying this one before the disaster, while there's still internet.

• Presupposing that only a limited amount of knowledge could be saved seems wrong. You could bury petabytes of data in digital form, then print out a few books' worth of hints for getting back to the technology level necessary to read it.

• If the resources for printing are still handy. I don't feel comfortable counting on that at present levels of technology.

• 2 Aug 2010 6:40 UTC 2 points Parent

A dead tree copy of Wikipedia. A history book about ancient handmade tools and techniques from prehistory to now. A bunch of K-12 school books about math and science. Also as many various undergraduate and postgraduate level textbooks as possible.
• Wikipedia is a great answer because we know that most, but not all, of the information is good. Some is nonsense. This will force the future generations to question and maybe develop their own 'science' rather than worship the great authority of 'the old and holy books.'

• The knowledge about science issues generally tracks our current understanding very well. And historical knowledge that is wrong will be extremely difficult for people to check after an apocalyptic event, and even then it is largely correct. In fact, if Wikipedia's science content really were bad enough to matter, it would be an awful thing to bring into this situation, since having correct knowledge or not could alter whether or not humanity survives at all.

• Wikipedia would also contain a lot of info about current people and places, which would no longer be remotely useful.

• And a lot of popular culture which would no longer be available.

• A dead-tree copy of Wikipedia has been estimated at around 1,420 volumes. Here's an illustration, with a human for scale. It's big. You might as well go for broke and hole up in a library when the Big Catastrophe happens.

• One of these http://thewikireader.com/ with rechargeable batteries and a solar charger could work.

• Until some critical part oxidizes or otherwise breaks. Which will likely be a long time before the new society is able to build a replacement.

• But the WikiReader is probably a step in the right direction that is worth mentioning. While most current technology depends on much other technology to be useful (cellular phones need cellular networks, most gadgets won't last a day on their internal batteries, etc.), the WikiReader is a welcome step in the direction less travelled. I only hope that we will have more of that.
• Let's examine the problem in more detail: different disaster scenarios would require different pieces of information, so it would help if you knew exactly what kind of catastrophe. However, if you can preserve a very large compendium of knowledge, then you can create a catalogue of necessary information for almost every type of doomsday scenario (nuclear war, environmental catastrophe, etc.) so that you will be prepared for almost anything. If the amount of information you can save is more limited, then you should save the pieces of information that are the most likely to be useful in any given scenario in "catastrophe-space." Now we have to go about determining what these pieces of information are. We can start by looking at the most likely doomsday scenarios—Yoreth, since you started the thread, what do you think the most likely ones are?

• I suppose, perhaps, an asteroid impact or nuclear holocaust? It's hard for me to imagine a disaster that wipes out 99.999999% of the population but doesn't just finish the job. The scenario is more a prompt to provoke examination of the amount of knowledge our civilization relies on. (What first got me thinking about this was the idea that if you went up into space, you would find that the Earth was no longer protected by the anthropic principle, and so you would shortly see the LHC produce a black hole that devours the Earth. But you would be hard pressed to restart civilization from a space station, at least at current tech levels.)

• The other problem is this: if there is a disaster that wipes out such a large percentage of the Earth's population, the few people who did survive it would probably be in very isolated areas and might not have access to any of the knowledge we've been talking about anyway. Still, it is interesting to look at what knowledge our civilization rests on.
It seems to me that a lot of the infrastructure we rely on in our day-to-day lives is "irreducibly complex"—for example, we know how to make computers, but this is not a necessary skill in a disaster scenario (or our ancestral environment).

• "the idea that if you went up into space, you would find that the Earth was no longer protected by the anthropic principle, and so you would shortly see the LHC produce a black hole that devours the Earth."

I am not following this. Why would the anthropic principle no longer apply if you went into space?

• I think it's a quantum immortality argument. If you, the observer, are no longer on Earth, the Earth can be destroyed because its destruction no longer necessitates your death.

• How to start a fire using only sticks. How to make a cutting blade from rocks. How to create a bow, and make arrows. Basic sanitation.

• That seems like advice for living in the woods—not a bad idea, but it probably needs to be adjusted for different environments (finding water in dry land, staying warm in extreme cold, etc.) and especially for scavenging from ruins. Any thoughts about people skills you'd need after the big disaster?

• I thought about those a bit, but came to a few conclusions that made sense to me. Being in a very dry land is simply a bad idea; best to move. Any group of survivors that is more than three days from fresh water won't be survivors, and once they've made it to the fresh water source there won't be many reasons to stray far from it for at least a couple of generations, so water-finding skills will probably not be useful and will be quickly lost. Staying warm in extreme cold would be covered both by the fire-starting skills and the bow-making skills. I wanted to put something about people skills, but I don't have any myself and didn't know what I could possibly say that would be remotely useful.
Hopefully someone with more experience on that subject will survive as well. :)

• I'm tempted to say "a university library" as the short answer. More specifically, whatever I could get from the science and engineering departments. Pick the classic works in each field if you have someone to filter them. Look for stuff that's more universal than specific to the way we've done things—in computing terms, you want The Art of Computer Programming and not The C Programming Language. In the short term, anything you can find on farming and primitive medicine—all the stuff the better class of survivalist would have on their bookshelf.

• I only need one item: The Holy Bible (kidding)

• Depends what level you want to achieve post-catastrophe; some, if not most, of your resources and knowledge will be needed to deal with specific effects. In short, your suitcase will be full of survivalist and medical material. In a thought experiment where you freeze yourself until the ecosystem is restored, you can probably use an algorithm of taking the best library materials from each century, corrected for errors, to achieve the level of that century. Both Robinson Crusoe and Jules Verne's "Mysterious Island" explore similar bootstrapping scenarios; interestingly, both use some "outside injections."

• There's an idea I've seen around here on occasion to the effect that creating and then killing people is bad, so that, for example, you should be careful that when modeling human behavior your models don't become people in their own right. I think this is bunk. Consider the following:

Suppose you have an uploaded human, and fork the process. If I understand the meme correctly, this creates an additional person, such that killing the second process counts as murder.
Does this still hold if the two processes are not made to diverge; that is, if they are deterministic (or use the same pseudorandom seed) and are never given differing inputs?

Suppose that instead of forking the process in software, we constructed an additional identical computer, set it on the table next to the first one, and copied the program state over. Suppose further that the computers were cued up to each other so that they were not only performing the same computation, but executing the steps at the same time as each other. (We won't readjust the sync on an ongoing basis; it's just part of the initial conditions, and the deterministic nature of the algorithm ensures that they stay in step after that.)

Suppose that the computers were not electronic, but insanely complex mechanical arrays of gears and pulleys performing the same computation—emulating the electronic computers at reduced speed, perhaps. Let us further specify that the computers occupy one fewer spatial dimension than the space they're embedded in, such as flat computers in 3-space, and that the computers are pressed flush up against each other, corresponding gears moving together in unison. What if the corresponding parts (which must be staying in sync with each other anyway) are superglued together? What if we simply build a single computer twice as thick? Do we still have two people?

No, of course not. And, on reflection, it's obvious that we never did: redundant computation is not additional computation.

So what if we cause the ems to diverge slightly? Let us stipulate that we give them some trivial differences, such as the millisecond timing of when they receive their emails. If they are not actively trying to diverge, I anticipate that this would not make much difference to them in the long term—the ems would still be, for the most part, the same person.
Do we have two dis­tinct peo­ple, or two mostly re­dun­dant peo­ple—per­haps one and a tiny frac­tion, on ag­gre­gate? I think a lot of peo­ple will be tempted to an­swer that we have two. But con­sider, for a mo­ment, if we were not talk­ing about peo­ple but—say—works of liter­a­ture. Two very similar sto­ries, even if by a raw diff they share al­most no words, are of not much more value than only one of them. The at­ti­tude I’ve seen seems to treat peo­ple as a spe­cial case—as a sep­a­rate mag­is­terium. -- I wish to as­sert that this value sys­tem is best mod­eled as a be­lief in souls. Not im­mor­tal souls with an af­ter­life, you un­der­stand, but mor­tal souls, that are cre­ated and de­stroyed. And the world sim­ply does not work that way. If you re­ally be­lieved that, you’d try to cause global ther­monu­clear war, in or­der to pre­vent the birth of billions or more of peo­ple who will in­evitably be kil­led. It might take the heat death of the uni­verse, but they will die. • You make good points. I do think that mul­ti­ple in­de­pen­dent iden­ti­cal copies have the same moral sta­tus as one. Any­thing else is go­ing to lead to ab­sur­di­ties like those you men­tioned, like the idea of cut­ting a me­chan­i­cal com­puter in half and dou­bling its moral worth. I have for a while had a feel­ing that the moral value of a be­ing’s ex­is­tence has some­thing to do with the amount of unique in­for­ma­tion gen­er­ated by its mind, re­sult­ing from its in­ner emo­tional and in­tel­lec­tual ex­pe­rience. (Where “has some­thing to do with” = it’s some­where in the for­mula, but not the whole for­mula.) If you have 100 iden­ti­cal copies of a mind, and you delete 99 of them, you have not lost any in­for­ma­tion. If you have two slightly di­ver­gent copies of a mind, and you delete one of them, then that’s bad, but only as bad as de­stroy­ing what­ever in­for­ma­tion ex­ists in it and not the other copy. 
Abortion doesn't seem to be a bad thing (apart from any pain caused; that should still be minimized) because a fetus's brain contains almost no information not compressible to its DNA and environmental noise, neither of which seems to be morally valuable. Similarly with animals; it appears many animals have some inner emotional and intellectual experience (to varying degrees), so I consider deleting animal minds and causing them pain to have terminal negative value, but not nearly as great as doing the same to humans. (I also suspect that a being's value has something to do with the degree to which its mind's unique information is entangled with and modeled (in lower resolution) by other minds, à la I Am a Strange Loop.) • I think… there's more to this wrongness-feeling I have than I've expressed. I would readily subject a million forks of myself to horrific suffering for the moderate benefit of just one of me. The main reason I'd have reservations about releasing myself on the internet for anyone to download would be because they could learn how to manipulate me. The main problem I have with slavery and starvation is that they're a waste of human resources, and that monolithic power structures are brittle against black swans. In short, I don't consider it a moral issue what algorithm is computed to produce a particular result. I'm not sure how to formalize this properly. • 24 Aug 2010 4:06 UTC 3 points Some hobby Bayesianism. A typical challenge for a rationalist is that there is some claim X to be evaluated; it seems preposterous, but many people believe it. How should you take account of this when considering how likely X is to be true? I'm going to propose a mathematical model of this situation and discuss two of its features.
This is based on a con­tin­u­ing dis­cus­sion with Un­knowns, who I think dis­agrees with what I’m go­ing to pre­sent, or with its rele­vance to the “typ­i­cal challenge.” Sum­mary: If you learn that a pre­pos­ter­ous hy­poth­e­sis X is be­lieved by many peo­ple, you should not cor­rect your prior prob­a­bil­ity P(X) by a fac­tor larger than the re­cip­ro­cal of P(Y), your prior prob­a­bil­ity for the hy­poth­e­sis Y = “X is be­lieved by many peo­ple.” One can de­duce an es­ti­mate of P(Y) from an es­ti­mate of the quan­tity “if I already knew that at least n peo­ple be­lieved X, how likely it would be that n+1 peo­ple be­lieved X” as a func­tion of n. It is not clear how use­ful this method of es­ti­mat­ing P(Y) is. The right way to un­pack “X seems pre­pos­ter­ous, but many be­lieve it” math­e­mat­i­cally is as fol­lows. We have a very low prior prob­a­bil­ity P(X), and then we have new ev­i­dence Y = “many peo­ple be­lieve X”. The prob­lem is to eval­u­ate P(X|Y). One way to phrase the typ­i­cal challenge is “How much larger than P(X) should P(X|Y) be?” In other words, how large is the ra­tio P(X|Y)/​P(X)? Bayes for­mula im­me­di­ately says some­thing in­ter­est­ing about this: P(X|Y)/​P(X) = P(Y|X)/​P(Y) More­over, since P(Y|X) < 1, the right-hand side of that equa­tion is less than 1/​P(Y). My in­ter­pre­ta­tion of this: if you want to know how se­ri­ously to take the fact that many peo­ple be­lieve some­thing, you should con­sider how likely you find it that many peo­ple would be­lieve it ab­sent any ev­i­dence. Or a lit­tle more pre­cisely, how likely you find it that many peo­ple would be­lieve it if the amount of ev­i­dence available to them was un­known to you. You should not cor­rect your prior for X by more than the re­cip­ro­cal of this prob­a­bil­ity. Com­ment: how much less than 1 P(Y|X) is de­pends on the na­ture of X. 
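The bound in the parent can be made concrete with a few lines of arithmetic; the probabilities in this sketch are invented illustration values, not estimates of anything:

```python
# Numerical illustration of the bound P(X|Y)/P(X) <= 1/P(Y).
# All three probabilities below are made-up values, chosen only to
# make the arithmetic concrete.

p_x = 1e-6          # prior for the preposterous claim X
p_y_given_x = 0.5   # chance many people would believe X, given X is true
p_y = 0.01          # prior that many people believe X (evidence unknown)

# Bayes' theorem: P(X|Y) = P(Y|X) * P(X) / P(Y)
p_x_given_y = p_y_given_x * p_x / p_y

# The update factor P(X|Y)/P(X) equals P(Y|X)/P(Y), and since
# P(Y|X) <= 1 it can never exceed 1/P(Y).
update_factor = p_x_given_y / p_x
print(update_factor, 1 / p_y)   # roughly 50 vs the ceiling of 100
```

Even with the generous assumption P(Y|X) = 0.5, the belief in X rises by a factor of fifty, well under the 1/P(Y) ceiling.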
For in­stance, if X is the claim “the Rie­mann hy­poth­e­sis is false” then it is un­clear to me how to es­ti­mate P(Y|X), but (since it is con­ceiv­able to me that RH is false, but still it is widely be­lieved) it might be quite small. If X is an ev­ery­day claim like “it’s a full moon to­mor­row”, or a spec­tac­u­lar claim like “Je­sus rose from the dead”, it seems like P(Y|X) is very close to 1. So some­times 1/​P(Y) is a good ap­prox­i­ma­tion to P(X|Y)/​P(X), but maybe some­times it is a big over­es­ti­ma­tion. What about P(Y)? Is there a way to es­ti­mate it, or at least ap­proach its es­ti­ma­tion? Let’s give our­selves a lit­tle more to work with, by quan­tify­ing “many peo­ple” in “many peo­ple be­lieve X”. Let Y(n) be the as­ser­tion “at least n peo­ple be­lieve X.” Note that this model doesn’t spec­ify what “be­lieve” means—in par­tic­u­lar it does not spec­ify how strongly n peo­ple be­lieve X, nor how smart or ex­pert those n peo­ple are, nor where in the world they are lo­cated… if there is a se­ri­ous weak­ness in this model it might be found here. Another ap­pli­ca­tion of Bayes the­o­rem gives us P(Y(n+1))/​P(Y(n)) = P(Y(n+1)|Y(n)) (Since P(Y(n)|Y(n+1)) = 1, i.e. if we know n+1 peo­ple be­lieve X, then of course n peo­ple be­lieve X). Squint­ing a lit­tle, this gives us a for­mula for the deriva­tive of the log­a­r­ithm of P(Y(n)). Yud­kowsky has sug­gested nam­ing the log of a prob­a­bil­ity an “ab­sur­dity,” let’s write A(Y(n)) for the ab­sur­dity of Y(n). d/​dn A(Y(n)) = A(Y(n+1)|Y(n)) So up to an ad­di­tive con­stant A(Y(n)) is the in­te­gral from 1 to n of A(Y(m+1)|Y(m))dm. So an ansatz for P(Y(n+1)|Y(n)) = exp(A(Y(n+1)|Y(n)) will al­low us to say some­thing about P(Y(n)), up to a mul­ti­plica­tive con­stant. 
The shape of P(Y(n+1)|Y(n)) seems like it could have a lot to do with what kind of state­ment X is, but there is one thing that seems likely to be true no mat­ter what X is: if N is the to­tal pop­u­la­tion of the world and n/​N is close to zero, then P(Y(n+1)|Y(n)) is also close to zero, and if n/​N is close to one then P(Y(n+1)|Y(n)) is also close to one. I might work out an ex­am­ple ansatz like this in a fu­ture com­ment, if this one stands up to scrutiny. • Here is my pro­posal for an ansatz for P(Y(n+1)|Y(n)). That is, given that at least n peo­ple already be­lieve X, how likely it is that at least one more per­son also be­lieves X. Let N be the to­tal pop­u­la­tion of the world. If n/​N is close to zero, then I ex­pect P(Y(n+1)|Y(n)) is also close to zero, and if n/​N is close to 1, then P(Y(n+1)|Y(n)) is also close to 1. That is, if I know that a tiny pro­por­tion of peo­ple be­lieve some­thing, that’s very weak ev­i­dence that a slightly larger pro­por­tion be­lieve it also, and if I know that al­most ev­ery­one be­lieves it, that’s very strong ev­i­dence that even more peo­ple be­lieve it. One fam­ily of func­tions that have this prop­erty are the func­tions f(n) = (n/​N)^C, where C is some fixed pos­i­tive num­ber. Ac­tu­ally it’s con­ve­nient to set C = c/​N where c is some other fixed pos­i­tive num­ber. I don’t have a story to tell about why P(Y(n+1)|Y(n)) should be­have this way, I bring it up only be­cause f(n) does the right thing near 1 and N, and is pretty sim­ple. To eval­u­ate P(Y(n)), we take the in­te­gral of (c/​N)log(t/​N)dt from 1 to n, and ex­po­nen­ti­ate it. The re­sult is, up to a mul­ti­plica­tive constant exp(c times (x log x—x)) = (x/​e)^(cx) where x = n/​N. I think it’s a good idea to leave this as a func­tion of x. Write K for the mul­ti­plica­tive con­stant. We have P(Pro­por­tion x of the pop­u­la­tion be­lieves X) = K(x/​e)^(cx). 
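As a sanity check on the integration in the parent comment, one can compare the closed form against a direct numerical integration; this sketch assumes the same ansatz, with arbitrary values for N and c:

```python
import math

# Numerical check of the ansatz above: with P(Y(n+1)|Y(n)) = (n/N)^(c/N),
# integrating the log-probability from 1 to n and exponentiating should
# match (x/e)^(cx) up to a multiplicative constant, where x = n/N.
# N and c are arbitrary illustration values.

N, c = 10_000, 1.0

def closed_form(n):
    x = n / N
    return (x / math.e) ** (c * x)

def by_integration(n, steps=100_000):
    # trapezoid rule for the integral of (c/N)*log(t/N) dt over [1, n]
    a, b = 1.0, float(n)
    h = (b - a) / steps
    f = lambda t: (c / N) * math.log(t / N)
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, steps))
    return math.exp(total * h)

# If the derivation is right, this ratio is the same constant for every n
# -- the multiplicative constant K absorbed in the parent comment.
r1 = by_integration(2_000) / closed_form(2_000)
r2 = by_integration(7_000) / closed_form(7_000)
print(r1, r2)
```

The two ratios agree to several decimal places, confirming that the closed form differs from the integral only by a constant factor.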
A graph of this function for K = 1, c = 1 can be found here and a graph of its reciprocal (whose relevance is explained in the parent) can be found here • It's an interesting analysis—have you confirmed the appearance of that distribution with real-world data? I suppose you'd need a substantial body of factual claims about which statistical information is available... • Thanks. I of course have no data, although I think there are lots of surveys done about weird things people believe. But even if this is the correct distribution, I think it would be difficult to fit data to it, because I would guess/worry that the constants K and c would depend on the nature of the claim. (c is so far just an artifact of the ansatz. K is something like P(Y(1)|Y(0)). Different for bigfoot than for Christianity.) Do you have any ideas? • 11 Aug 2010 6:38 UTC 3 points Where should the line be drawn regarding the status of animals as moral objects/entities? E.g., do you think it is ethical to boil lobsters alive? It seems to me there is a full spectrum of possible answers: at one extreme only humans are valued, or only primates, only mammals, only vertebrates, or at the other extreme, any organism with even a rudimentary nervous system (or any computational, digital isomorphism thereof) could be seen as a moral object/entity. Now this is not necessarily a binary distinction; if shrimp have intrinsic moral value it does not follow that they must have an equal value to humans or other 'higher' animals. As I see it, there are two possibilities: either we come to a point where the moral value drops to zero, or else we decide that entities approach zero to some arbitrary limit: e.g. a C. elegans roundworm with its 300 neurons might have a 'hedonic coefficient' of 3x10^-9.
I personally favor the former; the latter just seems absurd to me, but I am open to arguments or any comments/criticisms. • Suppose sentient beings have intrinsic value in proportion to how intensely they can experience happiness and suffering. Then the value of invertebrates and many non-mammal vertebrates is hard to tell, while any mammal is likely to have almost as much intrinsic value as a human being, some possibly even more. But that's just the intrinsic value. Humans have a tremendously greater instrumental value than any non-human animal, since humans can create superintelligence that can, with time, save tremendous numbers of civilisations in other parts of the universe from suffering (yes, they are sparse, but with time our superintelligence will find more and more of them, in theory ultimately infinitely many). The instrumental value of most humans is enormously higher than the intrinsic value of the same persons—given that they do sufficiently good things. • As I see it, there are two possibilities: either we come to a point where the moral value drops to zero, or else we decide that entities approach zero to some arbitrary limit: e.g. a C. elegans roundworm with its 300 neurons might have a 'hedonic coefficient' of 3x10^-9. I personally favor the former; the latter just seems absurd to me, but I am open to arguments or any comments/criticisms. Less absurd than that some organism is infinitely more valuable than its sibling that differs in lacking a single mutation (in the case of the first organism of a particular species to have evolved "high" enough to have minimal moral value)? • My answer: if it shows signs of not wanting something to happen, such as avoiding a situation, it's best not to have it happen. Of course, simple stimulus response doesn't count, but if an animal can learn, it shouldn't be tortured for fun.
This only ap­plies to an­i­mals, though. I’m not sure about ma­chines. • There isn’t a very mean­ingful dis­tinc­tion be­tween an­i­mals and ma­chines. What does or doesn’t count as a “sim­ple stim­u­lus re­sponse”? Or learn­ing? • Okay, more de­tails: if an an­i­mal’s be­hav­ior changes when it’s re­peat­edly in­jured, it can learn. And learn­ing is goal-ori­ented. But if it always does the same thing in the same situ­a­tion, what­ever that ac­tion is, it doesn’t cor­re­spond to a de­sire. And the rea­son why this is im­por­tant for an­i­mals is that I as­sume that what­ever it is that suffer­ing is, I guess that it evolved quite long ago. After all, avoid­ing in­jury is a big part of the point of hav­ing a brain that can learn. • I’ve pro­grammed a robot to be­have in the way you de­scribe, treat­ing bright lights as painful stim­uli. Was test­ing it im­moral? • That’s why I said it’s hairier with ma­chines. Um, ac­tual pain or just di­su­til­ity? • That would de­pend pretty heav­ily on how you define pain. This is a good ques­tion; my first in­stinct was to say that they’re the same thing, but it’s not quite that sim­ple. Pain in an­i­mals is re­ally just an in­ac­cu­rate sig­nal of per­ceived di­su­til­ity. The robot’s code con­tained a func­tion that “pun­ished” states in which its pho­tore­cep­tor was highly stim­u­lated, and the robot made changes to its be­hav­ior in re­sponse, but I’m re­ally not sure if that’s equiv­a­lent to an­i­mal pain, or where ex­actly that line is. • Pain has been the topic of a top-level post. I think my own com­ment on that thread is rele­vant here. • Ahh, I hadn’t seen that be­fore. Thanks for the link. So, did my robot ex­pe­rience suffer­ing then? Or is there some broader cat­e­gory of nega­tive stim­u­lus that in­cludes both suffer­ing and the pun­ish­ment of states in which cer­tain vari­ables are above cer­tain thresh­olds? I think it’s pretty clear that the robot didn’t ex­pe­rience pain, but I’m still con­fused. 
• Say a "catalytic pattern" is something like scaffolding, an entity that makes it easier to create (or otherwise obtain) another entity. An "autocatalytic pattern" is a sort of circular version of that, where the existence of an instance of the pattern acts as scaffolding for creating or otherwise obtaining another instance. Autocatalysis is normally mentioned in the "origin of life" scientific field, but it also applies to cultural ratchets. An autocatalytic social structure will catalyze a few more instances of itself (frequently not expanding without end—rather, a niche is filled), and then the population has some redundancy and recoverability, acting as a ratchet. For example, driving on the right (left) in one region catalyzes driving on the right (left) in an adjacent region. Designing circular or self-applicable entities is kind of tricky, but it's not as tricky as it might be—often, there's an attraction basin around a hypothesized circular entity, where X catalyzes Y which is very similar to X, and Y catalyzes Z which is very similar to Y, and so focusing your search sufficiently, and then iterating or iterating-and-tweaking, can often get the last, trickiest steps. Douglas Hofstadter catalyzed the creation (by Lee Sallows) of a "Pangram Machine" that exploits this attraction basin to create a self-describing sentence that starts "This Pangram contains four as, [...]"—see http://en.wikipedia.org/wiki/Pangram Has there been any work on measuring, studying attraction basins around autocatalytic entities? • Has there been any work on measuring, studying attraction basins around autocatalytic entities? I don't know of any work on the question, but it's a good topic. Nations seem to be autocatalytic. • Would people be interested in a place on LW for collecting book recommendations?
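The iterating-and-tweaking search described above can be sketched in miniature. This toy uses an invented template and digit numerals, which (unlike the spelled-out number words in Sallows' actual pangram) contain none of the counted letters, so the iteration falls into its fixed point almost immediately:

```python
# A toy version of the iterate-and-check search for a self-describing
# sentence.  The template is invented for illustration; digit numerals
# make the feedback loop easy, since digits contain no counted letters.

def render(counts):
    # Build the sentence that *claims* the given letter counts.
    parts = ", ".join(f"{counts[ch]} {ch}'s" for ch in sorted(counts))
    return f"this sentence contains {parts}"

def step(counts):
    # One catalytic step: X (the old claim) produces Y (the sentence),
    # whose actual letter counts become the next claim.
    s = render(counts)
    return {ch: s.count(ch) for ch in counts}

letters = "cent"
counts = {ch: 0 for ch in letters}
for _ in range(50):
    new = step(counts)
    if new == counts:          # fixed point: the claim matches reality
        break
    counts = new

sentence = render(counts)
print(sentence)
```

With spelled-out number words the counts feed back into themselves, and the same loop may orbit a cycle instead of settling—that is the genuinely tricky last step the comment alludes to.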
I’m read­ing The Logic of Failure and en­joy­ing it quite a bit. I wasn’t sure whether I’d heard of it here, and I found Great Books of Failure, an ar­ti­cle which hadn’t crossed my path be­fore. There’s a re­cent thread about books for a gifted young tween which might or might not get found by some­one look­ing for good books..… and so on. Would it make more sense to have a top level ar­ti­cle for book recom­men­da­tions or put it in the wiki? Or both? • Con­sid­er­ing most of my fa­vorite books are the re­sult of men­tions in com­ment threads here, I’d say a book recom­men­da­tion thread is in or­der. Tan­gen­tal, but I re­mem­ber “Logic of Failure” to be mostly be­ing men­tal phe­nom­ena I was already fa­mil­iar with, and gen­er­al­iza­tions from com­puter ex­per­i­ments that I didn’t find par­tic­u­larly com­pel­ling. I’ll have to give it an­other look. • I liked the sec­tion near the be­gin­ning about the var­i­ous ways of be­ing bad at op­ti­miz­ing com­plex com­puter sce­nar­ios. It was a tidy de­scrip­tion of the ways peo­ple think too lit­tle about what they’re do­ing and/​or overfo­cus on the wrong things. Part of my en­joy­ment was see­ing those mat­ters de­scribed so com­pactly, and part of it was the emo­tional tone which com­bined a re­al­iza­tion that this is a se­ri­ous prob­lem with a to­tal lack of gloat­ing over other peo­ple’s idiocy. That last may in­di­cate that I’ve been spend­ing too much time on­line. If you didn’t no­tice any­thing new to you in the book the first time, there may not be a good rea­son for you to reread it. • I’d say new top-level thread. The wiki can get a cu­rated ver­sion of that. • I think I may have ar­tifi­cially in­duced an Ugh Field in my­self. A lit­tle over a week ago it oc­curred to me that per­haps I was think­ing too much about X, and that this was dis­tract­ing me from more im­por­tant things. So I re­solved to not think about X for the next week. 
Of course, I could not stop X from crossing my mind, but as soon as I noticed it, I would sternly think to myself, "No. Shut up. Think about something else." Now that the week's over, I don't even want to think about X any more. It just feels too weird. And maybe that's a good thing. • I have also artificially induced an Ugh Field in myself. A few months ago, I was having a horrible problem with websurfing procrastination. I started using Firefox for browsing and LeechBlock to limit (but not eliminate) my opportunities for websurfing instead of doing work. I'm on a Windows box, and for the first three days I disabled IE, but doing so caused knock-on effects, so I had to re-enable it. However, I knew that resorting to IE to surf would simply recreate my procrastination problem, so… I just didn't. Now, when the thought occurs to me to do so, it auto-squelches. • I predict with 95% confidence that within six months you will have recreated your procrastination problem with some other means. • Does anyone have any book recommendations for a gifted young teen? My nephew is 13, and he recently blew the lid off a school-administered IQ test. For his birthday, I want to give him some books that will inspire him to achieve great things and live a happy life full of hard work. At the very least, I want to give him some good math and science books. He has already taken algebra, geometry and introductory calculus, so he knows some math already. • Books are not enough. Smart kids are lonely. Get him into a good school (or other community) where he won't be the smartest one. That happened to me at 11 when I was accepted into Russia's best math school and for the first time in my life I met other people worth talking to, people who actually thought before saying words. Suddenly, to regain my usual position of the smart kid, I had to actually work hard. It was very, very important.
I still go to school re­unions ev­ery year, even though I finished it 12 years ago. • Alter­na­tively, not hav­ing any equally smart kids to talk to will force him to read books and/​or go on­line for in­ter­est­ing ideas and con­ver­sa­tion. I don’t think I had any re­ally in­ter­est­ing real-life con­ver­sa­tions un­til col­lege, when I did an in­tern­ship at Microsoft Re­search, and I’d like to think that I turned out fine. My fa­vorite book, BTW, is A Fire Upon the Deep. But one of the rea­sons I like it so much is that I was heav­ily into Usenet when I first read it, and I’m not sure that as­pect of the book will res­onate as much to­day. (I was de­ter­mined to be­come a one-man San­dor Ar­bi­tra­tion In­tel­li­gence. :) • You turned out fine, but if you had my back­ground (spend­ing a big chunk of your child­hood solv­ing math prob­lems and com­mu­ni­cat­ing the solu­tions ev­ery day), you’d con­vert way more of your de­ci­sion-the­ory ideas into small the­o­rems with con­clu­sive proofs, in­stead of leav­ing the low-hang­ing fruit to peo­ple like me. • Se­conded. Whether he’s ex­posed to a group of peo­ple who think ideas can be cool could be the biggest in­fluence on him for the rest of his life. • Thirded. My ex­pe­rience is that most schools can be very dam­ag­ing for smart kids. • Fo­rum fa­vorite Good and Real looks rea­son­ably ac­cessible to me, and cov­ers a lot of ground. Also sec­ond­ing Gödel, Escher Bach. The Math­e­mat­i­cal Ex­pe­rience has es­says about do­ing math­e­mat­ics, writ­ten by ac­tual math­e­mat­i­ci­ans. It seems like very good read­ing for some­one who might be con­sid­er­ing study­ing math. The Road to Real­ity has Roger Pen­rose try­ing to ex­plain all of mod­ern physics and the re­quired math­e­mat­ics with­out pul­ling any punches and start­ing from grade school math in a sin­gle book. 
Will probably cause a brain meltdown at some point in anyone who doesn't already know the stuff, but just having a popular-science-style book that nevertheless goes on to explain the general theory of relativity without handwaving is pretty impressive. Doesn't include any of Penrose's less fortunate forays into cognitive science and AI. Darwin's Dangerous Idea by Daniel Dennett explains how evolution isn't just something that happens in biology, but how it turns up in all sorts of systems. The Armchair Universe is an old book about "computer recreations", probably most famous for introducing the Core War game. The other topics are similar, setting up an environment with a simple program that has elaborate emergent behavior coming out of it. Assumes the reader might actually program the recreations themselves, and provides appropriate detail. Surely You're Joking, Mr. Feynman is pretty much entertainment, but still very good. Feynman is still the requisite trickster-god patron saint of math and science. Code: The Hidden Language of Computer Hardware and Software explains how computers are put together, starting from really concrete first principles (flashing Morse code with flashlights, mechanical relay circuits) and getting up to microprocessors, RAM and executable program code. • Good and Real is superb, but really too dry for a 13-year-old. I'd wait on that one. Surely You're Joking is also fantastic, but get it read and approved by your nephew's parents first; there are a few sexual stories with a hint of a PUA worldview. • I loved "The Mathematical Experience" when I was 13-ish, and I re-read it recently; still good! I strongly second this recommendation. • Thanks, I just ordered 'Darwin's Dangerous Idea' and 'Code: The Hidden Language of Computer Hardware and Software'. I've already got the others.
Here's a tidbit from 'The Mathematical Experience': In the 3,000 categories of mathematical writing, new mathematics is being created at a constantly increasing rate. The ocean is expanding, both in depth and in breadth. By multiplying the number of papers per issue and the average number of theorems per paper, their estimate came to nearly two hundred thousand theorems a year. If the number of theorems is larger than one can possibly survey, who can be trusted to judge what is 'important'? One cannot have survival of the fittest if there is no interaction. It is actually impossible to keep abreast of even the more outstanding and exciting results. How can one reconcile this with the view that mathematics will survive as a single science? In mathematics one becomes married to one's own little field. [...] The variety of objects worked on by young scientists is growing exponentially. [...] Only within the narrow perspective of a particular speciality can one see a coherent pattern of development. • I've ordered a copy, but on a second look, I'm not sure that the argument is sound, or even interesting. Biological evolution runs on the local non-survival of the least fit (and sometimes the unlucky), not on an overview-based evaluation of the fittest. • Peer review is the predator. But if the prey population is higher than can be sheltered by selection of promising ideas from nonsense, nonsense will prevail. That is, those people producing valuable results won't be favored over those that come up with marginal or wrong results. • Yes, that's exactly the kind of stuff I recommended The Mathematical Experience for.
It takes a bird's-eye view instead of going for the usual textbook minutiae, but still feels like it's talking about the actual practice of mathematics instead of something simplified to death for the benefit of popular audiences. • Wow, great list. Thanks! • Oh, oops—I intended my review of the above selections to show up on your replies, not Risto's. • Gödel, Escher, Bach! • knb, does your nephew know about lesswrong, rationality and the Singularity? I guess I would have enjoyed reading such a website when I was a teenager. When it comes to a physical book, Engines of Creation by Drexler can be a good way to introduce him to nanotechnology and what science can make happen. (I know that nanotech is far less important than FAI, but I think it is more "visual": you can imagine those nanobots manufacturing stuff or curing diseases, while you cannot imagine a hard takeoff.) Teenagers need dreams. • My sister and brother-in-law are both semi-religious theists, so I'm a bit reluctant to introduce him to anything as hardcore-atheist as Less Wrong, at least right now. Going through that huge theist-to-atheist identity transition can be really traumatic. I think it would be better if he was a bit older before he confronts those ideas. I was 16 before I really allowed myself to accept that I didn't believe in God, and that was still a major crisis for me. If he starts getting into hardcore rationality material this early, I'm afraid it could force a choice between rationality and wishful thinking that he may not be ready to make. • If he is gifted and interested in science, introducing him to lesswrong, rationality and the Singularity could have a substantial positive impact on his academic development. What would be the worst that could happen?
• My concern is not just that it would be traumatic, but that it will be so traumatic that he'll rationalize himself into a "belief in belief" situation. I had my crisis of faith when I was close to his age (14) and I wasn't ready to accept something that would alienate me from my family yet, so I simply told myself that I believed, and tried not to think about the issue. (I suspect this is why most people don't come out as atheists until after they've established separate identities from their parents and families. A lot of people never escape from these traps.) I think waiting—until he's somewhat older and more mature—will make him more likely to come to the right conclusions in the end. • I had rather the opposite experience—don't recall ever really believing (though I went to Catholic elementary school and semi-regularly attended a church), and was shocked in 8th grade to find that people were really serious about that stuff. Ended up spending a lot of time pointlessly arguing. • If I understand correctly, your primary concern is that he may rationalise himself into this "belief in belief" situation, and that this will ultimately delay or deter completely his transition into atheism. Why do you think this? Have there been any studies done to support this notion? I doubt the likelihood of learning about rationality and the Singularity inducing a crisis of faith is greater than that of most popular science books. • How is the above wrong enough to be at −2? I nearly universally reject any assertions that people have a duty to interfere with others, but even so I don't have a problem with the above. • "I nearly universally reject any assertions that people have a duty to interfere with others" As do I, hence "almost". I suppose I should edit the word out of my comment.
• If I understand correctly, your primary concern is that he may rationalise himself into this "belief in belief" situation, and that this will ultimately delay or deter completely his transition into atheism? "I suspect this is why most people don't come out as atheists until after they've established separate identities from their parents and families. A lot of people never escape from these traps."—What evidence do you have for thinking this? I would think that challenging religious assumptions at a younger age would result in an earlier transition to atheism (assuming one occurs). More importantly, the risk of rationality and the Singularity inducing a crisis of faith is no greater than that of any science and math book. Visit the science section of any major bookstore and bam—Dawkins. • The Heinlein juveniles. Have Space Suit—Will Travel and the others have the whole self-reliance, work-hard-and-achieve-things ethos strongly ingrained. I cannot judge how well they integrate with your current culture, but in the 50s they sold well, and still do. But those are not specific to über-bright kids, more for the normal bright types. If he hasn't done so yet, just introducing him to the next big library might help a lot. • Another all-purpose book: Bill Bryson's A Short History of Nearly Everything. It is not aimed at kids, but it is very accessible, well written, and deals with a lot of the history of science, including the ignoring of great achievements, misleading pathways and such. A great overview. • My dad's been trying to get me to read the Feynman Lectures for ages—the man's a good writer if your nephew would be interested by physics. • Get him a book of math contests. The Mandelbrot Problem Book is an excellent one. • You might also consider Raymond Smullyan's books of logic puzzles—I particularly recommend The Lady or the Tiger? as excellent.
• Does anyone know where the page that used to live here can be found? It was an experiment where two economists were asked to play a 100-turn asymmetric prisoner's dilemma, with communication on each turn to the experimenters, but not to each other. It was quite amusing in that, even though they were both economists and should have known better, the guy on the 'disadvantaged' side was attempting to have the other guy let him defect once in a while to make it "fair". • "Burton, for example, spent years on what should've been a straightforward project. Some CIA analysts work with a tool, 'Analysis of Competing Hypotheses,' to tease out what evidence supports (or, mostly, disproves) their theories. But the Java-based software is single-user—so there's no ability to share theories, or add in dissenting views. Burton, working on behalf of a Washington-area consulting firm with deep ties to the CIA, helped build on spec a collaborative version of ACH. He tried it out, using the JonBenet Ramsey murder case as a test. Burton tested 51 clues—the lack of a scream, evidence of bed-wetting—against five possible culprits. 'I went in, totally convinced it all pointed to the mom,' Burton says. 'Turns out, that wasn't right at all.'" • Far more interesting than the software is the chapter in the CIA book Psychology of Intelligence Analysis where they describe the method: Analysis of competing hypotheses, sometimes abbreviated ACH, is a tool to aid judgment on important issues requiring careful weighing of alternative explanations or conclusions. It helps an analyst overcome, or at least minimize, some of the cognitive limitations that make prescient intelligence analysis so difficult to achieve. ACH is an eight-step procedure grounded in basic insights from cognitive psychology, decision analysis, and the scientific method.
It is a surprisingly effective, proven process that helps analysts avoid common analytic pitfalls. Because of its thoroughness, it is particularly appropriate for controversial issues when analysts want to leave an audit trail to show what they considered and how they arrived at their judgment. Summary and conclusions: Three key elements distinguish analysis of competing hypotheses from conventional intuitive analysis. • Analysis starts with a full set of alternative possibilities, rather than with a most likely alternative for which the analyst seeks confirmation. This ensures that alternative hypotheses receive equal treatment and a fair shake. • Analysis identifies and emphasizes the few items of evidence or assumptions that have the greatest diagnostic value in judging the relative likelihood of the alternative hypotheses. In conventional intuitive analysis, the fact that key evidence may also be consistent with alternative hypotheses is rarely considered explicitly and often ignored. • Analysis of competing hypotheses involves seeking evidence to refute hypotheses. The most probable hypothesis is usually the one with the least evidence against it, not the one with the most evidence for it. Conventional analysis generally entails looking for evidence to confirm a favored hypothesis. • What's the policy on User pages in the wiki? Can I write my own for the sake of people having a reference when they reply to my posts, or are they only for somewhat accomplished contributors? • I can't imagine any reason why it would be a problem to make a User page. Go ahead. • I haven't seen any sort of policy articulated. I just sort of went for it, and haven't gotten any complaints yet.
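For the curious, the scoring rule quoted in the ACH excerpt above (keep the hypothesis with the least evidence against it, rather than the most evidence for it) can be sketched in a few lines. The clue and hypothesis names below are invented placeholders, loosely echoing the JonBenet test mentioned earlier:

```python
# Toy Analysis of Competing Hypotheses matrix: hypotheses are scored by the
# evidence that is *inconsistent* with them, not the evidence consistent
# with them. Clue and hypothesis names are invented for illustration.

evidence = {
    # clue -> set of hypotheses that the clue is inconsistent with
    "no scream heard":     {"intruder"},
    "forced entry absent": {"intruder"},
    "ransom note on site": {"accident"},
    "alibi for suspect A": {"suspect A"},
}
hypotheses = ["intruder", "accident", "suspect A", "suspect B"]

# Count the strikes against each hypothesis.
strikes = {h: sum(h in bad for bad in evidence.values()) for h in hypotheses}

# ACH keeps the hypothesis with the LEAST evidence against it.
best = min(hypotheses, key=strikes.get)
print(strikes)  # {'intruder': 2, 'accident': 1, 'suspect A': 1, 'suspect B': 0}
print(best)     # suspect B
```

Note that "suspect B" wins not because anything points to it, but because nothing rules it out, which is exactly the inversion of conventional confirmation-seeking that the excerpt describes.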
Personally, I'd love to see more people with wiki user pages, since the LW site itself doesn't have much in the way of profile features. • My default assumption has been that, unless otherwise stated, all the norms and conventions of Wikipedia apply to the LW wiki. The English Wikipedia, at least, lets you have one for any reason you want. • I made some comments on the recently-deleted threads that got orphaned when the whole topic was banned and the associated posts were taken down. Currently no-one can reply to the comments. They don't relate directly to the banned subject matter—and some of my messages survive despite the context being lost. Some of the comments were SIAI-critical—and it didn't seem quite right to me at the time for the moderator to crush any discussion about them. So, I am reposting some of them as children of this comment in an attempt to rectify things—so I can refer back to them, and so others can comment—if they feel so inclined: • [In the context of SIAI folks thinking an unpleasant AI was likely] The SIAI derives its funding from convincing people that the end is probably nigh—and that they are working on a potential solution. This is not the type of organisation you should trust to be objective on such an issue—they have obvious vested interests. • I've noticed this structural vulnerability to bias too—can you think of any structural changes that might reduce or eliminate it? Maybe SIAI ought to be offering a prize for substantially justified criticism of some important positional documents, as judged by some disinterested agent? • They are already getting some critical feedback. I think I made much the same points in my DOOM! video. DOOM mongers: • tend to do things like write books about THE END OF THE WORLD—which gives them a stake in promoting the topic …and...
• are a self-selected sample of those who think DOOM is very important (and so, often, highly likely)—so naturally they hold extreme views—and represent a sample from the far end of the spectrum; • clump together, cite each other's papers, and enjoy a sense of community based around their unusual views. It seems tricky for the SIAI to avoid the criticism that they have a stake in promoting the idea of DOOM—while they are funded the way they are. Similarly, I don't see an easy way of avoiding the criticism that they are a self-selected sample from the extreme end of a spectrum of DOOM beliefs either. If we could independently establish p(DOOM), that would help—but measuring it seems pretty challenging. IMO, a prize wouldn't help much—but I don't know for sure. Many people behave irrationally around prizes—so it is hard to be very confident here. I gather they are working on publishing some positional documents. That seems to be a not-unreasonable move. If there is something concrete to criticise, critics will have something to get their teeth into. • For the curious: DOOM! • They used to have a "commitment" that: "Technology developed by SIAI will not be used to harm human life." ...on their web site. I probably missed the memo about that being taken down. • [In the context of SIAI folks thinking an unpleasant AI was likely] Re: "The justification is that uFAI is a lot easier to make." That seems like naive reasoning. It is a lot easier to make a random mess of ASCII that crashes or loops—and yet software companies still manage to ship working products. • Those software companies test their products for crashes and loops. There is a word for testing an AI of unknown Friendliness, and that word is "suicide".
• That just seems to be another confusion to me :-( The argument—to the extent that I can make sense of it—is that you can't restrain a super-intelligent machine—since it will simply use its superior brainpower to escape from the constraints. We successfully restrain intelligent agents all the time—in prisons. The prisoners may be smarter than the guards, and they often outnumber them—and yet still the restraints are usually successful. Some of the key observations, to my mind, are: • You can often restrain one agent with many stupider agents; • The restraining agents do not need to be humans—they can be other machines; • You can often restrain one agent with a totally dumb cage; • Complex systems can often be tested in small pieces (unit testing); • Large systems can often be tested on a smaller scale before deployment; • Systems can often be tested in virtual environments, reducing the cost of failure. Discarding the standard testing-based methodology would be very silly, IMO. Indeed, it would sabotage your project to the point that it would almost inevitably be beaten—and there is very little point in aiming to lose. • Are you familiar with the AI-Box experiment? We can restrain human-intelligence-level agents in prisons, most of the time. But the question to ask is: how effective was the first prison? Because that's the equivalent case. None of the safety measures you propose are safe enough. You're underestimating the power of a recursively self-improving AI by a factor I can't begin to estimate—which is kind of the point. • A much stronger argument than all-powerful AIs suddenly escaping (which is still not without merit) is that an AI will have an incentive to behave as we expect it to behave, until at some point we no longer control it. It'll try its best to pass all tests.
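A minimal sketch of the "dumb cage" and virtual-environment ideas from the list of observations above. This is a toy, not a claim that the approach scales to a recursively self-improving agent (that is exactly what the AI-Box reply disputes); all names and numbers here are invented:

```python
# Toy "dumb cage": evaluate an untrusted agent only through a narrow
# interface, in a simulated environment, before granting anything real.
# Everything here is invented for illustration.

class CagedEnvironment:
    def __init__(self, max_steps=100):
        self.max_steps = max_steps
        self.log = []  # audit trail of every action the agent took

    def run(self, agent):
        state = 0
        for step in range(self.max_steps):
            action = agent(state)          # agent only sees/returns integers
            if action not in (-1, 0, 1):   # anything else trips the cage
                self.log.append((step, "violation", action))
                return "halted"
            self.log.append((step, "ok", action))
            state += action
        return "passed"

well_behaved = lambda s: 1 if s < 10 else 0
misbehaved = lambda s: 42  # tries an out-of-bounds action immediately

cage = CagedEnvironment()
print(cage.run(well_behaved))  # passed
print(cage.run(misbehaved))    # halted
```

The point of the sketch is the narrowness of the interface: the cage never executes anything the agent produces, it only inspects it. Whether an agent that is smarter than its evaluators would simply pass every such test until released is the crux of the disagreement in this thread.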
• So: while it believes it is under evaluation, it does its very best to behave itself? Can we wire that belief in as a prior with p=1.0? • I suppose I was mentally classifying that kind of behavior as an escape; you're right that it should be called out as a separate point of failure. • My point is that the "AI box experiment" communicates orders of magnitude less evidence about the danger of escaping AIs than people like to imply, and there are lots of stronger and simpler self-contained arguments, such as the one I gave. (The overall danger is much greater than even that, because these are specific plots with an obvious villain, while reality is more subtle.) • Ahhh, I see what you're getting at. Agreed. • For that matter, calling it an "experiment" is quite misleading. • It won't be the first prison—or anything like it. If we have powerful intelligence that needs testing, then we can have powerful guards too. The AI-Box experiment has human guards. Consequently, it has very low relevance to the actual problem. Programmers don't build their test harnesses out of human beings. Safety is usually an economic trade-off. You can usually have a lot of it—if you are prepared to pay for it. • "software companies still manage to ship working products." Software companies manage to ship products that do sort of what they want, that they can patch to more closely do what they want. This is generally after rounds of internal testing, in which they try to figure out if it does what they want by running it and observing the result. But an AGI, whether FAI or uFAI, will be the last program that humans get to write and execute unsupervised. We will not get to issue patches.
Or to put it another way, the revolution will not be beta tested. • That is one of the most chilling phrases I've ever heard. Disarming in its simplicity, yet downright Lovecraftian in its implications. And it would probably make a nice bumper sticker. • Revolutions never get beta tested. • "But an AGI, whether FAI or uFAI, will be the last program that humans get to write and execute unsupervised. We will not get to issue patches." In fiction, yes. Fictional technology appears overnight, works the first time without requiring continuing human effort for debugging and maintenance, and can do all sorts of wondrous things. In real life, the picture is very different. Real-life technology has a small fraction of the capabilities of its fictional counterpart, and is developed incrementally, decade by painfully slow decade. If intelligent machines ever actually come into existence, not only will there be plenty of time to issue patches, but patching will be precisely the process by which they are developed in the first place. • I agree somewhat with this as a set of conclusions, but your argument deserves to get downvoted because you've made statements that are highly controversial. The primary issue is that, if one thinks that an AI can engage in recursive self-improvement and can do so quickly, then once there's an AI that's at all capable of such improvement, the AI will rapidly move outside our control. There are arguments against such a possibility being likely, but this is not a trivial matter. Moreover, comparing the situation to fiction is unhelpful—just because something is common in fiction, that's not an argument that such a situation can't actually happen in practice. Reversed stupidity is not intelligence.
• "your argument deserves to get downvoted because you've made statements that are highly controversial" Did you accidentally pick the wrong adjective, or did you seriously mean that controversy is unwelcome in LW comment threads? • I read the subtext as "...you've made statements that are highly controversial without attempting to support them". Suggesting that there will be plenty of time to debug, maintain, and manually improve anything that actually fits the definition of "AGI" is a very significant disagreement with some fairly standard LW conclusions, and it may certainly be stated, but not as a casual assumption or a fact; it should be accompanied by an accordingly serious attempt to justify it. • No. See ata's reply, which summarizes exactly what I meant. • To be sure, the fact that something is commonplace in fiction doesn't prove it false. What it does show is that we should distrust our intuition on it, because it's clearly an idea to which we are positively disposed regardless of its truth value—in the Bayesian sense, that is evidence against it. The stronger argument against something is of course its consistent failure to occur in real life. The entire history of technological development says that technology in the real world does not work the way it would need to for the 'AI go foom' scenario. If 100% evidence against and 0% evidence for a proposition should not be enough to get us to disbelieve it, then what should? Not to mention that when you look at the structure of the notion of recursive self-improvement, it doesn't even make sense.
A machine is not going to be able to completely replace human programmers until it is smarter than even the smartest humans in every relevant sense, which, given the differences in architecture, is an extraordinarily stringent criterion, and one far beyond anything unaided humans could ever possibly build. If such an event ever comes about in the very distant future, it will necessarily follow a long path of development in which AI is used to create generation after generation of improved tools in an extended bootstrapping process that has yet to even get started. And indeed this is not a trivial matter—if people start basing decisions on the 'AI go foom' belief, that's exactly the kind of thing that could snuff out whatever chance of survival and success we might have had. • Re: "The primary issue is that, if one thinks that an AI can engage in recursive self-improvement and can do so quickly, then once there's an AI that's at all capable of such improvement, the AI will rapidly move outside our control." Only if its creators are incompetent. Those who think this are essentially betting on the incompetence of the creators. There are numerous counter-arguments—the shifting moral zeitgeist, the downward trend in deliberate death, the safety record of previous risky tech enterprises. A stop button seems like a relatively simple and effective safety feature. If you can get the machine to do anything at all, then you can probably get it to turn itself off. The creators will likely be very smart humans assisted by very smart machines. Betting on their incompetence is not a particularly obvious thing to do. • Missing the point. I wasn't arguing that there aren't reasons to think that the bad-AI-goes-FOOM scenario won't happen. Indeed, I said explicitly that I didn't think it would occur.
My point was that if one is going to make an argument that relies on that here, one needs to be aware that the premise is controversial and be clear about that (say, by giving basic reasoning for it, or even just saying "If one accepts that X, then..." etc.). • Most programmers are supervised. So, this claim is hard to parse. Machine intelligence has been under development for decades—and there have been plenty of patches so far. One way of thinking about the process is in terms of increasing the "level" of programming languages. Computers already write most machine code today. Eventually humans will be able to tell machines what they want in ordinary English—and then a "patch" will just be some new instructions. • "Most programmers are supervised." By other humans. If we program an AGI, then it will supervise all future programming. "Machine intelligence has been under development for decades—and there have been plenty of patches so far." Machine intelligence does not yet approach human intelligence. We are talking about applying patches to a superintelligence. "and then a 'patch' will just be some new instructions." The difficulty is not in specifying the patch, but in applying it to a powerful superintelligence that does not want it. • All computer programming will be performed and supervised by engineered agents eventually. But so what? That is right, natural and desirable. It seems as though you are presuming a superintelligence which doesn't want to do what humans tell it to. I am sure that will be true for some humans—not everyone can apply patches to Google today. However, for other humans, the superintelligence will probably be keen to do whatever they ask of it—since it will have been built to do just that. • A computer which understands human languages without problems will have achieved general intelligence.
We won't necessarily be able to give it "some new instructions", or at least it might not be inclined to follow them. • Well, sure—but if we build them appropriately, they will. We should be well motivated to do that—people are not going to want to buy bad robots, or machine assistants that don't do what we tell them. Consumers buying potentially-dangerous machines will be looking for safety features—STOP buttons and the like. The "bad" projects are less likely to get funding or mindshare—and so have less chance of getting off the ground. • "Well, sure—but if we build them appropriately, they will." You are assuming the very thing that is being claimed to be astonishingly difficult. You also don't seem to accept the consequences of recursive self-improvement. May I ask why? • I was not "assuming"—I said "if"! The issue needs evidence—and the idea that an unpleasant machine intelligence is easy to build is not—in itself—good-quality evidence. It is easier to build many things that don't work properly. A pile of scrap metal is easier to build than a working car—but that doesn't imply that automotive engineers produce piles of scrap. The first manned moon rocket had many safety features—and in fact worked successfully the very first time—and then only a tiny handful of lives were at stake. If the claim is that safety features are likely to be seriously neglected, then one has to ask what reasoning supports that. The fact that nice agents are a small point in the search space is extremely feeble evidence on the issue. "The consequences of recursive self-improvement" seems too vague and nebulous to respond to. Which consequences? I have written a fair bit about self-improving systems.
You can see some of my views on: http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/ • As Vladimir Nesov pointed out, the first manned moon rocket wasn't a superintelligence trying to deceive us. All AGIs look Friendly until it's too late. • It is a good job we will be able to scan their brains, then, and see what they are thinking. We can build them with noses that grow longer whenever they lie, if we like. • That isn't necessarily feasible. My department writes electronic design automation software, and we have a hard time putting enough diagnostics in the right places to show us when the code is taking a wrong turn without burying us in an unreadably huge volume of output. If an AI's deciding to lie is only visible as its having a subgoal of putting an observer's mental model into a certain state, and the only way to notice that this is a lie is to notice that the intended mental state mismatches the real world in a certain way, and this is sitting in a database of 10,000 other subgoals the AI has at the time—don't count on the scan finding it... • Extraspection seems likely to be a design goal. Without it, it is harder to debug a system—because it is difficult to know what is going on inside it. But sure—this is an engineering problem with difficulties and constraints. • Self-modification means self-modification. The AI could modify itself so that your brain scan returns inaccurate results. It could modify itself to prevent its nose from growing. It could modify itself to consider peach ice cream the only substance in the universe with positive utility. It could modify itself to seem perfectly Friendly until it's sure that you won't be able to stop it from turning you and everything else in the solar system into peach ice cream. It is a superintelligence. It is smarter than you. And smarter than me.
And smarter than Eliezer, and Ein­stein, and who­ever man­ages to build the thing. This is the scale by which you should be mea­sur­ing in­tel­li­gence. • To quote from my com­ments from the OB days on that link: “This should be pretty ob­vi­ous—but hu­man in­tel­li­gence varies con­sid­er­ably—and ranges way down be­low that of an av­er­age chimp or mouse. That is be­cause hu­mans have lots of ways to go wrong. Mu­tate the hu­man genome enough, and you wind up with a low-grade mo­ron. Mu­tate it a bit more, and you wind up with an agent in a per­ma­nent coma—with an in­tel­li­gence prob­a­bly similar to that of an amoeba.” • Not ev­ery­thing that is pos­si­ble hap­pens. You don’t seem to be pre­sent­ing much of a case for the in­com­pe­tence of the de­sign­ers. You are just claiming that they could be in­com­pe­tent. Lots of things could hap­pen—the is­sue is which are best sup­ported by ev­i­dence from his­tory, com­puter sci­ence, evolu­tion­ary the­ory, etc. • The state of the art in AGI, as I un­der­stand it, is that we aren’t com­pe­tent de­sign­ers: we aren’t able to say “if we build an AI ac­cord­ing to blueprint X its de­gree of smarts will be Y, and its de­sires (in­clud­ing de­sires to re­build it­self ac­cord­ing to blueprint X’) will be Z”. In much the same way, we aren’t cur­rently com­pe­tent de­sign­ers of in­for­ma­tion sys­tems: we aren’t yet able to say “if we build a sys­tem ac­cord­ing to blueprint X it will grant those who ac­cess it ca­pa­bil­ities C1 through Cn and no other”. This is why we rou­tinely hear of se­cu­rity breaches: we re­lease such sys­tems in spite of our well-es­tab­lished in­com­pe­tence. So, we are un­able to com­pe­tently rea­son about de­sires and about ca­pa­bil­ities. Fur­ther, what we know of cur­rent com­puter ar­chi­tec­tures is that it is pos­si­ble for a pro­gram to ac­ci­den­tally gain ac­cess to its un­der­ly­ing op­er­at­ing sys­tem, where some form of its own source code is stored as data. 
Posit that in­stead of a dumb sin­gle-pur­pose ap­pli­ca­tion, the pro­gram in ques­tion is a very effi­cient cross-do­main rea­soner. Then we have pre­cisely the sort of in­com­pe­tence that would al­low such an AI ar­bi­trary self-im­prove­ment. • To­day—ac­cord­ing to most es­ti­mates I have seen—we are prob­a­bly at least a decade away from the prob­lem—and maybe a lot more. Com­put­ing hard­ware looks as though it is un­likely to be cost-com­pet­i­tive with hu­man brains for around that long. So, for the mo­ment, most peo­ple are not too scared of in­com­pe­tent de­sign­ers. The rea­son is not be­cause we cur­rently know what we are do­ing (I would agree that we don’t) - but be­cause it looks as though most of the ac­tion is still some dis­tance off into the fu­ture. • All the more rea­son to be work­ing on the prob­lem now, while there’s still time. I don’t think the AGI prob­lem is hard­ware-bound at this point, but it should be worth work­ing on ei­ther way. • Well, yes, of course. Creat­ing our de­scen­dants is the most im­por­tant thing in the world. • Most of the time, sci­en­tists/​in­ven­tors/​en­g­ineers don’t get things ex­actly right the first time. Un­less se­ri­ous effort is ex­pended to cre­ate an AGI with a prov­ably sta­ble goal func­tion that perfectly al­igns with hu­man prefer­ence, failing to get AGI ex­actly right the first time will prob­a­bly turn us all into peach ice cream, or pa­per­clips, or some­thing stranger. You are ar­gu­ing that test­ing will pre­vent this from hap­pen­ing, but (I hope) I have ex­plained why that is not the most re­li­able ap­proach. • We’ve been try­ing for decades already, and so far there have been an awful lot of mis­takes. Few have caused much dam­age. 
Re: "Unless serious effort is expended to create an AGI with a provably stable goal function that perfectly aligns with human preference, failing to get AGI exactly right the first time will probably turn us all into peach ice cream, or paperclips, or something stranger." ...but that does not seem to be a sensible idea. Very few experts believe this to be true. For one thing, there is not any such thing as "human preference". We have billions of humans, all with different (and often conflicting) preferences. • "Very few experts believe this to be true." Who would you consider an "expert" qualifying as an authority on this issue? Experts on classical narrow AI won't have any relevant expertise. Nor will experts on robotics, or experts on human cognitive science, or experts on evolution, or even experts on conventional probability theory and decision theory. I know of very few experts on the theory of recursively self-improving AGI, but as far as I can tell, most of them do take this threat seriously. • I was thinking of those working on machine intelligence. Researchers mostly think that there are risks. I think there are risks. However, I don't think that it is very likely that engineers will need to make much use of provable stability to solve the problem. I also think there are probably lots of ways of going a little bit wrong—that do not rapidly result in a disaster. • It's an interesting problem—you might want a robot which will do what you tell it, or you might want a robot which will at least question orders which would be likely to get you into trouble. • Consumer temperaments may differ—so the machine should do what the user really wants it to in this area. • "It is a lot easier to make a random mess of ASCII that crashes or loops—and yet software companies still manage to ship working products."
Still, a lot of these "working products" are the output of a filtering process which starts from a random mess of ASCII that crashes or loops, and tweaks it until it's less obviously broken. (Most of the job of testing being, typically, left to the end user.) • Sure. The point is that—to conclude that a target will be missed—it is not sufficient to observe how small it is. Programmers routinely hit minuscule targets in search spaces. To make the case, you would also need to argue that those aiming at the target are not good marksmen. • Are there any posts people would like to see reposted? For example, Where Are We seems like it maybe should be redone, or at least put a link in About… Or so I thought, but I just checked About and the page for introductions wasn't linked, either. Huh. • It would be nice if we had profile pages with machine-readable information and an interface for simple queries, so posts such as that one would be redundant. • I came across a blurb on Ars Technica about "quantum memory", with the headline proclaiming that it may "topple Heisenberg's uncertainty principle". Here's the link: http://arstechnica.com/science/news/2010/08/quantum-memory-may-topple-heisenbergs-uncertainty-principle.ars They didn't source the specific article, but it seems to be this one, published in Nature Physics. Here's that link: http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys1734.html This is all well above my paygrade. Is this all conceptual? Are the scientists involved anywhere near an experiment to verify any of this? In a word, huh? • I don't want this kind of item to be discussed on LW. It's either off-topic or crackpottery, irrelevant whatever the case. • Considering the source was Nature, I doubt your analysis is correct.
The researchers are from Ludwig-Maximilians-University and ETH Zürich, which appear to be respectable institutions. I found a write-up at Science Daily (http://www.sciencedaily.com/releases/2010/07/100727082652.htm) that provides some more details on the research. From that link: "The teams at LMU and the ETH Zurich have now shown that the result of a measurement on a quantum particle can be predicted with greater accuracy if information about the particle is available in a quantum memory. Atoms or ions can form the basis for such a quantum memory. The researchers have, for the first time, derived a formula for Heisenberg's Principle which takes account of the effect of a quantum memory. In the case of so-called entangled particles, whose states are very highly correlated (i.e. to a degree that is greater than that allowed by the laws of classical physics), the uncertainty can disappear. According to Christandl, this can be roughly understood as follows: 'One might say that the disorder or uncertainty in the state of a particle depends on the information stored in the quantum memory. Imagine having a pile of papers on a table. Often these will appear to be completely disordered—except to the person who put them there in the first place.'" This is one of the very few places online that I've seen thoughtful discussion on the implications of quantum mechanics, so I felt research that could impact quantum theory would be relevant. • "This is one of the very few places online that I've seen thoughtful discussion on the implications of quantum mechanics, so I felt research that could impact quantum theory would be relevant."
Eliezer Yudkowsky's discussion of quantum mechanics was not there because quantum mechanics is relevant to the interests of this community, but because the counterintuitive nature of quantum mechanics offered good case studies to use in discussing rationality. • As I said, off-topic. • If this is off-topic for the open thread, then we should make a monthly off-topic thread where we can discuss things not directly related to rationality. I think it's rather silly to suggest that we can't discuss non-rationality topics. • One of the things which many of us like to do is to follow the "Recent Comments" feed (Google Reader updates RSS feeds frequently enough to make it practicable) so we can catch new discussions on old threads—and crowding that feed with conversation not related to our common interest is annoying. If you want to post a link to your blog for discussion of a tangentially related subject, there probably wouldn't be much objection. • Since this site has such a high sanity waterline, I'd like to see comments about important topics even if they aren't directly rationality-related. Has anyone figured out a way to satisfy both me and RobinZ without making this site any less convenient to contribute to? (Upvoted for explaining your objection.) • Isn't that what the open thread is for? Quantum physics is hardly the most off-topic thing discussed on the open thread. In fact, it doesn't seem off-topic at all. • It may be a fascinating piece of quantum mechanics, but I don't see any relevance to rationality at all. Even if it were relevant, there's no basis for a real discussion, because the original article is behind a paywall. I don't see anything available online but popular-level articles saying nothing of substance. • Agreed, though as-needed instead of strictly monthly.
• It might be useful to have a short list of English words that indicate logical relationships or concepts often used in debates and arguments, so as to enable people who are arguing about controversial topics to speak more precisely. Has anyone encountered such a list? Does anyone know of previous attempts to create such lists? • In the comments section of Making Beliefs Pay Rent, Eliezer wrote: "I follow a correspondence theory of truth. I am also a Bayesian and a believer in Occam's Razor. If a belief has no empirical consequences then it could receive no Bayesian confirmation and could not rise to my subjective attention. In principle there are many true beliefs for which I have no evidence, but in practice I can never know what these true beliefs are, or even focus on them enough to think them explicitly, because they are so vastly outnumbered by false beliefs for which I can find no evidence." If I am interpreting this correctly, Eliezer is saying that there is a nearly infinite space of unfalsifiable hypotheses, and so our priors for each individual hypothesis should be very close to zero. I agree with this statement, but I think it raises a philosophical problem: doesn't this same reasoning apply to any factual question? Given a set of data D, there must be a nearly infinite space of hypotheses that (a) explain D and (b) make predictions (fulfilling the criteria discussed in Making Beliefs Pay Rent). Though Occam's Razor can help us to weed out a large number of these possible hypotheses, a mind-bogglingly large number would still remain, forcing us to have a low prior for each individual hypothesis. (In philosophy of science, this is known as "underdetermination.") Or is there a flaw in my reasoning somewhere?
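One way to make the counting argument concrete (a toy sketch of my own, not a formalism endorsed anywhere in this thread): even though the number of hypotheses of description length n grows as 2^n, a prior that discounts each extra bit steeply enough still sums to a finite total, so short hypotheses keep non-negligible mass while long ones become practically dismissible.

```python
# Toy complexity-weighted prior (illustrative assumption: weight each
# hypothesis of length n bits by 4**-n).  There are at most 2**n such
# hypotheses, so the total mass contributed by length n is
# (2**n) * (4**-n) = 2**-n, and the sum over all lengths converges.
from fractions import Fraction

def mass_at_length(n):
    """Unnormalized prior mass of ALL hypotheses of length n combined."""
    return Fraction(2) ** n * Fraction(1, 4) ** n   # equals 2**-n

# Truncate the (convergent) infinite sum at length 63 for illustration.
total = sum(mass_at_length(n) for n in range(64))   # just under 2

def prior_of_one_hypothesis(n):
    """Normalized prior of a SINGLE length-n hypothesis."""
    return (Fraction(1, 4) ** n) / total

print(float(prior_of_one_hypothesis(3)))    # short: still noticeable
print(float(prior_of_one_hypothesis(30)))   # long: practically dismissible
```

The specific weight 4^-n is an arbitrary choice for the sketch; any discount faster than 2^-n per length class gives the same qualitative result.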
• Surely this is dealt with by considering the amount of information in the hypothesis? If we consider each hypothesis that can be represented with 1,000 bits of information, there will only be a maximum of 2^1,000 such hypotheses, and if we consider each hypothesis that can be represented with n bits of information, there will only be a maximum of 2^n—and that is before we even start eliminating hypotheses that are inconsistent with what we already know. If we favor hypotheses with less information content, then we end up with a small number of hypotheses that can be taken reasonably seriously, with the remainder being unlikely—and progressively more unlikely as n increases, so that when n is sufficiently large we can, practically, dismiss any hypotheses. • I agree with most of that, but why favor less information content? Though I may not fully understand the math, this recent post by cousin_it seems to be saying that priors should not always depend on Kolmogorov complexity. And, even if we do decide to favor less information content, how much emphasis should we place on it? • In general, I would think that the more information is in a theory, the more specific it is, and the more specific it is, the smaller is the proportion of possible worlds which happen to comply with it. Regarding how much emphasis we should place on it: I would say "a lot", but there are complications. Theories aren't used in isolation, but tend to provide a kind of informally-put-together world view, and then there is the issue of degree of matching. • Which theory has more information? • All crows are black • All crows are black except … • I didn't say you ignored previous correspondence with reality, though. • That isn't Perplexed's point.
Let's say that as of this moment all crows that have been observed are black, so both of his hypotheses fit the data. Why should "all crows are black" be assigned a higher prior than "all crows are black except …"? Based on cousin_it's post, I don't see any reason to do that. • So, to revive this discussion: if we must distribute probability mass evenly because we cannot place emphasis on simplicity, shouldn't our priors be almost zero for every hypothesis? It seems to me that the "underdetermination" problem makes it very hard to use priors in a meaningful way. • I am assuming here that all the crows that we have previously seen have been black, and therefore that both theories have the same agreement, or at least approximate agreement, with what we know. The second theory clearly has more information content. Why would it not make sense to use the first theory on this basis? The fact that all the crows we have seen so far are black makes it a good idea to assume black crows in future. There may be instances of non-black crows when the theory has predicted black crows, but that simply means that the theory is not 100% accurate. If the 270 pages of exceptions have not come from anywhere, then the fact that they are not justified just makes them random, unjustified specificity. Out of all the possible worlds we can imagine that are consistent with what we know, the proportion that agree with this specificity is going to be small. If most crows are black, as I am assuming our experience has suggested, then when this second theory predicts a non-black crow, as one of its exceptions, it will probably be wrong: the unjustified specificity is therefore contributing to a failure of the theory.
On the other hand, when the occasional non-black crow does show up, there is no reason to think that the second theory is going to be much better at predicting this than the first theory—so the second theory would seem to have all the inaccuracies of wrongful black-crow prediction of the first theory, along with extra errors of wrongful non-black-crow prediction introduced by the unjustified specificity. Now, if you want to say that we don't have experience of mainly black crows, or that the 270 pages of exceptions come from somewhere, then that puts us into a different scenario: a more complicated one. Looking at it in a simple way, however, I think this example actually just demonstrates that information in a theory should be minimized. • I haven't been following the discussion on this topic very closely, so my response may be about stuff you already know or already know is wrong. But, since I'm feeling reckless today, I will try to say something interesting. There are two different information metrics we can use regarding theories. The first deals with how informative a theory is about the world. The ideally informative theory tells us a lot about the world. Or, to say the same thing in different language, an informative theory rules out as many "possible worlds" as it can; it tells us that our own world is very special among all otherwise-possible worlds; that the set of worlds consistent with the theory is a small set. We may as well call this kind of information Shannon information, or S-information. A Karl Popper fan would approve of making a theory as S-informative as possible, because then it is exposing itself to the greatest risk of refutation. The second information metric measures how much information is required to communicate the theory to someone.
My 270 pages of fine print in the second crow theory might be an example of a theory with a lot of this kind of information. Let us call this kind of information Kolmogorov information, or K-information. My understanding of Occam's razor is that it recommends that our theories should use as little K-information as possible. So we have Occam telling us to minimize the K-information and Popper telling us to maximize the S-information. Luckily, the two types of information are not closely related, so (assuming that the universe does not conspire against us) we can frequently do reasonably well by both criteria. So much for the obvious and easy points. The trouble appears, especially for biologists and other "squishy" scientists, when Nature seems to have set things up so that every law has some exceptions. I'll leave it to you to Google either "white crow" or "white raven" and to admire those fine and intelligent birds. So, given our objectives of maximizing one information measure and minimizing the other, how should we proceed? Do we change our law to say "99+% of crows are black"? Do we change it to say "All crows are black, not counting ravens as crows, and except for a fraction under 1% of crows which are albinos and also have pink eyes"? I don't know, but maybe you have thought about it more than I have. • The trouble appears, especially for biologists and other "squishy" scientists, when Nature seems to have set things up so that every law has some exceptions. I'll leave it to you to Google either "white crow" or "white raven" and to admire those fine and intelligent birds. So, given our objectives of maximizing one information measure and minimizing the other, how should we proceed?
Do we change our law to say "99+% of crows are black"? Do we change it to say "All crows are black, not counting ravens as crows, and except for a fraction under 1% of crows which are albinos and also have pink eyes"? We change it to say, "99+% of crows have such-and-such alleles of genes for determining feather colour; certain other alleles are rare and result in a bird lacking feather pigments due to the synthesis pathway being broken at such-and-such a step for lack of such-and-such a protein. The mutation is disadvantageous, hence the absence of any substantial population of white crows." (Or whatever the actual story is; I'm just making that one up.) If we don't know the actual story, then the best we can do is say that, for reasons we don't know, it happens now and then that black crows can give birth to a white offspring. Squishiness is not a property of biological phenomena, but of our knowledge of those phenomena. Exceptions are in our descriptions, not in Nature. • I wonder if it helps to arrange K-information in layers. You could start with "Almost all crows are black", and then add footnotes for how rare white crows actually are, what causes them, how complete we think our information about crow color distribution is and why, and possibly some factors I haven't thought of. • Layering or modularizing the hypothesis: of course, you can do this, and you typically do do this. But layering doesn't typically change the total quantity of K-information. A complex hypothesis still has a lot of K-information whether you present it as neatly layered or just jumbled together. Which brings us to the issue of just why we bother calculating the K-information content of a hypothesis in the first place.
There is a notion, mentioned in Jaynes and also in another thread active right now, that the K-information content of a hypothesis is directly related to the prior probability that ought to be attached to the hypothesis (in the absence of, or prior to, empirical evidence). So, it seems to me that the interesting thing about your layering suggestion is how the layering should tie in to the Bayesian inference machinery which we use to evaluate theories. For example, suppose we have a hypothesis which, based on evidence so far, has a subjective "probability of correctness" of, say, 0.5. Then we get a new bit of evidence. We observe a white (albino) crow, for example. Doing standard Bayesian updating, the probability of our hypothesis drops to 0.001, say. So we decide to try to resurrect our hypothesis by adding another layer. Trouble is, we have just increased the K-complexity of the hypothesis, and that ought to hurt us in our original "no-data" prior. Trouble is, we already have data. Lots of it. So is there some algebraic trick which lets us add that new layer to the hypothesis without going back to evidential square one? • K-information is about communicating to "someone"—do you compute the amount of K-information for the most receptive person you're communicating with, or do you have a different amount for each layer of detail? Actually, you might have a tree structure, not just layers—the prevalence of white crows in time and space is a different branch than the explanation of how crows can be white. • K-information is about communicating to "someone"—do you compute the amount of K-information for the most receptive person you're communicating with, or do you have a different amount for each layer of detail? A very interesting question, especially when you consider the analogy with Kolmogorov.
Here we have an ambiguity as to what person we communicate to. There, the ambiguity was regarding exactly what model of universal Turing machine we were programming. And there, there was a theorem to the effect that the differences among Turing machines aren't all that big. Do we have a similar theorem here, for the differences among people—seen as universal programmable epistemic engines? • "Trouble is, we already have data. Lots of it. So is there some algebraic trick which lets us add that new layer to the hypothesis without going back to evidential square one?" Bayesian updating is timeless. It doesn't care whether you observed the data before or after you wrote the hypothesis. • So, it sounds like you are suggesting that we can back out all that data, change our hypothesis and prior, and then read the data back in. In theory, yes. But sometimes we don't even remember the data that brought us to where we are now. Hence the desirability of a trick. Is there an updating-with-new-hypothesis rule to match Bayes's updating-with-new-evidence rule? • Here's a thought experiment that's been confusing me for a long time, and I have no idea whether it is even possible to resolve the issues it raises. It assumes that a reality which was entirely simulated on a computer is indistinguishable from the "real" one, at least until some external force alters it. So… the question is, assuming that such a program exists, what happens to the simulated universe when it is executed? In accordance with the arguments that Pavirta gives below me, redundant computation is not the same as additional computation. Executing the same program twice (with the same inputs each time) is equivalent to executing it once, which is equivalent to executing it five times, ten times, or a million.
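The determinism claim in that last sentence is easy to check on a small scale (a toy of my own, not from the thread): a simulation with no external source of randomness produces a bit-identical history on every re-run, so repeated executions add nothing new.

```python
# Minimal deterministic "simulation": a linear congruential state update.
# Running it twice with the same seed yields exactly the same history,
# illustrating the claim that re-execution with identical inputs is
# redundant computation.
def simulate(seed, steps):
    state = seed
    history = []
    for _ in range(steps):
        # 64-bit LCG constants (Knuth's MMIX); any deterministic rule works
        state = (state * 6364136223846793005 + 1442695040888963407) % 2**64
        history.append(state)
    return history

first_run = simulate(seed=42, steps=1000)
second_run = simulate(seed=42, steps=1000)
print(first_run == second_run)  # True: the second run adds no information
```

Of course, this only illustrates the uncontroversial half of the argument; whether a run that never happens "counts" is the philosophical question the comment goes on to raise.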
You are just simulating the same universe over and over, not a different one each time. But is running the simulation once equivalent to running it ZERO times? The obvious answer seems to be "no", but bear with me here. There is nothing special about the quarks and leptons that make up a physical computer. If you could make a Turing machine out of light, or more exotic matter, you would still be able to execute the same program on it. And if you could make such a computer in any other universe (whatever that might mean), you would still be able to run the program on it. But in such considerations, the computer used is immaterial. A physical computer is not a perfect Turing machine—it has finite memory space and is vulnerable to physical defects which introduce errors into the program. What matters is the program itself, which exists regardless of the computer it is on. A program is a Platonic ideal, a mathematical object which cannot exist in this universe. We can make a representation of that program on a computer, but the representation is not perfect, and it is not the program itself. In the same way, a perfect equilateral triangle cannot actually be constructed in this universe; even if you use materials whose length is measured down to the atom, its sides will not be perfectly straight and its angles will not be perfectly equal. More importantly, if you then alter the representation to make one of the angles bigger, it does not change the fact that equilateral triangles have 60° angles; it simply makes your representation less accurate. In the same way, executing a program on a computer will not alter the program itself.
If there are conscious beings simulated on your computer, they existed before you ran the program, and they will exist even if you unplug the computer and throw it into a hole—because what you have in your computer is not the conscious beings, but a representation of them. And they will still exist even if you never run the program, or even if it never occurs to anyone on Earth that such a program could be made. The problem is, this same argument could be used to justify the existence of literally everything, everywhere. So we are left with several possible conclusions: (1) Everything is "real" in some universe, and we have no way of ever finding such universes. This cannot ever be proved or falsified, and also leads to problems with the definitions of "everything" and "real". (2) The initial premise is false, and only physical objects are real: simulations, thoughts, and constructs are not. I think there is a philosophical school of thought that believes this to be true, though I have no idea what its name is. Regardless, there are still a lot of holes in this answer. (3) I have made a logical mistake somewhere, or I am operating from an incorrect definition of "real". It happens. It is also worth pointing out that both (1) and (2) invalidate every ethical truth in the book, since in (1) there is always a universe in which I just caused the death of a trillion people, and in (2) there is no such thing as "ethics"—ideas aren't real, and that includes philosophical ideas. Anyway, just bear this in mind when you think about a universe being simulated on a computer. • (1) Everything is "real" in some universe, and we have no way of ever finding such universes. This cannot ever be proved or falsified, and also leads to problems with the definitions of "everything" and "real".
That's pretty much Tegmark's Multiverse, which seems pretty popular around here (I think it makes a lot of sense). • Indeed. I have a post making similar arguments, though I still haven't been able to resolve the ethical and anthropic problems it raises in any satisfactory way. At this point I've backtracked from the confidence I held when I wrote that post; what I'm still willing to say is that we're probably on the right track thinking of "Why does anything exist?" as a wrong question and thinking of reality as indexical (i.e. the true referent of the category "real" is the set of things instantiated by this universe; it is a category error to talk about other universes being real or not real), but the Mathematical Universe Hypothesis still leaves much to be confused about. • My own view is that (ignoring simulations for the time being) MWI ideas have no conflict with our usual ethical intuitions and reasonings. Yes, it is the case that when I choose between evil action A and good action B, there will be two branches of the universe—one in which I choose A and one in which I choose B. This will be the case regardless of which choice I make. But this does not make my choice morally insignificant, because I split too, along with the rest of the universe. The version of me that chose evil act A will have to live thereafter with the consequences of that choice. And the version of me that chose B must live with quite different consequences. What more than that could a believer in the moral significance of actions want of his universe? The situation with respect to simulations is a bit trickier. Suppose I am deciding whether to (A) pull the plug on a simulation which contains millions of sentient (simulated) beings, or (B) allow the simulation to continue. So, I choose, and the universe branches. If I chose A, I must live with the consequences.
I don't have that simulation to kick around any more. But, if I were to worry about all the simulated lives that I have so ruthlessly terminated, I can easily reassure myself that I have only terminated a redundant copy of those lives. The (now) master copy of the simulation plays on, over in that parallel universe where I chose B. Is it wrong to create a simulation and then torture the inhabitants? Well, that is an ethical question, whereas this is a meta-ethical analysis. But the meta-ethical answer to that ethical question is that if you torture simulated beings, then you must live with the consequences of that. • Yes, MWI ideas have no conflict with usual ethical intuitions. And they also help you make better sense of those intuitions. Counterfactuals really do exist, for example; they're not just some hypothetical that is in point of fact physically impossible. • but we shouldn't concern ourselves with counterfactuals if they aren't part of our observed universe. • My impression is that sometimes we do need to deal with them in order to make the math come out right, even though the only thing we are really concerned about is our observed universe. Just as we sometimes need to deal with negative numbers of sheep—however difficult we may find this to visualize if we work as a shepherd. • true, but there are no 'negative sheep', only numbers arbitrarily representing them. • but we shouldn't concern ourselves with numbers if they aren't part of our observed universe. • numbers are quite useful, so we don't/shouldn't do away with them, but the math is never a complete substitute for the observable universe. writing down '20 sheep' doesn't physically equal 20 sheep; rather it's a method we use for simplicity. as it stands, no two sheep are alike to every last detail as far as anyone can tell, yet we still have a category called 'sheep'.
this is so given the observed recurrence of 'sheep'-like entities, similar enough for us to categorize them for practicality's sake, but that doesn't mean they're physically all alike to every detail. it could be argued that sometimes the math does equate with reality, as in 'Oxygen atom' being a category consisting of entirely similar things, but even that is not confirmed, simply an assertion; no human has observed all 'Oxygen atoms' in existence to be similar in every detail, or even in some arbitrarily 'essential' detail(s). yet it is enough for the purposes of science to consider them all similar, and so we go with it, otherwise we'd never have coherent thought, let alone science. it might very well be that all Oxygen atoms in existence are physically the same in some ways, but we have no way of actually knowing. this doesn't mean that there are 'individual atoms', but it doesn't negate it either. ETA: as pengvado said in the post below, replace 'atom' with 'particle'. • This doesn't mean that there are 'individual atoms', but it doesn't negate it either. No Individual Particles. The fact that measurements of their mass/charge/etc have always come out the same is not the only evidence we have for all particles of a given type being identical. (A whole oxygen atom is a bad example, though. Atoms have degrees of freedom beyond the types of particles they're made of.) • yes, I had that specific post in mind when I presented the atom example. you're correct here though, I should have said particles. I shouldn't write so late after midnight, I guess…
now I admit that my understanding of quantum mechanics is not that much above a layperson's, so maybe I just need to apply myself more and it'll click, but let's consider my argument first. here's what EY said in reply to a post in that thread, emphasis mine: "There can be properties of the particles we don't know about yet, but our existing experiments already show those new properties are also identical, unless the observed universe is a lie." and then: "Undiscovering this would be like undiscovering that atoms were made out of nucleons and electrons. It's in this sense that I say that the observed universe would have to be a lie." here I believe he's making a mistake/displaying a bias; the math (of quantum mechanics, in this particular instance) does not determine physical reality, rather it describes it to some degree or other. to suggest that the mathematics of quantum mechanics is the end of the road is too strong a claim IMO. • I don't have any arguments that weren't discussed in that post; so far as I can tell, it already adequately addressed your objection: QM doesn't have to be the end of the road. If QM is a good approximation of reality on the scales it claims to predict, in the situations we have already tested it in—if the math of QM does describe reality to some degree or other—then that's enough for the quantum tests of particle identity to work exactly. • to put it mildly, I don't believe anyone can address that objection satisfactorily; as wedrifid put it eloquently, the math is part of the map, not the territory. "if the math of QM does describe reality to some degree or other—then that's enough for the quantum tests of particle identity to work exactly." agreed, that was partially my point a couple of posts ago. for practical reasons it's good enough that the math works to a degree. • Uhmm.
I hate to explain my own jokes, but… You did notice the formal similarity between my "we shouldn't concern ourselves" comment and its great-grandparent, right? • "it might very well be that all Oxygen atoms in existence are physically the same in some ways, but we have no way of actually knowing. this doesn't mean that there are 'individual atoms', but it doesn't negate it either." True (only) in the sense that our numbers are part of our map and not the territory. In the same sense we have no way of actually knowing there are patterns in the universe appropriately named Oxygen. Or Frog. • good point about the map/territory distinction, that was what I intended to say but couldn't put into so few words, thanks :) and no, it seems that not even Frog can escape this. I'm not sure about its significance here, though? • That's not how MWI works, unless human brains have a quantum randomness source that they use to make decisions (which does not appear to be the case). • I'm not sure it matters to the analysis. Whether we have a Tegmark multiverse, or Everett MWI with some decisions depending on quantum randomness and others classically determined, or whether the multiple worlds are purely subjective fictions created to have a model of Bayesianism; regardless of what you think is a possible reduction of "possibly"; it is still the case that you have to live in the reality which you helped to create by way of your past actions. • agreed, it's not like scientific analysis requires the laws of physics to have no quantum randomness source etc; rather it is satisfied with finding the logical necessities between what is used to describe the observable universe. • Is it wrong to create a simulation and then torture the inhabitants? Well, that is an ethical question, whereas this is a meta-ethical analysis.
But the meta-ethical answer to that ethical question is that if you torture simulated beings, then you must live with the consequences of that. I should add that it is impossible to erase your sin by deciding to terminate the simulation, so as to "euthanize" the victims of your torture, because there is always a branch where you don't so decide, and the victims of your torture live on. • I don't think it works like that. Math is a conceptual construct, not something that has its own reality separate from either the thing it approximates or the mind that approximates with it. I'm reminded of the person who thought that using the equations for relativistic rather than classical mechanics to model cannonballs would give the wrong answer. Only things that happen are real. There's no Math Heaven inhabited by angelic equations in a separate magisterium from the world of the merely real. • "Executing the same program twice (with the same inputs each time) is equivalent to executing it once." In some sense, maybe. But if that were generally true, then I wouldn't have any reason to run the same program twice, but I do. (For example, I have repeatedly asked my calculator what 1080*4/3 is, since I have a weird TV and an untrustworthy memory.) • Eliezer has written a post (ages ago) which discussed a bias when it comes to contributions to charities. Fragments that I can recall include considering the motivation for participating in altruistic efforts in a tribal situation, where having your opinion taken seriously is half the point of participation. This is in contrast to donating 'just because you want thing X to happen'. There is a preference to 'start your own effort, do it yourself' even when that would be less efficient than donating to an existing charity.
I am unable to find the post in question—I think it is distinct from ‘the unit of caring’. It would be much appreciated if someone who knows the right keywords could throw me a link! • That’s it. Thank you! • Alright, I’ve lost track of the bookmark and my google-fu is not strong enough with the few bits and pieces I remember. I remember seeing a link to a story in a lesswrong article. The story was about a group of scientists who figured out how to scan a brain, so they did it to one of them, and then he wakes up in a strange place and then has a series of experiences/dreams which recount history leading up to where he currently is, including a civilization of uploads, and he’s currently living with the last humans around… something like that. Can anybody help me out? Online story, 20-something chapters I think… this is driving me nuts. • The visual guide to a PhD: http://matt.might.net/articles/phd-school-in-pictures/ Nice map–territory perspective. • John Baez’s This Week’s Finds in Mathematical Physics has its 300th and last entry. He is moving to WordPress and Azimuth. He states he wants to concentrate on futures, and has upcoming interviews with: Tim Palmer on climate modeling and predictability, Thomas Fischbacher on sustainability and permaculture, and Eliezer Yudkowsky on artificial intelligence and the art of rationality. A Google search returns no matches for Fischbacher + site:lesswrong.com and no hits for Palmer +. That link to Fischbacher that Baez gives has a presentation on cognitive distortions and public policy which I found quite good. • I’ve written a post for consolidating book recommendations, and the links don’t have hidden urls. These are links which were cut and pasted from a comment—the formatting worked there.
Post­ing (in­clud­ing to my drafts) mys­te­ri­ously dou­bles the spaces be­tween the words in one of my link texts, but not the oth­ers. I tried tak­ing that link out in case it was mak­ing the whole thing weird, but it didn’t help. I’ve tried us­ing the pop-up menu for links that’s available for writ­ing posts, but that didn’t change the re­sults. What might be wrong with the for­mat­ting? • I don’t know what’s wrong, but a peek at the raw HTML ed­i­tor (there’s a but­ton for it in the toolbar) might give a hint. • Thank you. Posts are html. Com­ments are Mark­down. • I thought I had it solved. I swear there was one mo­ment when a clean copy with links ap­peared, though it might have been as a draft. And then the raw html links started show­ing up. At this point, I’ve just posted it with­out links. • With no time limit, how can you ever win that one? • No time limit? Created by gw­ern about 1 hour ago; known in over 5 years Might as well cre­ate a pre­dic­tion for this; I as­sume 5 years ought to be enough time for the proof, if cor­rect, to be ver­ified & ac­cepted, or to be re­futed. • Sce­nario: A life in­surance sales­man, who hap­pens to be a trusted friend of a rel­a­tively-new-but-so-far-trust­wor­thy friend of yours, is try­ing to sell you a life in­surance policy. He makes the sur­pris­ing claim that af­ter 20 years of sel­l­ing life in­surance, none of his clients have died. He seems to want you to think that buy­ing a life in­surance policy from him will some­how make you less likely to die. How do you re­spond? edit: to make this ques­tion more in­ter­est­ing: you also re­ally don’t want to offend any of the peo­ple in­volved. • He makes the sur­pris­ing claim that af­ter 20 years of sel­l­ing life in­surance, none of his clients have died. Wow. He ad­mit­ted that to you? That seems to be strong ev­i­dence that most peo­ple re­fuse to buy life in­surance from him. 
In a whole 20 years he hasn’t sold enough insurance that even one client has died from unavoidable misfortune! • PeerInfinity added that he had gotten sales awards for the number of policies sold, so I don’t think this is a factor. • “No.” Life insurance salesmen are used to hearing that. If they act offended, it’s a sales act. If you’re reluctant to say it, you’re easily pressured and they’re taking advantage of you. You say “No”. If they press you, you say, “Please don’t press me further.” That’s all. • Since his sales rate probably increased with time, that means the average time after selling a policy is ~8 years. So the typical client of his didn’t die after 8 years. Making a rough estimate of the age of the client he sells to, which would probably be 30-40, it just means that the typical client has lived to somewhere between 38 and 48, which is normal, not special. Furthermore, people who buy life insurance self-select for being more prudent in general. So, even ignoring the causal separations you could find, what he’s told you is not very special. Though it separates him from other salesmen, the highest likelihood ratio you should put on this piece of evidence would be something like 1.05 (i.e. ~19 out of 20 salesmen could say the same thing), or not very informative, so you are only justified in making a very slight move toward his hypothesis, even under the most generous assumptions. You could get a better estimate of his atypicality by asking more about his clients, at which point you would have identified factors that can screen off the factor of him selling a policy. (Though in my experience, life insurance salesmen aren’t very bright, and a few sentences into that explanation, you’ll get the, “Oh, it’s one of these people” look …) How’d I do?
Edit: Okay, I think I have to turn in my Bayes card for this one: I just came up with a rea­son why the hy­poth­e­sis puts a high prob­a­bil­ity on the ev­i­dence, when in re­al­ity the ev­i­dence should have a low prob­a­bil­ity of ex­ist­ing. So it’s more likely he doesn’t have his facts right. Maybe this is a good case to check the “But but some­body would have no­ticed” heuris­tic. If one of his clients died, would he even find out? Would the in­surance com­pany tell him? Does he reg­u­larly check up on his clients? • I dis­agree with your anal­y­sis, but the de­tails of why I dis­agree would be spoilers. more de­tails: no, he’s not de­liber­ately se­lect­ing low-risk clients. He’s try­ing to make as many sales as pos­si­ble. and he’s had lots of clients. I don’t know the ac­tual num­bers, but he has won awards for how many poli­cies he has sold. and he seems to hon­estly be­lieve that there’s some­thing spe­cial about him that makes his clients not die. he’s “one of those peo­ple”. and here’s the first ac­tu­ar­ial life table I found through a quick google search: http://​​www.ssa.gov/​​OACT/​​STATS/​​table4c6.html • I’m go­ing to go ahead and post the spoiler, rot13′d Zl thrff: Ur’f yl­vat. Naq ur’f ce­bonoyl yl­vat gb uvzfrys nf jryy, va be­qre sbe gur yvr gb or zber pbaivap­vat. Gung vf, qryvoren­gryl sbe­trgg­vat nobhg gur pyvragf jub unir qvrq. Vs ur unf unq a pyvragf, naq vs gurve nirentr ntr vf 30… Rnpu lrne, gur ce­bonovyvgl bs rnpu bs gurz fhei­vi­vat gur arkg lrne vf, jryy, yrg’f ebhaq hc gb 99%. Gung zr­naf gung gur ce­bonovyvgl bs nyy bs gurz fhei­vi­vat vf 0.99^a. Rira vs ur unf bayl unq 100 pyvragf, gura gur ce­bonovyvgl bs gurz nyy fhei­vi­vat bar lrne vf 0.99^100=0.36 Vs ur unq 200 pyvragf, gura gur ce­bonovyvgl bs gurz nyy fhei­vi­vat bar lrne vf 0.99^200=0.13. Naq gung’f whfg sbe bar lrne. Gur sbezhyn tbrf rkcbaragvny nt­nva vs lbh pbafvqre nyy 20 lrnef. 
Gur ce­bonovyvgl bs nyy 100 pyvragf fhei­vi­vat 20 lrnef vf 0.99^100^20=1.86R-9 Naq zl npghny erfcbafr vf… qba’g ohl gur yvsr vafhenapr. Ohg qba’g gryy nalbar gung lbh gu­vax ur’f yl­vat. (hayrff lbh pb­hag guvf cbfg.) Nyfb, gur sevraq ab yba­tre pb­hagf nf “gehf­grq”, be ng yrnfg abg gehf­grq gb or en­gvbany. Bu, naq srry ernyyl thvygl sbe abg svaq­vat n orggre fby­hgvba, naq cbfg gb YJ gb frr vs nalbar gu­vaxf bs n orggre vqrn. Ohg qba’g cbfg rab­htu vas­bezngvba sbe nalbar gb npgh­nyyl gu­vax bs n orggre fby­hgvba. Naq vs fbzr­bar qbrf gu­vax bs n orggre vqrn naljnl, vtaber vg vs vg’f gbb fp­nel. • I don’t un­der­stand what you mean by a bet­ter solu­tion; I wouldn’t feel guilty about do­ing what you did. • The part to feel guilty about is that I chose not to ex­plain that the sales­man is prob­a­bly ei­ther ly­ing, or in­sane, or both, and there­fore prob­a­bly shouldn’t be con­sid­ered “a trusted friend”. And also that I chose to just try to avoid both of these peo­ple, rather than… think­ing of a less blatantly un­friendly solu­tion. • I dis­agree with your anal­y­sis, but the de­tails of why I dis­agree would be spoilers. But I can only make in­fer­ences on what you’ve told me. If there’s a fac­tor that throws off the gen­eral in­fer­ences you can make from a sales­man’s clien­tele, you can’t fault me for not us­ing it. It’s like you’re try­ing to say: “This dude was born in the US. He’s 50 years old. Can he speak English?” → Yeah, prob­a­bly. → “Haha! No, he can’t! I didn’t tell you he was ab­ducted to Cam­bo­dia as an in­fant and grew up there!” Any­way, the next step is to es­ti­mate what frac­tion of sales­man with the same clien­tele com­po­si­tion have not had their clients die and see how atyp­i­cal he is. Plus, his sales record would have to start from early in his ca­reer, or else his clients fall mostly within re­cent sales, a time span in which peo­ple nor­mally don’t die any­way. 
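The back-of-the-envelope arithmetic running through this subthread can be sketched as follows. Every number below is an illustrative assumption, not a figure from the thread: 200 clients, an average of 8 years of coverage each (the eyeball guess above), and a flat 1% annual death rate for clients in the relevant age band.

```python
# Back-of-the-envelope check of the salesman's claim: how likely is it
# that NOT ONE client of a 20-year veteran has died? Every number here
# is an illustrative assumption, not a figure from the thread.
n_clients = 200           # assumed total policies sold over the career
avg_years_held = 8        # assumed average time since each sale
annual_death_rate = 0.01  # assumed flat yearly mortality in this age band

p_one_survives = (1 - annual_death_rate) ** avg_years_held
p_all_survive = p_one_survives ** n_clients

print(f"P(a typical client is still alive) = {p_one_survives:.3f}")
print(f"P(all {n_clients} clients are still alive) = {p_all_survive:.1e}")
```

Even under these generous assumptions the claim comes out around one in ten million, and an actuarial table like the one linked above gives higher death rates at the upper end of that age range, which only makes it worse.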
• I thought I provided enough information, but I apologise if I didn’t. I posted a rot13’d version of my answer, which also explains why I disagreed with your answer. sorry if the rot13ing is pointlessly annoying. • Furthermore, people who buy life insurance self-select for being more prudent in general. On the other hand, there’s also selection for people who aren’t expecting to live as long as the average, and this pool includes prudent people. Anyone have information on owning life insurance and longevity? • On the other hand, there’s also selection for people who aren’t expecting to live as long as the average, and this pool includes prudent people. And on yet another hand there is selection for people who are expected to live longer than the average (selection from the salesmen directly or mediated by price.) • I like the analysis! Did you have a formula you used to arrive at the 8 years, or is it an eyeball guess? • Thanks! Just made an eyeball guess on the 8 years. • Buying life insurance can’t extend a human’s life. • Thank you, Cliptain Obvious! The problem is to say how his claim is implausible or doesn’t follow from his evidence, given that we already have that intuition. • Tell him you found his pitch very interesting and persuasive, and that you’d like to buy life insurance for a 20 year period. Then, ponder for a little while; “Actually, it can’t be having the contract that keeps them alive, can it? That’s just a piece of paper. It must be that the sort of people who buy it are good at staying alive! And it looks like I’m one of them; this is excellent!” Then, you point out that as you’re not going to die, you don’t need life insurance, and say goodbye. If you wanted to try to enlighten him, you might start by explicitly asking if he believed there was a causal link.
But as the situation isn’t really set up for honest truth-hunting, I wouldn’t bother. • Then, you point out that as you’re not going to die, you don’t need life insurance, and say goodbye. If the salesman is Omega in disguise, is this two-boxing? :-) • Well, kind of. Unlike in Newcomb’s, we have no evidence that it’s the decision that causes the long life, as opposed to some other factor correlated with both (which seems much more likely). • With a degree of discombobulation, I imagine. I can’t see any causal mechanism by which buying insurance would cause you to live longer, so unless the salesman knows something I wouldn’t expect him to, he would seem to have acquired an unreliable belief. Given this, I would postpone buying any insurance from him in case this unreliable belief could have unfortunate further consequences* and I would reduce my expectation that the salesman might prove to be an exceptional rationalist. * For example: given his superstition, he may have allotted inadequate cash reserves to cover future life insurance payments. • Maybe the salesman mostly sells temporary life insurance, and just means that no clients had died while covered? • Goodhart sighting? Misunderstanding of causality sighting? Check out this recent economic analysis on Slate.com (emphasis added): For much of the modern American era, inflation has been viewed as an evil demon to be exorcised, ideally before it even rears its head. This makes sense: Inflation robs people of their savings, and the many Americans who have lived through periods of double-digit inflation know how miserable it is. But sometimes a little bit of inflation is valuable. During the Great Depression, government policies deliberately tried to create inflation.
Rising prices are a sign of rising output, something that would be welcome in the current slow-motion recovery. (He then quotes an economist who says inflation would also prop up home values and prevent foreclosures.) Did I get that right? Because inflation has traditionally been a sign of (caused by) rising output, you should directly cause inflation, in order to cause higher output. (Note: in order to complete the case for inflation, you arguably have to do the same thing again, but replacing inflation with output, and output with reduced unemployment.) As usual, I’m not trying to start a political debate about whether inflation is good or bad, or what should be done to increase/decrease inflation. I’m interested in this particular way of arguing for pro-inflation policies, which seems to even recognize which way the causality flows, but still argue as if it runs the opposite direction. Am I misunderstanding it? LW Goodhart article • It’s possible—the next sentence after your quotation reads: As economist Casey Mulligan has argued, some inflation right now could have some salutary effects: “Specifically, inflation would raise prices of homes, among other things. Higher housing prices would pull a number of mortgages out from under water … and thereby reduce the number of foreclosures.” ...which is at least a causal mechanism that would go the correct direction. That said, the part you quoted sounds pretty bad. • but that seems to miss the whole point of depressions: over-inflation Has to lead to deflation or X, and X is bad (angry masses, civil unrest, collapsed government, large scale wars etc). not many people have much money to begin with, and we should raise prices of homes and whatnot?
people who have foreclosed Need to foreclose, just like companies that go broke Need to-the bailouts were a huge mistake- or else your financial model is broken and you actually want to support net negative behavior in the economy. now, I’m no economics major, but I don’t need that degree to know this: in a nutshell, if you have an asset-house for eg.-and its market price is 100k but it and all the other houses in the area are being sold @ 500k and someone-most people anyway-actually buys that house by borrowing money they can never hope to pay back with interest in any reasonable amount of time, then that house’s price simply Has to go down or else you have X. how does ‘increasing inflation’ solve the fundamental problem of there being no more wealth to pay for anything with? the US has simply borrowed more than it can pay back for decades if ever; inflation will only cause matters to worsen, not improve. yes all governments have debt and survive, and a government having zero debt is unlikely to happen anytime soon, but that’s fine as long as the debt is manageable, and it might seem like that if we take ‘official’ reports of the Outstanding Public Debt being around $13.3 Trillion; even though that’s pretty bad, we’d just need tighter purse strings and some measures here and there and in a few decades it’ll be mostly paid off, but unfortunately that’s not going to happen.

Factor in the remaining ‘unfunded liabilities’, i.e. the benefits-money-promised by government to the elderly, sick, unemployed and so on-social security et al.-and our debt is over $60 Trillion; each citizen’s burden of an equal share amounts to around a quarter million $US.

put rais­ing in­fla­tion de­liber­ately in such a con­text and you’ll see how pretty bad it all ac­tu­ally is.

I know this is strong language from a non-economist, but again, this is not such a hard thing to grok; see http://communities.washingtontimes.com/neighborhood/stimulus/2010/jun/30/forgive-us-our-debts/ or http://cynicuseconomicus.blogspot.com/2008/09/banking-bailout-why-will-help-bankrupt.html

• Last night I in­tro­duced a cou­ple of friends to New­comb’s Prob­lem/​Coun­ter­fac­tual Mug­ging, and we dis­cussed it at some length. At some point, we some­how stum­bled across the ques­tion “how do you pic­ture Omega?”

Friend A pic­tures Omega as a large (~8 feet) hu­manoid with a deep voice and a wide stone block for a head.

When Friend B hears Omega, he imagines Darmani from Majora’s Mask (http://www.kasuto.net/image/officialart/majora_darmani.jpg)

And for my part, I’ve always pictured him as a humanoid with paper-white skin in a red jumpsuit with a cape (the cape, I think, comes from hearing him described as “flying off” after he’s confounded you).

So it seemed worth ask­ing LW just for the amuse­ment: how do you pic­ture Omega?

• I’ve always pic­tured Omega like this: sud­denly I’m pul­led from our world and ap­pear in a ster­ile white room that con­tains two boxes. At the same mo­ment I some­how know the prob­lem for­mu­la­tion. I open one box, take the mil­lion, and re­turn to the world.

• This, down to the white room and be­ing pul­led. Omega doesn’t Have form or per­son­al­ity. He’s be­yond physics.

• And when you get coun­ter­fac­tu­ally mugged, you’re in a ster­ile white room with a vend­ing ma­chine bill ac­cep­tor planted in the wall?

• No, just an empty room. If I take a bill out of my pocket and hold it in front of me, it dis­ap­pears and I go back. If I say “no”, I go back.

• Omega would get bet­ter re­sults if he ac­cepted Master Card.

• Then let’s imagine it as a phone call. “Excuse me Sir, I guess we have to withdraw $100 from your account due to counterfactual circumstances.” • I’ve always thought of Omega as looking something like a hydralisk—biological and alien, almost a scaled-down Lovecraftian horror. • (Necro-thread) I can’t explain why, but I’ve always imagined Omega to be a big hovering red sphere with a cartoonish face, and black beholder-like eyestalks coming off him from all sides. He may have been influenced by the Flying Spaghetti Monster. • He may have been influenced by the Flying Spaghetti Monster. Has FSM mythology got room for an archangel equivalent? Or perhaps a pantheon, an equivalent to the Norse Loki. Perhaps the love child of FSM and a ‘mortal’ AGI. Given a series of bizarre tasks with incomprehensible motives that he must complete to prove himself. • At the risk of spoiling a very good webcomic; Omega looks like this. DAMN YOU WILLIS. • A white human-shaped figure in a business suit, possibly faceless, stepping into a transparent blue cube for the flight part. Possibly unconsciously influenced by Einstein. • AI development in the real world? As a result, a lot of programmers at HFT firms spend most of their time trying to keep the software from running away. They create elaborate safeguard systems to form a walled garden around the traders but, exactly like a human trader, the programs know that they make money by being novel, doing things that other traders haven’t thought of. These gatekeeper programs are therefore under constant, hectic development as new algorithms are rolled out. The development pace necessitates that they implement only the most important safeguards, which means that certain types of algorithmic behavior can easily pass through.
As has been pointed out by others, these were “quotes” not “trades”, and they were far away from the inside price—therefore not something the risk management software would necessarily be looking for. —comment from gameDevNYC I can’t evaluate whether what he’s saying is plausible enough for science fiction—it’s certainly that—or likely to be true. • One of the facts about ‘hard’ AI, as is required for profitable NLP, is that the coders who developed it don’t even understand completely how it works. If they did, it would just be a regular program. TLDR: this definitely is emergent behavior—it is information passing between black-box algorithms with motivations that even the original programmers cannot make definitive statements about. Yuck. • (5 Aug 2010 1:09 UTC) “An Alien God” was recently re-posted on the stardestroyer.net “Science Logic and Morality” forum. You may find the resulting discussion interesting. http://bbs.stardestroyer.net/viewtopic.php?f=5&t=144148&start=0 • In an argument with a philosopher, I used Bayesian updating as an argument. The guy’s used to debating theists and was worried it wasn’t bulletproof. Somewhat akin to how, say, the sum of angles of a triangle only equals 180 degrees in Euclidean geometry. My question: what are the fundamental assumptions of Bayes’ theorem in particular and probability theory in general? Are any of these assumptions immediate candidates for worry? • If you’re talking about math, Bayes’ theorem is true and that’s the end of that. If you’re talking about degrees of belief that real people hold—especially if you want to convince your opponent to update in a specific direction because Bayes’ theorem says so—I’d advise using another strategy.
Going meta like “you must be persuaded by these arguments because blah blah blah” gives you less bang per buck than upgrading the arguments. • What kind of math do you know where things can be “true, and that’s the end of that”? In math, things should be provable from a known set of axioms, not chosen to be true because they feel right. Change the axioms, and you get a different result. Intuition is a good guide for finding a proof, and in picking axioms, but not much more than that. And intuitively true axioms can easily result in inconsistent systems. The questions “what axioms do I need to accept to prove Bayes’ Theorem?”, “why should I believe these axioms reflect the physical universe?” and “what proof techniques do I need to prove the theorem?” are very relevant to deciding whether to accept Bayes’ Theorem as a good model of the universe. • Bayes’ theorem doesn’t require much more than multiplication and division. Here are some probability definitions: P(A) = the probability of A happening P(A|B) = the probability of A happening given B has happened P(AB) = the probability of both A and B happening For example, if A is a fair, six-sided die rolling a 4 and B is said die rolling an even number, then P(A) = 1/6, P(A|B) = 1/3, P(AB) = 1/6. By definition, P(A|B) = P(AB)/P(B). In words, the probability of A given B is equal to the probability of both A and B divided by the probability of B. Solving for P(AB) tells us that: P(B)P(A|B) = P(AB) = P(A)P(B|A) Taking out the middle and solving for P(A|B) allows us to flip-flop from one side of the given to the other: P(A|B) = P(A)P(B|A)/P(B) Voila! Bayes’ Theorem is logically necessary. • I’d love to hear more reasons, but here’s one: The fact that we find it intuitive is (via evolution) evidence that it in fact is true in this universe. Right?
Un­for­tu­nately, there are enough ex­cep­tions to that rule that it prob­a­bly only counts as weak ev­i­dence. • Thank you all. It seems I per­haps haven’t phrased my ques­tion the way I thought of it. I don’t doubt the val­idity of the proofs un­der­ly­ing Bayes’ the­o­rem, just as I don’t doubt the val­idity of Eu­cli­dian ge­om­e­try. The ques­tion is rather if BT/​prob­a­bil­ity the­ory hinges on as­sump­tions that may turn out not to be nec­es­sar­ily true for all pos­si­ble wor­lds, ge­ome­tries, cur­va­tures, what­ever. This turned out to be the case for Eu­cli­dian ge­om­e­try, as it did for Zeno. They as­sumed fea­tures of the world which turned out not to be the case. It may be that my ques­tion doesn’t even make sense, but what I was try­ing to con­vey was what apri­ori as­sump­tions does BT rely on which may turn out to be dodgy in the real world? I’m not as such try­ing to con­vince peo­ple, rather try­ing to un­der­stand my own side’s ar­gu­ments. • I think Kevin Van Horn’s in­tro­duc­tion to Cox’s the­o­rem (warn­ing: pdf) is ex­actly what you’re look­ing for. (If you read the ar­ti­cle, please give me feed­back on the cor­rect­ness of my guess that it ad­dresses your con­cern.) • Bayes’ The­o­rem as­sumes that it is mean­ingful to talk about sub­jec­tive de­grees of be­lief, but be­yond that all you re­ally need is ba­sic ar­ith­metic. I can’t imag­ine a uni­verse in which sub­jec­tive de­grees of be­lief aren’t some­thing that can be rea­soned about, but that may be my failure and not re­al­ity’s. • Jaynes’ book PT:LoS has a good chap­ter on this, where he de­rives Bayes’ the­o­rem from sim­ple as­sump­tions (use of num­bers to rep­re­sent plau­si­bil­ity, con­sis­tency be­tween paths that com­pute the same value, con­ti­nu­ity, and agree­ment with com­mon sense qual­i­ta­tive rea­son­ing). The as­sump­tions are sound. 
Note that the validity of Bayes’ theorem is a separate question from the validity of any particular set of prior probabilities, which is on much shakier ground. • Bayes’s theorem follows almost immediately from the ordinary definition of conditional probability, which I think is itself so reassuringly intuitive that no one who accepts the use of probabilities would worry about it (except perhaps in the corner case where the denominator’s zero). • Wei Dai has cast some doubts on the AI-based approach Assuming that it is unlikely we will obtain fully satisfactory answers to all of the questions before the Singularity occurs, does it really make sense to pursue an AI-based approach? I am curious if he has “another approach” he wrote about; I am not brushed up on sl4/ob/lw prehistory. Personally I have some interest in increasing intelligence capability on the individual level via a “tools of thought” kind of approach, BCI in the limit. There is not much discussion of it here. • No, I haven’t written in any detail about any other approach. I think when I wrote that post I was mainly worried that Eliezer/SIAI wasn’t thinking enough about what other approaches might be more likely to succeed than FAI. After my visit to SIAI a few months ago, I became much less worried because I saw evidence that plenty of SIAI people were thinking seriously about this question. • I haven’t seen any other approaches mentioned here specifically; it would be interesting to hear what those thoughts are, if they are publishable. I think there is a lot of room for improving on Engelbart’s approach with modern tools. It may also be viewed as a booster to the FAI rocket, if it increases productivity enough.
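The six-sided-die example used in the Bayes’ theorem derivation a few comments up can be checked mechanically. A small sketch in Python, using exact fractions so nothing is lost to rounding:

```python
from fractions import Fraction

# Verify Bayes' theorem, P(A|B) = P(A) * P(B|A) / P(B), on the thread's
# example: A = "rolled a 4", B = "rolled an even number".
outcomes = {1, 2, 3, 4, 5, 6}   # sample space of a fair six-sided die
A = {4}
B = {2, 4, 6}

def p(event):
    """Probability of an event under a uniform distribution on `outcomes`."""
    return Fraction(len(event & outcomes), len(outcomes))

p_A = p(A)                 # 1/6
p_B = p(B)                 # 1/2
p_AB = p(A & B)            # 1/6: "a 4 AND an even number" is just "a 4"
p_A_given_B = p_AB / p_B   # 1/3, by the definition of conditional probability
p_B_given_A = p_AB / p_A   # 1: a 4 is always even

# Bayes' theorem follows from the two ways of writing P(AB):
assert p_A_given_B == p_A * p_B_given_A / p_B
print(p_A, p_B, p_A_given_B)   # prints: 1/6 1/2 1/3
```

This only exercises the theorem on one finite sample space, of course; the point in the thread stands that the theorem itself is just arithmetic, while the choice of priors is the contested part.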
• From the Long Now de­part­ment: “He Took a Po­laroid Every Day, Un­til the Day He Died” My com­ment on the Hacker News page de­scribes my lit­tle we­b­cam script to use with cron and (again) links to my Pre­dic­tion Book page. • If you have many differ­ent (and con­flict­ing, in that they de­mand un­di­vided at­ten­tion) in­ter­ests: if it was pos­si­ble, would copy­ing your­self in or­der to pur­sue them more effi­ciently satisfy you? One copy gets to learn draw­ing, an­other one im­merses it­self in math­e­mat­ics & physics, etc. In time, they can grow very differ­ent. (Is this sce­nario much differ­ent to you than sim­ply hav­ing chil­dren?) • I wouldn’t have prob­lems copy­ing my­self as long as I could merge the copies af­ter­wards. How­ever, it might not be pos­si­ble to have a merge op­er­a­tion for hu­man level sys­tems that both pre­serves in­for­ma­tion and pre­serves san­ity. E.g. if one copy started study­ing philos­o­phy and rad­i­cally changed its world views from the origi­nal, how do you merge this copy back into the origi­nal with­out los­ing in­for­ma­tion? • David Brin’s novel Kiln Peo­ple has this “merg­ing back” idea, with cheap copies, us­ing clay for a lot of the ma­te­rial and run­ning on a hy­dro­gen based metabolism so they are very short lived (hours to weeks, de­pend­ing on$$) and have to merge back rel­a­tively soon in or­der to keep con­ti­nu­ity of con­scious­ness through their long lived origi­nal. Lots of fas­ci­nat­ing prac­ti­cal eco­nomic, eth­i­cal, so­cial, mil­i­tary, and poli­ti­cal de­tails are ex­plored while a noir de­tec­tive story hap­pens in the fore­ground. I recom­mend it :-) • I agree, I don’t think merge is pos­si­ble in this sce­nario. I still see some gains, though (es­pe­cially when com­mu­ni­ca­tion is pos­si­ble): • I (the copy that does X) am happy be­cause I do what I wanted. • I (the other copies) am happy be­cause I partly iden­tify with the other copy (as I would be proud of my child/​stu­dent?) 
• I (all copies) get results I wanted (research, creative, or even personal insights if the first copy is able to communicate them) • If you don’t have the ability to merge, would the copies get equal rights as the original? Or would the original control all the resources and the copies get treated as second class citizens? If the copies were second class citizens, I would probably not fork, because this would result in slavery. If the copies do get equal rights, how do you plan to allocate resources that you had before forking, such as wealth and friends? If I split the wealth down the middle, I would probably be OK with the lack of merging. However, I’m not sure how I would divide up social relationships between the copy and the original. If both the original and the copy had to reduce their financial and social capital by half, this might have a net negative utility. If the goal is to just learn a new skill such as drawing, a more efficient solution might involve uploading yourself without copying yourself and then running the upload faster than realtime. I.e. the upload thinks it has spent a year learning a new skill but only a day has gone by in the real world. However, this trick won’t work if the goal involves interacting with others unless they are also willing to run faster than realtime. • Tentatively—there’d be a central uberperson which wouldn’t be that much like a single human being. If I had reason to think it was safe, I’d really like to live that way. • Do what e.g. Mercurial does: report that the copies are too different for automatic merge, and punt the problem back to the user. In other words, you are right that there is no solution in the general case, but that should not necessarily deter us from looking for a solution that works in 90% of cases. • That sounds (to me) better than having children, but not as good as living longer.
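The Mercurial-style “merge what you can, punt the rest” idea above can be sketched in a few lines of Python. This is a toy model only: the memory-as-dictionary representation and all names are hypothetical, and the point is just the control flow of a three-way merge that refuses to guess.

```python
# Toy three-way merge in the spirit of Mercurial: accept a change when
# only one copy made it, and refuse to guess when both copies changed
# the same entry differently. All names here are hypothetical.
class MergeConflict(Exception):
    pass

def merge(ancestor, left, right):
    merged = {}
    for key in set(ancestor) | set(left) | set(right):
        a, l, r = ancestor.get(key), left.get(key), right.get(key)
        if l == r:        # both copies agree (possibly both unchanged)
            value = l
        elif l == a:      # only the right copy changed this entry
            value = r
        elif r == a:      # only the left copy changed this entry
            value = l
        else:             # both changed it differently: punt to the user
            raise MergeConflict(key)
        if value is not None:
            merged[key] = value
    return merged

base = {"worldview": "v1", "hobby": "none"}
art  = {"worldview": "v1", "hobby": "drawing"}
phil = {"worldview": "radically revised", "hobby": "none"}

print(sorted(merge(base, art, phil).items()))
# [('hobby', 'drawing'), ('worldview', 'radically revised')]
```

As in Mercurial, the automatic part only handles the case where exactly one side touched an entry; the philosophy-copy-versus-art-copy case above merges cleanly, but two copies that revised the same belief differently still end in a conflict the “user” has to resolve.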
• Sounds wonderful. Divide and conquer. As this sounds like a computer-assisted scenario, I would like the ability to append memories while sleeping. Wake up and have access to the memories of the copy. This would not necessarily include full proficiency, as I suspect that muscle memory may not get copied.

• Waste of processing power. Having dozens of focuses of attention and a corresponding body/brain construction is more efficient.

• Waste of processing power. Because basic functions are being repeated?

• I’d rather say the higher-level functions are excessively redundant. Then there are coordination problems, competition for shared resources (e.g. money, sexual partner), possibly divergence of near- and far-term goals, relatively low in-group communication speed, possibly fewer cross-domain insights.

• What’s the difference between a copy of yourself and an extra “body/brain construction”?

• I think red75 meant rebuilding yourself into a more “multi-threaded” being. I’m not sure I would want to go in that direction, though—it’s hard to imagine what the result would feel like; it probably couldn’t even be called conscious in the human sense, but somehow multiply-conscious...

• Yes, something like that. But I don’t think that the consciousness of such a being would be dramatically different, because it should still contain a “central executive” that coordinates the overall behavior of that being and controls the direction and distribution of attention—attention that is, however, much more fine-grained than a human’s.

• Copying has at best zero utility (as regards interests): each copy only indulges in one interest, and I anticipate being only one copy, even if I don’t know in advance which one. How is having children at all similar?
1) children would have different interests; 2) I cannot control (precommit) future children; 3) raising children would be for me a huge negative utility—both emotionally and resource-wise.

• Copying has at best zero utility (as regards interests)

This is not true for me. I care about my ideas beyond my own desire to implement them. If I knew there was a passionate and capable person willing to take over some of my ideas (which I’d otherwise not have time for), I’d jump on the opportunity. Doubly so if the other person was a copy of me, in which case I’d not only have a guarantee of competence, but assurance that the person would be able to relate the story and product to me afterwards (and possibly share the profit).

• Doubly so if the other person was a copy of me, in which case I’d not only have a guarantee of competence, but assurance that the person would be able to relate the story and product to me afterwards (and possibly share the profit).

Interestingly, now that you bring this up, I’m not at all certain that I’d be able to communicate especially effectively with a copy of myself. Probably better than with a randomly selected person, but perhaps not as well as I might hope.

• Interestingly, now that you bring this up, I’m not at all certain that I’d be able to communicate especially effectively with a copy of myself. Probably better than with a randomly selected person, but perhaps not as well as I might hope.

What makes you reach that conclusion?

• I think communication would start out good and become amazing over time. I don’t communicate with myself completely in English; there are a lot of thoughts that go through unencoded. Having a copy of myself to talk to would force us to encode those raw thoughts as best as possible.
This isn’t necessarily easy, but I think the really difficult part would already be behind us, namely having the same core thoughts.

• How is having children at all similar?

I think people can feel a sense of accomplishment when their child achieves something they wanted but never got around to.

• I would like feedback on my recent blog post: http://www.kmeme.com/2010/07/singularity-is-always-steep.html

It’s simplistic for this crowd, but something that bothered me for a while. When I first saw Kurzweil speak in person (GDC 2008) he of course showed both linear and log scale plots. But I always thought the log scale plots were just a convenient way to fit more on the screen, that the “real” behavior was more like the linear scale plot, building to a dramatic steep slope in the coming years.

Instead I now believe in many cases the log plot is closer to “the real thing”, or at least how we perceive that thing. For example, in the post I talk about computational capacity. I believe the exponential increase in capacity translates into a perceived linear increase in utility. A computer twice as fast is only incrementally more useful, in terms of what applications can be run. This holds true today and will hold true in 2040 or any other year. Therefore computational utility is incrementally increasing today and will be incrementally increasing in 2040 or any future date. It’s not building to some dramatic peak.

None of this says anything against the possibility of a Singularity. If you pass the threshold where machine intelligence is possible, you pass it, whatever the perceived rate of progress at the time.
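The post's claim can be made concrete in a few lines of Python (a toy model with invented numbers, not Kurzweil's data): under steady doubling, the log-scale view, which on this account tracks perceived utility, climbs by the same fixed step every year, so there is no special year where the curve "takes off".

```python
import math

# 30 years of steady doubling: 1x up to ~10^9x capacity.
capacity = [2 ** year for year in range(31)]

# The log-scale view, which the post argues tracks perceived utility.
log_view = [math.log2(c) for c in capacity]

yearly_ratio = [capacity[i + 1] / capacity[i] for i in range(30)]
yearly_step = [log_view[i + 1] - log_view[i] for i in range(30)]

# From inside the curve, every year looks alike: capacity always doubles
# (ratio 2.0), and the log/utility view always climbs by the same step (1.0).
# The dramatic "knee" on the linear plot is an artifact of where you stand.
```

The linear plot of `capacity` looks like it is "shooting through the roof" near year 30, yet `yearly_step` is identical at year 1 and year 30, which is the sense in which the log plot is "the real thing".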
• My essay on the topic: http://alife.co.uk/essays/the_singularity_is_nonsense/

See also: “The Singularity” by Lyle Burkhead—see the section “Exponential functions don’t have singularities!”

It’s not exponential, it’s sigmoidal

The Singularity Myth

Singularity Skepticism: Exposing Exponential Errors

IMO, those interested in computational limits should discuss per-kg figures. The metric Moore’s law uses is not much use really—since it would be relatively easy to make large asynchronous ICs with lots of faults—which would make a complete mess of the “law”.

• I would love to see an ongoing big wiki-style FAQ addressing all possible received criticisms of the singularity—of course, refuting the refutable ones, accepting the sensible. A version on steroids of what this one did with Atheism. The team would be:

• one guy inviting and sorting out criticism and updating the website.

• an ad hoc team of responders.

It seems criticism and answers have been scattered all over. There seems to be no one-stop source for that.

• Wow, good stuff. Especially liked yours not linked above: http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/

I called the bluff on the exponential itself, but I was willing to believe that crossing the brain-equivalent threshold and the rise of machine intelligence could produce some kind of sudden acceleration or event. I felt The Singularity wasn’t going to happen because of exponential growth itself, but might still happen because of where exponential growth takes us.

But you make a very good case that the whole thing is bunk. I especially like the “different levels of intelligence” point, had not heard that before re: AI.

But I still find it tempting to say there is just something special about machines that can design other machines.
That’s like pointing a camcorder at a TV screen: it leads to some kind of instant recursion. But maybe it is similar—a neat trick, but not something which changes everything all of a sudden.

I wonder if someone 50 years ago said “some day computers will display high quality video and everyone will watch computers instead of TV or film”. Sure, it is happening, but it’s a rather long, slow transition which in fact might never 100% complete. Maybe AI is more like that.

• IIRC, Vinge said that the Singularity might look like a shockingly sudden jump from an earlier point of view, but looking back over it, it might seem like a comprehensible if somewhat bumpy road. It hasn’t been fast, but I think a paleolithic human would have a hard time understanding how an economic crisis is possible.

• I’m starting to believe the term The Singularity can be replaced with The Future without any loss. Here is something from The Singularity Institute with the substitution made:

But the real heart of the The Future is the idea of better intelligence or smarter minds. Humans are not just bigger chimps; we are better chimps. This is the hardest part of the The Future to discuss—it’s easy to look at a neuron and a transistor and say that one is slow and one is fast, but the mind is harder to understand. Sometimes discussion of the The Future tends to focus on faster brains or bigger brains because brains are relatively easy to argue about compared to minds; easier to visualize and easier to describe.

• I don’t think it’s gotten that vacuous, at least as SIAI uses it. (They tend to use it pretty narrowly to refer to the intelligence explosion point, at least the people there whom I’ve talked to.
The Summit is a bit broader, but I suppose that’s to be expected, what with Kurzweil’s involvement and the need to fill two days with semi-technical and non-technical discussion of intelligence-related technology, science, and philosophy.)

You say that it can be replaced with “the future” without any loss, but your example doesn’t really bear that out. If I stumbled upon that passage not knowing its origin, I’d be pretty confused by how it keeps talking about “the future” as though some point about increasing intelligence had already been established as fundamental. (Indeed, the first sentence of that essay defines the Singularity as “the technological creation of smarter-than-human intelligence”, thereby establishing a promise to use it consistently to mean that, and you can’t change that to “the future” without being very very confusing to anyone who has heard the word “future” before.)

It may be possible to do a less-lossy Singularity → Future substitution on writings by people who’ve read “The Singularity Is Near” and then decided to be futurists too, but even Kurzweil himself doesn’t use the word so generally.

• You are right, it was an exaggeration to say you can swap Singularity with Future everywhere. But it’s an exaggeration born out of a truth. Many things said about The Singularity are simply things we could say about the future. They are true today but will be true again in 2045 or 2095 or any year.

This comes back to the root post and the perfectly smooth nature of the exponential. While smoothness implies there is nothing special brewing in 30 years, it also implies 30 years from now things will look remarkably like today. We will be staring at an upcoming billion-fold improvement in computer capacity and marveling over how it will change everything. Which it will.
Kurzweil says The Singularity is just “an event which is hard to see beyond”. I submit every 30-year chunk of time is “hard to see beyond”. It’s a long enough time that things will change dramatically. That has always been true and always will be.

• I think that if The Future were commonly used, it would rapidly acquire all the weird connotations of The Singularity, or worse.

• I am not sure what you mean about the “different levels of intelligence” point. Maybe this:

“A machine intelligence that is of “roughly human-level” is actually likely to be either vastly superior in some domains or vastly inferior in others—simply because machine intelligence so far has proven to be so vastly different from our own in terms of its strengths and weaknesses [...]”

• Actually, by “different levels of intelligence” I meant your point that humans themselves have very different levels of intelligence, one from the other. That “human-level AI” is a very broad target, not a narrow one.

I’ve never seen it discussed: does an AI require more computation to think about quantum physics than to think about what order to pick up items in the grocery store? How about training time? Is it a little more, or orders of magnitude more? I don’t think it is known.

• Human intelligence can go down pretty low at either end of life—and in sickness. There is a bit of a lump of well people in the middle, though—where intelligence is not so widely distributed. The intelligence required to do jobs is currently even more spread out. As automation progresses, the low end of that range will be gradually swallowed up.

• More? If anything, I suspect thinking about quantum physics takes less intelligence; it’s just not what we’ve evolved to do. An abstraction inversion, of sorts.

Hm. I also have this pet theory that some past event (that one near-extinction?)
has caused humans to have less variation in intelligence than most other species, thus causing a relatively egalitarian society. Admittedly, this is something I have close to zero evidence for—I’m mostly using it for fiction—but it would be interesting to see, if you’ve got evidence for or (I guess more likely) against.

• Machines designing machines will indeed be a massive change to the way phenotypes evolve. However, it is already going on today—to some extent. I expect machine intelligence won’t surpass human intelligence rapidly—but rather gradually, one faculty at a time. Memory and much calculation have already gone.

The extent to which machines design and build other machines has been gradually increasing for decades—in a process known as “automation”. That process may pick up speed, and perhaps by the time machines are doing more cognitive work than humans it might be going at a reasonable rate.

Automation takes over jobs gradually—partly because the skills needed for those jobs are not really human-level. Many cleaners and bank tellers were not using their brains to their full capacity in their work—and simple machines could do their jobs for them. However, this bunches together the remaining human workers somewhat—likely increasing the rate at which their jobs will eventually go.

So: possibly relatively rapid and dramatic changes—but most of the ideas used to justify using the “singularity” term seem wrong.
Here is some more orthodox terminology:

http://en.wikipedia.org/wiki/Digital_Revolution

http://en.wikipedia.org/wiki/Information_Revolution

I discussed this terminology in a recent video/essay: http://alife.co.uk/essays/engineering_revolution/

• This is easier to say when you’re near the top of the current curve. It doesn’t affect me much that my computer can’t handle hi-def YouTube, because I’m just a couple of doubling times behind the state of the art. But if you were using a computer ten doubling times back, you’d have trouble even just reading Less Wrong. Even if you overcame the format and software issues, we’d be trading funny cat videos that are bigger than all your storage. You’d get nothing without a helper god to downsample them.

When the singularity approaches, the doubling time will decrease, for some people. Maybe not for all. Maybe it will /feel/ like a linear increase in utility for the people whose abilities are being increased right along. For people who are 10 doublings behind and still falling, it will be obvious something is different.

• Consider $/MIPS available in the mainstream open market. The doubling time of this can’t go down “for some people”, it can only go down globally. Will this doubling time decrease leading up to the Singularity? Or during it?

I always felt that’s what the Singularity was, an acceleration of Moore’s Law-type progress. But I wrote the post because I think it’s easy to see a linear plot of exponential growth and say “look there, it’s shooting through the roof, that will be crazy!”. But in fact it won’t be any crazier than progress is today.

It will require a new growth term, machine intelligence kicking in for example, to actually feel like things are accelerating.

• It could if, for example, it were only available in large chunks. If you have $50 today you can’t get the $/MIPS of a $5000 server. You could maybe rent the time, but that requires a high level of knowledge, existing internet access at some level, and an application that is still meaningful on a remote basis.

The first augmentation technology that requires surgery will impose a different kind of ‘cost’, and will spread unevenly even among people who have the money.

It’s also important to note that a change in doubling time would show up as a /bend/ in a log scale graph, not a straight line.

• Yes, Kurzweil does show a bend in the real data in several cases. I did not try to duplicate that in my plots, I just did straight doubling every year.

I think any bending in the log scale plot could fairly be called acceleration.

But just the doubling itself, while it leads to ever-increasing step sizes, is not acceleration. In the case of computer performance it seems clear exponential growth of power produces only linear growth in utility.

I feel this point is not made clear in all contexts. In presentations I felt some of the linear scale graphs were used to “hype” the idea that everything was speeding up dramatically. I think only the bend points to a “speeding up”.
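The "only the bend points to a speeding up" criterion can be made operational by computing successive doubling times from a series (a toy sketch with invented series, not real hardware data): a straight line on the log plot means a constant doubling time, while a genuine bend means the doubling time itself is shrinking.

```python
import math

def doubling_times(series):
    """Approximate time between successive doublings, one sample per year.

    A constant result means a straight line on the log plot (plain
    exponential growth); a shrinking result means an upward bend,
    i.e. actual acceleration.
    """
    logs = [math.log2(x) for x in series]
    # time to double ~ 1 / (yearly increase in log2)
    return [1.0 / (logs[i + 1] - logs[i]) for i in range(len(logs) - 1)]

steady = [2 ** t for t in range(10)]                      # plain exponential
accelerating = [2 ** (t * t / 5) for t in range(1, 10)]   # log2 grows quadratically

# steady: doubling time is a constant 1.0 year, no acceleration, however
# steep its linear plot looks near the end.
# accelerating: doubling times shrink year on year, the true "bend".
```

On this measure, straight yearly doubling, the assumption in the post's plots, never registers as a speed-up, no matter how large the step sizes become.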

• I agree with your post, especially since I expect to win my bet with Eliezer.

• Did you notice that, as phrased in the link, your bet is about the following event: “[at a certain point in time under a few conditions] it will be interesting to hear Eliezer’s excuses”? Technically, all Eliezer will have to do to win the bet will be to write a boring excuse.

• Eliezer was the one who linked to that: the bet is about whether those conditions will be satisfied.

Anyway, he has already promised (more or less) not to make excuses if I win.

• I don’t know what this bet is, and I don’t see a link anywhere in your post.

• http://wiki.lesswrong.com/wiki/Bets_registry

(I am the original Unknown but I had to change my name when we moved from Overcoming Bias to Less Wrong because I don’t know how to access the other account.)

• Any chance you and Eliezer could set a date on your bet? I’d like to import the 3 open bets to Prediction Book, but I need a specific date. (PB, rightly, doesn’t do open-ended predictions.)

E.g. perhaps 2100, well after many Singularitarians expect some sort of AI, and also well after both of your actuarial death dates.

• If we agreed on that date, what would happen in the event that there was no AI by that time and both of us are still alive? (These conditions are surely very unlikely but there has to be some determinate answer anyway.)

• You could either:

1. donate the money to charity under the view ‘and you’re both wrong, so there!’

2. say that the prediction is implicitly a big AND—‘there will be an AI by 2100 AND said first AI will not have… etc.’—and that the conditions allow ‘short-circuiting’ when any AI is created; with this change, reaching 2100 is a loss on your part.

3. Like #2, but the loss is on Eliezer’s part (the bet changes to ‘I think there won’t be an AI by 2100, but if there is, it won’t be Friendly and etc.’)

I like #2 better since I dislike implicit premises, and this (while you two are still relatively young and healthy) is as good a time as any to clarify the terms. But #1 follows more the Long Bets formula.
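Option #2 can be written out as code to make the short-circuiting explicit (a toy encoding of this reading of the bet, with made-up function and argument names, not anyone's actual terms):

```python
def resolve(year, ai_created, conditions_hold):
    """Resolve 'there will be an AI by 2100 AND it will satisfy <conditions>'.

    Under the option-2 reading, the later conjuncts short-circuit at the
    first AI, and reaching 2100 with no AI falsifies the implicit first
    conjunct, a loss for the predictor.
    """
    if ai_created:
        # short-circuit: the bet settles as soon as any AI is created
        return "predictor wins" if conditions_hold else "predictor loses"
    if year >= 2100:
        # the implicit 'there will be an AI by 2100' conjunct has failed
        return "predictor loses"
    return "still open"
```

Spelling it out this way shows why the implicit premise matters: the same facts (no AI, year 2100) resolve the bet in opposite directions under readings #2 and #3.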

• Eliezer and I are probably about equally confident that “there will not be AI by 2100, and both Eliezer and Unknown will still be alive” is incorrect. So it doesn’t seem very fair to select either 2 or 3. So option 1 seems better.

• Not that many will care, but I should get a brief appearance on Dateline NBC Friday, Aug. 20, at 10 p.m. Eastern/Pacific. A case I prosecuted is getting the Dateline treatment.

Elderly atheist farmer dead; his friend the popular preacher’s the suspect.

--JRM

• I know. Does any human mathematician really doubt that?

• I’ve been becoming more and more convinced that Kevin and Clippy are the same person. Besides Clippy’s attempt to get money for Kevin, one reason is that both of them refer to people with labels like “User:Kevin”. More evidence just came in here, namely these comments within 5 minutes of each other.

• I’m not User:Kevin.

• Explain why I should consider this to be evidence that you are not User:Kevin.

(This is not rhetorical. It is something worth exploring. How does this instance of a non-human agent gain credibility? How can I and such an agent build and maintain cooperation in the game of credible communication despite incentives to lie? Has Clippy himself done any of these things?)

• Perhaps you shouldn’t. But there’s a small chance that, if I were a human like User:Kevin, and other Users had made such inferences correctly identifying me, I would regard this time as the optimal one for revealing my true identity.

Therefore, my post above is slightly informative.

• That could easily be consistent with my statement, if taken in a certain sense.

• Yes, there are human mathematicians who doubt that P is not equal to NP.

See “Guest Column: The P=?NP Poll” (http://www.cs.umd.edu/~gasarch/papers/poll.pdf) by William Gasarch, where a poll was taken of 100 experts, 9 of whom ventured the guess that P = NP and 22 of whom offered no opinion on how the P vs. NP question will be resolved. The document has quotes from various of the people polled elaborating on what their beliefs are on this matter.

• How do you know you know?

• There’s a very good summary by Scott Aaronson describing why we believe that P is very likely to be not equal to NP. However, Clippy’s confidence seems unjustified. In particular, there was a poll a few years ago that showed that while a majority of computer scientists believe that P ≠ NP, a substantial fraction do not. (The link was here but seems to be not functioning at the moment (according to umd.edu’s main page today they have a scheduled outage of most Web services for maintenance, so I’ll check again later; I don’t remember the exact numbers so I can’t cite them right now)).

This isn’t precisely my area, but speaking as a mathematician whose work touches on complexity issues, I’d estimate around a 1⁄100 chance that P=NP.

• URL is re­peated twice in link?

• Thanks, fixed.

• Because if it were otherwise—if verifying a solution were of the same order of computational difficulty as finding it—it would be a lot harder to account for my observations than if it weren’t so.

For example, verifying a proof would be of similar difficulty to finding the proof, which would mean nature would stumble upon representations isomorphic to either with similar probability, which we do not see.

The possibility that P = NP but with a “large polynomial degree” or constant is too ridiculous to be taken seriously; the algorithmic complexity of the set of NP-complete problems does not permit a shortcut that characterizes the entire set in a way that would allow such a solution to exist.

I can’t present a formal proof, but I have sufficient reason to predicate future actions on P ≠ NP, for the same reason I have sufficient reason to predicate future actions on any belief I hold, including beliefs about the provability or truth of mathematical theorems.
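The verification/search asymmetry this argument leans on can be illustrated with SAT, the canonical NP-complete problem (a minimal sketch; the clause set is an arbitrary example): checking a candidate assignment takes time linear in the formula size, while the naive search below may try all 2^n assignments. That P ≠ NP is exactly the conjecture that no clever algorithm collapses this gap.

```python
from itertools import product

# A clause is a list of literals; literal +i means x_i, -i means "not x_i".
# Example formula: (x1 v x2) & (~x1 v x3) & (~x2 v ~x3) & (x1 v x3)
clauses = [[1, 2], [-1, 3], [-2, -3], [1, 3]]

def verify(assignment, clauses):
    """Polynomial-time check: every clause contains a satisfied literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def brute_force(n, clauses):
    """Exponential-time search: try up to 2^n assignments of x_1..x_n."""
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: bits[i] for i in range(n)}
        if verify(assignment, clauses):
            return assignment
    return None

solution = brute_force(3, clauses)
# Any proposed solution, wherever it came from, is cheap to check with
# verify(); producing one in the first place is the expensive part.
```

The same shape recurs in the proof example above: checking a finished proof line by line is mechanical, while searching the space of possible proofs is not.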

• Most human mathematicians think along similar lines. It will still be a big deal when P ≠ NP is proven, if for no other reason than that it pays a million dollars. That’s a lot of paperclips.

Let me know if you think you can solve any of these! http://www.claymath.org/millennium/

• The possibility that P = NP but with a “large polynomial degree” or constant is too ridiculous to be taken seriously; the algorithmic complexity of the set of NP-complete problems does not permit a shortcut that characterizes the entire set in a way that would allow such a solution to exist.

Would you elaborate?

• As people are probably aware, Hitchens has cancer, which is likely to kill him in the not-too-distant future. There does not seem to be much to be done about this; but I wonder if it’s possible to pass the hat to pay for cryonics for him? Apart from the fuzzies of saving a life with X percent probability, which can be had much cheaper by sending food to Africa, it might serve as marketing for cryonics, causing others to sign up. Of course, this assumes that he would accept, and also that there wouldn’t be a perception that he was just grasping at any straw available.

• I’d love to persuade him, but no way am I passing a hat.

1. Would Hitchens not be able to afford cryonics without donations?

2. perception that he was just grasping at any straw available

What’s wrong with this? Isn’t that exactly what cryonics is: grasping the only available straw?

(Hm, how do I get a sentence inside the numbering indentation but outside the quotation?)

• Would Hitchens not be able to afford cryonics without donations?

Perhaps so, but would he consider it the best use of his resources? Whereas if he gets it for free, take it or lose it, that’s a different matter.

What’s wrong with this? Isn’t that exactly what cryonics is: grasping the only available straw?

For marketing purposes it would be an epic fail. In interviews he has made the point that no, he will not be doing any deathbed conversions unless he goes mad from pain. If cryonics is seen as only a deathbed conversion to a different religion (easy pattern completions: “Rapture of the Nerds”, “weird beliefs = cults”) it’ll merely reinforce the perception of cryonics as something rather kooky which serious people needn’t spend time on. Your point is correct, but will only work as PR if that’s how it gets across to the public: this is a straw with an actual chance of working.

• Ah, I see. Certainly it would be better if he made the choice well before he’s at death’s door/in terrible pain/etc.

• Knowing that medicine is often more about signaling care than improving health, it’s hard for me to make a big fuss over some minor ailment of a friend or family member. Consciously trying to signal care seems too fake and manipulative. Unfortunately, others then interpret my lack of fuss-making as not caring. Has anyone else run into this problem, and if so, how did you deal with it?

• I feel like I’ve wrestled with this, or something similar. I will throw some thoughts out.

In relating to your example, I recall times when I was expected to give care that I didn’t think a person needed, and I guess my sense was that they were weak to expect it (and so I was unable to empathize with them), or that my fake care would encourage them to be weak. I also felt that the care was disingenuous because it wasn’t really doing anything.

I no longer feel that way, and what changed over several years, I guess, is a deeper realization (along an independent, separate path of experiences, including being a mother) of the human condition: we are all lonely, isolated minds trapped in physical bodies. We ache for connection—more so at different times of our lives, and some more than others, with different levels of comfort for different levels—but infants can’t survive without affection, and children and adults also need affection. (Alicorn’s “love languages” appropriate here.) Whatever expressions of affection we prefer, I think we need all of them a little bit, and physical, platonic affection is something we just don’t receive as often. (I hear this is especially true for the elderly.)

Signaling medical care is token for physical care, thus it stands in for physical affection—even if there is no physical contact involved. If there is physical contact involved—the placement of a band-aid on a knee—then that is even better. I think it is important to realize that people do have a need for such physical affection, and medical situations provide a context for this (often at times when people are in need of more affection anyway).

• Good point. But the next question ought to be whether there’s a creative third alternative that would allow us to better signal our caring while being less wasteful. In some cases (the rising popularity of hospice rather than hospital for terminal illness), we can see this already being done.

(For a similar example, some couples planning weddings are moving away from the massively wasteful† registry option in favor of other ideas. It looks tacky to just ask for a cash donation, of course, but there really are third alternatives—one couple asked for donations toward the specific events they planned for their honeymoon, while others ask for donations toward a favored list of charities. Etc.)

† Guests signal their generosity and regard for the new couple by buying them something from a set of nice things. However, the couple typically asks for things that are uselessly nicer than what they would buy themselves if it were their money, so as to signal sophistication. The end result is that a lot of money gets wasted on overly specific kitchen gadgets which will gather dust, or overly nice china that rarely gets used, etc.

• Without specific examples, I hadn’t thought of signaling care that was expensive. (I guessed it was emotionally expensive for Wei Dai.) But yes, taking someone to see the doctor when you know that wouldn’t be useful would be quite expensive.

• I haven’t run into this problem with medicine, but I have about other things. In those cases I handled it by some combination of 1) explaining that I do care but that I don’t think it’s worth spending resources on expensive, otherwise-useless signals, and 2) consciously trying to signal care when it’s cheap, even though it feels fake and they know it’s a conscious effort.

With medicine, if I actually do care, I’ll research the problem and usually suggest a better treatment. The average doctor is more or less incompetent at what they’re supposed to do (like everyone else), so using him and Google as resources is often enough to come up with a better game plan.

• One way to model someone’s beliefs, at a given frozen moment of time, is as a real-valued function P on the set of all assertions. In an ideal situation, P will be subject to a lot of consistency conditions; for instance, if A is a logical consequence of B, then P(A) is not smaller than P(B). This ideal P is very smart: if such a P has P(math axioms) very close to 1, then it will have P(math theorems) very close to 1 as well.

Clearly, even a Bayesian superintelligence is not going to maintain an infinitely large database of values of P that it updates from instant to instant. Rather, it will have something like a computer program that takes as input an assertion A, spends some time thinking, and outputs a number P(A). I think we cannot expect the computed numbers P(A) to have the consistency property (B implies A means P(A) not smaller than P(B)). For instance, it should be possible for a superintelligence to answer a math question (say, Goldbach’s conjecture) with “very likely true” and have Goldbach’s conjecture turn out false.

(Since “A” is a logical consequence of “A and B”, I guess I am accusing superintelligences of committing a souped-up form of the conjunction fallacy.)
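A minimal mechanized version of that failure mode: suppose the program, instead of checking entailments, scores an assertion by a cheap representativeness heuristic, rewarding familiar-sounding details. The keyword list and scoring rule below are invented purely for illustration:

```python
# A cheap heuristic "estimator": it never inspects logical structure, it just
# rewards assertions that mention recognized details. Entirely made up, to
# show how a bounded procedure can violate the consistency property.
FAMILIAR = {"bank", "teller", "feminist", "activist"}

def heuristic_p(assertion):
    words = set(assertion.lower().split())
    hits = len(words & FAMILIAR)
    # More matched details -> higher score, capped below 1.
    return min(0.95, 0.25 + 0.25 * hits)

p_single = heuristic_p("linda is a teller")               # 1 keyword hit
p_conj = heuristic_p("linda is a feminist and a teller")  # 2 keyword hits

# "linda is a teller" is a logical consequence of the conjunction, so
# consistency demands p_single >= p_conj. The heuristic disagrees:
print(p_single, p_conj)  # 0.5 0.75
assert p_conj > p_single  # the conjunction fallacy, committed by a program
```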

The fact that a prior in practice won’t be a set of cached numbers but instead a computer program, subject to all the attendant resource constraints, seems important to me, but I’m open to the possibility that it’s a red herring. Am I making some kind of classic or easily addressed error?

• I think we cannot expect the computed numbers P(A) to have the consistency property (B implies A means P(A) not smaller than P(B)).

Clarify for me what you are saying here. Why would a bounded-rational superintelligence maintain a logically inconsistent belief system? Are you making the following observation?

• Even a superintelligence is not logically omniscient

• There are inevitably going to be complicated mathematical properties of the superintelligence’s map that are not worth spending processing time on. This is ensured by the limits of physics itself.

• Outside the ‘bounds’ of the superintelligence’s logical searching there will be logical properties for which the superintelligence’s beliefs are not consistent.

• A Bayesian superintelligence is not logically omniscient, so some of its beliefs will be logically inconsistent.

My impression is that the above holds true unless the superintelligence in question cripples itself, sacrificing most of its instrumentally rational capability for epistemic purity.

• I am glad that a term like “bounded-rational” exists. If it’s been discussed someplace very thoroughly then I likely don’t have very much to add. What are some proposals for modeling bounded Bayesianism?

I think what I’m saying is consistent with your bullet points, but I would go further. I’ll focus on one point: I do not think it’s possible for a bounded agent to be epistemically pure, even having sacrificed most or all of its instrumentally rational capability. Epistemic impurity is built right into math and logic.

Let me make the following assumption about our bounded rational agent: given any assertion A, it has the capability of computing its prior P(A) in time that is polynomial in the length of A. That is, it is not strictly agnostic about anything. Since there exist assertions A which are logical consequences of some axioms, but whose shortest proof is super-polynomial (in fact it gets much worse) in the length of A, it seems very unlikely that we will have P(A) > P(Axioms) for all provable assertions A.

(I think you could make this into a rigorous mathematical statement, but I am not claiming to have proved it—I don’t see how to rule out the possibility that P always computes P(A) > P(Axioms) (and quickly!) just by luck. Such a P would be very valuable.)
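One way to see the proof-length obstruction concretely is a prior implemented as proof search with a hard step budget. The toy Horn-rule theory and the placeholder output values 0.95 and 0.5 below are arbitrary choices for illustration, not a proposal for how a real agent should assign them:

```python
# A toy "prior as a program": forward-chain from the axioms, but give up
# after `budget` derivation steps. 0.95 (proved within budget) and 0.5
# (search exhausted) are arbitrary placeholder probabilities.
AXIOMS = {"p0"}
RULES = {f"p{i}": f"p{i+1}" for i in range(10)}  # p0 -> p1 -> ... -> p10

def bounded_p(assertion, budget):
    known = set(AXIOMS)
    for _ in range(budget):
        if assertion in known:
            return 0.95
        known |= {RULES[f] for f in known if f in RULES}  # one derivation step
    return 0.95 if assertion in known else 0.5

# A theorem whose proof fits in the budget scores as high as the axioms...
print(bounded_p("p2", budget=5))  # 0.95
print(bounded_p("p0", budget=5))  # 0.95
# ...but a theorem whose shortest proof exceeds the budget scores *below*
# the axioms, violating the consistency condition.
print(bounded_p("p8", budget=5))  # 0.5
```

Every pi is provable from the axiom, so an ideal P would give each of them at least P(Axioms); the budget forces the computed P to betray that inequality for any chain longer than the search allows.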