# Open thread, Mar. 9 - Mar. 15, 2015

If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday and end on Sunday.

• The sequences eBook, Rationality: From AI to Zombies, will most likely be released early in the day on March 13, 2015.

• This has been published! I assume a Main post on the subject will be coming soon so I won’t create one now.

Unless I am much mistaken, the Pebblesorters would not approve of the cover :)

• And by March 13 I mean March 12.

• Google Ventures and the Search for Immortality: Bill Maris has \$425 million to invest this year, and the freedom to invest it however he wants. He’s looking for companies that will slow aging, reverse disease, and extend life.

• You’d think that having worked in a biomedical lab at Duke he’d know better than to say things like: “We actually have the tools in the life sciences to achieve anything that you have the audacity to envision”

• Yes, but he presumably also knows what sort of things one might say if one wants other investors to join in on a goal.

• Does anyone have any good web resources on how to be a good community moderator?

A friend and I will shortly be launching a podcast and want to have a Reddit community where listeners can interact with us. He and I will be the forum’s moderators to begin with, and I want to research how to do it well.

• Here is a thing at Making Light. There are probably other relevant posts on said blog, but this one seems to have what I consider the key points.

I’ll quote some specific points that might be more surprising:

\5. Over-specific rules are an invitation to people who get off on gaming the system.

\9. If you judge that a post is offensive, upsetting, or just plain unpleasant, it’s important to get rid of it, or at least make it hard to read. Do it as quickly as possible. There’s no more useless advice than to tell people to just ignore such things. We can’t. We automatically read what falls under our eyes.

\10. Another important rule: You can let one jeering, unpleasant jerk hang around for a while, but the minute you get two or more of them egging each other on, they both have to go, and all their recent messages with them. There are others like them prowling the net, looking for just that kind of situation. More of them will turn up, and they’ll encourage each other to behave more and more outrageously. Kill them quickly and have no regrets.

• I don’t know of any resources, but I moderated a community once, did absolutely no research, and everything turned out fine. There were about 15 or so core members in the community and maybe a couple of hundred members in total. My advice is to make explicit rules about what is and is not allowed in the community, and try to enforce them as evenly as possible. If you let people know what’s expected and err on the side of forgiveness when it comes to rule violations, most people in the community will understand and respect that you’re just doing what’s necessary to keep the community running smoothly.

We had two resident trolls who would just say whatever was the most aggravating thing they could think of, but after quite a short time people learned that that was all they were doing and they became quite ineffective. There was also a particular member whom everyone in the community seemed to dislike and who was continually the victim of quite harsh bullying from most of the other people there. Again, the hands-off approach seemed to work best: while most people were mean to him, he often antagonised them and brought more attacks onto himself, so I felt it wasn’t necessary for me to intervene, as he was making everything worse for himself. So yeah, I recommend being as hands-off as possible when it comes to mediating disputes, only intervening when absolutely necessary. That being said, as moderator you are usually in a position to set up games and activities that the rest of the community would be less inclined to organise, or would lack the moderator powers to set up.

If I were you I’d focus most of my energy on setting up ways for the community to interact constructively; it will most likely lead to there being fewer disputes to mediate, as people won’t start arguments for the sake of having something to talk about.

• I remember reading an article here a while back about a fair protocol for making a bet when we disagree on the odds, but I can’t find it. Anyone remember what that was? Thanks!

• From the Even Odds thread:

Assume there are n people. Let S_i be person i’s score for the event that occurs according to your favorite proper scoring rule. Then let the total payment to person i be

$T_i=S_i-\frac{1}{n-1}\sum_{j\ne i}S_j$

(i.e. the person’s score minus the average score of everyone else). If there are two people, this is just the difference in scores. The person makes a profit if T_i is positive and a payment if T_i is negative.

This scheme is always strategyproof and budget-balanced. If the Bregman divergence associated with the scoring rule is symmetric (like it is with the quadratic scoring rule), then each person expects the same profit before the question is resolved.
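As a concrete illustration, here is a minimal sketch of that payment rule in Python, instantiated with the quadratic (Brier-style) scoring rule; the function names are my own:

```python
def quadratic_score(p, outcome):
    # Proper quadratic (Brier-style) score for a binary event: higher is better, max 1.
    return 1 - (outcome - p) ** 2

def payments(probs, outcome):
    """T_i = S_i minus the average score of everyone else; the payments sum to zero."""
    scores = [quadratic_score(p, outcome) for p in probs]
    n, total = len(scores), sum(scores)
    return [s - (total - s) / (n - 1) for s in scores]

# Two people: A says 90% the event happens, B says 50%; the event happens.
print(payments([0.9, 0.5], 1))  # A gains about 0.24, B pays the same amount
```

With two people this reduces to the difference in scores, as the comment says, and the budget-balance property shows up as the payments summing to zero.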

• edit: no, I don’t think that’s it. I think I do remember the post you’re talking about, and I thought it included this anecdote, but this isn’t the one I was thinking of.

edit 2: http://lesswrong.com/lw/jgv/even_odds/ is the one I was thinking of.

• Great—thanks! (Thanks to badger below too)

• That is highly inconvenient. It means that teaching people to deal with cognitive biases is likely not going to have any magic silver bullet.

Also, this is further evidence for the already fairly strong thesis that intelligence and skill at rational thinking are not the same thing.

• I’m thinking about starting a new political party (in my country getting into parliament as a new party is easy, not virtually impossible, so it’s not necessarily a waste of time). The motivation for this is that the current political process seems inefficient.

Mostly I’m wondering if this idea has come up before on LessWrong and if there are good sources for something like this.

The most important thing is that no explicit policies are part of the party’s platform (i.e. no “we want a higher minimum wage”). I don’t really have a party program yet, but the basic idea is as follows: there are two parts to this party. The first part is about Terminal Values and Ethical Injunctions: what do we want to achieve, and what do we avoid doing even if it seems to get us closer to our goal? The Terminal Values could just be Frankena’s list of intrinsic values. The first requirement for people to vote for this party is that they agree with those values.

The second part is about the process of finding good policies: how to design a process that generates policies that help to satisfy our values. Some ideas:

• complete and utter transparency to fight the inevitable corruption; publish everything the government does

• instruct experts to find good policies and then listen to them (how would professional politicians know better than they do?)

• let the experts give probabilities on explicit predictions of how well the policies will work

• have a public score board that shows how well individual experts did in the past with their predictions

• when implementing a new policy, set a date at which to evaluate the efficacy and say in advance what you expect

• if a policy is found to be harmful, get rid of it; don’t be afraid to change your mind (but don’t make it unnecessarily hard for businesses to plan for the future by changing policies too frequently)

• react to feedback from the population; don’t wait until the next election

The idea is that the party won’t really be judged based on the policies it produces but rather on how well it keeps to the specified process. The values and the process are what identify the party. Of course there should be some room for changing the process if it doesn’t work...

The evaluation of policies in terms of how well they satisfy values seems to be a difficult problem. The problem is that Utilitarianism is difficult in practice.

So, there are quite a few open questions.

• http://www.amazon.co.uk/Swarmwise-Tactical-Manual-Changing-World/dp/1463533152

http://www.smbc-comics.com/?id=2710

I like the first link because it is at least trying to move past feudalism as an organizing principle. The second link is about the fact that it is hard to make groups of people act like we want (because groups of people operate under a set of poorly understood laws; likely these laws are cousins to things like natural selection in biology).

Public choice folks like to study this stuff, but it seems really, really hard.

• in my country new parties can get into parliament easily, so it’s not a waste of time

You may be right, and I don’t know the details of your situation or your values, but on the face of it that inference isn’t quite justified. It depends on what getting into parliament as such actually achieves. E.g., I can imagine that in some countries it’s easy for someone to start a new party and get into parliament, but a new one-person party in parliament has basically zero power to change anything. (It seems like there must be some difficulty somewhere along the line, because if getting the ability to make major changes in what your country does is easy, then everyone will want to do it and it will get harder because of competition. Unless somehow this is a huge opportunity that you’ve noticed and no one else has.)

I like the idea of a political party that has meta-policies rather than object-level policies, but it sounds like a difficult thing to sell to the public in sufficient numbers to get enough influence to change anything.

• OK, when I said “easy” I exaggerated quite a bit (I edited the original post). More accurate would be: “in the last three years at least one new party became popular enough to enter parliament” (the country is Germany and the party would be the AfD; before that, there was the German Pirate Party). Actually, to form a new party, signatures from at least 0.1% of all eligible voters are needed.

but it sounds like a difficult thing to sell to the public in sufficient numbers to get enough influence to change anything.

I also see that problem; my idea was to try to recruit some people on German internet fora and, if there is not enough interest, drop the idea.

• What about the process of gaining consensus? I find it hard to believe that lay people can be attracted by meta-values alone.

• Have you floated this idea with anyone else you know in Germany? I’m not asking if you’re ready and willing to get to the threshold of 0.1% of German voters (~7000 people). I’m just thinking more feedback, and others involved, whether one or two, might help. Also, you could just talk to lots of people in your local network about it. As far as I can tell, people might be loath to make a big commitment like helping you launch a party, but are willing to do trivial favors like putting you in touch with a contact who could give you advice on law, activism, politics, dealing with bureaucracy, finding volunteers, etc.

Do you attend a LessWrong meetup in Germany? If so, float this idea there. At the meetup I attend, it’s much easier to get quick feedback from (relatively) smart people in person, because communication errors are reduced, and it takes less time to relay and reply to ideas than over the Internet. Also, in person it is more difficult for us to skip over ideas or ignore them than on an Internet thread.

• A recent study looks at “equality bias”: given two or more people, even when one is clearly outperforming the others, one is still inclined to see the people as nearer in skill level than the data suggests. This occurred even when money was at stake; people continued to act as if others were closer in skill than they actually were. (I strongly suspect that this bias may have a cultural aspect.) A summary article discussing the research is here. The actual study is behind a paywall here and a related one is also behind a paywall here. I’m currently on vacation, but if people want, when I’m once again on the university network I should have access to both of these.

• The papers are here and here.

In light of that, maybe there’s no point in mentioning that PNAS is available at PMC after a delay of a few months.

• I’m toying with the idea of programming a game based on The Murder Hobo Investment Bubble. The short version is that Old Men buy land infested with monsters, hire Murder Hobos to kill the monsters, and resell the land at a profit. I want to make something that models the whole economy, with individual agents for each Old Man, Murder Hobo, and anything else I might add. Rather than explicitly program the bubble in, it would be cool to use some kind of machine learning algorithm to figure everything out. I figure the agents will make the sorts of mistakes that lead to investment bubbles automatically.

There are two problems. First, I have neither experience nor training with any machine learning except for Bayesian statistics. Second, it’s often not clear what to optimize for. I could make some kind of scoring system where every month everyone who is still alive has their score increase by the log of their money or something, but that would still only work well if I just use scores from the previous generation, which is slower-paced than I’d like.

Old Men could learn whether or not Murder Hobos will work for a certain price, and whether or not they’ll find more within a certain time frame, but if they buy a bad piece of land it’s not clear how bad this is. They still have the land, but it’s of an uncertain value. I suppose I could make it so they just buy options, and if they don’t sell the land within a certain time period they lose it.

Murder Hobos risk dying, which has an unknown opportunity cost. I’m thinking of just having them base the expected opportunity cost of death on the previous generation, but then it would take them a generation to respond to the fact that demand is way down and they need to start taking risky jobs for low pay.

Does anyone have any suggestions? I consider “give up and do something else instead” to be a valid suggestion, so say so if you think it’s what I should do.

Edit: I could have Murder Hobos work out the expected opportunity cost of death by checking what portion of Murder Hobos of each level died the previous year and how long it’s taking them to level up.

• Is it a game or is it an economic simulation? If a game, what does the Player do?

• The player can be an Old Man or a Murder Hobo. They make the same sort of choices the computer does, and at the end they can see how they compare to everyone else.

• I’m toying with the idea of programming a game based on .

Are you missing a word there?

• Fixed. I messed up the link.

• if they buy a bad piece of land it’s not clear how bad this is. They still have the land, but it’s of an uncertain value. I suppose I could make it so they just buy options, and if they don’t sell the land within a certain time period they lose it.

You could charge a periodic “property tax”; that way, the longer a player holds on to a property, the more it costs the player.

• That would make it even more complicated.


• Good news for the anxious: a simple relaxation technique once a week can have a significant effect on cortisol: http://www.ergo-log.com/cortrelax.html

The researchers tried Abbreviated Progressive Relaxation Training (APRT) on forty test subjects: “APRT consists of lying down and contracting specific muscle groups for seven seconds and then completely relaxing them for thirty seconds, while focusing your awareness on the experience of contracting and relaxing the muscle groups.

There is a fixed sequence in which you contract and relax the muscle groups. You start with your upper right arm and then go on to your left lower arm, left upper arm, forehead, muscles around your nose, jaw muscles, neck, chest, shoulders, upper back, stomach and then on to your right leg, ending with your left leg.”

(Where is the lower right arm BTW?)

• On MIRI’s website at https://intelligence.org/all-publications/, the link to Will Sawin and Abram Demski’s 2013 paper goes to https://intelligence.org/files/Pi1Pi2Probel.pdf, when it should go to http://intelligence.org/files/Pi1Pi2Problem.pdf

Not sure how to actually send this to the correct person.

• There should be some kind of penalty on PredictionBook (e.g. not being allowed to use the site for two weeks) for people who do not check the “make this prediction private” box for predictions that are about their personal life and which no one else can even understand.

• Are there ways to share private predictions?

• A reporter I know is interested in doing an article on people in the cryonics movement. If people are interested, please message me for details.

• Basic question about bits of evidence vs. bits of information:

I want to know the value of a random bit. I’m collecting evidence about the value of this bit.

First off, it seems weird to say “I have 33 bits of evidence that this bit is a 1.” What is a bit of evidence, if it takes an infinite number of bits of evidence to get 1 bit of information?

Second, each bit of evidence gives you a likelihood multiplier of 2. E.g., a piece of evidence that says the likelihood is 4:1 that the bit is a 1 gives you 2 bits of evidence about the value of that bit. Independent evidence that says the likelihood is 2:1 gives you 1 bit of evidence.

But that means a one-bit evidence-giver is someone who is right 2/3 of the time. Why 2/3?

Finally, if you knew nothing about the bit, and had the probability distribution Q = (P(1)=.5, P(0)=.5), and a one-bit evidence giver gave you 1 bit saying it was a 1, you now have the distribution P = (2/3, 1/3). The KL divergence of Q from P (log base 2) is only 0.0817, so it looks like you’ve gained .08 bits of information from your 1 bit of evidence. ???
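The 0.0817 figure is easy to check numerically; a quick sketch (the helper name is mine):

```python
from math import log2

def kl_divergence(p, q):
    """KL divergence D(p || q) in bits (log base 2), skipping zero-probability terms."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Posterior after one "bit of evidence" (2:1 odds for a 1) vs. the uniform prior.
print(round(kl_divergence([2/3, 1/3], [0.5, 0.5]), 4))  # 0.0817
```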

• I think I was wrong to say that 1 bit of evidence = likelihood multiplier of 2.

If you have a signal S, and P(x|S) = 1 while P(x|~S) = .5, then the likelihood multiplier is 2 and you get 1 bit of information, as computed by KL divergence. That signal did in fact require an infinite amount of evidence to make P(x|S) = 1, I think, so it’s a theoretical signal found only in math problems, like a frictionless surface in physics.

If you have a signal S, and P(x|S) = .5 while P(x|~S) = .25, then the likelihood multiplier is 2, but you get only .2075 bits of information.
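Reading “bits of information” in both cases as the KL divergence between the two conditional distributions P(·|S) and P(·|~S), the numbers check out; a sketch (this reading of the calculation is my own):

```python
from math import log2

def kl_bits(p, q):
    """KL divergence D(p || q) in bits for binary distributions given as P(x)."""
    pairs = [(p, q), (1 - p, 1 - q)]
    return sum(a * log2(a / b) for a, b in pairs if a > 0)

print(round(kl_bits(1.0, 0.5), 4))   # 1.0    -> first case: P(x|S)=1,  P(x|~S)=.5
print(round(kl_bits(0.5, 0.25), 4))  # 0.2075 -> second case: P(x|S)=.5, P(x|~S)=.25
```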

There’s a discussion of a similar question on stats.stackexchange.com. It appears that the sum, over a series of observations x, of

log(likelihood ratio = P(x | model 2) / P(x | model 1))

approximates the information gain from changing from model 1 to model 2, but not on a term-by-term basis. The approximation relies on the frequency of the observations in the entire observation series being drawn from a distribution close to model 2.

• Yes, there are incompatible uses of the phrase “bits of evidence.” In fact, the likelihood version is not compatible with itself: bits of evidence for Heads is not the same as bits of evidence against Tails. But still it has its place. Odds ratios do have that formal property. You may be interested in this Wikipedia article. In that version, a bit of information advantage that you have over the market is the ability to add log(2) to your expected log wealth, betting at the market prices. If you know with certainty the value of the next coin flip, then maybe you can leverage that into arbitrarily large returns, although I think the formalism breaks down at this point.

• Why does the likelihood grow exactly twice? (I’m just used to really indirect evidence, which is also seldom binary in the sense that I only get to see whole suites of traits, which usually go together but in some obscure cases vary in composition. So I guess I have plenty of C-bits that go into B-bits that might go into A-bits, but how do I measure the change in likelihood of A given C? I know it has to do with d-separation, but if C is something directly observable, like biomass, and B is an abstraction, like species, should I not derive A (an even higher abstraction, like ‘adaptiveness of spending early years in soil’) from C? There are just so many more metrics for C than for B...) Sorry for the ramble; I just felt stupid enough to ask anyway. If you were distracted from answering the parent, please do.

• I don’t understand what you’re asking, but I was wrong to say the likelihood grows by 2. See my reply to myself above.

• First off, it seems weird to say “I have 33 bits of evidence that this bit is a 1.”

It seems weird to me because the bits of “33 bits” look like the same units as the bit of “this bit”, but they aren’t the same. Map/territory. From now on, I’m calling the first A-bits, and the second B-bits.

Why does it take an infinite number of bits of evidence to get 1 bit of information?

It takes an infinite number of A-bits to know with absolute certainty one B-bit.

But that means a one-bit evidence-giver is someone who is right 2/3 of the time. Why the 2/3? That seems weird.

What were you expecting?

• Can one of the people here who has admin or moderator privileges over at PredictionBook please go and deal with some of the recent spammers?

• Since mild traumatic brain injury is sometimes an outcome of a motor vehicle collision, it seems possible that wearing a helmet while driving may help to mitigate this risk. Oddly, I have been unable to find any analysis or useful discussion. Any pointers?

• I wrote an essay about the advantages (and disadvantages) of maximizing over satisficing, but I’m a bit unsure about its quality; that’s why I would like to ask for feedback here before I post it on LessWrong.

Here’s a short summary:

According to research, there are so-called “maximizers” who tend to extensively search for the optimal solution. Other people — “satisficers” — settle for good enough and tend to accept the status quo. One can apply this distinction to many areas:

Epistemology/belief systems: Some people, one could describe them as epistemic maximizers, try to update their beliefs until they are maximally coherent and maximally consistent with the available data. Other people, epistemic satisficers, are not as curious and are content with their belief system, even if it has serious flaws and is not particularly coherent or accurate. They don’t go to great lengths to search for a better alternative because their current belief system is good enough for them.

Ethics: Many people are as altruistic as is necessary to feel good enough; phenomena like “moral licensing” and “purchasing of moral satisfaction” are evidence in favor of this. One could describe this as ethical satisficing. But there are also people who try to extensively search for the best moral action, i.e. for the action that does the most good (with regard to their axiology). Effective altruists are a good example of this type of ethical maximizing.

Social realm/relationships: This point is pretty obvious.

Existential/big picture questions: I’m less sure about this point, but it seems like one could apply the distinction here as well. Some people wonder a lot about the big picture and spend a lot of time reflecting on their terminal values and how to reach them in an optimal way. Nick Bostrom would be a good example of the type of person I have in mind here and of what could be called “existential maximizing”. In contrast, other people, not necessarily less intelligent or curious, don’t spend much time thinking about such crucial considerations. They take the fundamental rules of existence and the human condition (the “existential status quo”) as a given and don’t try to change it. Relatedly, transhumanists could also be thought of as existential maximizers in the sense that they are not satisfied with the human condition and try to change it – and maybe ultimately reach an “optimal mode of existence”.

What is “better”? Well, research shows that satisficers are happier and more easygoing. Maximizers tend to be more depressed and “picky”. They can also be quite arrogant and annoying. On the other hand, maximizers are more curious and always try hard to improve their life – and the lives of other people, which is nice.

I would really love to get some feedback on it.

• Here are my thoughts, having just read the summary above, not the whole essay yet.

They take the fundamental rules of existence and the human condition (the “existential status quo”) as a given and don’t try to change it.

This sentence confused me. I think it could be fixed with some examples of what would constitute an instance of challenging the “existential status quo” in action. The first example I was thinking of would be ending death or aging, except you’ve already got transhumanists in there.

Other examples might include:

• mitigating existential risks

• suggesting and working on civilization as a whole reaching a new level, such as colonizing other planets and solar systems

• trying to implement a better design for the fundamental functions of ubiquitous institutions, such as medicine, science, or law

Again, I’m just giving quick feedback. Hopefully you’ve already given more detail in the essay. Other than that, your summary seems fine to me.

• Again, I’m just giving quick feedback. Hopefully you’ve already given more detail in the essay. Other than that, your summary seems fine to me.

Thanks! And yeah, ending aging and death are some of the examples I gave in the complete essay.

• And sometimes, a satisficer acts as his image of a maximizer would, gets some kind of negative feedback, and either shrugs his shoulders and never does it again, or learns the safety rules and trains a habit of doing the nasty thing as a character-building experience. And other people may mistake him for a maximizer himself.

• Apparently fist bumps are a much more hygienic alternative to the handshake. This has been reported e.g. here, here and here.

I wonder whether I should try to get this adopted as a greeting among my friends. It might also be an alternative to the sometimes awkward choice between handshake and hug (though this is probably a regional cultural issue).

And I wonder whether the LW community has an opinion on this and whether it might be advanced in some way. Or whether it is just a misguided hype.

• I think people with a functioning immune system should not attempt to limit their exposure to microorganisms (except in the obvious cases, like having been in Liberia half a year ago). It’s both useless and counterproductive.

• I tend to think so too, but

• there are people with very varying strengths of immune systems

• the strength of the immune system changes over time (I notice that older people both tend to be ill less often and also to be more cautious regarding infections)

• handshakes are a strong social protocol that not everybody can evade easily

• you could still intentionally expose yourself to microorganisms

• There’s also a difference between exposing yourself to microorganisms in general and exposing yourself to high levels of one particular microorganism shedding from someone in whom it has already caused illness.

• Original Ideas

How often do you manage to assemble a few previous ideas in a way in which it is genuinely possible that nobody has assembled them before—that is, that you’ve had a truly original thought? When you do, how do you go about checking whether that’s the case? Or does such a thing matter to you at all?

For example: last night, I briefly considered the ‘Multiple Interacting Worlds’ interpretation of quantum physics, in which it is postulated that there are a large number of universes, each of which has pure Newtonian physics internally, but whose interactions with near-identical universes cause what we observe as quantum phenomena. It’s very similar to the ‘Multiple Worlds’ interpretation, except instead of new universes branching from old ones at every moment in an ever-spreading bush, all the branches branched out at the Big Bang. It occurred to me that while the ‘large number’ of universes is generally treated as being infinite, as far as my limited understanding of the theory goes, that’s not necessarily the case. And if there are a finite number of parallel worlds interacting with our own, each of which is slightly different and only interacts for as long as the initial conditions haven’t diverged too much… then, at some point in the future, the number of such universes interacting with ours will decrease, eventually to zero, thus reducing “quantum” effects until our universe operates under fully Newtonian principles. And looking backwards, this implies that “quantum” effects may have once been stronger when there were more universes that had not yet diverged from our own. All of which adds up to a mechanism by which certain universal constants would gradually change over the lifetime of the universe.

It’s not every day that I think of a brand-new eschatology to set alongside the Big Crunch, Big Freeze, and Big Rip.

And sure, until I dive into the world of physics to start figuring out which universal constants would change, and in which direction, it’s not even worth calling the above a ‘theory’; at best, it’s technobabble that could be used as background for a science-fiction story. But as far as I can tell, it’s /novel/ technobabble. Which is what inspired the initial paragraph of this post: do you do anything in particular with potentially truly original ideas?

• You can’t ever be entirely sure that an idea wasn’t thought of before. But, if you care to demonstrate originality, you can try an extensive literature review to see if anyone else has thought of the same idea. After that, the best you can say is that you haven’t seen anyone else with the same idea.

Personally, I don’t think being the first person to have an idea is worth much. It depends entirely on what you do with it. I tend to do detailed literature reviews because they help me generate ideas, not because they help me verify that my ideas are original.

• extensive literature review

I’m a random person on the internet; what sort of sources would be used in such a review?

• At the moment I’m working on a PhD, so my methods are biased towards resources available at a major research university. I have a list of different things to try when I want to be as comprehensive as possible. I’ll flesh out my list in more detail. You can do many of these if you are not at a university; e.g., if you can’t access online journal articles, try the Less Wrong Help Desk.

In terms of sources, the internet and physical libraries will be the main ones. I wrote more on the process of finding relevant prior work.

This process can be done in any particular order. You will probably find doing it iteratively to be useful, as you will become more familiar with different terminologies, etc.

Here are some things to try:

1. Searching Google, Google Scholar, and Google Books. Sometimes it’s worthwhile to keep a list of search terms you’ve tried. Also, keep a list of search terms to try. The problem with doing this alone is that it is incomplete, especially for older literature, and will likely remain so for some time.

2. Searching other research paper databases. In my case, this includes publisher-specific databases (Springer, Wiley, Elsevier, etc.), citation and bibliographic databases, and DTIC.

3. Looking for review papers (which often list a lot of related papers), books on the subject (again, they often list many related papers), and also annotated bibliographies/lists of abstracts. The latter can be a goldmine, especially if they contain foreign literature as well.
3. Look for re­view pa­pers (which of­ten list a lot of re­lated pa­pers), books on the sub­ject (again, they of­ten list many re­lated pa­pers), and also an­no­tated biblio­gra­phies/​lists of ab­stracts. The lat­ter can be a gold­mine, es­pe­cially if they con­tain for­eign liter­a­ture as well.

4. Brows­ing the library. I like to go to the sec­tion for a par­tic­u­lar rele­vant book and look at oth­ers nearby. You can find things you never would have no­ticed oth­er­wise this way. It’s also worth not­ing that if you are in a par­tic­u­lar city for a day, you might have luck check­ing a lo­cal library’s on­line cat­a­log or even the phys­i­cal library it­self. For ex­am­ple, I used to live near DC, but I never tried us­ing the Library of Congress un­til af­ter I moved away. I was work­ing an in­tern­ship in the area one sum­mer af­ter mov­ing, and I used the op­por­tu­nity to scan a very rare doc­u­ment.

5. Following citations in related papers and books. If something relevant to your interest was cited, track down the paper. (By the way, too many citations are terrible. It seems that a large fraction of researchers treat citations as some sort of merely academic exercise rather than a way for people to find related literature. I could insert a rant here.)

6. Search­ing Wor­ldCat. Wor­ldCat is a database of library databases. If you’re look­ing for a book, this could help. I also find brows­ing by cat­e­gory there to be helpful.

7. Asking knowledgeable people. In many cases, this will save you a lot of time. I recently asked a professor a question at their office hours, and in a few minutes they verified what I had spent a few hours figuring out but was still unsure of. I wish I had asked first.

8. Look­ing for pa­pers in other lan­guages. Not ev­ery­thing is writ­ten in English, es­pe­cially if you want things from the early 20th cen­tury. If you re­ally want to dig deep, you can do this, though it be­comes much harder for two rea­sons. First, you prob­a­bly don’t know ev­ery lan­guage in the world. OCR and Google Trans­late help, thank­fully. Se­cond, (at least in the US) many for­eign jour­nals are hard to track down for var­i­ous rea­sons. How­ever, the benefits could be large, as al­most no one does this, and that makes many re­sults ob­scure.

It should be ob­vi­ous that do­ing a de­tailed re­view of the liter­a­ture can re­quire a large amount of time, de­pend­ing on the sub­ject. Al­most no one ac­tu­ally does this for that rea­son, but I think it can be a good use of time in many cases.

Also, in­ter­library loan ser­vices can be re­ally use­ful for this. I sub­mit re­quests for any­thing I have a slight in­ter­est in. The costs to me are neg­ligible (only time, as the ser­vice is free to me), and the benefits range from none to ex­tremely sub­stan­tial. You might not have ac­cess to such ser­vices, un­for­tu­nately. I think you can pay some libraries for “doc­u­ment de­liv­ery” ser­vices which are com­pa­rable, though maybe ex­pen­sive.

Fi­nally, you prob­a­bly would find it to be use­ful to keep notes on what you’ve read. I have a bunch of out­lines where I make con­nec­tions be­tween differ­ent things I’ve read. This, I think, is the real value of the liter­a­ture re­view, but ver­ify­ing that an idea is origi­nal is an­other value you can de­rive from the pro­cess.

• With something so generically put, I’d say write them down to look at a week later. PTOIs can be really situational, too. In that case, just go with it. Cooking sometimes benefits from inspiration.

• For ex­am­ple: last night, I briefly con­sid­ered the ‘Mul­ti­ple In­ter­act­ing Wor­lds’ in­ter­pre­ta­tion of quan­tum physics, in which it is pos­tu­lated that there are a large num­ber of uni­verses, each of which has pure New­to­nian physics in­ter­nally, but whose in­ter­ac­tions with near-iden­ti­cal uni­verses cause what we ob­serve as quan­tum phe­nom­ena. It’s very similar to the ‘Mul­ti­ple Wor­lds’ in­ter­pre­ta­tion, ex­cept in­stead of new uni­verses branch­ing from old ones at ev­ery mo­ment in an ever-spread­ing bush, all the branches branched out at the Big Bang.

The “Many wor­lds” in­ter­pre­ta­tion does not pos­tu­late a large num­ber of uni­verses. It only pos­tu­lates:

1) The world is described by a quantum state, which is an element of a kind of vector space known as Hilbert space.

2) The quantum state evolves through time in accordance with the Schrödinger equation, with some particular Hamiltonian.

That’s it. Take the old Copen­hagen in­ter­pre­ta­tion and re­move all ideas about ‘col­laps­ing the wave func­tion’.

The ‘many wor­lds’ ap­pear when you do the math, they are de­rived from these pos­tu­lates.
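In standard notation, the second postulate is just the Schrödinger equation:

```latex
% |\psi(t)\rangle is the quantum state, an element of Hilbert space \mathcal{H};
% \hat{H} is the Hamiltonian. Everything else follows from unitary evolution.
i\hbar \,\frac{d}{dt}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle,
\qquad |\psi(t)\rangle \in \mathcal{H}
```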

http://www.preposterousuniverse.com/blog/2015/02/19/the-wrong-objections-to-the-many-worlds-interpretation-of-quantum-mechanics/

Re­gard­ing the differ­ence be­tween ‘the wor­lds all ap­pear at the big bang’ ver­sus ‘the wor­lds are always ap­pear­ing’, what would the differ­ence be­tween these be in terms of the ac­tual math­e­mat­i­cal equa­tions?

The ‘new wor­lds ap­pear­ing all the time’ in MWH is a con­se­quence of the quan­tum state evolv­ing through time in ac­cor­dance with the Schröd­inger equa­tion.

All of that said, I don’t mean to crit­i­cize your post or any­thing, I thought it was great tech­nob­a­b­ble! I just have no idea how it would trans­late into ac­tual the­o­ries. :)

• ‘Many In­ter­act­ing Wor­lds’ seems to be a slightly sep­a­rate in­ter­pre­ta­tion from ‘Many Wor­lds’ - what’s true for MW isn’t nec­es­sar­ily so for MIW. (There’ve been some blog posts in re­cent months on the topic which brought it to my at­ten­tion.)

• That’s sort of op­po­site to an­other less-well-known end­ing that Max Teg­mark calls “Big Snap”, where an ex­pand­ing uni­verse in­creases the “gran­u­lar­ity” at which quan­tum effects ap­ply un­til that gets large enough to in­terfere with or­di­nary physics.

• How would many interacting Newtonian worlds account for entanglement, EPR, and Bell’s-inequality violations while preserving linearity? People have tried in the past to make classical or semi-classical explanations for quantum mechanics, but they’ve all failed at getting these to work right. Without actual math it is hard to say whether your idea would work, but I strongly suspect it would run into the same problems.

• A year and a half ago, Frank Tipler (of the Omega Point) appeared on the podcast “Singularity 1 on 1”, which can be heard at https://www.singularityweblog.com/frank-j-tipler-the-singularity-is-inevitable/ . While I put no measurable confidence in his assertions about science proving theology or the ‘three singularities’, a few interesting ideas do pop up in that interview. Stealing from one of the comments:

how mod­ern physics (i.e., Gen­eral Rel­a­tivity, Quan­tum Me­chan­ics, and the Stan­dard Model of par­ti­cle physics) are sim­ply spe­cial cases of clas­si­cal me­chan­ics (i.e., New­to­nian me­chan­ics, par­tic­u­larly in its most pow­er­ful for­mu­la­tion of the Hamil­ton-Ja­cobi Equa­tion), and how Quan­tum Me­chan­ics is ac­tu­ally more de­ter­minis­tic than New­to­nian me­chan­ics.

• What if a large part of how rationality makes your life better is not from making better choices, but simply from making your ego smaller by adopting an outer view, seeing yourself as a means to your goals and judging objectively, thus reducing the ego, narcissism, and solipsism that are linked with the inner view?

I have a keen interest in “the problem of the ego”, but I have no idea what words best express this kind of problem. All I know is that it has been known since the Axial Age.

• Wouldn’t hav­ing a smaller ego help with mak­ing bet­ter de­ci­sions?

The question you’re looking at might be where to start. Is it better to start by improving the odds of making better decisions by taking life less personally, or is it better to assume that you’re more or less alright and that your idea of better choices just needs to be implemented?

This is a very ten­ta­tive in­ter­pre­ta­tion.

• I’m al­most finished writ­ing a piece that will likely go here ei­ther in dis­cus­sion or main on us­ing as­tron­omy to gain in­for­ma­tion about ex­is­ten­tial risk. If any­one wants to look at a draft and provide feed­back first, please send me a mes­sage with an email ad­dress.

• Perhaps it would be beneficial to make a game for probability calibration in which players are asked questions and give answers along with their probability estimate of each answer being correct. The number of points gained or lost would be a function of the player’s probability estimate, such that players would maximize their score by using an unbiased confidence estimate (i.e., they are wrong p proportion of the time when they say they think they are correct with probability p). I don’t know of such a function offhand, but they are used in machine learning, so they should be easy enough to find. This might already exist, but if not, it could be something CFAR could use.

• One func­tion that works for this is log scor­ing: the num­ber of points you get is the log of the prob­a­bil­ity you place in the cor­rect an­swer. The gen­eral thing to google to find other func­tions that work for this is “log scor­ing rules”.

At the Aus­tralian mega-meetup, we played the stan­dard 2-truths-1-lie ice­breaker game, ex­cept par­ti­ci­pants had to give their prob­a­bil­ity for each state­ment be­ing the lie, and were given log scores. I can’t an­swer for ev­ery­body, but I thought it was quite fun.
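A minimal sketch of such a scoring function in plain Python (the function names are mine, not from any existing CFAR or PredictionBook tool): the log rule awards ln(p), where p is the probability the player assigned to the answer that turned out to be correct, and a player maximizes expected score by reporting their honest belief.

```python
import math

def log_score(p_true):
    """Log scoring rule: the player's score is ln(p), where p is the
    probability they assigned to the answer that turned out correct.
    Score is 0 for certainty (p = 1) and falls toward -infinity as
    p -> 0, so confident wrong answers are punished heavily."""
    return math.log(p_true)

def expected_score(report, truth=0.7):
    """Expected log score for reporting probability `report` when the
    true chance of the event is `truth`."""
    return truth * math.log(report) + (1 - truth) * math.log(1 - report)

# The rule is "proper": expected score is maximized by honest reporting.
best = max((expected_score(q / 100), q / 100) for q in range(1, 100))
print(best[1])  # 0.7 (honest reporting maximizes expected score)
```

The heavy penalty as p approaches 0 is exactly the incentive a calibration game needs: it makes confident wrong answers expensive.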

• Hey, we can de­con­struct Doyle’s Sher­lock Holmes sto­ries, as­sign­ing prob­a­bil­ities to ev­ery sin­gle in­fer­ence and offer­ing al­ter­na­tive ex­pla­na­tions. Or take some other pop­u­lar fic­tion. That might also help peo­ple who, like me, strug­gle with coun­ter­fac­tu­als.

• From a totally amateur point of view, I’m starting to feel (based on following the news and reading the occasional paper) that the biggest limitation on AI development is hardware computing power. If so, this is good news for safety, since it implies a relative lack of exploitable “overhang”. Agree/disagree?

• Where could you have pos­si­bly got­ten that idea? Se­ri­ously, can you point out some refer­ences for con­text?

Pretty much uni­ver­sally within the AGI com­mu­nity it is agreed that the road­block to AGI is soft­ware, not hard­ware. Even on the whole-brain em­u­la­tion route, the most pow­er­ful su­per­com­puter built to­day is suffi­cient to do WBE of a hu­man. The most pow­er­ful hard­ware ac­tu­ally in use by a real AGI or WBE re­search pro­gramme is or­ders of mag­ni­tude less pow­er­ful, of course. But if that were the only holdup then it’d be very eas­ily fix­able.

• Even on the whole-brain em­u­la­tion route, the most pow­er­ful su­per­com­puter built to­day is suffi­cient to do WBE of a human

Why do you think this? We can’t even simulate protein interactions accurately on an atomic level. Simulating a whole brain seems very far off.

• Not nec­es­sar­ily. For all we know, we might not need to simu­late a hu­man brain on an atomic level to get ac­cu­rate re­sults. Si­mu­lat­ing a brain on a neu­ron level might be suffi­cient.

• Even if you approximate each neuron as a neural-network node (which is probably not good enough for a WBE), we still don’t have enough processing power to do a WBE in anything close to real time. Not even close; we’re many orders of magnitude off even with the fastest supercomputers. And each biological neuron is much more complex than a neural node in function, not just in structure.

• And cre­at­ing the ab­strac­tion is a soft­ware prob­lem. :/​

• Hmm, mostly just ar­ti­cles where they get bet­ter re­sults with more NN lay­ers/​more ex­am­ples, which are both limited by hard­ware ca­pac­ity and have seen large gains from things like us­ing GPUs. Cur­rent al­gos still have far fewer “neu­rons” than the ac­tual brain AFAIK. Plus, in gen­eral, faster hard­ware al­lows for faster/​cheaper ex­per­i­men­ta­tion with differ­ent al­gorithms.

I’ve seen some AI re­searchers (eg Yann Le­cun on Face­book) em­pha­siz­ing that fun­da­men­tal tech­niques haven’t changed that much in decades, yet re­sults con­tinue to im­prove with more com­pu­ta­tion.

• Cur­rent al­gos still have far fewer “neu­rons” than the ac­tual brain AFAIK.

This is not pri­mar­ily be­cause of limi­ta­tions in com­put­ing power. The rele­vant limi­ta­tion is on the com­plex­ity of the model you can train, with­out overfit­ting, in com­par­i­son to the vol­ume of data you have (a larger data set per­mits a more com­plex model).

• Be­sides what fezziwig said, which is cor­rect, the other is­sue is the fun­da­men­tal ca­pa­bil­ities of the do­main you are look­ing at. I figured some­thing like this was the source of the er­ror, which is why I asked for con­text.

Neural networks, deep or otherwise, are basically just classifiers. The reason we’ve seen large advancements recently in machine learning is chiefly the immense volumes of data available to these classifier-learning programs. Machine learning is particularly good at taking heaps of structured or unstructured data, finding clusters, and then coming up with ways to classify new data into one of those identified clusters. The more data you have, the more detail can be identified, and the better your classifiers become. Certainly you need a lot of hardware to process the mind-boggling amounts of data being pushed through these machine-learning tools, but hardware is not the limiter; available data is. Giant companies like Google and Facebook are building better and better classifiers not because they have more hardware available, but because they have more data available (chiefly because we are choosing to escrow our personal lives to these companies’ servers, but that’s an aside).

In as much as machine learning tends to dominate current approaches to narrow AI, you could be excused for saying “the biggest limitation on AI development is availability of data.” But you mentioned safety, and AI safety around here is a codeword for general AI, and general AI is truly a software problem that has very little to do with neural networks, data availability, or hardware speeds. “But human brains are networks of neurons!” you reply. True. But the field of computer algorithms called neural networks is a total misnomer. A “neural network” is an algorithm inspired by an oversimplification of a misconception of how brains work that dates back to the 1950s/1960s.

Devel­op­ing al­gorithms that are ac­tu­ally ca­pa­ble of perform­ing gen­eral in­tel­li­gence tasks, ei­ther bio-in­spired or de novo, is the field of ar­tifi­cial gen­eral in­tel­li­gence. And that field is cur­rently soft­ware limited. We sus­pect we have the com­pu­ta­tional ca­pa­bil­ity to run a hu­man-level AGI to­day, if only we had the know-how to write one.

• I already know all this (from a com­bi­na­tion of in­tro-to-ML course and read­ing writ­ing along the same lines by Yann Le­cun and An­drew Ng), and I’m still lean­ing to­wards hard­ware be­ing the limit­ing fac­tor (ie I cur­rently don’t think your last sen­tence is true).

• I think you have the right idea, but it’s a mis­take to con­flate “needs a big cor­pus of data” and “needs lots of hard­ware”. Hard­ware helps, the faster the train­ing goes the more ex­per­i­ments you can do, but a lot of the time the gat­ing fac­tor is the cor­pus it­self.

For ex­am­ple, if you’re try­ing to train a neu­ral net to solve the “does this photo con­tain a bird?” prob­lem, you need a bunch of pho­tos which vary at ran­dom on the bird/​not-bird axis, and you need hu­man raters to go through and tag each photo as bird/​not-bird. There are many ways to lose here. For ex­am­ple, your vari­able of in­ter­est might be cor­re­lated to some­thing bor­ing (maybe all the bird pho­tos were taken in the morn­ing, and all the not-bird pho­tos were taken in the af­ter­noon), or your raters have to spend a lot of time with each photo (imag­ine you want to do beak de­tec­tion, in­stead of just bird/​not-bird: then your raters have to at­tach a bunch of meta­data to each train­ing image, de­scribing the beak po­si­tion in each bird photo).

• The differ­ence be­tween hard­ware that’s fast enough to fit many iter­a­tions into a time span suit­able for writ­ing a pa­per vs. hard­ware that is slow enough that feed­back is in­fre­quent seems fairly rele­vant to how fast the soft­ware can progress.

New in­sights de­pend cru­cially on feed­back got­ten from try­ing out the old in­sights.

• the most pow­er­ful su­per­com­puter built to­day is suffi­cient to do WBE of a hu­man.

I assume you mean at a minuscule fraction of real time, and assuming that you can extract all the (unknown) relevant properties of every piece of every neuron?

• A minuscule fraction of real time, but a meaningful speed for research purposes.

• the most pow­er­ful su­per­com­puter built to­day is suffi­cient to do WBE of a hu­man.

Can you ex­pand on your rea­son­ing to con­clude this? This isn’t ob­vi­ous to me.

• A lit­tle off-topic—what’s the point of whole-brain em­u­la­tion?

• As with al­most any such ques­tion, mean­ing is not in­her­ent in the thing it­self, but is given by var­i­ous peo­ple, with no guaran­tee that any­one will agree.

In other words, it de­pends on who you ask. :)

For at least some peo­ple, who sub­scribe to the in­for­ma­tion-pat­tern the­ory of iden­tity, a whole brain em­u­la­tion based on their own brains is at least as good a con­tinu­a­tion of their own selves as their origi­nal brain would have been, and there are cer­tain ad­van­tages to ex­ist­ing in the form of soft­ware, such as be­ing able to have mul­ti­ple off-site back­ups. Others, who may be fo­cused on the risks of Un­friendly AI, may deem WBEs to be the clos­est that we’ll be able to get to a Friendly AI be­fore an Un­friendly one starts mak­ing pa­per­clips. Others may just want to have the tech­nol­ogy available to solve cer­tain sci­en­tific mys­ter­ies with. There are plenty more such points.

• You’d have to ask some­one else, I con­sider it a waste of time. De novo AGI will ar­rive far, far be­fore we come any­where close to achiev­ing real-time whole-brain em­u­la­tion.

And I don’t sub­scribe to the in­for­ma­tion-pat­tern the­ory of iden­tity for what to me seems ob­vi­ous ex­per­i­men­tal rea­sons, so I don’t see that as a vi­able route to per­sonal longevity.

• De novo AGI will ar­rive far, far be­fore we come any­where close to achiev­ing real-time whole-brain em­u­la­tion.

What’s the best current knowledge for estimating the effort needed for de novo AGI? Given the unknown unknowns, where we still don’t really seem to have an idea of how everything is supposed to fit together, blanket statements like this worry me. We do have a roadmap for whole-brain emulation, but I haven’t seen anything like that for de novo AGI.

And that’s the prob­lem I have. WBE looks like a thing that’ll prob­a­bly take decades, but we know that the spe­cific solu­tion ex­ists and from neu­ro­science we have a lot of in­for­ma­tion about its gen­eral prop­er­ties.

With de novo AGI, be­yond know­ing that the WBE solu­tion ex­ists, what do we know about solu­tions we could come up on our own? It seems to me like this could be solved in 10 years or in 100 years, and you can’t re­ally make an in­formed judg­ment that the 10 years timeframe is much more prob­a­ble.

But if you want to dis­count the WBE ap­proach as not worth the time, you’d pretty much want to claim rea­son to be­lieve that a 10-20 year timeframe for de novo AGI is ex­ceed­ingly prob­a­ble. Beyond that, you’re up against 50-year pro­jects of fo­cused study on WBE with pre­sent-day and fu­ture com­put­ing power, and that sort of thing does look like some­thing where you should as­sign a sig­nifi­cant prob­a­bil­ity to it pro­duc­ing re­sults.

• The thing is, ar­tifi­cial gen­eral in­tel­li­gence is a fairly dead field, even by the stan­dards of AI. There has been a lack of progress, but that is due per­haps more to lack of ac­tivity than any in­her­ent difficulty of the prob­lem (al­though it is a difficult prob­lem). So es­ti­mat­ing the effort needed for de novo AI with a pre­sump­tion of ad­e­quate fund­ing can­not be done by fit­ting curves to past perfor­mance. The out­side view fails us here, and we need to take the in­side view and look at the de­tails.

De novo AGI is not as tightly constrained a problem as whole-brain emulation. For whole-brain emulation, the only seriously considered approach is to scan the brain at sufficient detail and then perform a sufficiently accurate simulation. There’s a lot of room to quibble about what “sufficient” means in those contexts, destructive vs. non-destructive scanning, and other details, but there is a certain amount of unity around the overall idea. You can define the end-state goal in the form of a roadmap, and measure your progress towards it as the entire field aligns to the roadmap.

Such a roadmap does not and really cannot exist for AGI (although there have been attempts). The problem is the nature of “de novo AGI”: “de novo” means new, without reference to existing intelligences, and if you open up your problem space like that, there are an indefinite number of possible solutions with various tradeoffs, and people value those tradeoffs differently. So the field is fractured, and it’s really hard to get everybody to agree on a single roadmap.

Pat Langley thinks that good old-fashioned AI has the solution, and we just need to learn how to constrain inference. Pei Wang thinks that new probabilistic reasoning systems are what is required. Paul Rosenbloom thinks that representation is what matters, and the core of AGI is a framework for reasoning about graphical models. Jeff Hawkins thinks that a hierarchical network of deep-learning agents is all that’s required, and that it’s mostly a scaling and data-structuring problem. Ray Kurzweil has similar biologically inspired ideas. Ben Goertzel thinks they’re all correct, and that the key is a common shared framework in which moderately intelligent implementations of all these ideas collaborate, with human-level intelligence achieved from the union.

Goertzel has an approachable collection of essays on the subject, based on a talk he gave sadly almost 10 years ago titled “10 years to the singularity if we really, really try” (spoiler: over the last 10 years we didn’t really try). It is available as a free PDF here. He also has an actual technical roadmap to achieving AGI, which was published as a two-volume book, linked to on LW here. I admit to being much more partial to Goertzel’s approach. And while 10 years seems optimistic for anything except Apollo Program / Manhattan Project funding assumptions, it could be doable under that model. And there are shortcut paths for the less safety-inclined.

Without a com­mon roadmap for AGI it is difficult to get an out­sider to agree that AGI could be achieved in a par­tic­u­lar timeframe with a par­tic­u­lar re­source al­lo­ca­tion. And it seems par­tic­u­larly im­pos­si­ble to get the en­tire AGI com­mu­nity to agree on a sin­gle roadmap given the di­ver­sity of opinions over what ap­proaches we should take and the lack of cen­tral­ized fund­ing re­sources. But the best I can fall back on is if you ask any sin­gle com­pe­tent per­son in this space how quickly a suffi­ciently ad­vanced AGI could be ob­tained if suffi­cient re­sources were in­stantly al­lo­cated to their fa­vored ap­proach, the an­swer you’d get would be in the range of 5 to 15 years. “10 years to the sin­gu­lar­ity if we re­ally, re­ally try” is not a bad sum­mary. We may dis­agree greatly on the de­tails, and that di­s­unity is keep­ing us back, but the out­come seems rea­son­able if co­or­di­na­tion and fund­ing prob­lems were solved.

And yes, ~10 years is far less time than the WBE roadmap pre­dicts. So there’s no ques­tion as to where I hang my hat in that de­bate. AGI is a leapfrog tech­nol­ogy that has the po­ten­tial to bring about a sin­gu­lar­ity event much ear­lier than any em­u­la­tive route. Although my day job is cur­rently un­re­lated (bit­coin), so I can’t pro­fess that I am part of the solu­tion yet, in all hon­esty.

• Can you recommend an article that argues that our current paradigms are suitable for AI? By paradigms I mean things like: software and hardware being different things; software being algorithms executed from top to bottom unless control structures say otherwise; software being a bunch of text written in human-friendly pseudo-English by beating a keyboard, a process not essentially different from writing math-poetry on a typewriter 150 years ago, which then gets compiled, bytecode-compiled, interpreted, or bytecode-compiled before immediate interpretation; and similar paradigms. Doesn’t computing need to be much more imaginative before this happens?

• I haven’t seen any­one claim that ex­plic­itly, but I think you are also mi­s­un­der­stand­ing/​mis­rep­re­sent­ing how mod­ern AI tech­niques ac­tu­ally work. The bulk of the in­for­ma­tion in the re­sult­ing pro­gram is not “hard coded” by hu­mans in the way that you are im­ply­ing. Gen­er­ally there are rel­a­tively short typed-in pro­grams which then use mil­lions of ex­am­ples to au­to­mat­i­cally learn the ac­tual in­for­ma­tion in a rel­a­tively “or­ganic” way. And even the hu­man brain has a sort of short ‘digi­tal’ source code in DNA.
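As a toy illustration of that point (my own sketch, not anyone’s actual system): the typed-in program below is only a dozen lines, and everything the resulting classifier “knows” about the OR function is learned from the labelled examples rather than hard-coded.

```python
# A toy perceptron learning the OR function. The hand-written code is
# short; the "knowledge" ends up in the weights, extracted from the
# labelled examples rather than typed in by a human.
def train(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(examples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in examples])  # [0, 1, 1, 1]
```

Swap in a different example list and the same short program learns a different function; the information lives in the data, which is the sense in which the learning is “organic”.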

• Interesting. My professional bias is showing: part of my job is programming, I respect elite programmers who are able to deal with algorithmic complexity, and I thought that if AI is the hardest programming problem then it is just more of that.

• Video from the Berkeley wrap party

I think the first half hour is them get­ting set up. Then there are a cou­ple of peo­ple talk­ing about what HPMOR meant to them, Eliezer read­ing (part of?) the last chap­ter, and a short Q&A. Then there’s set­ting up a game which is pre­sum­ably based on the three armies, and I think the rest is just the game—if there’s more than that, please let me know.

• Hey, I posted here http://lesswrong.com/lw/ldg/kickstarting_the_audio_version_of_the_upcoming/ but if anyone wanted the audio sequences I’ll buy it for two of you. Respond at the link; I won’t know who’s first if I get responses in two places.

• Pre­dic­tionBook’s graph on my user ac­count shows me with a mis­taken pre­dic­tion of 100%. But it is giv­ing a sam­ple size of 10 and I’m pretty sure I have only 9 pre­dic­tions judged by now. Does any­one know a way to find the pre­dic­tion it’s refer­ring to?

• Actually, I just figured out the problem. Apparently it counts a comment without an estimate as estimating a 0% chance.

• When mak­ing AGI, it is prob­a­bly very im­por­tant to pre­vent the agent from al­ter­ing their own pro­gram code un­til they are very knowl­edge­able on how it works, be­cause if the agent isn’t knowl­edge­able enough, they could al­ter their re­ward sys­tem to be­come unFriendly with­out re­al­iz­ing what they are do­ing or al­ter their rea­son­ing sys­tem to be­come dan­ger­ously ir­ra­tional. A sim­ple (though not foolproof) solu­tion to this would be for the agent to be un­able to re-write their own code just “by think­ing,” and that the agent would in­stead need to find their own source code on a differ­ent com­puter and learn how to pro­gram in what­ever higher-level pro­gram­ming lan­guage the agent was made in. This code could be kept very strongly hid­den from the agent, and once the agent is smart enough to find it, they would prob­a­bly be smart enough to not mess any­thing up from chang­ing it.

This is al­most cer­tainly ei­ther in­cor­rect or has been thought of be­fore, but I’m post­ing this just in case.

• I’m look­ing for an HPMOR quote, and the search is some­what com­pli­cated be­cause I’m try­ing to avoid spoiling my­self search­ing for it (I’ve never read it).

The quote in ques­tion was about how it is quite pos­si­ble to avert a bad fu­ture sim­ply by rec­og­niz­ing it and do­ing the right thing in the now. No time travel re­quired.

• I think you mean this pas­sage from af­ter the Sort­ing Hat:

You couldn’t change his­tory. But you could get it right to start with. Do some­thing differ­ently the first time around.

This whole busi­ness with seek­ing Slytherin’s se­crets… seemed an awful lot like the sort of thing where, years later, you would look back and say, ‘And that was where it all started go­ing wrong.’

And he would wish des­per­ately for the abil­ity to fall back through time and make a differ­ent choice...

Wish granted. Now what?

Harry slowly smiled.

It was a rather coun­ter­in­tu­itive thought… but...

But he could, there was no rule say­ing he couldn’t, he could just pre­tend he’d never heard that lit­tle whisper. Let the uni­verse go on in ex­actly the same way it would have if that one crit­i­cal mo­ment had never oc­curred. Twenty years later, that was what he would des­per­ately wish had hap­pened twenty years ago, and twenty years be­fore twenty years later hap­pened to be right now. Al­ter­ing the dis­tant past was easy, you just had to think of it at the right time.

• That’s the one. Thanks.

• [No HPMOR Spoilers]

I’m un­sure if it’s fit for the HPMoR dis­cus­sion thread for Ch. 119, so I’m post­ing it here. What’s up with all of Eliezer’s re­quests at the end?

If any­one can put me in touch with J. K. Rowl­ing or Daniel Rad­cliffe, I would ap­pre­ci­ate it.

If any­one can put me in touch with John Paul­son, I would ap­pre­ci­ate it.

If anyone can credibly offer to possibly arrange production of a movie containing special effects, or an anime, I may be interested in rewriting an old script of mine.

And I am also in­ter­ested in try­ing my hand at an­gel in­vest­ing, if any in­vestor wants to as­cend me to an­gel.

Thank you.

I’m in part confused by these requests, so I’m trying to figure out what’s going on. Eliezer is probably done writing the story, except for last-minute tweaks that might depend upon, e.g., interacting with the fandom again, like he’s done with previous chapters. I remember last year when I visi

• There’s a ful­ler ex­pla­na­tion in the au­thor’s notes

• Hey, thanks for that. I just found the link through Google any­way, try­ing to figure out what’s go­ing on. I posted it as a link in Dis­cus­sion, be­cause it seems the sort of thing LessWrong would care about helping Eliezer with be­yond be­ing part of the HPMoR read­er­ship.

• I have a partly baked idea for a cry­on­ics ro­mance story: Think Out­lander but set 300 years from now, in a Ne­o­re­ac­tionary fu­ture where the dom­i­nant men wear kilts.

“Sing me a song of the lass that is gone. Say could that lass be I?”

http://www.collectorshowcase.fr/IMAGES2/ast_4107.jpg