# If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.

• Luke wrote a detailed description of his approach to beating procrastination (here if you missed it).

Does anyone know if he’s ever given an update anywhere as to whether or not this same algorithm works for him to this day? He seems to be very prolific and I’m curious about whether his view on procrastination has changed at all.

• Yvain has started a nootropics survey: https://docs.google.com/forms/d/1aNmqagWZ0kkEMYOgByBd2t0b16dR029BoHmR_OClB7Q/viewform

I hope a lot of people take it; I’d like to run some analyses on the results.

• Why is nicotine not on that list?

• I have no idea. The selection isn’t the best ever (I haven’t even heard of some of the options), but it can be improved next time based on this round.

• I wrote a logic puzzle, which you may have seen on my blog. It has gotten a lot of praise, and I think it is a really interesting puzzle.

Imagine the following two-player game. Alice secretly fills 3 rooms with apples. She has an infinite supply of apples and infinitely large rooms, so each room can have any non-negative integer number of apples. She must put a different number of apples in each room. Bob will then open the doors to the rooms in any order he chooses. After opening each door and counting the apples, but before he opens the next door, Bob must accept or reject that room. Bob must accept exactly two rooms and reject exactly one room. Bob loves apples, but hates regret. Bob wins the game if the total number of apples in the two rooms he accepts is as large as possible. Equivalently, Bob wins if the single room he rejects has the fewest apples. Alice wins if Bob loses.

Which of the two players has the advantage in this game?

This puzzle is a lot more interesting than it looks at first, and the solution can be seen here.

I would also like to see some of your favorite logic puzzles. If you have any puzzles that you really like, please comment and share.

• To make sure I understand this correctly: Bob cares about winning, and getting no apples is as good as 3^^^3 apples, so long as he rejects the room with the fewest, right?

• That is correct.

• A long one-lane, no-passing highway has N cars. Each driver prefers to drive at a different speed. They will each drive at that preferred speed if they can, and will tailgate if they can’t. The highway ends up with clumps of tailgaters led by slow drivers. What is the expected number of clumps?

• You got it.

• Coscott’s solution seems incorrect for N=3. Label the 3 cars: 1 is fastest, 2 is second fastest, 3 is slowest. There are 6 possible orderings for the cars on the road. These are shown with the cars appropriately clumped and the number of clumps associated with each ordering:

1 2 3 .. 3 clumps

1 32 .. 2 clumps

21 3 .. 2 clumps

2 31 .. 2 clumps

312 .. 1 clump

321 .. 1 clump

Find the mean number of clumps: it is 11/6. Coscott’s solution gives 10/6.

Fix?

• My solution gives 11/6

• Dang you are right.

• Coscott’s solution also seems wrong for N=4: the actual solution is a mean of 2, while Coscott’s gives 25/12.

• 4 clumps with probability 1/24, 3 with probability 6/24, 2 with probability 11/24, 1 with probability 6/24.

Mean of 25/12.

How did you get 2?

• Must have counted wrong. Counted again and you are right.

Great problems, though. I cannot figure out how to derive the solution you got. Do you do it by induction? I think I could probably get the answer by induction, but haven’t bothered trying.

• Take the kth car. It is at the start of a clump if it is the slowest of the first k cars. The kth car is therefore at the start of a clump with probability 1/k. The expected number of clumps is the sum over all cars of the probability that that car is at the front of a clump.
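The sum-of-1/k argument can be checked by brute force; a quick sketch (my own code, not from the thread) confirming the 11/6 and 25/12 figures disputed upthread:

```python
import math
from fractions import Fraction
from itertools import permutations

def clump_count(order):
    # order lists the cars from front of the road to back; a higher label
    # means a slower driver. A car starts a new clump exactly when it is
    # slower than every car ahead of it.
    clumps, slowest_ahead = 0, 0
    for car in order:
        if car > slowest_ahead:
            clumps += 1
            slowest_ahead = car
    return clumps

def mean_clumps(n):
    total = sum(clump_count(p) for p in permutations(range(1, n + 1)))
    return Fraction(total, math.factorial(n))

def harmonic(n):
    # The 1/k argument predicts the nth harmonic number.
    return sum(Fraction(1, k) for k in range(1, n + 1))

assert mean_clumps(3) == harmonic(3) == Fraction(11, 6)
assert mean_clumps(4) == harmonic(4) == Fraction(25, 12)
```

The enumeration for N=3 earlier in the thread is exactly what `clump_count` walks through: for example the ordering 2, 3, 1 gives the two clumps "2" and "31".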

• Hurray for the linearity of expected value!

• Imagine that you have a collection of very weird dice. For every prime between 1 and 1000, you have a fair die with that many sides. Your goal is to generate a uniform random integer from 1 to 1001 inclusive.

For example, using only the 2-sided die, you can roll it 10 times to get a number from 1 to 1024. If this result is less than or equal to 1001, take that as your result. Otherwise, start over.

This algorithm uses on average 10240/1001 = 10.229770… rolls. What is the minimum expected number of die rolls needed to complete this task?

When you know the right answer, you will probably be able to prove it.

Solution
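The baseline scheme from the puzzle statement (ten rolls of the 2-sided die, retry past 1001) can be sketched directly, and the 10240/1001 figure falls out of it:

```python
import random
from fractions import Fraction

def uniform_1_to_1001(rng=random):
    """Rejection sampling with only the 2-sided die, as in the puzzle."""
    rolls = 0
    while True:
        value = 0
        for _ in range(10):                # ten rolls give a uniform 0..1023
            value = 2 * value + rng.randrange(2)
            rolls += 1
        if value < 1001:                   # keep 0..1000, i.e. 1..1001
            return value + 1, rolls
        # otherwise discard the attempt and start over

# Each 10-roll attempt succeeds with probability 1001/1024, so the
# expected number of rolls is 10 * 1024/1001 = 10240/1001 ≈ 10.2298.
expected_rolls = Fraction(10 * 1024, 1001)
```

This is only the baseline; the thread below is about beating it.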

• If you care about more than the first roll, so you want to make lots and lots of uniform random numbers in 1, 1001, then the best die is (rot13’d) gur ynetrfg cevzr va enatr orpnhfr vg tvirf lbh gur zbfg ragebcl cre ebyy. Lbh arire qvfpneq erfhygf, fvapr gung jbhyq or guebjvat njnl ragebcl, naq vafgrnq hfr jung vf rffragvnyyl nevguzrgvp pbqvat.

Onfvpnyyl, pbafvqre lbhe ebyyf gb or qvtvgf nsgre gur qrpvzny cbvag va onfr C. Abgvpr gung, tvira gung lbh pbhyq ebyy nyy 0f be nyy (C-1)f sebz urer, gur ahzore vf pbafgenvarq gb n cnegvphyne enatr. Abj ybbx ng onfr 1001: qbrf lbhe enatr snyy ragveryl jvguva n qvtvg va gung onfr? Gura lbh unir n enaqbz bhgchg. Zbir gb gur arkg qvtvg cbfvgvba naq ercrng.

Na vagrerfgvat fvqr rssrpg bs guvf genafsbezngvba vf gung vs lbh tb sebz onfr N gb onfr O gura genafsbez onpx, lbh trg gur fnzr frdhrapr rkprcg gurer’f n fznyy rkcrpgrq qrynl ba gur erfhygf.

I give working code in “Transmuting Dice, Conserving Entropy”.

• I will say as little as possible to avoid spoilers, because you seem to have thought enough about this to not want it spoiled.

The algorithm you are describing is not optimal.

Edit: Oh, I just realized you were talking about generating lots of samples. In that case, you are right, but you have not solved the puzzle yet.

• Ebyy n friragrra fvqrq qvr naq n svsgl avar fvqrq qvr (fvqrf ner ynoryrq mreb gb A zvahf bar). Zhygvcyl gur svsgl-avar fvqrq qvr erfhyg ol friragrra naq nqq gur inyhrf.

Gur erfhyg jvyy or va mreb gb bar gubhfnaq gjb. Va gur rirag bs rvgure bs gurfr rkgerzr erfhygf, ergel.

Rkcrpgrq ahzore bs qvpr ebyyf vf gjb gvzrf bar gubhfnaq guerr qvivqrq ol bar gubhfnaq bar, be gjb cbvag mreb mreb sbhe qvpr ebyyf.

• You can do better :)

• Yeah, I realized that a few minutes after I posted, but didn’t get a chance to retract it… Gimme a couple minutes.

Vf vg gur fnzr vqrn ohg jvgu avar avargl frira gjvpr, naq hfvat zbq 1001? Gung frrzf njshyyl fznyy, ohg V qba’g frr n tbbq cebbs. Vqrnyyl, gur cebqhpg bs gjb cevzrf jbhyq or bar zber guna n zhygvcyr bs 1001, naq gung’f gur bayl jnl V pna frr gb unir n fubeg cebbs. Guvf qbrfa’g qb gung.

• I am glad someone is thinking about it enough to fully appreciate the solution. You are suggesting taking advantage of 709*977 = 692693. You can do better.

• You can do better than missing one part in 692693? You can’t do it in one roll (not even a chance of one roll) since the dice aren’t large enough to ever uniquely identify one result… is there SOME way to get it exactly? No… then it would be a multiple of 1001.

I am presently stumped. I’ll think on it a bit more.

ETA: OK, instead of having ONE left over, you leave TWO over. Assuming the new pair is around the same size, that nearly doubles your trouble rate, but in the event of trouble, it gives you one bit of information on the outcome. So, you can roll a single 503-sided die instead of retrying the outer procedure?

Depending on the pair of primes that produce the two-left-over, that might be better. 709 is pretty large, though.

• The best you can do leaving 2 over is 709*953 = 675677, coincidentally using the same first die. You can do better.
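The products quoted in this subthread are easy to verify, and the pair search behind them is small enough to brute-force; a sketch (my own code, not from the thread):

```python
def primes_below(n):
    """Simple sieve: the die sizes available in the puzzle."""
    sieve = [True] * n
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def best_two_die_product(leftover, primes):
    """Largest p*q with p*q mod 1001 == leftover: two rolls span p*q
    equally likely outcomes, wasting only `leftover` of them to rejection."""
    return max((p * q, p, q)
               for p in primes for q in primes
               if (p * q) % 1001 == leftover)

primes = primes_below(1000)

# The pairs quoted in the thread:
assert 709 * 977 == 692693 and 692693 % 1001 == 1
assert 709 * 953 == 675677 and 675677 % 1001 == 2
```

The search itself confirms that nothing with the same leftover beats these products among the available dice.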

• It is interesting to contemplate that the almost-fair solution favors Bob:

Bob counts the number of apples in the 1st room and accepts it unless it has zero apples in it, in which case he rejects it.
If he hasn’t rejected room 1, he counts the apples in room 2, and if it is more than in room 1 he accepts it; else he rejects it.

For all possible numbers of apples in rooms EXCEPT where one room has zero apples, Bob has a 50% chance of getting it right. But for all possible numbers of apples in rooms where one room has zero apples in it, Bob has a 5/6 chance of winning and only a 1/6 chance of losing.

I think in some important sense this is the telling limit of why Coscott is right and how Alice can force a tie, but not win, if she knows Bob’s strategy. If Alice knew Bob was using this strategy, she would never put zero apples in any room, and she and Bob would tie; i.e., Alice would be able to force him arbitrarily close to 50:50.

And for the strategy to work it relies upon the asymmetry in the problem: you can go arbitrarily high in apples but you can’t go arbitrarily low. Initially I was thinking Coscott’s solution must be wrong, that it must be equivocating somehow on the fact that Alice can choose ANY number of apples. But I think it is right, and that every strategy Bob uses to win can be defeated by Alice if she knows what his strategy is. I think without proof, that is :)
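The 5/6 vs 1/6 claim above is easy to check by enumerating the six orders in which the rooms can be opened; a sketch (the specific apple counts are arbitrary, only their ordering and the zero matter):

```python
from itertools import permutations

def bob_wins(order):
    """The strategy described above: reject room 1 iff it is empty;
    otherwise accept it, then accept room 2 iff it beats room 1
    (thereby rejecting room 3), else reject room 2.
    Bob wins iff the rejected room holds the fewest apples."""
    first, second, third = order
    if first == 0:
        rejected = first
    elif second > first:
        rejected = third
    else:
        rejected = second
    return rejected == min(order)

def win_count(apples):
    # apples: three distinct room contents; count wins over the 6 openings
    return sum(bob_wins(p) for p in permutations(apples))

assert win_count((0, 5, 9)) == 5   # one room empty: Bob wins 5 of 6
assert win_count((2, 5, 9)) == 3   # no empty room: Bob wins 3 of 6, i.e. 50%
```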

• I think in some important sense this is the telling limit of why Coscott is right

Right about what? The hint I give at the beginning of the solution? My solution?

Watch your quantifiers. The strategy you propose for Bob can be responded to by Alice never putting 0 apples in any room. This strategy shows that Bob can force a tie, but this is not an example of Bob doing better than a tie.

• Right about it not being a fair game. My first thought was that it really is a fair game, and that by comparing only the cases where fixed numbers a, b, and c are distributed you get the slight advantage for Bob that you claimed; that if you considered ALL possibilities you would have no advantage for Bob.

Then I thought you have a vanishingly small advantage for Bob if you consider Alice using ALL numbers, including very very VERY high numbers, where the probability of ever taking the first room becomes vanishingly small.

And then by thinking of my strategy, of only picking the first room when you were absolutely sure it was correct, i.e. it had in it as low a number of apples as a room can have, I convinced myself that there really is a net advantage to Bob, and that Alice can defeat that advantage if she knows Bob’s strategy, but Alice can’t find a way to win herself.

So yes, I’m aware that Alice can defeat my 0-apple strategy if she knows about it, just as you are aware that Alice can defeat your 2^-n strategy if she knows about that.

• So yes, I’m aware that Alice can defeat my 0-apple strategy if she knows about it, just as you are aware that Alice can defeat your 2^-n strategy if she knows about that.

What? I do not believe Alice can defeat my strategy. She can get arbitrarily close to 50%, but she cannot reach it.

• 2.5 years ago I made an attempt to calculate an upper bound for the complexity of the currently known laws of physics. Since the issue of physical laws and complexity keeps coming up, and my old post is hard to find with Google searches, I’m reposting it here verbatim.

I would really like to see some solid estimates here, not just the usual hand-waving. Maybe someone better qualified can critique the following.

By “a computer program to simulate Maxwell’s equations” EY presumably means a linear PDE solver for initial boundary value problems. The same general type of code should be able to handle the Schrödinger equation. There are a number of those available online, most written in Fortran or C, with the relevant code size about a megabyte. The Kolmogorov complexity of a solution produced by such a solver is probably of the same order as its code size (since the solver effectively describes the strings it generates), so, say, about 10^6 “complexity units”. It might be much lower, but this is clearly the upper bound.

One wrinkle is that the initial and boundary conditions also have to be given, and the size of the relevant data heavily depends on the desired precision (you have to give the Dirichlet or Neumann boundary conditions at each point of a 3D grid, and the grid size can be 10^9 points or larger). On the other hand, the Kolmogorov complexity of this initial data set should be much lower than that, as the values for the points on the grid are generated by a piece of code usually much smaller than the main engine. So, in the first approximation, we can assume that it does not add significantly to the overall complexity.

Things get dicier if we try to estimate in a similar way the complexity of models like General Relativity, the Navier-Stokes equations, or Quantum Field Theory, due to their non-linearity and a host of other issues. When no general-purpose solver is available, how does one estimate the complexity? Currently, a lot of heuristics are used, effectively hiding part of the algorithm in the human mind, thus making any estimate unreliable, as the human mind (or “Thor’s mind”) is rather hard to simulate.

One can argue that the equations themselves for each of the theories are pretty compact, so the complexity cannot be that high, but then, as Feynman noted, all of physical law can be written simply as A=0, where A hides all the gory details. We still have to specify the algorithm to generate the predictions, and that brings us back to numerical solvers.

I also cannot resist noting, yet again, that all interpretations of QM that rely on solving the Schrödinger equation have exactly the same complexity, as estimated above, and so cannot be distinguished by Occam’s razor. This applies, in particular, to MWI vs. Copenhagen.

It is entirely possible that my understanding of how to calculate the Kolmogorov complexity of a physical theory is flawed, so I welcome any feedback on the matter. But no hand-waving, please.

• Brought to mind by the recent post about dreaming on Slate Star Codex:

Has anyone read a convincing refutation of the deflationary hypothesis about dreams, that is, that there aren’t any? In the sense of nothing like waking experience ever happening during sleep; just junk memories with backdated time-stamps?

My brain is attributing this position to Dennett in one of his older collections, maybe Brainstorms, but it probably predates him.

• Stimuli can be incorporated into dreams: for example, if someone in a sleep lab sees you are in REM sleep and sprays water on you, you’re more likely to report having had a dream that it was raining when you wake up. Yes, this has been formally tested. This provides strong evidence that dreams are going on during sleep.

More directly, communication has been established between dreaming and waking states by lucid dreamers in sleep labs. Lucid dreamers can make eye movements during their dreams to send predetermined messages to laboratory technicians monitoring them with EEGs. Again, this has been formally tested.

• More directly, communication has been established between dreaming and waking states by lucid dreamers in sleep labs. Lucid dreamers can make eye movements during their dreams to send predetermined messages to laboratory technicians monitoring them with EEGs. Again, this has been formally tested.

Whoa, that’s cool. Do you have a reference?

• Would this be refuted by cases where lucid dreamers were able to communicate (one way) with researchers during their dreams through eye movements?

http://en.wikipedia.org/wiki/Lucid_dream#Perception_of_time

In 1985, Stephen LaBerge performed a pilot study which showed that time perception while counting during a lucid dream is about the same as during waking life. Lucid dreamers counted out ten seconds while dreaming, signaling the start and the end of the count with a pre-arranged eye signal measured with electrooculogram recording.[31]

• Indeed, there is an essay in Brainstorms articulating this position. IIRC Dennett does not explicitly commit to defending it; rather, he develops it to make the point that we do not have privileged, first-person knowledge about our experiences. There is conceivable third-person scientific evidence that might lead us to accept this theory (even if, going by Yvain’s comment, this does not seem to actually be the case), and our first-person intuition does not trump it.

• I’ve written a game (or see (github)) that tests your ability to assign probabilities to yes/no events accurately, using a logarithmic scoring rule (called a Bayes score on LW, apparently).

There are a couple of other random processes to guess in the game, and also a quiz. The questions are intended to force you to guess at least some of the time. If you have suggestions for other quiz questions, send them to me by PM in the format:

{q:”1+1=2. True?”, a:1} // source: my calculator

where a:1 is for true and a:0 is for false.
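The game’s exact scoring formula isn’t reproduced here, but a logarithmic score in one common form, together with a check of the property such games rely on (honest probabilities maximize your expected score), can be sketched as:

```python
import math

def log_score(p, outcome):
    """Logarithmic score, in bits, for stating probability p that a yes/no
    event happens. (One common convention; the game's exact offset and
    scaling are assumptions here, not taken from its code.)"""
    return math.log2(p if outcome else 1.0 - p)

def expected_score(reported, true_p):
    # Expected score if the event really happens with probability true_p
    return (true_p * log_score(reported, True)
            + (1 - true_p) * log_score(reported, False))

# The rule is "proper": whatever the true probability, honest reporting
# maximizes expected score, so the game rewards calibration, not bluffing.
honest = expected_score(0.7, 0.7)
assert all(expected_score(r, 0.7) <= honest
           for r in (0.1, 0.3, 0.5, 0.6, 0.8, 0.9))
```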

• This game has taught me something. I get more enjoyment than I should out of watching a random variable go up and down, and probably should avoid gambling. :)

• Nice work, congrats! Looks fun and useful, better than the calibration apps I’ve seen so far (including one I made that used confidence intervals; I had a proper scoring rule too!)

My score:

Current score: 3.544 after 10 plays, for an average score per play of 0.354.

• Thanks Emile,

Is there anything you’d like to see added?

For example, I was thinking of running it on nodejs and logging the scores of players, so you could see how you compare. (I don’t have a way to host this right now, though.)

Another possibility is to add diagnostics, e.g. whether you were systematically setting your guess too high, or whether it was fluctuating more than the data would really say it should (under some model for the prior/posterior, say).

Also, I’d be happy to have pointers to your calibration apps or others you’ve found useful.

• Thank you. I really, really want to see more of these.

Feature request #976: More stats to give you an indication of overconfidence/underconfidence (e.g. out of 40 questions where you gave an answer between .45 and .55, you were right 70% of the time).
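A sketch of what that requested stat could look like (the bucket width and output format are my own choices, not the game’s):

```python
from collections import defaultdict

def calibration_report(records):
    """records: (stated probability, whether the answer was right) pairs.
    For each 10%-wide confidence bucket, report how many answers landed
    there and what fraction were actually right; stated confidence far
    above the observed hit rate indicates overconfidence."""
    buckets = defaultdict(list)
    for p, correct in records:
        # integer math on percentages avoids float-bucketing surprises
        buckets[min(int(p * 100) // 10, 9)].append(correct)
    return {(b / 10, (b + 1) / 10): (len(hits), sum(hits) / len(hits))
            for b, hits in sorted(buckets.items())}

# e.g. 40 answers stated near 50% that were right 70% of the time:
sample = [(0.5, True)] * 28 + [(0.5, False)] * 12
assert calibration_report(sample)[(0.5, 0.6)] == (40, 0.7)
```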

• An article on samurai mental tricks. Most of them will not be that surprising to LWers, but it is nice to see that modern results have a long history of working.

• An interesting quote; I wonder what people here will make of it...

True rationalists are as rare in life as actual deconstructionists are in university English departments, or true bisexuals in gay bars. In a lifetime spent in hotbeds of secularism, I have known perhaps two thoroughgoing rationalists—people who actually tried to eliminate intuition and navigate life by reasoning about it—and countless humanists, in Comte’s sense, people who don’t go in for God but are enthusiasts for transcendent meaning, for sacred pantheons and private chapels. They have some syncretic mixture of rituals: they polish menorahs or decorate Christmas trees, meditate upon the great beyond, say a silent prayer, light candles to the darkness.

source

• I can’t tell if the author means “rationalists” in the technical sense (i.e. as opposed to empiricists), but if he doesn’t, then I think it’s unfair of him to require that rationalists “eliminate intuition and navigate life by reasoning about it”, since this is so clearly irrational (because intuition is so indispensably powerful).

• I loved this quote. I think it’s a characterization of UU-style humanism that is fair but that they would probably agree with.

• Sometimes I feel like looking into how I can help humanity (e.g. 80000 hours stuff), but other times I feel like humanity is just irredeemable and may as well wipe itself off the planet (via climate change, nuclear war, whatever).

For instance, humans are so facepalmingly bad at making decisions for the long term (viz. climate change, running out of fossil fuels) that it seems clear that genetic or neurological enhancements would be highly beneficial in changing this (and other deficiencies, of course). Yet discourse about such things is overwhelmingly negative, mired in what I think are irrational kneejerk reactions to defend “what it means to be human.” So I’m just like, you know what? Fuck it. You can’t even help yourselves help yourselves. Forget it.

Thoughts?

• 12 Feb 2014 6:39 UTC

You know how when you see a kid about to fall off a cliff, you shrug and don’t do anything because the standards of discourse aren’t as high as they could be?

Me neither.

• lol yeah, I know what you’re talking about.

Okay okay, fine. ;-)

• A task with a better expected outcome is still better (in expected outcome), even if it’s hopeless, silly, not as funny as some of the failure modes, not your responsibility, or in some way emotionally less comfortable.

• You’re of course correct. I’m tempted to question the use of “better” (i.e. it’s a matter of values and opinion as to whether it’s “better” if humanity wipes itself out or not), but I think it’s pretty fair to assume (as I believe utilitarians do) that less suffering is better, and theoretically less suffering would result from better decision-making and possibly from less climate change.

Thanks for this.

• https://en.wikipedia.org/wiki/Identifiable_victim_effect

Also, would you still want to save a drowning dog even if it might bite you out of fear and misunderstanding? (Let’s say it is a small dog and a bite would not be drastically injurious.)

• https://en.wikipedia.org/wiki/Identifiable_victim_effect

True, true. But it’s still hard for me (and most people?) to circumvent that effect, even while I’m aware of it. I know Mother Teresa actually had a technique for it (to just think of one child rather than the millions in need). I guess I can try that. Any other suggestions?

• Also, would you still want to save a drowning dog even if it might bite you out of fear and misunderstanding? (Let’s say it is a small dog and a bite would not be drastically injurious.)

I’ll pretend it’s a cat since I don’t really like small dogs. ;-) Yes, of course I’d save it. I think this analogy will help me moving forward. Thank you! ^_^

• No problem. I have an intuition that IMing might be more productive than structured posts if you’re exploring this space and want to cover a bunch of ground quickly. Feel free to ping me on gtalk if you’re interested. romeostevensit is my google.

• I think it is amazingly myopic to look at the only species that has ever started a fire or crafted a wheel and conclude that

humans are so facepalmingly bad at making decisions

The idea that climate change is an existential risk seems wacky to me. It is not difficult to walk away from an ocean which is rising at even 1 m a year, and no one hypothesizes anything close to that rate. We are adapted to a broad range of climates and able to move north, south, east, and west as the winds might blow us.

Running out of fossil fuels: thinking we are doing something wildly stupid with our use of fossil fuels seems to me about as sensible as thinking a centrally planned economy will work better. It is not intuitive that a centrally planned economy will be a piece of crap compared to what we have, but it turns out to be true. Thinking you, or even a bunch of people like you with no track record doing ANYTHING, can second-guess the markets in fossil fuels: well, it seems intuitively right, but if you ever get involved in testing your intuitions I don’t think you’ll find it holds up. And if you think even doubling the price of fossil fuels really changes the calculus by much, consider that Europe and Japan have lived that life for decades compared to the US, and yet the US is home to the wackiest and most ill-thought-out alternatives to fossil fuels in the world.

Can anybody explain to me why creating a wildly popular luxury car which effectively runs on burning coal is such a boon to the environment that it should be subsidized at $7500 by the US federal government and an additional $2500 by states such as California, which has been so close to bankruptcy recently? Well, that is what a Tesla is, if you drive one in a country with coal on the grid, and most of Europe, China, and the US are in that category. The Tesla S Performance puts out the same amount of carbon as a car getting ~~14~~ 25 mpg of gasoline.

• The Tesla S Performance puts out the same amount of carbon as a car getting 14 mpg of gasoline.

The Tesla S takes about 38 kWh to go 100 miles, which works out to around 80 lb CO2 generated. 14 mpg would be 7.1 gallons of gasoline to go 100 miles, which works out to around 140 lb CO2 generated. I couldn’t find any independent numbers for the S Performance, but Tesla’s site claims the same range as the regular S with the same battery pack.

The rest of your point seems to hold, though; if the subsidy is predicated on reducing CO2 emissions then the equivalent of 25 mpg still isn’t anything to brag about.
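The conversion this comment performs can be replayed from its own round numbers; a sketch (the per-kWh and per-gallon CO2 factors below are implied by the 80 lb and 140 lb figures above, not official numbers):

```python
# CO2 factors implied by the comment's figures:
LB_CO2_PER_KWH_COAL = 80 / 38   # "38 kWh ... around 80 lb CO2" (coal grid)
LB_CO2_PER_GALLON = 140 / 7.1   # "7.1 gallons ... around 140 lb CO2"

def mpg_equivalent(kwh_per_100mi, lb_co2_per_kwh):
    """Gasoline mileage that would emit the same CO2 over 100 miles."""
    lb_per_100mi = kwh_per_100mi * lb_co2_per_kwh
    gallons_per_100mi = lb_per_100mi / LB_CO2_PER_GALLON
    return 100 / gallons_per_100mi

# On an all-coal grid the Model S lands near the 25 mpg figure quoted:
assert 24 < mpg_equivalent(38, LB_CO2_PER_KWH_COAL) < 26
```

Swapping in a cleaner per-kWh factor for a different grid mix is a one-argument change, which is what the replies below do.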

• works out to around 80 lb CO2 generated

This is likely an overestimation, since it assumes that you’re exclusively burning coal. Electricity production in the US is about 68% fossil, the rest deriving from a mixture of nuclear and renewables; the fossil-fuel category also includes natural gas, which per your link generates about 55-60% of the CO2 of coal per unit of electricity. This varies quite a bit state to state, though, from almost exclusively fossil (West Virginia; Delaware; Utah) to almost exclusively nuclear (Vermont) or renewable (Washington; Idaho).

Based on the same figures, and breaking it down by the national average of coal, natural gas, and nuclear and renewables, I’m getting a figure of 43 lb CO2 / 100 mi, or about 50 mpg equivalent. Since its subsidies came up: California burns almost no coal but gets a bit more than 60% of its energy from natural gas; its equivalent would be about 28 lb CO2.

• works out to around 80 lb CO2 generated

This is likely an overestimation, since it assumes that you’re exclusively burning coal.

Yes, but that should be the right comparison to make. Consider two alternatives: 1) the world generates N kWh + 38 kWh to fuel a Tesla to go 100 miles; 2) the world generates N kWh and puts 4 gallons of gasoline in a car to go 100 miles.

If we are interested in minimizing CO2 emissions, then in world 2 compared to world 1 we will generate 38 kWh fewer from our dirtiest plant on the grid, which is going to be a coal-fired plant.

So in world 1 we have an extra 80 lbs of CO2 emission from electric generation and 0 from gasoline. In world 2 we have 80 lbs less of CO2 emission from electric generation and add 80 lbs from gasoline.

When adding electric usage, you need to “bill” it at the marginal costs to generate that electricity, which is true both in terms of the price you charge customers for it and the CO2 emissions you attribute to it.

The US, China, and most of Europe have a lot of coal in the mix on the grid. Until they scrub coal or stop using it, it seems very clear that the Tesla puffs out the same amount of CO2 as a 25 mpg gasoline-powered car.

• It’s true that most of the flexibility in our power system comes from dirty sources, and that squeezing a few extra kilowatt-hours out in the short term generally means burning more coal. If we’re talking policy changes aimed at popularizing electric cars, though, then we aren’t talking a megawatt here or there; we’ve moved into the realm of adding capacity, and it’s not at all obvious that new electrical capacity is going to come from dirty sources, at least outside of somewhere like West Virginia. On those kinds of scales, I think it’s fair to assume a mix similar to what we’ve currently got, outside of special cases like Germany phasing out its nuclear program.

(There are some caveats; renewables are growing strongly in the US, but nuclear isn’t. But it works as a first approximation.)

• Coal electric generation isn’t going away anytime soon. The only reason coal may look at the moment like it is declining in the US is that natural gas generation in the US is currently less expensive than coal. But in Europe, coal is less expensive and, remarkably, generating companies respond by turning up coal and turning down natural gas.

• It doesn’t need to be going away for my argument to hold, as long as the relative proportions are favorable, and as far as I can tell, most of that GIC delta in coal is happening in the developing world, where I don’t see too many people buying Teslas. Europe and the US project new capacity disproportionately in the form of renewables; coal is going up in Europe, but less quickly.

This isn’t ideal; I’m generally long on wind and solar, but if I had my way we’d be building Gen IV nuclear reactors as fast as we could lay down concrete. But neither is it as grim as the picture you seem to be painting.

• This isn’t ideal; …. But neither is it as grim as the picture you seem to be painting.

I would agree with that. Certainly my initial picture was just wrong. Even using coal as the standard, the Tesla is as good as a 25 mpg gasoline car. For that size and quality of car, that is actually not bad, but it is best in class, not revolutionary.

As to subsidizing a Tesla as opposed to a 40 mpg diesel, for example: as long as we use coal for electricity, we are better off adding a 40 mpg diesel to the fleet than adding a Tesla. This is almost just me hating on subsidies, preferring that we just tax fuels proportional to their carbon content and let market forces decide how to distribute that distortion.

• This is almost just me hating on subsidies, preferring that we just tax fuels proportional to their carbon content and let market forces decide how to distribute that distortion.

That probably is better baseline policy from a carbon-minimization perspective, yeah; I have similar objections to the fleet mileage penalties imposed on automakers in the US, which ended up contributing, among other things, to a good chunk of the SUV boom in the ’90s and ’00s. Now, I can see an argument for subsidies or even direct grants if they help kickstart building EV infrastructure or enable game-changing research, but that should be narrowly targeted, not the basis of our entire approach.

Unfortunately, basic economic literacy is not exactly a hallmark of environmental policy.

• When adding electric usage, you need to “bill” it at the marginal costs to generate that electricity

Yes, but marginal analysis requires identifying the correct margin. If you charge your car during the day at work, you are increasing peak load, which is often coal. If you charge your car at night, you are contributing to base load. This might not even require building new plants! This works great if you have nuclear plants. With a sufficiently smart grid, it makes erratic sources like wind much more useful.

• Yes, but marginal anal­y­sis re­quires iden­ti­fy­ing the cor­rect mar­gin.

I do agree us­ing the rate for coal is pes­simistic.

On further research, I discover that Li-ion batteries are very energetically expensive to produce. Their net lifetime energy cost in production and then recycling is about 430 kWh per kWh of battery capacity. Li-ion can be recharged 300-500 times. Using 430 recharges and amortizing production costs across all uses of the battery, we see that roughly 1 kWh of production energy was used for every 1 kWh of storage the battery accomplished during its lifetime.

So now we have the more complicated accounting question: how much carbon do we associate with constructing the battery vs. how much with charging it? If construction and charging come from the same grid, we charge them at the same rate.

And of course, to be fair, we need to figure the cost to refine a gallon of gasoline. It’s pretty wacky out there, but the numbers range from 6 kWh to 12 kWh. The higher numbers include quite a bit of natural gas used directly in the process, and using gas directly is about twice as efficient as making electricity with it.

All in all, it looks to me like we have about 100% overhead on battery production energy, and say 8 kWh to make a gallon of gas, for about 25% overhead on gasoline.

Let’s assign 1.3 lbs of CO2 per kWh electric, which is the 2009 US average adjusted 7.5% for delivery losses.

Then a gallon of gasoline gives 19 lbs from burning the gasoline + 10.4 lbs from making/transporting it.

A Tesla costs 1.3 × 30 ≈ 39 lbs CO2 to go 100 miles on electric charge + 39 lbs CO2 from amortizing the CO2 cost of producing the battery over its lifetime.

Tesla = 78 lbs CO2 per 100 miles.

78 lbs of CO2 comes from 78/30 ≈ 2.6 gallons of fuel.

So, using the US average CO2 load per kWh of electricity, loading the Tesla with 100% overhead for battery production and loading gasoline with 34% overhead from refining, mining, and transport, we get a Tesla S about equivalent to a 38 mpg car in CO2 emissions.

That number is actually extremely impressive for the class of car a Tesla is.

The Nissan Leaf uses 75% as much energy as a Tesla to go 100 miles, so the Leaf has the same CO2 emissions as a 51 mpg car.

If we use coal for electricity, these numbers change to Tesla → 19 mpg and Leaf → 26 mpg. The Tesla still looks good-ish for the class of car it is, but the Leaf is lousy at 26 mpg, competing with hybrids that get 45 mpg or so.
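For anyone who wants to check the arithmetic, here is a minimal sketch of the calculation in Python. All inputs are the comment’s own round-number estimates, and the 30 kWh per 100 miles consumption figure is an assumption inferred from the 39 lb charging figure, not an official spec:

```python
# Rough CO2-parity arithmetic from the comment above.
# All inputs are the comment's round-number estimates, not authoritative data.

CO2_PER_KWH = 1.3        # lbs CO2 per kWh (2009 US grid average, incl. delivery losses)
KWH_PER_100MI = 30       # assumed Tesla consumption per 100 miles (inferred, not a spec)
BATTERY_OVERHEAD = 1.0   # amortized battery production energy: ~100% of charging energy

CO2_PER_GALLON = 19.0           # lbs CO2 from burning one gallon of gasoline
REFINING_KWH_PER_GALLON = 8.0   # energy to refine/transport one gallon

tesla_lbs_per_100mi = CO2_PER_KWH * KWH_PER_100MI * (1 + BATTERY_OVERHEAD)
gallon_lbs = CO2_PER_GALLON + REFINING_KWH_PER_GALLON * CO2_PER_KWH

equivalent_gallons = tesla_lbs_per_100mi / gallon_lbs
equivalent_mpg = 100 / equivalent_gallons

print(round(tesla_lbs_per_100mi))   # 78 lbs CO2 per 100 miles
print(round(equivalent_mpg))        # 38 mpg-equivalent
```

Swapping in a dirtier or cleaner grid is just a matter of changing `CO2_PER_KWH`, which is what drives the coal-only numbers later in the thread.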

• Your lithium-ion numbers match my understanding of batteries in general: they cost as much energy to create as their lifetime capacity. That’s why you can’t use batteries to smooth out erratic power sources like wind, or inflexible ones like nuclear.

I’m skeptical that it’s a good idea to focus on the energy used to create the battery. There’s energy used to create all the rest of the car, and certainly energy to create the gasoline-powered car that you’re using as a benchmark. Production energy is difficult to compute, and most people do such a bad job of it that I think it’s better to use price as a proxy.

• The rest of your point seems to hold, though; if the subsidy is predicated on reducing CO2 emissions, then the equivalent of 25 mpg still isn’t anything to brag about.

You are right; I did my math wrong.

To make it a little clearer to people following along: 80 lbs of CO2 are generated to move a Tesla 100 miles using coal-generated electricity, and 80 lbs of CO2 to move a 25 mpg gasoline car 100 miles.

I’ll address why the coal number is the right one in commenting on the next comment.

• It’s not difficult to walk away from an ocean? Please explain New Orleans.

Teslas (and other stuff getting power from the grid) currently run mostly on coal, but ideally they can be run off (unrealistically) solar or wind or (realistically) nuclear.

• It’s not difficult to walk away from an ocean? Please explain New Orleans.

Are you under the impression that the climate-change rise in ocean level will look like a dike breaking? All references to sea levels rising report less than 1 cm a year, but let’s say that rises 100-fold to 1 m/yr. New Orleans flooded a few meters in at most a few days, about 1 m/day.

A factor of 365 in rate could well be the subtle difference between finding yourself on the roof of a house and finding yourself living in a house a few miles inland.

• No, explain why we still have a city in New Orleans when it repeatedly gets destroyed by hurricanes.

• The thread is about whether climate change is an existential threat, not how best to manage coastal cities that flood.

• You’re right, sorry.

• Uncache economic liberal dogma and consider real-world experience for a moment? Because just going from observation, I would have to say that electric grids do in fact work better when centrally planned. TVA, EDF, and the rest of the regulated utilities beat the stuffing out of every example of a place that attempts to have competitive markets in electricity. That said, if we actually cared about the problems of fossil fuels, we would long ago have transitioned to a fission-based grid, because that would actually solve the problem.

• fission based grid

Googling doesn’t find many hits. What do you mean by the term?

• Nuclear fission. As in: “Everyone follows the example of France and Sweden and builds nuclear reactors until they no longer have any fossil-fuel-based power plants.” There are no real resource or economic limits keeping us from doing this (the Russians have quite good breeder reactor designs), and per terawatt-hour produced, it would kill a lot fewer people than any fossil fuel, and cost less money.

• If you think helping humanity is (in the long term) a futile effort, because humans are so stupid they will destroy themselves anyway… I’d say the organization you are looking for is CFAR.

So, how would you feel about making a lot of money and donating to CFAR? (Or another organization with a similar mission.)

• How cool, I’ve never heard of CFAR before. It looks awesome. I don’t think I’m capable of making a lot of money, but I’ll certainly look into CFAR.

Edit: I just realized that CFAR’s logo is at the top of the site. Just never looked into it. I am not a smart man.

• Thoughts?

Taboo humanity.

• I can’t speak for you, but I would hugely prefer for humanity to not wipe itself out, and even if it seems relatively likely at times, I still think it’s worth the effort to prevent it.

If you think existential risks are a higher priority than parasite removal, maybe you should focus your efforts on those instead.

• Serious, non-rhetorical question: what’s the basis of your preference? Anything more than just affinity for your species?

I’m not 100% sure what you mean by parasite removal… I guess you’re referring to bad decision-makers, or bad decision-making processes? If so, I think existential risks are interlinked with parasite removal: the latter causes, or at least hastens, the former. Therefore, to truly address existential risks, you need to address parasite removal.

• If I live forever, through cryonics or a positive intelligence explosion before my death, I’d like to have a lot of people to hang around with. Additionally, the people you’d be helping through EA aren’t the people who are fucking up the world at the moment. Plus, there isn’t really anything directly important to me outside of humanity.

Parasite removal refers to removing literal parasites from people in the third world, as an example of one of the effective charitable causes you could donate to.

• EA? (Sorry to ask, but it’s not in the Less Wrong jargon glossary and I haven’t been here in a while.)

Parasite removal refers to removing literal parasites from people in the third world

Oh. Yes. I think that’s important too, and it actually pulls on my heartstrings much more than existential risks that are potentially far in the future, but I would like to try to avoid hyperbolic discounting and focus on the most important issue facing humanity sans cognitive bias. But since human motivation isn’t flawless, I may end up focusing on something more immediate. Not sure yet.

• I find it fascinating to observe.

• I assume you’re talking about the facepalm-inducing decision-making? If so, that’s a pretty morbid fascination. ;-)

• If you’re looking for ways to eliminate existential risk, then knowing that humanity is about to kill itself no matter what you do, and that you’re just putting it off a few years instead of a few billion, matters. If you’re just looking for ways to help individuals, it’s pretty irrelevant. I guess it means that what matters is what happens now, instead of the flow-through effects after a billion years, but it’s still a big effect.

If you’re suggesting that the life of the average human isn’t worth living, then saving lives might not be a good idea, but there are still ways to help keep the population low.

Besides, if humanity were great at helping itself, then why would we need you? It is precisely the fact that we allow extreme inequality to exist that means that you can make a big difference.

• For instance, humans are so facepalmingly bad at making decisions for the long term (viz. climate change, running out of fossil fuels) that it seems clear that genetic or neurological enhancements would be highly beneficial in changing this

I think you underrate the existential risks that come along with substantial genetic or neurological enhancements. I’m not saying we shouldn’t go there, but it’s no easy subject matter. It requires a lot of thought to address it in a way that doesn’t produce more problems than it solves.

For example, the toolkit you need for genetic engineering can also be used to create artificial pandemics, which happen to be the existential risk most feared by people in the last LW surveys.

When it comes to running out of fossil fuels, we seem to be doing quite well. Solar energy halves in cost every 7 years. The sun doesn’t shine the whole day, so there’s still further work to be done, but it doesn’t seem like an insurmountable challenge.

• I think you underrate the existential risks that come along with substantial genetic or neurological enhancements.

It’s true, I absolutely do. It irritates me. I guess this is because the ethics seem obvious to me: of course we should prevent people from developing a “supervirus” or whatever, just as we try to prevent people from developing nuclear arms or chemical weapons. But steering towards a possibly better humanity (or other sentient species) just seems worth the risk to me when the alternative is remaining the violent apes we are. (I know we’re hominids, not apes; it’s just a figure of speech.)

When it comes to running out of fossil fuels, we seem to be doing quite well. Solar energy halves in cost every 7 years.

That’s certainly a reassuring statistic, but a less reassuring one is that solar power currently supplies less than one percent of global energy usage! Changing that (and especially changing it quickly) will be an ENORMOUS undertaking, and there are many disheartening roadblocks in the way (utility companies, lack of government will, etc.). The fact that solar itself is getting less expensive is great, but unfortunately the changeover from fossil fuels to solar (e.g. phasing out old power plants and building brand-new ones) is still incredibly expensive.

• I guess this is because the ethics seem obvious to me: of course we should prevent people from developing a “supervirus” or whatever, just as we try to prevent people from developing nuclear arms or chemical weapons.

Of course the ethics are obvious. The road to hell is paved with good intentions. 200 years ago, burning all those fossil fuels to power steam engines sounded like a really great idea.

If you simply try to solve problems created by people adopting technology by throwing more technology at them, that’s dangerous.

The wise way is to understand the problem you are facing and make specific interventions that you believe will help. CFAR-style rationality training might sound less impressive than changing around people’s neurology, but it might be an approach with a lot fewer ugly side effects.

CFAR-style rationality training might seem less technological to you. That’s actually a good thing, because it makes it easier to understand the effects.

The fact that solar itself is getting less expensive is great, but unfortunately the changeover from fossil fuels to solar (e.g. phasing out old power plants and building brand-new ones) is still incredibly expensive.

It depends on what issue you want to address. Given how things are going, technology evolves in a way where I don’t think we have to fear that we will have no energy when coal runs out. There’s plenty of coal around, and green energy is evolving fast enough for that task.

On the other hand, we don’t want to burn that coal. I want to eat tuna that’s not full of mercury, and there’s already a recommendation from the European Food Safety Authority against eating tuna every day because there’s so much mercury in it. I want fewer people getting killed via fossil fuel emissions. I also want less greenhouse gas in the atmosphere.

is still incredibly expensive.

If you want to make policy that pays off in 50 years, looking at how things are at the moment narrows your field of vision too much.

If solar continues its price development and costs 1/8 as much in 21 years, you won’t need government subsidies to get people to prefer solar over coal. With another 30 years of deployment, we might not burn any coal in 50 years.
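The compounding behind that 1/8 figure is simple enough to sketch. This assumes a constant 7-year halving period, which is an extrapolation of the trend, not a guarantee:

```python
# Cost projection under the "solar halves in cost every 7 years" assumption above.
HALVING_PERIOD_YEARS = 7

def relative_cost(years: float) -> float:
    """Cost relative to today, assuming the halving trend simply continues."""
    return 0.5 ** (years / HALVING_PERIOD_YEARS)

print(relative_cost(21))            # 0.125 -> one eighth of today's cost
print(round(relative_cost(50), 3))  # 0.007, if the trend held for 50 years
```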

disheartening roadblocks in the way (utility companies, lack of government will, etc.).

If you think lack of government will or utility companies are the core problem, why focus on changing human neurology? Addressing politics directly is more straightforward.

When it comes to solar power, it might also be that nobody will use any solar panels in 50 years because Craig Venter’s algae are just a better energy source. Betting too much on a single card is never good.

• CFAR-style rationality training might sound less impressive than changing around people’s neurology, but it might be an approach with a lot fewer ugly side effects.

It’s a start, and potentially fewer side effects is always good, but think of it this way: who’s going to gravitate towards rationality training? I would bet people who are already more rational than not (because it’s irrational not to want to be more rational). Since participants are self-selected, a massive part of the population isn’t going to bother with that stuff. There are similar issues with genetic and neurological modifications (e.g. they’ll be expensive, at least initially, and therefore restricted to a small pool of wealthy people), but given the advantages over things like CFAR I’ve already mentioned, it seems like it’d be worth it...

I have another issue with CFAR in particular that I’m reluctant to mention here for fear of causing a shit-storm, but since it’s buried in this thread, hopefully it’ll be okay. Admittedly, I only looked at their website rather than actually attending a workshop, but it seems kind of creepy and culty, rather reminiscent of Landmark, for reasons not the least of which is the fact that it’s ludicrously, prohibitively expensive. (Yes, I know they have “fellowships,” but surely not that many. And you have to use and pay for their lodgings? wtf?) It’s suggestive of mind control in the brainwashing sense rather than rationality. (Frankly, I find that this forum can get that way too, complete with shaming thought-stopping techniques, e.g. “That’s irrational!”) Do you (or anyone else) have any evidence to the contrary? (I know this is a little off-topic from my question, since I could potentially create a workshop that I don’t find culty, but since CFAR is currently what’s out there, I figure it’s relevant enough.)

Given how things are going, technology evolves in a way where I don’t think we have to fear that we will have no energy when coal runs out. There’s plenty of coal around, and green energy is evolving fast enough for that task.

You could be right, but I think that’s rather optimistic. This blog post speaks to the problems behind this argument pretty well, I think. Its basic gist is that the amount of energy it will take to build sufficient renewable energy systems demands sacrificing a portion of the economy as it is, to a point that no politician (let alone the free market) is going to support.

This brings me to your next point about addressing politics instead of neurology. Have you ever tried to get anything changed politically...? I’ve been involved in a couple of movements, and my god is it discouraging. You may as well try to knock a brick wall down with a feather. It basically seems that humanity is just going to be the way it is until it is changed on a fundamental level. Yes, I know society has changed in many ways already, but there are many undesirable traits that seem pretty constant, particularly war and inequality.

As for solar as opposed to other technologies, I am a bit torn as to whether it might be better to work on developing technologies rather than whatever seems most practical now. Fusion, for instance, if it’s actually possible, would be incredible. I guess I feel that working on whatever’s practical now is better for me, personally, to expend energy on, since everything else is so speculative. Sort of like triage.

• Well, there has not been a nuclear war yet (excluding WWII, where deaths from nuclear weapons were tiny in proportion), climate change has only been a known risk for a few decades, and progress is being made with electric cars and solar power. Things could be worse. Instead of moaning, propose solutions: what would you do to stop global warming when so much depends on fossil fuels?

On a separate note, I agree about the kneejerk reactions, but it’s a temporary cultural thing, caused partially by people basing morality on fiction. Get one group of people to watch GATTACA and another to watch Ghost in the Shell, and they would have very different attitudes towards transhumanism. More interestingly, cybergoths (people who like to dress as cyborgs as a fashion statement) seem to be pretty open to discussions of actual brain-computer interfaces, and there is music with H+ lyrics being released on actual record labels and bought by people who like the music and are not transhumanists… yet.

In conclusion, once enhancements become possible, I think there will be a sizeable minority of people who back them; in fact, this has already happened with modafinil and students.

• people basing morality on fiction.

Yes, and that seems truly damaging. I get the need to create conflict in fiction, but it always seems to come at the expense of technological progress, in a way I’ve never really understood. When I read Brave New World, I genuinely thought it truly was a “brave new world.” So what if some guy was conceived naturally?? Why is that inherently superior? Sounds like status quo bias, if you ask me. Buncha Luddite propaganda.

I’ve actually been working on a pro-technology, anti-Luddite text-based game. Maybe working on it is in fact a good idea towards balancing out the propaganda and changing public opinion...

• “Reactors by the thousand.” Fissile and fertile materials are sufficiently abundant that we could run an economy much larger than the present one entirely on fission for millions of years, and doing so would have considerably lower average health impacts and costs than what we are actually doing. The fact that we still burn coal is basically insanity, even disregarding climate change, because of the sheer toxicity of the waste stream from coal plants. Mercury has no half-life.

• [deleted]

• Well, true. All things shall pass.

• Thoughts?

Pretty sure you just feel like bragging about how much smarter you are than the rest of the world. If you think people have to be as smart as you think you are to be worth protecting, you are a bad person.

• Does anyone have advice for getting an entry-level software development job? I’m finding a lot seem to want several years of experience, or a degree, while I’m self-taught.

• Ignore what they say on the job posting; apply anyway with a resume that links to your GitHub, websites you’ve built, etc. Many will still reject you for lack of experience, but in many cases it will turn out the job posting was a very optimistic description of the candidate they were hoping to find, and they’ll interview you anyway in spite of your not meeting the qualifications on the listing.

This is just a guess, but I think it might be helpful to include some screenshots (in color) of the programs, websites, etc. That would make them “more real” to the person who reads the resume, or at least save them some inconvenience. Of course, I assume that the programs and websites have a nice user interface.

It’s also an opportunity for an interesting experiment: randomly send 10 resumes without the screenshots, and 10 resumes with screenshots. Measure how many interview invitations you get from each group.
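If anyone actually runs that experiment, counts that small call for an exact test rather than a normal approximation. Here is a stdlib-only sketch; the 6-vs-2 outcome below is made up purely for illustration:

```python
# Fisher's exact test for a small 2x2 experiment like the resume A/B test above.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Hypergeometric probability of seeing x in the top-left cell.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, row1 + col1 - n)
    hi = min(row1, col1)
    # Sum over all tables as likely as, or less likely than, the observed one.
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

# Hypothetical outcome: 6/10 invitations with screenshots, 2/10 without.
p = fisher_exact_two_sided(6, 4, 2, 8)
print(round(p, 3))  # 0.17 -> not significant with only 10 resumes per group
```

The takeaway is mostly a caution: with 10 resumes per group, even a large apparent difference can easily be noise.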

If you have a certificate from Udacity or another online university, mention that too. Don’t list it as formal education, but somewhere in the “other courses and certificates” category.

• I think ideally you want your code running on a website where they can interact with it, but maybe a screenshot would help entice them to go to the website. Or help if you can’t get the code on a website for some reason.

• This is just a guess, but I think it might be helpful to include some screenshots (in color) of the programs, websites, etc.

You want to signal a hacker mindset. Instead of focusing on including screenshots, it might be more effective to write your resume in LaTeX.

I realized that my implicit model is some half-IT-literate HR person or manager. Someone who doesn’t know what LaTeX is, and who couldn’t download and compile your project from GitHub. But they may look at a nice printed paper and say “oh, shiny!” and choose you instead of some other candidate.

1. Live in a place with lots of demand. Silicon Valley and Boston are both good choices; there may be others, but I’m less familiar with them.

2. Have a GitHub account. Fill it with stuff.

3. Have a personal site. Fill it with stuff.

4. Don’t worry about the degree requirements; everybody means “Bachelor’s in CS or equivalent.”

5. Don’t worry about experience requirements. Unlike the degree requirement this does sometimes matter, but you won’t be able to tell by reading the advert, so just go ahead and apply.

6. Prefer smaller companies. The bigger the company, the more likely it is that your resume will be screened out by some automated process before it can reach someone like me. I read people’s GitHubs; HR necessarily does not.

• Live in a place with lots of demand.

Alternatively, be willing to move.

• Practicing whiteboard-style interview coding problems is very helpful. The best places to work will all make you code in the interview [1], so you want to feel at ease in that environment. If you want to do a practice interview, I’d be up for doing that and giving you an honest evaluation of whether I’d hire you if I were hiring.

[1] Be very cautious about somewhere that doesn’t make you code in the interview: you might end up working with a lot of people who can’t really code.

• If you have the skills to do software interviews well, the hardest part will be getting past resume screening. If you can, try to use personal connections to bypass that step and get interviews. Then your skills will speak for themselves.

• I wrote a piece for work on quota systems and affirmative action in employment (“Fixing Our Model of Meritocracy”). It’s politics-related, but I did get to cite a really fun natural experiment and talk about quotas as a way of countering the availability heuristic.

• This is a tangent, but since you mention the “good founders started [programming] at 13” meme, it’s a little bit relevant…

I find it deeply bizarre that there’s this idea today among some programmers that if you didn’t start programming in your early teens, you will never be good at programming. Why is this so bizarre? Because until very recently, there was no such thing as a programmer who started at a young age; and yet there were people who became good at programming.

Prior to the 1980s, most people who ended up as programmers didn’t have access to a computer until university, often not until graduate school. Even for university students, relatively unfettered access to a computer was an unusual exception, found only in extremely hacker-friendly cultures such as MIT.

Put another way: Donald Knuth probably didn’t use a computer until he was around 20. John McCarthy was born in 1927 and probably couldn’t have come near a computer until he was a professor, in his mid-20s. (And of course Alan Turing, Jack Good, and John von Neumann couldn’t have grown up with computers!)

(But all of them were mathematicians, and several of them physicists. Knuth, for one, was also a puzzle aficionado and a musician from his early years, two intellectual pursuits often believed to correlate with programming ability.)

In any event, it should be evident from the historical record that people who didn’t see a computer until adulthood could still become extremely proficient programmers and computer scientists.

I’ve heard some people defend the “you can’t be good unless you started early” meme by comparison with language acquisition. Humans generally can’t gain native-level fluency in a language unless they are exposed to it as young children. But language acquisition is a very specific developmental process that has evolved over thousands of generations and occurs in a developmentally critical period of very early childhood. Programming hasn’t been around that long, and there’s no reason to believe that a critical developmental period in early adolescence could have come into existence in the last few human generations.

So as far as I can tell, we should really treat the idea that you have to start early to become a good programmer as a defensive and prejudicial myth, a bit of tribal lore arising in a recent (and powerful) subculture, which has the effect of excluding and driving off people who would be perfectly capable of learning to code but who are not members of that subculture.

• Seems to me that using computers since childhood is not necessary, but there is something which is necessary, and which is likely to be expressed in childhood as an interest in computer programming. And, as you mentioned, in the absence of computers, this something is likely to be expressed as an interest in mathematics or physics.

So the correct model is not “early programming causes great programmers”, but rather “X causes great programmers, and X causes early programming; therefore early programming correlates with great programmers”.

Starting early with programming is not strictly necessary… but these days, when computers are almost everywhere and relatively cheap, not expressing any interest in programming during one’s childhood is evidence that the person is probably not meant to be a good programmer. (The only question is how strong this evidence is.)

Comparing with language acquisition is wrong… unless the comparison is also true for mathematics. (Is there research on this?) Again, the model “you need programming acquisition as a child” would be wrong, but the model “you need math acquisition as a child, and without it you will not later grok programming” might be correct.

• the correct model is not “early programming causes great programmers”, but rather “X causes great programmers, and X causes early programming; therefore early programming correlates with great programmers”.

Yeah, I think this is explicitly the claim Paul Graham made, with X = “deep interest in technology”.

The problem with that is, I think, that at least with technology companies, the people who are really good technology founders have a genuine deep interest in technology. In fact, I’ve heard startups say that they did not like to hire people who had only started programming when they became CS majors in college. If someone was going to be really good at programming, they would have found it on their own. Then if you go look at the bios of successful founders, this is invariably the case: they were all hacking on computers at age 13.

• Humans generally can’t gain native-level fluency in a language unless they are exposed to it as young children.

The only aspect of language with a critical period is accent. Adults commonly achieve fluency. In fact, adults learn a second language faster than children.

• As far as I know, the degree to which second-language speakers can acquire native-like competence in domains other than phonetics is somewhat debated. Anecdotally, it’s a rare person who manages never to make a syntactic error that a native speaker wouldn’t make, and there are some aspects of language (I’m told that the subjunctive in French and aspect in Slavic languages may be examples) that may be impossible for non-native speakers to fully acquire.

So I wouldn’t accept this theoretical assertion without further evidence; and for all practical purposes, the claim that you have to learn a language as a child in order to become perfect (in the sense of native-like) at it is true.

• Not my downvotes, but you’re probably getting flak for just asserting stuff and then demanding evidence from the opposing side. A more mellow approach like “huh, that’s funny, I’ve always heard the opposite” would be better received.

• Indeed, I probably expressed myself quite badly, because I don’t think what I meant to say is that outrageous: I heard the opposite, and anecdotally, it seems right, so I would have liked to see the (non-anecdotal) evidence against it. Perhaps I phrased it a bit harshly because what I was responding to was also just an unsubstantiated assertion (or, alternatively, a non-sequitur, in that it dropped the “native-like” before fluency).

• As far as I know, the de­gree to which sec­ond-lan­guage speak­ers can ac­quire na­tive-like com­pe­tence in do­mains other than pho­net­ics is some­what de­bated.

Links? As far as I know it’s not de­bated.

there are some as­pects of lan­guage (I’m told that sub­junc­tive in French and as­pect in Slavic lan­guages may be ex­am­ples) that may be im­pos­si­ble to fully ac­quire for non-na­tive speak­ers.

That’s, ahem, bul­lshit. Why in the world would some fea­tures of syn­tax be “im­pos­si­ble to fully ac­quire”?

for all prac­ti­cal pur­poses, the claim that you have to learn a lan­guage as a child in or­der to be­come perfect (in the sense of na­tive-like) with it is true.

For all prac­ti­cal pur­poses it is NOT true.

• You may eas­ily know more about this is­sue than me, be­cause I haven’t ac­tu­ally re­searched this.

That said, let’s be more pre­cise. If we’re talk­ing about mere fluency, there is, of course, no ques­tion.

But if we’re talk­ing about ac­tu­ally na­tive-equiv­a­lent com­pe­tence and perfor­mance, I have se­vere doubts that this is even reg­u­larly achieved. How many L2 speak­ers of English do you know who never, ever pick an un­nat­u­ral choice from among the myr­iad of differ­ent ways in which the fu­ture can be ex­pressed in English? This is some­thing that is com­pletely effortless for na­tive speak­ers, but very hard for L2 speak­ers.

The people I know who are candidates for that level of proficiency in an L2 are at the upper end of the intelligence spectrum, and I also know a non-dumb person who has lived in a German-speaking country for decades and still uses the wrong plural forms. Hell, there are people who are employed and teach at MIT, and so are presumably non-dumb, who say things like “how it sounds like”.

The two things I men­tioned are se­man­tic/​prag­matic, not syn­tac­tic. I know there is a study that shows L2 learn­ers don’t have much of a prob­lem with the mor­phosyn­tax of Rus­sian as­pect, and that doesn’t sur­prise me very much. I don’t know and didn’t find any work that tried to test na­tive-like perfor­mance on the se­man­tic and prag­matic level.

I’m not sure how to an­swer the “why” ques­tion. Why should there be a crit­i­cal pe­riod for any­thing? … In­tu­itively, I find that se­man­tics/​prag­mat­ics, hav­ing to do with cat­e­gori­sa­tion, is a bet­ter can­di­date for some­thing crit­i­cal-pe­riod-like than pure (mor­pho)syn­tax. I’m not even sure you need crit­i­cal pe­ri­ods for ev­ery­thing, any­way. If A learns to play the pi­ano start­ing at age 5 and B starts at age 35, I wouldn’t be sur­prised if A is not only on av­er­age, but al­most always, bet­ter at age 25 than B is at 55. Un­for­tu­nately, that’s ba­si­cally im­pos­si­ble to study while con­trol­ling for all con­founders like gen­eral in­tel­li­gence, qual­ity of in­struc­tion, and num­ber of hours spent on prac­tice. (The pi­ano ex­am­ple would be analo­gous more to the perfor­mance than the com­pe­tence as­pect of lan­guage, I sup­pose.)

There is a study about Rus­sian da­tive sub­jects that sug­gests even highly ad­vanced L2 speak­ers with lots of ex­po­sure don’t get things quite right. Ad­mit­tedly, you can still com­plain that they don’t sep­a­rate the peo­ple who have lived in a Rus­sian-speak­ing coun­try for only a cou­ple of months from those who have lived there for a decade.

The claim about the subjunctive may at worst be wrong, but it is certainly not bullshit. The fact that it was told to me by a very intelligent French linguist, about a friend of his whose L2 French is flawless except for occasional errors in that domain, is better evidence for its being a very hard thing to acquire than your “bullshit” is against it.

• If A learns to play the pi­ano start­ing at age 5 and B starts at age 35, I wouldn’t be sur­prised if A is not only on av­er­age, but al­most always, bet­ter at age 25 than B is at 55. Un­for­tu­nately, that’s ba­si­cally im­pos­si­ble to study while con­trol­ling for all con­founders like gen­eral in­tel­li­gence, qual­ity of in­struc­tion, and num­ber of hours spent on prac­tice.

If all you are say­ing is that peo­ple who start learn­ing a lan­guage at age 2 are al­most always bet­ter at it than peo­ple who start learn­ing the same lan­guage at age 20, I don’t think any­one would dis­agree. The whole dis­cus­sion is about con­trol­ling for con­founders...

• Yes and no—the whole dis­cus­sion is ac­tu­ally two dis­cus­sions, I think.

One is about in-principle possibility, the presence of something like a critical period, etc. There it is crucial to control for confounders.

The sec­ond dis­cus­sion is about in-prac­tice pos­si­bil­ity, whether peo­ple start­ing later can rea­son­ably ex­pect to get to the same level of profi­ciency. Here the “con­founders” are ac­tu­ally part of what this is about.

• There is a study about Rus­sian da­tive sub­jects that sug­gests even highly ad­vanced L2 speak­ers with lots of ex­po­sure don’t get things quite right.

Bonus points for giving a specific example, which helped me to understand your point; at this moment I fully agree with you. I understand the example because my own language has something similar, and I wouldn't expect a stranger to use it correctly. The reason is that it would be too much work to learn properly, for too little benefit. It's a different way to say things, and you only achieve a small difference in meaning. And even if you asked a non-linguist native, they would probably find it difficult to explain the difference properly. So you have little chance to learn it right, and also little motivation to do so.

Here is my attempt to explain the examples from the link, pages 3 and 4. (I am not a native Russian speaker, but my native language is also Slavic, and I learned Russian. If I got something wrong, please correct me.)

“ya uslyshala …” = “I heard …”
“mne poslyshalis …” = “to-me happened-to-be-heard …”

“ya xotel …” = “I wanted …”
“mne xotelos …” = “to-me happened-to-want …”

That's pretty much the same meaning; it's just that the first variant is “more agenty” and the second variant is “less agenty”, to use the LW lingo. But that's kind of difficult to explain explicitly, because… you know, how exactly can “hearing” (not active listening, just hearing) be “agenty”, and how exactly can “wanting” be “non-agenty”? It doesn't seem to make much sense until you think about it, right? (The “non-agenty wanting” is something like: my emotions made me want. So I admit that I wanted, but at the same time I deny full responsibility for my wanting.)

As a stranger, what is the chance that (1) you will hear it explained in a way that will make sense to you, (2) you will remember it correctly, and (3) when the opportunity comes, you will remember to use it? Pretty much zero, I guess. Unless you decide to put extra effort into this aspect of the language specifically. But considering the costs and benefits, you are extremely unlikely to do that, unless being a professional translator into Russian is extremely important for you. (Or unless you speak a Slavic language that has a similar concept, so the costs are lower for you; but even then you need a motivation to be very good at Russian.)

Now think about contexts: these kinds of words are likely to be used in stories, but they don't appear in technical literature or official documents, etc. So if you are a Russian child, you have heard them a lot. If you are a Russian-speaking foreigner working in Russia, there is a chance you will literally never hear them at the workplace.

• The paper doesn't even find a statistically significant difference. The point estimate is that advanced L2 speakers do worse than natives, but natives make almost as many mistakes.

• They did find differences with the advanced L2 speakers, but I guess we care about the highly advanced ones. They point out a difference at the bottom of page 18, though admittedly, it doesn't seem to be that big a deal, and I don't know enough about statistics to tell whether it's very meaningful.

• ‘mne poslyshalos’ I think. This one has con­no­ta­tions of ‘hear­ing things,’ though.

• Note: “Mne poslyshalis’ shagi na kr­ishe.” was the origi­nal ex­am­ple; I just re­moved the un­chang­ing parts of the sen­tences.

• Ah I see, yes you are right. That is the cor­rect plu­ral in this case. Sorry about that! ‘Mne poslyshalos chtoto’ (“some­thing made it­self heard by me”) would be the sin­gu­lar, vs the plu­ral above (“the steps on the roof made them­selves heard by me.”). Or at least I think it would be—I might be los­ing my ear for Rus­sian.

• How many L2 speak­ers of English do you know who never, ever pick an un­nat­u­ral choice from among the myr­iad of differ­ent ways in which the fu­ture can be ex­pressed in English?

You are com­mit­ting the nir­vana fal­lacy. How many na­tive speak­ers of English never make mis­takes or never “pick an un­nat­u­ral choice”?

For example, I know a woman who immigrated to the US as an adult and is fully bilingual. As an objective measure, I think she got a perfect score on the verbal section of the LSAT. She speaks better English than most “natives”. She is not unusual.

The fact that it was told to me by a very in­tel­li­gent French lin­guist about a friend of his whose L2-French is flawless ex­cept for oc­ca­sional er­rors in that domain

Tell your French lin­guist to go into coun­tryside and listen to the French of the un­e­d­u­cated na­tive speak­ers. Do they make mis­takes?

• How many na­tive speak­ers of English never make mis­takes or never “pick an un­nat­u­ral choice”?

I’m not talk­ing about perfor­mance er­rors in gen­eral. I’m talk­ing about the fact that it is ex­tremely hard to ac­quire na­tive-like com­pe­tence wrt the se­man­tics and prag­mat­ics of the ways in which English al­lows one to ex­press some­thing about the fu­ture.

She speaks bet­ter English than most “na­tives”.

Your ut­ter­ance of this sen­tence severely dam­ages your cred­i­bil­ity with re­spect to any lin­guis­tic is­sue. The proper way to say this is: she speaks higher-sta­tus English than most na­tive speak­ers. Be­sides, the fact that she gets perfect scores on some test (whose con­tent and for­mat is un­known to me), which pre­sum­ably na­tive speak­ers don’t, sug­gests that she is far from an av­er­age in­di­vi­d­ual any­way.

Also, that you’re not bring­ing up a sin­gle rele­vant study that com­pares long-time L2 speak­ers with na­tive speak­ers on some in­ter­est­ing, in­tri­cate and sub­tle is­sue where a com­pe­tence differ­ence might be sus­pected leaves me with a very low ex­pec­ta­tion of the fruit­ful­ness of this dis­cus­sion, so maybe we should just leave it at that. I’m not even sure to what ex­tent we aren’t sim­ply talk­ing past each other be­cause we have differ­ent ideas about what na­tive-like perfor­mance means.

Tell your French lin­guist to go into coun­tryside and listen to the French of the un­e­d­u­cated na­tive speak­ers. Do they make syn­tax er­rors?

They don’t, by defi­ni­tion; not the way you prob­a­bly mean it. I wouldn’t know why the rate of perfor­mance er­rors should cor­re­late in any way with ed­u­ca­tion (con­trol­ling for in­tel­li­gence). I also trust the man’s judg­ment enough to as­sume that he was talk­ing about a sort of er­ror that stuck out be­cause a na­tive speaker wouldn’t make it.

• I’m talk­ing about the fact that it is ex­tremely hard to ac­quire na­tive-like com­pe­tence wrt the se­man­tics and prag­mat­ics of the ways in which English al­lows one to ex­press some­thing about the fu­ture.

I don’t think so. This looks like an em­piri­cal ques­tion—what do you mean by “ex­tremely hard”? Any ev­i­dence?

Your ut­ter­ance of this sen­tence severely dam­ages your cred­i­bil­ity with re­spect to any lin­guis­tic is­sue. The proper way to say this is: she speaks higher-sta­tus English than most na­tive speak­ers.

No, I still don’t think so—for ei­ther of your claims. Leav­ing aside my cred­i­bil­ity, non-black English in the United States (as op­posed to the UK) has few ways to show sta­tus and they tend to be re­gional, any­way. She speaks bet­ter English (with some ac­cent, to be sure) in the usual sense—she has a rich vo­cab­u­lary and doesn’t make many mis­takes.

she is far from an av­er­age in­di­vi­d­ual any­way.

While that is true, your claims weren’t about av­er­ages. Your claims were about im­pos­si­bil­ity—for any­one. An av­er­age per­son isn’t suc­cess­ful at any­thing, in­clud­ing sec­ond lan­guages.

• I don’t think so. This looks like an em­piri­cal ques­tion—what do you mean by “ex­tremely hard”? Any ev­i­dence?

I don't know if anybody has ever studied this (I would be surprised if they had), so I have only anecdotal evidence: the uncertainty I myself sometimes experience when choosing between “will”, “going to”, the plain present, “will + progressive”, and the present progressive, and the testimony of other highly advanced L2 speakers I've talked to who feel the same way, while native speakers are usually not even aware that there is an issue here.

She speaks bet­ter English (with some ac­cent, to be sure) in the usual sense—she has a rich vo­cab­u­lary and doesn’t make many mis­takes.

How ex­actly is “rich vo­cab­u­lary” not high-sta­tus? (Also, are you sure it ac­tu­ally con­tains more non-tech­ni­cal lex­emes and not just higher-sta­tus lex­emes?) I’m not ex­actly sure what you mean by “mis­takes”. Things that are un­gram­mat­i­cal in your idiolect of English?

While that is true, your claims weren’t about av­er­ages. Your claims were about im­pos­si­bil­ity—for any­one. An av­er­age per­son isn’t suc­cess­ful at any­thing, in­clud­ing sec­ond lan­guages.

I ac­tu­ally made two claims. The one was that it’s not en­tirely clear that there aren’t any such in-prin­ci­ple im­pos­si­bil­ities, though I ad­mit that the case for them isn’t very strong. I will be very happy if you give me a refer­ence sur­vey­ing some re­search on this and say­ing that the em­piri­cal side is re­ally set­tled and the lin­guists who still go on tel­ling their stu­dents that it isn’t are just not up-to-date.

The sec­ond is that in any case, only the most ex­cep­tional L2 learn­ers can in prac­tice ex­pect to ever achieve na­tive-like fluency.

• the un­cer­tainty … while na­tive speak­ers are usu­ally not even aware that there is an is­sue here.

It seems you are talk­ing about be­ing self-con­scious, not about lan­guage fluency.

The one was that it’s not en­tirely clear that there aren’t any such in-prin­ci­ple impossibilities

Why in the world would there be “in-prin­ci­ple im­pos­si­bil­ities”—where does this idea even come from? What pos­si­ble mechanism do you have in mind?

only the most ex­cep­tional L2 learn­ers can in prac­tice ex­pect to ever achieve na­tive-like fluency.

Well, let’s get spe­cific. Which test do you as­sert na­tive speak­ers will pass and ESL peo­ple will not (ex­cept for the “most ex­cep­tional”)?

• It seems you are talk­ing about be­ing self-con­scious, not about lan­guage fluency.

• I didn't say it was about fluency. But I don't think it's about self-consciousness, either. Native speakers of a language pick the appropriate tense and aspect forms of verbs perfectly effortlessly (how often do you hear a native speaker of English use a progressive in a case where it strikes you as inappropriate and you would say that they should really have used a plain tense, for example?*), while for L2 speakers, it is generally pretty hard to grasp all the details of a language's tense/aspect system.

*I’m choos­ing the pro­gres­sive as an ex­am­ple be­cause it’s eas­iest to de­scribe, not be­cause I think it’s a can­di­date for se­ri­ous un­ac­quira­bil­ity. It’s known to be quite hard for na­tive speak­ers of a lan­guage that has no as­pect, but it’s cer­tainly pos­si­ble to get to a point where you don’t use the pro­gres­sive wrongly es­sen­tially ever.

What pos­si­ble mechanism do you have in mind?

For syn­tax, you would re­ally need to be a strong Chom­skian to ex­pect any such things. For se­man­tics, it seems to be a bit more plau­si­ble a pri­ori: maybe as an adult, you have a hard time learn­ing new ways of carv­ing up the world?

Well, let’s get spe­cific. Which test do you as­sert na­tive speak­ers will pass and ESL peo­ple will not (ex­cept for the “most ex­cep­tional”)?

I don’t know of a pass/​fail for­mat test, but I ex­pect read­ing speed and the speed of their speech to be lower in L2 speak­ers than in L1 speak­ers of com­pa­rable in­tel­li­gence. I would also ex­pect that if you mea­sure cog­ni­tive load some­how, lan­guage pro­cess­ing in an L2 re­quires more of your ca­pac­ity than pro­cess­ing your L1. I would also ex­pect that the ac­tive vo­cab­u­lary of L1 speak­ers is gen­er­ally larger than that of an L2 speaker even if all the words in the L1 speaker’s ac­tive lex­i­con are in the L2 speaker’s pas­sive vo­cab­u­lary.

• The proper way to say this is: she speaks higher-sta­tus English than most na­tive speak­ers.

I won­der if there’s an im­pli­ca­tion that col­lo­quial lan­guage is more com­plex than high sta­tus lan­guage.

• The things be­ing mea­sured are differ­ent. To a first ap­prox­i­ma­tion, all na­tive speak­ers do max­i­mally well at sound­ing like a na­tive speaker.

Lu­mifer’s friend may in­deed speak like a na­tive speaker (though it’s rare for peo­ple who learned as adults to do so), but she can­not be bet­ter at it than “most ‘na­tives’”.

What she can be bet­ter at than most na­tives is:

It is pos­si­ble, though, for a lower-sta­tus di­alect to be more com­plex than a higher-sta­tus one. Ex­am­ple: the Black English verb sys­tem.

• Or maybe it means that high-status and low-status English have different difficulties, and native speakers tend to learn the one that their parents use (finding the others harder), while L2 speakers learn to speak from a description of English which is actually a description of a particular high-status accent (usually either Oxford or New England, I think).

• The “Standard American Accent” spoken in the media and generally taught to foreigners is the confusingly named “Midwestern” accent, which, due to internal migration and a subsequent vowel shift, is now mostly spoken in California and the Pacific Northwest.

Interestingly enough, my old Japanese instructor was a native Osakan whose natural dialect was Kansai-ben; despite this, she conducted the class using the standard Tokyo dialect.

• What do you mean by “the­o­ret­i­cal”? Is this just an in­sult you fling at peo­ple you dis­agree with?

• Huh? What a curious misunderstanding! “Theoretical” referred to just that: the theoretical question of whether it's in principle possible to acquire native-like proficiency, which was contrasted with my claim that even if it is, most people cannot expect to reach that state in practice.

• I thought that my choice of the word “com­monly” in­di­cated that I was not talk­ing about the limits of the pos­si­ble.

• You re­ally think it’s com­mon for L2 speak­ers to achieve na­tive-like lev­els of profi­ciency? Where do you live and who are these ge­niuses? I’m se­ri­ous. For ex­am­ple, I see peo­ple speak­ing at con­fer­ences who have lived in the US for years, but aren’t na­tive speak­ers, and they are still not do­ing so with na­tive-like fluency and elo­quence. And pre­sum­ably you have to be more than av­er­agely in­tel­li­gent to give a talk at a sci­en­tific con­fer­ence...

I’m not talk­ing about just any kind of fluency here, and nei­ther was fubarobfusco, I as­sume. I sus­pect I was try­ing to in­ter­pret your ut­ter­ance in a way that I didn’t as­sign very low prob­a­bil­ity to (i.e. not as claiming that it’s com­mon for peo­ple to be­come na­tive-like) and that also wasn’t a non-se­quitur wrt the claim you were refer­ring to (by re­duc­ing na­tive-like fluency to some weaker no­tion) and kind of failed.

• Maybe I should have said “rou­tinely” rather than “com­monly.” But the key differ­en­tia­tor is effort.

I don’t care about your the­o­ret­i­cal ques­tion of whether you can come up with a test that L2 speak­ers fail. I as­sume that fubarobfusco meant the same thing I meant. I’m done.

• This is a tan­gent, but since you men­tion the “good founders started [pro­gram­ming] at 13” meme, it’s a lit­tle bit rele­vant …

• There is a rule of thumb that achieving exceptional mastery in any specific field requires 10,000 hours of practice. This seems to hold across fields: classical musicians, chess players, athletes, scholars/academics, etc. It's a lot easier to meet that standard if you start in childhood. Note that people who make this claim in the computing field are talking about hackers, not professional programmers in a general sense. It's very possible to become a productive programmer at any age.

• I find it deeply bizarre that there’s this idea to­day among some pro­gram­mers that if you didn’t start pro­gram­ming in your early teens, you will never be good at pro­gram­ming.

Sup­pose you re­placed it with the idea that peo­ple who started pro­gram­ming when they were 13 have a much eas­ier time be­com­ing good pro­gram­mers as adults, and so are over­rep­re­sented among pro­gram­mers at ev­ery level. Does that still sound bizarre?

• Don­ald Knuth was prob­a­bly do­ing real math in his early teens. Maybe this counts.

• The same tortured analysis plays out in the business world, where Paul Graham, the head of Y Combinator, a startup incubator, explained that one reason his company funds fewer women-led companies is that fewer of them fit this profile of a successful founder:

If some­one was go­ing to be re­ally good at pro­gram­ming they would have found it on their own. Then if you go look at the bios of suc­cess­ful founders this is in­vari­ably the case, they were all hack­ing on com­put­ers at age 13.

The trouble is, successful founders don't run through a pure meritocracy, either. They're supported, mentored, and funded when they're chosen by venture capitalists like Graham. And, if everyone is working on the same model of “good founders started at 13”, then a lot of clever ideas, created by people of either gender, might get left on the table.

A similar argument was presented in an article at Slate, “Affirmative action doesn’t work. It never did. It’s time for a new solution.”:

But even if the gov­ern­ment were keep­ing bet­ter tabs on af­fir­ma­tive ac­tion, the big­ger prob­lem is that its ju­ris­dic­tion doesn’t reach the parts of the econ­omy where af­fir­ma­tive ac­tion is most des­per­ately needed: the places where real money is made and real power is al­lo­cated. The best ex­am­ple of this is the in­dus­try that dom­i­nates so much of our econ­omy to­day: the tech­nol­ogy sec­tor. Sili­con Valley’s racial di­ver­sity is pretty ter­rible, the kind of gross im­bal­ance that in­spires spe­cial re­ports on CNN.

It’s a dis­mal state of af­fairs, but how could it re­ally be oth­er­wise? Sili­con Valley isn’t just an in­dus­try; it’s a so­cial and cul­tural ecosys­tem that grew out of a very spe­cific so­cial and cul­tural set­ting: mostly West Coast, up­per-mid­dle-class white guys who liked to tin­ker with moth­er­boards and microchips. If you were around that cul­ture, you be­came a part of it. If you weren’t, you didn’t. And be­cause of the so­cial seg­re­ga­tion that per­vades our so­ciety, very few black peo­ple were around to be a part of it.

Some would purport to remedy this by fixing the tech industry job pipeline: more STEM graduates, more minority internships and boot camps, etc. And that will get you changes here and there, at the margins, but it doesn't get at the real problem. The big success stories of the Internet age—Instagram, YouTube, Twitter—all came about in similar ways: A couple of people had an idea, they got together with some of their friends, built something, called some other friends who knew some other friends who had access to friends with serious money, and then the thing took off and now we're all using it and they're all millionaires. The process is organic, somewhat accidental, and it moves really, really fast. And by the time those companies are big enough to worry about their “diversity,” the ground-floor opportunities have already been spoken for.

• I al­most up­voted your post on re­al­iz­ing you are a woman and think­ing I’d like more women on LW. Then I re­al­ized how ironic that was. Then I did it any­way, likely in­fluenced by the pretty photo on your ar­ti­cle (not car­ing whether it was stock or you).

Fix­ing our mer­i­toc­racy pre­sumes we have a mer­i­toc­racy to fix. Cer­tainly a democ­racy is not a mer­i­toc­racy, un­less your defi­ni­tion of merit is EXTREMELY flex­ible to the point of defin­ing merit as “get­ting elected.” Cer­tainly the Athe­nian and 19th cen­tury Amer­i­can democ­ra­cies which sup­ported hu­man slav­ery were not mer­i­toc­ra­cies, un­less again your defi­ni­tion of merit is flex­ible enough to in­clude be­ing white and/​or pa­tri­cian.

If there is a les­son from the study you cite, it would seem to be that one should push for quo­tas at the gov­ern­men­tal level. It is said by many, but I don’t know the ev­i­dence, that an ad­van­tage Europe and US have over Is­lamic so­cieties is that we are much bet­ter about mon­e­tiz­ing the tal­ents of women, and so we have up to 200% more per cap­ita effec­tive pro­duc­tivity available. Your In­dian les­son shows an ex­am­ple where rather ex­treme and anti-demo­cratic quo­tas ap­peared to shift the prefer­ences of the broad pop­u­la­tion to in­clude more hu­mans more broadly in what they see as the tal­ent pool.

Is it likely that quotas in the US have worked negatively rather than positively? Looked at myopically, one might make the case. But pre-quota US was a MUCH LESS integrated society. I grew up in a middle-class suburb on Long Island (Farmingdale), hardly a bastion of white privilege. In the 1970s, a black family bought a house and had stuff thrown through their windows, along with a range of other harassments perpetrated upon them by anonymous, but I'm willing to bet white, perpetrators. Now we have interracial couples all over the southern US, and a tremendous reduction in racist feeling among people younger than myself. Correlation is not causation, but it ain't exactly an argument against causation either.

• Speed read­ing doesn’t reg­ister many hits here, but in a re­cent thread on sub­vo­cal­iza­tion there are claims of speeds well above 500 WPM.

My standard reading speed is about 200 WPM (based on my eReader statistics; it varies by content). I can push myself to maybe 240, but it is not enjoyable (I wouldn't read fiction at this speed), and I reach 450-500 WPM with RSVP.
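For anyone who hasn't tried RSVP (rapid serial visual presentation): the tools are simple enough to prototype yourself before paying for one. A minimal sketch in Python follows; the 500 WPM default and the 20-column display width are arbitrary illustrative choices, not features of any particular product.

```python
import time

def rsvp_schedule(text, wpm=500):
    """Return (word, seconds) pairs: each word is shown for 60/wpm seconds."""
    seconds_per_word = 60.0 / wpm
    return [(word, seconds_per_word) for word in text.split()]

def play(text, wpm=500):
    """Flash one word at a time in place, centered in a 20-column field."""
    for word, delay in rsvp_schedule(text, wpm):
        print(f"\r{word:^20}", end="", flush=True)
        time.sleep(delay)
    print()

if __name__ == "__main__":
    # At 500 WPM each word stays on screen for 60/500 = 0.12 seconds.
    play("The quick brown fox jumps over the lazy dog", wpm=500)
```

Whether reading this way at 500+ WPM yields real comprehension is exactly the contested question, but a toy pacer like this makes it cheap to test on yourself.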

My aim this year is to get myself to a 500+ WPM base (i.e. usable also for leisure reading and without RSVP). Is this even possible? Claims seem to be contradictory.

Does anybody have recommendations for systems that actually work? Most I've seen seem like overblown claims designed to pump money from desperate managers… I'm willing to put money into it if it can actually deliver.

Thank you very much.

• I read around 600 WPM without ever taking speed-reading lessons, so with training it should be very possible.

• Something I recently noticed: steelmanning is popular on LessWrong. But the Sequences contain a post called Against Devil's Advocacy, which argues strongly against devil's advocacy, and steelmanning often looks a lot like devil's advocacy. What, if anything, is the difference between the two?

• Steel­man­ning is about fix­ing er­rors in an ar­gu­ment (or oth­er­wise im­prov­ing it), while re­tain­ing (some of) the ar­gu­ment’s as­sump­tions. As a re­sult, the ar­gu­ment be­comes bet­ter, even if you dis­agree with some of the as­sump­tions. The con­clu­sion of the ar­gu­ment may change as a re­sult, what’s fixed about the con­clu­sion is only the ques­tion that it needs to clar­ify. Devil’s ad­vo­cacy is about find­ing ar­gu­ments for a given con­clu­sion, in­clud­ing fal­la­cious but con­vinc­ing ones.

So the differ­ence is in the di­rec­tion of rea­son­ing and in­tent re­gard­ing epistemic hy­giene. Steel­man­ning starts from (some­what) fixed as­sump­tions and looks for more ro­bust ar­gu­ments fol­low­ing from them that would ad­dress a given ques­tion (care­ful hy­po­thet­i­cal rea­son­ing), while devil’s ad­vo­cacy starts from a fixed con­clu­sion (not just a fixed ques­tion that the con­clu­sion would judge) and looks for con­vinc­ing ar­gu­ments lead­ing to it (ra­tio­nal­iza­tion with al­lowed use of dark arts).

A bad as­pect of a steel­manned ar­gu­ment is that it can be use­less: if you don’t ac­cept the as­sump­tions, there is of­ten lit­tle point in in­ves­ti­gat­ing their im­pli­ca­tions. A bad as­pect of a devil’s ad­vo­cate’s ar­gu­ment is that it may be mis­lead­ing, act­ing as filtered ev­i­dence for the cho­sen con­clu­sion. In this sense, devil’s ad­vo­cates ex­er­cise the skill of com­ing up with mis­lead­ing ar­gu­ments, which might be bad for their abil­ity to rea­son care­fully in other situ­a­tions.

• Devil’s ad­vo­cacy is about find­ing ar­gu­ments for a given con­clu­sion, in­clud­ing fal­la­cious but con­vinc­ing ones.

But what if you steel­man devil’s ad­vo­cacy to ex­clude fal­la­cious but con­vinc­ing ar­gu­ments?

• Then the main prob­lem is that it pro­duces (and ex­er­cises the skill of pro­duc­ing) ar­gu­ments that are filtered ev­i­dence in the di­rec­tion of the pre­defined con­clu­sion, in­stead of well-cal­ibrated con­sid­er­a­tion of the ques­tion on which the con­clu­sion is one po­si­tion.

• So I'm still not sure what the difference with steelmanning is supposed to be, unless it's that with steelmanning you limit yourself to fixing flaws in your opponents' arguments that can be fixed without essentially changing their arguments, as opposed to just trying to find the best arguments you can for their conclusion (the latter being a way of filtering evidence?).

That would seem to imply that steelmanning isn't a universal duty. If you think an argument can't be fixed without essentially changing it, you'll just be forced to say it can't be steelmanned.

• As far as I can tell...noth­ing. Most likely, there are sim­ply many LessWrongers (like me) that dis­agree with E.Y. on this point.

• What leads you to be­lieve that you dis­agree with Eliezer on this point? I sus­pect that you are just go­ing by the ti­tle. I just read the es­say and he en­dorses lots of prac­tices that oth­ers call Devil’s Ad­vo­cacy. I’m re­ally not sure what prac­tice he is con­demn­ing. If you can iden­tify a spe­cific prac­tice that you dis­agree with him about, could you de­scribe it in your own words?

• A TEDx video about teaching mathematics, “Mathematics as a source of joy”; it is in Slovak, so you have to select English subtitles. I had to share it, but I am afraid the video does not explain too much, and there is not much material in English to link to (I only found two articles). So here is a bit more info:

The video is about an educational method of the Czech math teacher Vít Hejný; it is told by his son. Prof. Hejný created an educational methodology based mostly on Piaget, but specifically applied to the domain of teaching mathematics (elementary- and high-school levels). He taught the method to some volunteers, who used it to teach children in the Czech Republic and Slovakia. The inventor of the method is now dead; he started writing a book but didn't finish it, and most of the volunteers are not working in education anymore. So I was afraid the art would be lost, which would be a pity. Luckily, his son finished the book, other people added their notes and experiences, and recently the method was made very popular among teachers; in the Czech Republic the government officially supports this method (in 10% of schools). My experience with this method from my childhood (outside of the school system, in summer camps) is that it's absolutely great.

I am afraid that if I try to describe it, most of it will just sound like common sense. Examples from real life are used. Kids are encouraged to solve the problems for themselves. The teacher is just a coach or moderator; s/he helps kids discuss each other’s solutions. Start with specific examples, and only later move to abstract generalizations of them. Let the children discover the solution; they will remember it better. In some situations specific tools are used (e.g. basic addition and subtraction is taught by walking along a number line on the floor; also see the pictures here). For motivation, the specific examples are described using stories or animals or something interesting (e.g. the derivative of a function is introduced using a caterpillar climbing hills). There is a big emphasis on keeping a good mood in the classroom.

• This was fun. I like how he em­pha­sizes that ev­ery kid can figure out all of math by her­self, and that think­ing cit­i­zens are what you need for a democ­racy rather than a to­tal­i­tar­ian state—be­cause the Czech re­pub­lic was a com­mu­nist dic­ta­tor­ship only a gen­er­a­tion ago, and many teach­ers were already teach­ers then.

• A cul­tural de­tail which may help to ex­plain this at­ti­tude:

In communist countries a career in science or in teaching math or physics was a very popular choice for smart people. It was maybe the only place where you could use your mind freely, without being afraid of contradicting something the Party said (which could ruin your career and personal life).

So there are many people here who have both “mathematics” and “democracy” as applause lights. But I’d say that after the end of the communist regime the quality of math education actually decreased, because the best teachers suddenly had many new career paths available. (I was in a math-oriented high school when the regime ended; most of the best teachers left the school within two years and started their private companies or non-governmental organizations, usually somehow related to education.) Even the mathematical curriculum of prof. Hejný was invented during communism… but only under democracy did his son have the freedom to actually publish it.

• That’s very true. Small ad­di­tion: Many smart peo­ple went into medicine, too.

• I’m interested in learning pure math, starting from precalculus. Can anyone give advice on what textbooks I should use? Here’s my current list (a lot of these textbooks were taken from MIRI’s and LW’s best-textbook lists):

• Calcu­lus for Science and Engineering

• Calcu­lus—Spivak

• Lin­ear Alge­bra and its Ap­pli­ca­tions—Strang

• Lin­ear Alge­bra Done Right

• Div, Grad, Curl and All That (Vec­tor calc)

• Fun­da­men­tals of Num­ber The­ory—LeVeque

• Ba­sic Set Theory

• Discrete Math­e­mat­ics and its Applications

• In­tro­duc­tion to Math­e­mat­i­cal Logic

• Ab­stract Alge­bra—Dummit

I’m well versed in simple calculus; I’m going back to precalc to fill gaps I may have in my knowledge. I feel like I have some major gaps in knowledge to cross, jumping from the undergraduate to the graduate level. Do any math PhDs have any advice?

Thanks!

• I ad­vise that you read the first 3 books on your list, and then reeval­u­ate. If you do not know any more math than what is gen­er­ally taught be­fore calcu­lus, then you have no idea how difficult math will be for you or how much you will en­joy it.

It is im­por­tant to ask what you want to learn math for. The last four books on your list are cat­e­gor­i­cally differ­ent from the first four (or at least three of the first four). They are not a ran­dom sam­ple of pure math, they are speci­fi­cally the sub­set of pure math you should learn to pro­gram AI. If that is your goal, the en­tire calcu­lus se­quence will not be that use­ful.

If your goal is to learn physics or eco­nomics, you should learn calcu­lus, statis­tics, anal­y­sis.

If you want to have a true un­der­stand­ing of the math that is built into ra­tio­nal­ity, you want prob­a­bil­ity, statis­tics, logic.

If you want to learn what most math PhDs learn, then you need things like alge­bra, anal­y­sis, topol­ogy.

• Thanks, I made an edit you might not have seen, I men­tioned I do have ex­pe­rience with calcu­lus (differ­en­tial, in­te­gral, multi-var), dis­crete math (ba­sic graph the­ory, ba­sic proofs), just filling in some gaps since it’s been awhile since I’ve done ‘math’. I imag­ine I’ll get through the first two books quickly.

Can you recom­mend some alge­bra/​anal­y­sis/​topol­ogy books that would be a nat­u­ral pro­gres­sion of the books I listed above?

• Dummit & Foote’s Abstract Algebra is a good algebra book and Munkres’ Topology is a good topology book. They’re pretty advanced, though. In university one normally tackles them in the late undergrad or early grad years, after taking some proof-based analysis and linear algebra courses. There are gentler introductions to algebra and topology, but I haven’t read them.

• Great, I’ll look into the Topol­ogy book.

• A cou­ple more topol­ogy books to con­sider: “Ba­sic Topol­ogy” by Arm­strong, one of the Springer UTM se­ries; “Topol­ogy” by Hock­ing and Young, available quite cheap from Dover. I think I read Arm­strong as a (slightly but not ex­trav­a­gantly pre­co­cious) first-year un­der­grad­u­ate at Cam­bridge. Hock­ing and Young is less fun and prob­a­bly more of a shock if you’ve been away from “real” math­e­mat­ics for a while, but goes fur­ther and is, as I say, cheap.

• Given how much effort it takes to study a text­book, cost shouldn’t be a sig­nifi­cant con­sid­er­a­tion (com­pare a typ­i­cal cost per page with the amount of time per page spent study­ing, if you study se­ri­ously and not just cram for ex­ams; the im­pres­sion from the to­tal price is mis­lead­ing). In any case, most texts can be found on­line.

• cost shouldn’t be a sig­nifi­cant consideration

And yet, some­times, it is. (Espe­cially for im­pe­cu­nious stu­dents, though that doesn’t seem to be quite cursed’s situ­a­tion.)

most texts can be found online

Some peo­ple may pre­fer to avoid break­ing the law.

• There’s some ab­surd re­cency effects in text­book pub­lish­ing. In well-trod­den fields it’s of­ten pos­si­ble to find a last-edi­tion text­book for sin­gle-digit pen­nies on the dol­lar, and the edi­tion change will have close to zero im­pact if you’re do­ing self-study rather than work­ing a highly ex­act prob­lem set ev­ery week.

(Even if you are in a formal class, buying an edition back is often worth the trouble if you can find the diffs easily, for example by making friends with someone who does have the current edition. I did that for a couple semesters in college, and pocketed close to $500 before I started getting into textbooks obscure enough not to have frequent edition changes.)

• I am not go­ing to be able to recom­mend any books. I learned all my math di­rectly from pro­fes­sors’ lec­tures.

What is your goal in learn­ing math?

If you want to learn for MIRI purposes, and you’ve already seen some math, then relearning calculus might not be worth your time.

• I have a de­gree in com­puter sci­ence, look­ing to learn more about math to ap­ply to a math grad­u­ate pro­gram and for fun.

• My guess is that if you have an in­ter­est in com­puter sci­ence, you will have the most fun with logic and dis­crete math, and will not have much fun with the calcu­lus.

If you are se­ri­ous about get­ting into a math grad­u­ate pro­gram, then you have to learn the calcu­lus stuff any­way, be­cause it is a large part of the Math GRE.

• It’s worth men­tion­ing that this is a US pe­cu­liar­ity. If you ap­ply to a pro­gram el­se­where there is a lot less em­pha­sis on calcu­lus.

• But you should still know the basics of calculus (and linear algebra): at least the equivalent of calc 1, 2 & 3.

• In my ex­pe­rience, “anal­y­sis” can re­fer to two things: (1) A proof-based calcu­lus course; or (2) mea­sure the­ory, func­tional anal­y­sis, ad­vanced par­tial differ­en­tial equa­tions. Spi­vak’s Calcu­lus is a good ex­am­ple of (1). I don’t have strong opinions about good texts for (2).

• For what it’s worth, I’m do­ing roughly the same thing, though start­ing with lin­ear alge­bra. At first I started with mul­ti­vari­able calc, but when I found it too con­fus­ing, peo­ple ad­vised me to skip to lin­ear alge­bra first and then re­turn to MVC, and so far I’ve found that that’s ab­solutely the right way to go. I’m not sure why they’re usu­ally taught the other way around; LA definitely seems more like a pre­req of MVC.

I tried to read Spi­vak’s Calc once and didn’t re­ally like it much; I’m not sure why ev­ery­one loves it. Maybe it gets bet­ter as you go along, idk.

I’ve been do­ing LA via Gilbert Strang’s lec­tures on the MIT Open CourseWare, and so far I’m find­ing them thor­oughly fas­ci­nat­ing and charm­ing. I’ve also been read­ing his book and just started Hoff­man & Kunze’s Lin­ear Alge­bra, which sup­pos­edly has a bit more the­ory (which I re­ally can’t go with­out).

Just some notes from a fel­low trav­eler. ;-)

• I tried to read Spi­vak’s Calc once and didn’t re­ally like it much; I’m not sure why ev­ery­one loves it. Maybe it gets bet­ter as you go along, idk.

“Not lik­ing” is not very spe­cific. It’s good all else equal to “like” a book, but all else is of­ten not equal, so al­ter­na­tives should be com­pared from other points of view as well. It’s very good for train­ing in rigor­ous proofs at in­tro­duc­tory un­der­grad­u­ate level, if you do the ex­er­cises. It’s not nec­es­sar­ily en­joy­able.

I’ve also been read­ing his book and just started Hoff­man & Kunze’s Lin­ear Alge­bra, which sup­pos­edly has a bit more theory

It’s a much more ad­vanced book, more suit­able for a deeper re­view some­where at the in­ter­me­di­ate or ad­vanced un­der­grad­u­ate level. I think Axler’s “Lin­ear Alge­bra Done Right” is bet­ter as a sec­ond lin­ear alge­bra book (though it’s less com­pre­hen­sive), af­ter a more se­ri­ous real anal­y­sis course (i.e. not just Spi­vak) and an in­tro com­plex anal­y­sis course.

• Oh yeah, I’m not say­ing Spi­vak’s Calcu­lus doesn’t provide good train­ing in proofs. I re­ally didn’t even get far enough to tell whether it did or not, in which case, feel free to dis­re­gard my com­ment as un­in­formed. But to be more spe­cific about my “not lik­ing”, I just found the part I did read to be more opaque than en­gag­ing or in­trigu­ing, as I’ve found other texts (like Strang’s Lin­ear Alge­bra, for in­stance).

Edit: Also, I’m speci­fi­cally re­spond­ing to state­ments that I thought refer­ring to lik­ing the book in the en­joy­ment sense (ex­pressed on this thread and el­se­where as well). If that’s not the kind of lik­ing they meant, then my com­ment is ir­rele­vant.

It’s a much more ad­vanced book, more suit­able for a deeper re­view some­where at the in­ter­me­di­ate or ad­vanced un­der­grad­u­ate level. I think Axler’s “Lin­ear Alge­bra Done Right” is bet­ter as a sec­ond lin­ear alge­bra book (though it’s less com­pre­hen­sive), af­ter a more se­ri­ous real anal­y­sis course (i.e. not just Spi­vak) and an in­tro com­plex anal­y­sis course.

Damn, re­ally?? But I hate it when math books (and classes) effec­tively say “as­sume this is true” rather than delve into the rea­son be­hind things, and those rea­sons aren’t ex­plained un­til 2 classes later. Why is it not more ped­a­gog­i­cally sound to fully learn some­thing rather than slice it into shal­low, in­com­pre­hen­si­ble lay­ers?

• I think peo­ple gen­er­ally agree that anal­y­sis, topol­ogy, and ab­stract alge­bra to­gether provide a pretty solid foun­da­tion for grad­u­ate study. (Lots of in­ter­est­ing stuff that’s ac­cessible to un­der­grad­u­ates doesn’t eas­ily fall un­der any of these head­ings, e.g. com­bi­na­torics, but hav­ing a foun­da­tion in these head­ings will equip you to learn those things quickly.)

For anal­y­sis the stan­dard recom­men­da­tion is baby Rudin, which I find dry, but it has good ex­er­cises and it’s a good filter: it’ll be hard to do well in, say, math grad school if you can’t get through Rudin.

For point-set topol­ogy the stan­dard recom­men­da­tion is Munkres, which I gen­er­ally like. The prob­lem I have with Munkres is that it doesn’t re­ally ex­plain why the ax­ioms of a topolog­i­cal space are what they are and not some­thing else; if you want to know the an­swer to this ques­tion you should read Vick­ers. Go through Munkres af­ter go­ing through Rudin.

I don’t have a ready recom­men­da­tion for ab­stract alge­bra be­cause I mostly didn’t learn it from text­books. I’m not all that satis­fied with any par­tic­u­lar ab­stract alge­bra text­books I’ve found. An op­tion which might be a lit­tle too hard but which is at least fairly com­pre­hen­sive is Ash, which is also freely legally available on­line.

For the sake of ex­po­sure to a wide va­ri­ety of top­ics and cul­ture I also strongly, strongly recom­mend that you read the Prince­ton Com­pan­ion. This is an amaz­ing book; the only bad thing I have to say about it is that it didn’t ex­ist when I was a high school se­nior. I have other read­ing recom­men­da­tions along these lines (less for be­ing hard­core, more for plea­sure and be­ing ex­posed to in­ter­est­ing things) at my blog.

• For anal­y­sis the stan­dard recom­men­da­tion is baby Rudin, which I find dry, but it has good ex­er­cises and it’s a good filter: it’ll be hard to do well in, say, math grad school if you can’t get through Rudin.

I feel that it’s only good as a test or for re­view, and oth­er­wise a bad recom­men­da­tion, made worse by its pop­u­lar­ity (which makes its flaws harder to take se­ri­ously), and the wide­spread “I’m smart enough to un­der­stand it, so it works for me” satis­fic­ing at­ti­tude. Pugh’s “Real Math­e­mat­i­cal Anal­y­sis” is a bet­ter al­ter­na­tive for ac­tu­ally learn­ing the ma­te­rial.

• For point-set topol­ogy the stan­dard recom­men­da­tion is Munkres, which I gen­er­ally like.

I would pref­ace any text­book on topol­ogy with the first chap­ter of Ishan’s “Differ­en­tial ge­om­e­try”. It builds the rea­son for study­ing topol­ogy and why the ax­ioms have the shape they have in a won­der­ful crescendo, and at the end even dabs a bit into nets (non point-set topol­ogy). It’s very clear and builds a lot of in­tu­ition.

Also, as a side dish in a topol­ogy lunch, the pe­cu­liar “Coun­terex­am­ples in topol­ogy”.

• Keep a file with notes about books. Start with Spi­vak’s “Calcu­lus” (do most of the ex­er­cises at least in out­line) and Polya’s “How to Solve It”, to get a feel­ing of how to un­der­stand a topic us­ing proofs, a skill nec­es­sary to prop­erly study texts that don’t have ex­cep­tion­ally well-de­signed prob­lem sets. (Courant&Rob­bins’s “What Is Math­e­mat­ics?” can warm you up if Spi­vak feels too dry.)

Given a good text such as Munkres’s “Topol­ogy”, search for any­thing that could be con­sid­ered a pre­req­ui­site or an eas­ier al­ter­na­tive first. For ex­am­ple, start­ing from Spi­vak’s “Calcu­lus”, Munkres’s “Topol­ogy” could be pre­ceded by Strang’s “Lin­ear Alge­bra and Its Ap­pli­ca­tions”, Hub­bard&Hub­bard’s “Vec­tor Calcu­lus”, Pugh’s “Real Math­e­mat­i­cal Anal­y­sis”, Need­ham’s “Vi­sual Com­plex Anal­y­sis”, Men­del­son’s “In­tro­duc­tion to Topol­ogy” and Axler’s “Lin­ear Alge­bra Done Right”. But then there are other great books that would help to ap­pre­ci­ate Munkres’s “Topol­ogy”, such as Flegg’s “From Geom­e­try to Topol­ogy”, Stil­lwell’s “Geom­e­try of Sur­faces”, Reid&Szen­drői’s “Geom­e­try and Topol­ogy”, Vick­ers’s “Topol­ogy via Logic” and Arm­strong’s “Ba­sic Topol­ogy”, whose read­ing would benefit from other pre­req­ui­sites (in alge­bra, ge­om­e­try and cat­e­gory the­ory) not strictly needed for “Topol­ogy”. This is a down­side of a nar­row fo­cus on a few harder books: it leaves the sub­ject dry. (See also this com­ment.)

• Maybe the most im­por­tant thing to learn is how to prove things. Spi­vak’s Calcu­lus might be a good place to start learn­ing proofs; I like that book a lot.

• I’m do­ing pre­calcu­lus now, and I’ve found ALEKS to be in­ter­est­ing and use­ful. For you in par­tic­u­lar it might be use­ful be­cause it tries to as­sess where you’re up to and fill in the gaps.

I also like the Art of Prob­lem Solv­ing books. They’re re­ally thor­ough, and if you want to be very sure you have no gaps then they’re definitely worth a look. Their In­ter­me­di­ate Alge­bra book, by the way, cov­ers a lot of ma­te­rial nor­mally re­served for Pre­calcu­lus. The web­site has some as­sess­ments you can take to see what you’re ready for or what’s too low-level for you.

• Given your background and your wish for pure math, I would skip the calculus and the applications of linear algebra and go directly to basic set theory, then abstract algebra, then mathy linear algebra or real analysis, then topology.

Or, do dis­crete math di­rectly if you already know how to write a proof.

• Are there any reasons for becoming a utilitarian, other than to satisfy one’s empathy?

• I am in­ter­ested in this, or pos­si­bly a differ­ent closely-re­lated thing.

I ac­cept the log­i­cal ar­gu­ments un­der­ly­ing util­i­tar­i­anism (“This is the morally right thing to do.”) but not the ac­tion­able con­se­quences. (“There­fore, I should do this thing.”) I ‘pro­tect’ only my so­cial cir­cle, and have never seen any rea­son why I should ex­tend that.

• What does “the morally right thing to do” mean if not “the thing you should do”?

• To rephrase: I ac­cept that util­i­tar­i­anism is the cor­rect way to ex­trap­o­late our moral in­tu­itions into a co­her­ent gen­er­al­iz­able frame­work. I feel no ‘should’ about it—no need to ap­ply that frame­work to my­self—and feel no cog­ni­tive dis­so­nance when I rec­og­nize that an ac­tion I wish to perform is im­moral, if it hurts only peo­ple I don’t care about.

• Ultimately I think that is the way all utilitarianism works. You define an in-group of people who are important, effectively equally important to each other and possibly equally important to yourself.

For most mod­ern util­i­tar­i­ans, the in-group is all hu­mans. Some mod­ern util­i­tar­i­ans put mam­mals with rel­a­tively com­plex ner­vous sys­tems in the group, and for the most part be­come veg­e­tar­i­ans. Others put ev­ery­thing with a ner­vous sys­tem in there and for the most part be­come ve­g­ans. Very darn few put all life forms in there as they would starve. Im­plicit in this is that all life forms would place nega­tive util­ity on be­ing kil­led to be eaten which may be rea­son­able or may be pro­jec­tion of hu­man val­ues on to non-hu­man en­tities.

But log­i­cally it makes as much sense to shrink the group you are util­i­tar­ian about as to ex­pand it. Only Amer­i­cans seems like a pop­u­lar one in the US when dis­cussing im­mi­gra­tion policy. Only my friends and fam­ily has a fol­low­ing. Only LA Raiders fans or Manch­ester United fans seems to also gather its pro­po­nents.

Around here, I think you find peo­ple try­ing to put all think­ing things, even me­chan­i­cal, in the in-group, per­haps only all con­scious think­ing things. Maybe the way to cre­ate a friendly AI would be to make sure the AI never val­ues its own life more than it val­ues its own death, then we would always be able to turn it off with­out it fight­ing back.

Also, I sus­pect in re­al­ity you have a slid­ing scale of ac­cep­tance, that you would not be morally neu­tral about kil­ling a stranger on the road and tak­ing their money if you thought you could get away with it. But you cer­tainly won’t ac­cord the stranger the full benefit of your con­cern, just a par­tial benefit.
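That sliding scale of acceptance can be sketched as a weighted utility function (a toy model; the names, numbers, and weights here are all made up for illustration, not taken from any utilitarian text):

```python
# Toy model (hypothetical names and weights): an agent who counts everyone's
# utility, but discounts people outside its inner circle, giving a stranger
# only the "partial benefit of your concern".

def total_utility(outcomes, weights):
    """Sum each affected person's utility, scaled by how much the agent cares."""
    return sum(weights.get(person, 0.0) * u for person, u in outcomes.items())

# Full weight on self and family, only partial weight on a stranger.
weights = {"self": 1.0, "family": 1.0, "stranger": 0.2}

# Robbing a stranger: a gain of 10 to self, a loss of 100 to the stranger.
rob = {"self": 10, "stranger": -100}
walk_on = {"self": 0, "stranger": 0}

# Even a small positive weight on strangers makes the robbery net-negative,
# while a weight of exactly 0 would leave the agent indifferent.
print(total_utility(rob, weights))      # -10.0
print(total_utility(walk_on, weights))  # 0.0
```

The gradations the comment describes correspond to how the weights fall off with social distance, and the pure in-group utilitarian is just the special case where every weight is 0 or 1.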

• Also, I sus­pect in re­al­ity you have a slid­ing scale of ac­cep­tance, that you would not be morally neu­tral about kil­ling a stranger on the road and tak­ing their money if you thought you could get away with it. But you cer­tainly won’t ac­cord the stranger the full benefit of your con­cern, just a par­tial benefit.

Oh, there are definitely gra­da­tions. I prob­a­bly wouldn’t do this, even if I could get away with it. I don’t care enough about strangers to go out of my way to save them, but nei­ther do I want to kill them. On the other hand, if it was a per­son I had an ac­tive dis­like for, I prob­a­bly would. All of which is ba­si­cally ir­rele­vant, since it pre­sup­poses the in­cred­ibly un­likely “if I thought I could get away with it”.

• I used to think I thought that way, but then I had some op­por­tu­ni­ties to ca­su­ally steal from peo­ple I didn’t know (and eas­ily get away with it), but I didn’t. With that said, I pirate things all the time de­spite be­liev­ing that do­ing so fre­quently harms the con­tent own­ers a lit­tle.

• I have taken that pre­cise ac­tion against some­one who mildly an­noyed me. I re­mem­ber it (and the per­ceived slight that mo­ti­vated it), but feel no guilt over it.

• By utilitarian you mean:

1. Car­ing about all peo­ple equally

2. He­donism, i.e. car­ing about plea­sure/​pain

3. Both of the above (=Ben­tham’s clas­si­cal util­i­tar­i­anism)?

In any case, what an­swer do you ex­pect? What would con­sti­tute a valid rea­son? What are the as­sump­tions from which you want to de­rive this?

• Both of the above (=Ben­tham’s clas­si­cal util­i­tar­i­anism)

I mean this.

In any case, what an­swer do you ex­pect?

I do not ex­pect any spe­cific an­swer.

What would con­sti­tute a valid rea­son?

For me personally, probably nothing, since, apparently, I neither really care about people (I guess I overintellectualized my empathy), nor about pleasure and suffering. The question, however, was asked mostly to better understand other people.

What are the as­sump­tions from which you want to de­rive this?

I don’t know any.

• You can band to­gether lots of peo­ple to work to­gether to­wards the same util­i­tar­i­anism.

• i.e. change hap­piness-suffer­ing to some­thing else?

• I don’t know how to parse that ques­tion.

I am claiming that peo­ple with no em­pa­thy at all can agree to work to­wards util­i­tar­i­anism, for the same rea­son they can agree to co­op­er­ate in the re­peated pris­oner’s dilemma.
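The repeated prisoner’s dilemma point can be made concrete with a small simulation (a sketch using the standard payoff matrix; the code and strategy names are illustrative, not from the thread): two purely self-interested tit-for-tat players settle into cooperating every round, and each does better than a defector exploiting a retaliator.

```python
# Payoffs from the standard prisoner's dilemma: (my move, their move) -> my payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run the repeated game; each strategy sees only the opponent's past moves."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_b)
        b = strategy_b(moves_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(always_defect, tit_for_tat))  # (14, 9): defection does worse than 30
```

Neither strategy here models the other player’s welfare at all; the cooperation falls out of self-interest plus repetition.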

• I am claiming that peo­ple with no em­pa­thy at all can agree to work to­wards util­i­tar­i­anism, for the same rea­son they can agree to co­op­er­ate in the re­peated pris­oner’s dilemma.

I don’t un­der­stand why is this an ar­gu­ment in fa­vor of util­i­tar­i­anism.

A bunch of peo­ple can agree to work to­wards pretty much any­thing, for ex­am­ple get­ting rid of the un­clean/​heretics/​un­ter­men­schen/​etc.

• I think you are tak­ing this sen­tence out of con­text. I am not try­ing to pre­sent an ar­gu­ment in fa­vor of util­i­tar­i­anism. I was try­ing to ex­plain why em­pa­thy is not nec­es­sary for util­i­tar­i­anism.

I in­ter­preted the ques­tion as “Why (other than my em­pa­thy) should I try to max­i­mize other peo­ple’s util­ity?”

• I in­ter­preted the ques­tion as “Why (other than my em­pa­thy) should I try to max­i­mize other peo­ple’s util­ity?”

Right, and here is your an­swer:

You can band to­gether lots of peo­ple to work to­gether to­wards the same util­i­tar­i­anism.

I don’t un­der­stand why this is a rea­son “to max­i­mize other peo­ple’s util­ity”.

• You can en­tan­gle your own util­ity with other’s util­ity, so that what max­i­mizes your util­ity also max­i­mizes their util­ity and vice versa. Your ter­mi­nal value does not change to max­i­miz­ing other peo­ple’s util­ity, but it be­comes a side effect.

• So you are ba­si­cally say­ing that some­times it is in your own self-in­ter­est (“own util­ity”) to co­op­er­ate with other peo­ple. Sure, that’s a pretty ob­vi­ous ob­ser­va­tion. I still don’t see how it leads to util­i­tar­i­anism.

If you ter­mi­nal value is still self-in­ter­est but it so hap­pens that there is a side-effect of in­creas­ing other peo­ple’s util­ity—that doesn’t look like util­i­tar­i­anism to me.

• I was only try­ing to make the ob­vi­ous ob­ser­va­tion.

Just try­ing to satisfy your em­pa­thy does not re­ally look like pure util­i­tar­i­anism ei­ther.

• There’s no need to parse it any­more, I didn’t get your com­ment ini­tially.

for the same rea­son they can agree to co­op­er­ate in the re­peated pris­oner’s dilemma.

I agree theoretically, but I doubt that utilitarianism can bring more value to an egoistic agent than being egoistic without regard to other humans’ happiness.

• I agree in the short term, but many of my long term goals (e.g. not dy­ing) re­quire lots of co­op­er­a­tion.

• I guess the rea­son is max­i­miz­ing one’s util­ity func­tion, in gen­eral. Em­pa­thy is just one com­po­nent of the util­ity func­tion (for those agents who feel it).

If mul­ti­ple agents share the same util­ity func­tion, and they know it, it should make their co­op­er­a­tion eas­ier, be­cause they only have to agree on facts and mod­els of the world; they don’t have to “fight” against each other.

• Ap­par­ently, we mean differ­ent things by “util­i­tar­i­anism”. I meant moral sys­tem whose ter­mi­nal goal is to max­i­mize plea­sure and min­i­mize suffer­ing in the whole world, while you’re talk­ing about agent’s util­ity func­tion, which may have no re­gard for plea­sure and suffer­ing.

I agree, though, that it makes sense to try to maximize one’s utility function, but to me that’s just egoism.

• I sus­pect that most peo­ple already are util­i­tar­i­ans—albeit with im­plicit calcu­la­tion of their util­ity func­tion. In other words, they already figure out what they think is best and do that (if they thought some­thing else was bet­ter, it’s what they’d do in­stead).

• Utilitarian ≠ utility maximizer.

• Para­phrased from #less­wrong: “Is it wrong to shoot ev­ery­one who be­lieves Teg­mark level 4?” “No, be­cause, ac­cord­ing to them, it hap­pens any­way”. (It’s tongue-in-cheek, for you hu­mor­less types.)

• I got to de­sign my first in­fo­graphic for work and I’d re­ally ap­pre­ci­ate feed­back (it’s here: “Did We Mess Up on Mam­mo­grams?”).

I’m also cu­ri­ous about recom­men­da­tions for tools. I used Easl.ly which is a WYSIWYG ed­i­tor, but it was an­noy­ing in that I couldn’t just tell it I wanted an mxn block of peo­ple icons, evenly spaced, but had to do it by hand in­stead.

• I am still seek­ing play­ers for a mul­ti­player game of Vic­to­ria 2: Hearts of Dark­ness. We have con­verted from an ear­lier EU3 game, it­self con­verted from CK2; the re­sult­ing his­tory is very un­like our own. We are cur­rently in 1844:

• Is­lamic Spain has pub­li­cly de­clared half of Europe to be dar al Harb, li­able to at­tack at any time, while quietly seek­ing the re­turn of its Caribbean colonies by diplo­matic means.

• The Chris­tian pow­ers of Europe dis­cuss the par­ti­tion of Greece-across-the-sea, the much-de­cayed fi­nal rem­nant of the Ro­man Em­pire, which nonethe­less rules east­ern Africa from the Nile Delta to Lake Tan­gany­ika.

• United In­dia jos­tles with China for supremacy in Asia, both court­ing the lesser pow­ers of Sind and the Mon­gol Khanate as al­lies in their strug­gle. The Malayan Sul­tanate, the world’s fore­most naval power, keeps its vast fleet as the bal­anc­ing weight in these scales, sup­port­ing now one, now an­other as the ad­van­tage shifts—while keep­ing a wary eye on the West, look­ing for a Euro­pean challenge to its Pa­cific hege­mony.

• The Elbe, mark­ing the bor­der of the minor pow­ers France-Alle­magne and Bavaria, re­mains a flash­point for Great-Power ri­valries, as it has been for cen­turies. The diplo­matic bal­ance is once again shift­ing, with France-Alle­magne op­por­tunis­ti­cally seek­ing sup­port from Bavaria’s his­toric pro­tec­tor Spain, Scan­d­i­navia eye­ing the Baltic ports of both sides, and Rus­sia seem­ingly dis­tracted by im­pe­rial con­cerns in Asia.

• An enor­mous dark­ness shrouds the South Amer­i­can con­ti­nent; where the an­cient Inca king­dom has ex­tended its rule, and its hu­man sac­ri­fices, from the Tierra del Fuego to the Rio Grande. Only a few Ama­zo­nian tribes, pro­tected by the jun­gle canopy, main­tain a pre­car­i­ous in­de­pen­dence; and the Jaguar Knights are ever in search of new con­quests to feed their gods. The oceans have pro­tected Europe, and dis­tance and desert North Amer­ica; but an age of steam ships and iron horses dawns, and the globe shrinks. Be­plumed cav­alry may yet ride in triumph through the streets of Lon­don, and ob­sidian knives flash atop the Great Pyra­mid.

Sev­eral na­tions are available to play:

• Sind, an im­por­tant re­gional power, oc­cu­py­ing roughly the area of Pak­istan, Afghanistan, and parts of Iran. Con­tend with In­dia for the rule of the sub­con­ti­nent!

• Najd, like­wise a sig­nifi­cant fac­tor in the power-bal­ance of both Asia and Europe, tak­ing up most of the Mid­dle East. Fight Rus­sia for Ana­to­lia, Greece for Africa, or ally with In­dia to par­ti­tion Sind!

• The Khanate, a land­locked power stretch­ing from the Urals to very nearly the Pa­cific—but not quite, cour­tesy of the Korean War. Re­v­erse the out­come and bring a new Man­date to rule China!

• Greece-in-ex­ile, least among the pow­ers that be­stride the Earth—that is, not count­ing the var­i­ous city-states, vas­sals, and half-in­de­pen­dent bor­der marches that some Great Pow­ers find it con­ve­nient to main­tain. Take on usurp­ing Italia Re­nata, bul­ly­ing Rus­sia and in­fidel Spain, and re­store the glory that was Rome!

Next ses­sion is this Sun­day; PM me for de­tails.

• Ad­di­tion­ally, play­ing in an MP cam­paign offers all sorts of op­por­tu­ni­ties for sharp­en­ing your writ­ing skills through sto­ries set in the al­ter­nate his­tory!

• If you play in this game, you get to play with not one, but two LWers! I am Spain, bea­con of learn­ing, cul­ture, and in­dus­try.

• Other than the al­ter­nate start, are there any mods?

• Yes, we have re­dis­tributed the RGOs for great bal­ance, and stripped out the na­tion-spe­cific de­ci­sions.

• BBC Ra­dio : Should we be fright­ened of in­tel­li­gent com­put­ers? http://​​www.bbc.co.uk/​​pro­grammes/​​p01rqkp4 In­cludes Nick Bostrom from about halfway through.

• I don’t think it has already been posted here on LW, but SMBC has a won­der­ful lit­tle strip about UFAI: http://​​www.smbc-comics.com/​​?id=3261#comic

• It’s a re­post from last week.

Though rereading it, does anyone know whether Zach knows about MIRI and/or LessWrong? I expect “unfriendly human-created Intelligence” to parse as “AI with bad manners” to people unfamiliar with MIRI’s work, which is probably not what the scientist is worried about.

• I expect “unfriendly human-created Intelligence” to parse as “AI with bad manners” to people unfamiliar with MIRI’s work

I expect “unfriendly human-created Intelligence” to parse as HAL and Skynet to regular people.

• The use of “friendly” to mean “non-dan­ger­ous” in the con­text of AI is, I be­lieve, rather idiosyn­cratic.

• All this talk of P-zom­bies. Is there even a hint of a mechanism that any­body can think of to de­tect if some­thing else is con­scious, or to mea­sure their de­gree of con­scious­ness as­sum­ing it ad­mits of de­gree?

I have spent my life figuring other humans are probably conscious purely on an Occam’s-razor kind of argument: I am conscious, and the most straightforward explanation for my similarities to and grouping with all these other people is that they are, in the relevant respects, just like me. But I have always thought that increasingly complex simulations of humans could be both “obviously” not conscious and mistaken by others as conscious. Is every human on the planet who reaches “voice mail jail” or an interactive voice-response system aware that they have not reached a consciousness? Do even those of us who are aware forget sometimes when we are not being careful? Is this going to become an even harder distinction to make as tech continues to get better?

I have been enjoying the television show “Almost Human.” In this show there are androids, most of which have been designed NOT to be too much like humans, although what they are really like is boring, rule-following humans. It is clear in this show that the value of an android “life” is a tiny fraction of the value of a “human” life: in the first episode, a human cop kills his android partner in order to get another one. The partner he does get is much more like a human, but is still considered the property of the police department for which he works, and nobody really has much of a problem with this. Ironically, this “almost human” android partner is African American.

• Is this go­ing to be­come even a harder dis­tinc­tion to make as tech con­tinues to get bet­ter?

Wei once de­scribed an in­ter­est­ing sce­nario in that vein. Imag­ine you have a bunch of hu­man up­loads, com­puter pro­grams that can truth­fully say “I’m con­scious”. Now you start op­ti­miz­ing them for space, com­press­ing them into smaller and smaller pro­grams that have the same out­puts. Then at some point they might start say­ing “I’m con­scious” for rea­sons other than be­ing con­scious. After all, you can have a very small pro­gram that out­puts the string “I’m con­scious” with­out be­ing con­scious.

So you might be able to turn a population of conscious creatures into a population of p-zombies or Elizas just by compressing them. It’s not clear where the cutoff happens, or even if it’s meaningful to talk about the cutoff happening at some point. And this is something that could happen in reality, if we ask a future AI to optimize the universe for more humans or something.
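The worry about a “very small program that outputs the string” can be made concrete with a toy sketch (purely illustrative; the function names are made up): two programs whose outputs are indistinguishable, even though only one computes anything before answering.

```javascript
// Two output-equivalent programs (illustrative only). From the outside,
// nothing distinguishes the "introspective" version from the constant one.
function introspectiveReport() {
  // Inspect some internal state before answering.
  const state = { awake: true, attending: true };
  const allChecksPass = Object.values(state).every(Boolean);
  return allChecksPass ? "I'm conscious" : "I'm not sure";
}

function compressedReport() {
  // The space-optimized replacement: a bare constant with the same output.
  return "I'm conscious";
}

console.log(introspectiveReport()); // I'm conscious
console.log(compressedReport());   // I'm conscious
```

An optimizer judging only by outputs has no reason to prefer the first version, which is exactly why output-preserving compression might not preserve whatever the first version was doing internally.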

Also this sce­nario re­opens the ques­tion of whether up­loads are con­scious in the first place! After all, the pro­cess of up­load­ing a hu­man mind to a com­puter can also be viewed as a com­pres­sion step, which can fold con­stant com­pu­ta­tions into literal con­stants, etc. The usual jus­tifi­ca­tion says that “it pre­serves be­hav­ior at ev­ery step, there­fore it pre­serves con­scious­ness”, but as the above ar­gu­ment shows, that jus­tifi­ca­tion is in­com­plete and could eas­ily be wrong.

• So you might be able to turn a population of conscious creatures into a population of p-zombies or Elizas just by compressing them.

Sup­pose you mean lossless com­pres­sion. The com­pressed pro­gram has ALL the same out­puts to the same in­puts as the origi­nal pro­gram.

Then if the un­com­pressed pro­gram run­ning had con­scious­ness and the com­pressed pro­gram run­ning did not, you have ei­ther proved or defined con­scious­ness as some­thing which is not an out­put. If it is pos­si­ble to do what you are sug­gest­ing then con­scious­ness has no effect on be­hav­ior, which is the pre­sump­tion one must make in or­der to con­clude that p-zom­bies are pos­si­ble.

From an evolutionary point of view, can a feature with no output, with absolutely zero effect on the creature’s interaction with its environment, ever evolve? There would be no mechanism for it to evolve; there is no basis on which to select for it. It seems to me that to believe in the possibility of p-zombies is to believe in the supernatural: a world of phenomena, such as consciousness, that for some reason is not allowed to be listed as a phenomenon of the natural world.

At the mo­ment, I can’t re­ally dis­t­in­guish how a be­lief that p-zom­bies are pos­si­ble is any differ­ent from a be­lief in the su­per­nat­u­ral.

Also this sce­nario re­opens the ques­tion of whether up­loads are con­scious in the first place!

Years ago I thought an in­ter­est­ing ex­per­i­ment to do in terms of ar­tifi­cial con­scious­ness would be to build an in­creas­ingly com­plex ver­bal simu­la­tion of a hu­man, to the point where you could have con­ver­sa­tions in­volv­ing re­flec­tion with the simu­la­tion. At that point you could ask it if it was con­scious and see what it had to say. Would it say “not so far as I can tell?”

The p-zombie assumption is that it would say “yeah I’m conscious, duhh, what kind of question is that?” But the way a simulation actually gets built is that you have the list of requirements and you keep accreting code until all the requirements are met. If your requirements included a vast array of features but NOT the feature that it answer this question one way or another, conceivably you could elicit an “honest” answer from your sim. If all such sims answer “yes,” you might conclude that somehow, in the collection of features you HAD required, consciousness emerged, and you could do other experiments where you removed features from the sim and kept statistics on how those sims answered that question. You might see the sim saying “no, don’t think so,” and conclude that whatever it is in us that makes us function as conscious, we hadn’t yet found that thing and put it in our list of requirements.

• Then if the un­com­pressed pro­gram run­ning had con­scious­ness and the com­pressed pro­gram run­ning did not, you have ei­ther proved or defined con­scious­ness as some­thing which is not an out­put. If it is pos­si­ble to do what you are sug­gest­ing then con­scious­ness has no effect on be­hav­ior, which is the pre­sump­tion one must make in or­der to con­clude that p-zom­bies are pos­si­ble.

I haven’t thought about this stuff for a while and my mem­ory is a bit hazy in re­la­tion to it so I could be get­ting things wrong here but this com­ment doesn’t seem right to me.

First, my p-zombie is not just a duplicate of me in terms of my input-output profile. Rather, it’s a perfect physical duplicate of me. So one can deny the possibility of zombies while still holding that a computer with the same input-output profile as me is not conscious. For example, one could hold that only carbon-based life can be conscious, thereby denying the possibility of zombies (denying that a physical duplicate of a conscious carbon-based lifeform could lack consciousness), while still denying that an identical input-output profile implies consciousness.

Second, if it could be shown that the same input-output profile could exist even if consciousness were removed, this doesn’t show that consciousness can’t play a causal role in guiding behaviour. Rather, it shows that the same input-output profile can exist without consciousness. That doesn’t mean that consciousness can’t cause that input-output profile in one system while something else causes it in another system.

Third, it seems that one can deny the pos­si­bil­ity of zom­bies while ac­cept­ing that con­scious­ness has no causal im­pact on be­havi­our (con­tra the last sen­tence of the quoted frag­ment): one could hold that the be­havi­our causes the con­scious ex­pe­rience (or that the thing which causes the be­havi­our also causes the con­scious ex­pe­rience). One could then deny that some­thing could be phys­i­cally iden­ti­cal to me but lack con­scious­ness (that is, deny the pos­si­bil­ity of zom­bies) while still ac­cept­ing that con­scious­ness lacks causal in­fluence on be­havi­our.

Am I con­fused here or do the three points above seem to hold?

• Am I con­fused here or do the three points above seem to hold?

I think for­mally you are right.

But if consciousness is essential to how we get important aspects of our input-output map, then I think the chances of there being another mechanism that works to get the same input-output map are equal to the chances that you could program a car to drive from here to Los Angeles without using any feedback mechanisms, by just dialing in all the stops and starts and turns and so on that it would need ahead of time. Formally possible, but bearing absolutely no relationship to how anything that works has ever been built.

I am not a math­e­mat­i­cian about these things, I am an en­g­ineer or a physi­cist in the sense of Feyn­man.

• A few points:

1) Ini­tial mind up­load­ing will prob­a­bly be lossy, be­cause it needs to con­vert ana­log to digi­tal.

2) I don’t know if even lossless com­pres­sion of the whole in­put-out­put map is go­ing to pre­serve ev­ery­thing. Let’s say you have ten sec­onds left to live. Your in­put-out­put map over these ten sec­onds prob­a­bly doesn’t con­tain many in­ter­est­ing state­ments about con­scious­ness, but that doesn’t mean you’re al­lowed to com­press away con­scious­ness. And even on longer timescales, peo­ple don’t seem to be very good at in­tro­spect­ing about con­scious­ness, so all your be­liefs about con­scious­ness might be com­press­ible into a small in­put-out­put map. Or at least we can’t say that in­put-out­put map is large, un­less we figure out more about con­scious­ness in the first place!

3) Even if con­scious­ness plays a large causal role, I agree with crazy88′s point that con­scious­ness might not be the small­est pos­si­ble pro­gram that can fill that role.

4) I’m not sure that con­scious­ness is just about the in­put-out­put map. Doesn’t it feel more like in­ter­nal pro­cess­ing? I seem to have con­scious­ness even when I’m not talk­ing about it, and I would still have it even if my re­li­gion pro­hibited me from talk­ing about it. Or if I was mute.

• I don’t know if even lossless com­pres­sion of the whole in­put-out­put map is go­ing to pre­serve ev­ery­thing. Let’s say you have ten sec­onds left to live. Your in­put-out­put map over these ten sec­onds prob­a­bly doesn’t con­tain many in­ter­est­ing state­ments about con­scious­ness, but that doesn’t mean you’re al­lowed to com­press away con­scious­ness.

It is not your ac­tual in­put-out­put map that mat­ters, but your po­ten­tial. What is up­loaded must be in­for­ma­tion about the func­tional or­ga­ni­za­tion of you, not some ab­stracted map­ping func­tion. If I have 10 s left to live and I am up­loaded, my up­load should type this com­ment in re­sponse to your com­ment above even if it is well more than 10 s since I was up­loaded.

And even on longer timescales, peo­ple don’t seem to be very good at in­tro­spect­ing about con­scious­ness, so all your be­liefs about con­scious­ness might be com­press­ible into a small in­put-out­put map.

If with years of in­tense and ex­pert school­ing I could say more about con­scious­ness, then that is part of my in­put-out­put map. My up­load would need to have the same prop­erty.

Even if con­scious­ness plays a large causal role, I agree with crazy88′s point that con­scious­ness might not be the small­est pos­si­ble pro­gram that can fill that role.

Might not be, but prob­a­bly is. Biolog­i­cal func­tion seems to be very effi­cient, with most bio fea­tures not equalled in effi­ciency by hu­man man­u­fac­tured sys­tems even now. The chances that evolu­tion would have cre­ated con­scious­ness if it didn’t need to seem slim to me. So as an en­g­ineer try­ing to plan an at­tack on the prob­lem, I’d ex­pect con­scious­ness to show up in any suc­cess­ful up­load. If it did not, that would be a very in­ter­est­ing re­sult. But of course, we need a way to mea­sure con­scious­ness to tell whether it is there in the up­load or not.

To the best of my knowl­edge, no one any­where has ever said how you go about dis­t­in­guish­ing be­tween a con­scious be­ing and a p-zom­bie.

I’m not sure that consciousness is just about the input-output map. Doesn’t it feel more like internal processing? I seem to have consciousness even when I’m not talking about it, and I would still have it even if my religion prohibited me from talking about it. Or if I was mute.

I mean your in­put-out­put map writ broadly. But again, since you don’t even know how to dis­t­in­guish a con­scious me from a p-zom­bie me, we are not in a po­si­tion yet to worry about the in­put-out­put map and com­pres­sion, in my opinion.

If a simu­la­tion of me can be com­plete, able to at­tend grad­u­ate school and get 13 patents do­ing re­search af­ter­wards, able to carry on an ob­ses­sive re­la­tion­ship with a mar­ried woman for a decade, able to en­joy a con­vert­ible he has owned for 8 years, able to post on less­wrong posts much like this one, then I would be shocked if it wasn’t con­scious. But I would never know whether it was con­scious, nor for that mat­ter will I ever know whether you are con­scious, un­til some­body figures out how to tell the differ­ence be­tween a p-zom­bie and a con­scious per­son.

• Biolog­i­cal func­tion seems to be very efficient

Even if that’s true, are you sure that AI will be op­ti­miz­ing us for the same mix of speed/​size that evolu­tion was op­ti­miz­ing for? If the weight­ing of speed vs size is differ­ent, the re­sult of op­ti­miza­tion might be differ­ent as well.

Can you ex­pand what you mean by “writ broadly”? If we know that speech is not enough be­cause the per­son might be mute, how do you con­vince your­self that a cer­tain set of in­puts and out­puts is enough?

That said, if you also think that up­load­ing and fur­ther op­ti­miza­tion might ac­ci­den­tally throw away con­scious­ness, then I guess we’re in agree­ment.

• Biolog­i­cal func­tion seems to be very efficient

Even if that’s true, are you sure that AI will be op­ti­miz­ing us for the same mix of speed/​size that evolu­tion was op­ti­miz­ing for? If the weight­ing of speed vs size is differ­ent, the re­sult of op­ti­miza­tion might be differ­ent as well.

I was thinking of uploads in the Hansonian sense, as a shortcut to “building” AI. Instead of understanding AI/consciousness from the ground up and designing an AI de novo, we simply copy an actual person. Copying the person, if successful, produces a computer-run person which seems to do the things the person would have done under similar conditions.

The person is much simpler than the potential input-output map. The human system has memory, so a semi-complete input-output map could not be generated unless you started with a myriad of fresh copies of the person and ran them through all sorts of conceivable lifetimes.

You seem to be pre­sum­ing the up­load would con­sist of tak­ing the in­put-out­put map and, like a smart com­piler, try­ing to in­vent the least amount of code that would pro­duce that, or in an­other metaphor, try to op­ti­mally com­press that in­put-out­put map. I don’t think this is at all how an up­load would work.

Consider duplicating or uploading a car. Would you drive the car back and forth over every road in the world under every conceivable traffic and weather condition, and then take that very large input-output map and try to compress and upload that? Or would you take each part of the car and upload it, along with its relationship, when assembled, to each other part in the car? You would do the second; there are too many possible inputs to imagine the input-output approach could be even vaguely as efficient.

So I am think­ing of Han­so­nian up­loads for Han­so­nian rea­sons, and so it is fair to in­sist we do some­thing which is more effi­cient, up­load a copy of the ma­chine rather than a com­pressed in­put-out­put map, es­pe­cially if the ra­tio of effi­ciency is > 10^100:1.

Can you ex­pand what you mean by “writ broadly”? If we know that speech is not enough be­cause the per­son might be mute, how do you con­vince your­self that a cer­tain set of in­puts and out­puts is enough?

I think I have explained that above. To characterize the machine by its input-output map, you need to consider every possible input. In the case of a person with memory, that means every possible lifetime: the input-output map is gigantic, much bigger than the machine itself, which is the brain/body.

That said, if you also think that up­load­ing and fur­ther op­ti­miza­tion might ac­ci­den­tally throw away con­scious­ness, then I guess we’re in agree­ment.

What I think is that we don’t know whether or not con­scious­ness has been thrown away be­cause we don’t even have a method for de­ter­min­ing whether the origi­nal is con­scious or not. To the ex­tent you be­lieve I am con­scious, why is it? Un­til you can an­swer that, un­til you can build a con­scious­ness-me­ter, how do we even check an up­load for con­scious­ness? What we could check it for is whether it SEEMS to act like the per­son up­loaded, our sort of fuzzy opinion.

What I would say is: IF a consciousness-meter is even possible, and I think it is but I don’t know, then any optimization that accidentally threw away consciousness would have changed other behaviors as well, and would be a measurably inferior simulation compared to a conscious simulation.

If on the other hand there is NO measure of consciousness that could be developed into a consciousness-meter (or consciousness-evaluating program, if you prefer), then consciousness is supernatural, which for all intents and purposes means it is make-believe. Literally, you make yourself believe something for reasons which by definition have nothing to do with anything that happened in the real, natural, measurable world.

Do we agree on any of these last two para­graphs?

• You seem to be pre­sum­ing the up­load would con­sist of tak­ing the in­put-out­put map and, like a smart com­piler, try­ing to in­vent the least amount of code that would pro­duce that, or in an­other metaphor, try to op­ti­mally com­press that in­put-out­put map. I don’t think this is at all how an up­load would work.

Well, pre­sum­ably you don’t want an atom-by-atom simu­la­tion. You want to at least com­press each neu­ron to an ap­prox­i­mate in­put-out­put map for that neu­ron, ob­served from prac­tice, and then use that. Also you might want to take some im­ple­men­ta­tion short­cuts to make the thing run faster. You seem to think that all these changes are ob­vi­ously harm­less. I also lean to­ward that, but not as strongly as you, be­cause I don’t know where to draw the line be­tween harm­less and harm­ful op­ti­miza­tions.

• Sup­pose you mean lossless compression

Right; with lossless compression you’re not going to lose anything. So cousin_it probably means lossy compression, like with jpgs and mp3s: smaller versions that are very similar to what you had before.

• Well, ini­tial mind up­load­ing is go­ing to be lossy be­cause it will con­vert ana­log to digi­tal.

That said, I don’t know if even lossless com­pres­sion of the whole in­put-out­put map is go­ing to pre­serve ev­ery­thing. Let’s say you have ten sec­onds left to live. Your in­put-out­put map over these ten sec­onds prob­a­bly doesn’t con­tain many in­ter­est­ing state­ments about con­scious­ness, but that doesn’t mean you’re al­lowed to com­press away con­scious­ness...

And even on longer timescales, peo­ple don’t seem to be very good at in­tro­spect­ing about con­scious­ness, so all your be­liefs about con­scious­ness might be com­press­ible into a small in­put-out­put map. Or at least we can’t say that in­put-out­put map is large, un­less we figure out more about con­scious­ness in the first place.

(Also I agree with crazy88′s point that con­scious­ness might play a large causal role but still be com­press­ible to a smaller non-con­scious pro­gram.)

More gen­er­ally, I’m not sure that con­scious­ness is just about the in­put-out­put map. Doesn’t it feel more like in­ter­nal pro­cess­ing? I seem to have con­scious­ness even when I’m not talk­ing about it, and I would still have it even if my re­li­gion pro­hibited me from talk­ing about it, or some­thing.

• It depends on whether you subscribe to materialism. If you do, then there is nothing to measure. Consciousness might even be a tricky illusion, as Dennett suggests.

If, on the other hand, you believe that there is something beyond materialism, there are plenty of frameworks to choose from that provide ideas about what one could measure.

• If, on the other hand, you believe that there is something beyond materialism, there are plenty of frameworks to choose from that provide ideas about what one could measure.

OMG then some­one should get busy! Tell me what I can mea­sure and if it makes any kind of sense I will start work­ing on it!

• I do have a qualia for perceiving whether someone else is present in a meditation or is absent-minded. It could be that some mental reaction picks up microgestures or some other thing that I don’t consciously perceive, and summarizes that information into a qualia for mental presence.

In­ves­ti­gat­ing how such a qualia works is what I would do per­son­ally when I would want to in­ves­ti­gate con­scious­ness.

But you probably have no such qualia, so you either need someone who has one, or you need to develop it yourself. In both cases that probably means seeking a good meditation teacher.

It’s a difficult subject to talk about in a medium like this, where people who are into a spiritual framework that has some model of what consciousness happens to be have phenomenological primitives that the audience I’m addressing doesn’t have. In my experience, most of the people who I consider capable in that regard are very unwilling to talk about details with people who lack the phenomenological primitives to make sense of them. Instead of answering a question directly, a Zen teacher might give you a koan and tell you to come back in a month, when you’ve built the phenomenological primitives to understand it, except that he doesn’t tell you about phenomenological primitives.

• I don’t know of a hu­man-in­de­pen­dent defi­ni­tion of con­scious­ness, do you? If not, how can one say that “some­thing else is con­scious”? So the statement

in­creas­ingly com­plex simu­la­tions of hu­mans could be both “ob­vi­ously” not con­scious but be mis­taken by oth­ers as conscious

will only make sense once there is a defi­ni­tion of con­scious­ness not rely­ing on be­ing a hu­man or us­ing one to eval­u­ate it. (I have a cou­ple ideas about that, but they are not firm enough to ex­pli­cate here.)

• I don’t know of a hu­man-in­de­pen­dent defi­ni­tion of con­scious­ness, do you? If not, how can one say that “some­thing else is con­scious”? So the statement

I don’t know of ANY defi­ni­tion of con­scious­ness which is testable, hu­man-in­de­pen­dent or not.

• I don’t know of a hu­man-in­de­pen­dent defi­ni­tion of con­scious­ness, do you?

In­te­grated In­for­ma­tion The­ory is one at­tempt at a defi­ni­tion. I read about it a lit­tle, but not enough to de­ter­mine if it is com­pletely crazy.

• IIT provides a mathematical approach to measuring consciousness. It is not crazy, and has a significant number of good papers on the topic. It is human-independent.

• I don’t understand it, but from reading the Wikipedia summary it seems to me it measures the complexity of the system. Complexity is not necessarily consciousness.

Ac­cord­ing to this the­ory, what is the key differ­ence be­tween a hu­man brain, and… let’s say a hard disk of the same ca­pac­ity, con­nected to a high-re­s­olu­tion cam­era? Let’s as­sume that the data from the cam­era are be­ing writ­ten in real time to pseudo-ran­dom parts of the hard disk. The pseudo-ran­dom parts are cho­sen by calcu­lat­ing a check­sum of the whole hard disk. This sys­tem ob­vi­ously is not con­scious, but seems com­plex enough.
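The thought experiment above can be sketched in a few lines (a toy model; the disk size and checksum are invented), just to show that the system’s dynamics really are tangled and history-dependent even though nothing in it knows what it is sensing:

```javascript
// Toy model: camera frames go to pseudo-random disk locations chosen
// by a checksum of the entire current disk contents.
const DISK_SIZE = 64;
const disk = new Array(DISK_SIZE).fill(0);

// Simple rolling checksum standing in for a real hash function.
function checksum(d) {
  return d.reduce((acc, value) => (acc * 31 + value) % 1000003, 7);
}

function writeFrame(frame) {
  // Where a frame lands depends on everything written so far...
  const position = checksum(disk) % DISK_SIZE;
  disk[position] = frame;
  return position;
}

// Feed in a few "camera frames": the state evolves in a complex,
// history-dependent way, yet nothing here uses what it is sensing.
[17, 42, 99].forEach(writeFrame);
```

Whatever IIT says about this system, the point stands that raw complexity of state evolution is cheap to produce.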

• IIT pro­poses that con­scious­ness is in­te­grated in­for­ma­tion.

The key difference between a brain and the hard disk is that the disk has no way of knowing what it is actually sensing. The brain can tell the difference between many more senses, and can receive and use more forms of information. The camera is not conscious of the fact that it is sensing light and colour.

This article is a good introduction to the topic, and the photodiode example in the paper is the simple version of your question: http://www.biolbull.org/content/215/3/216.full

• Thanks! The ar­ti­cle was good. At this mo­ment, I am… not con­vinced, but also not able to find an ob­vi­ous er­ror.

• I am go­ing to or­ga­nize a coach­ing course to learn Javascript + Node.js.

My par­tic­u­lar tech­nol­ogy of choice is node.js be­cause:

• If starting from scratch, having to learn just one language for both frontend and backend makes sense. JavaScript is the only language you can use in a browser, so you will have to learn it anyway. They say it’s a kind of Lisp or Scheme in disguise, and a pretty cool language in its own right.

• Node.js is a modern asynchronous web framework, made by running JavaScript code server-side on Google’s open-source V8 JavaScript engine. It seems well suited for building heavily loaded backend servers, and it works for regular websites, too.

• Hack Reactor teaches it, and 98% of its graduates go on to earn $110k/year on average after 3 months of study. But their tuition is $17,780. We will do it much more cheaply.
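To give a flavor of the asynchronous style mentioned above, here is a minimal sketch (the handler names are made up; this is not a real web app). Node interleaves work instead of blocking on it, which is why one process can serve many connections:

```javascript
// Minimal sketch of Node's event-driven style: register callbacks for
// slow work and keep going, instead of blocking while you wait.
let completed = 0;

function handleRequest(id, done) {
  // setImmediate stands in for slow I/O (a database call, a file read...).
  setImmediate(() => done(`response for request ${id}`));
}

for (let i = 1; i <= 3; i++) {
  handleRequest(i, (response) => {
    completed++;
    console.log(response);
  });
}

// All three handlers are registered before any of them completes:
console.log(`completed so far: ${completed}`); // completed so far: 0
```

All three “requests” are accepted before the first response is produced; a blocking design would handle them strictly one at a time.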

I wanted to learn mod­ern web tech­nolo­gies for a while, but haven’t got­ten my­self to ac­tu­ally do it. When I tried to start learn­ing, I was over­whelmed by the num­ber of things I still have to learn to get any­thing done. Here’s the bare min­i­mum:

• html

• css

• javascript

• node.js

• git

I be­lieve the op­ti­mum course of ac­tion is to hire a guru to do coach­ing for me and sev­eral other stu­dents and split the cost. The benefits com­pared to learn­ing by your­self are:

• per­sonal com­mu­ni­ca­tion (via Skype or similar) and do­ing tasks along with the oth­ers pro­vides an ad­di­tional drive to com­plete your studies

• the guru can choose an optimal path for me to reach the desired capabilities in the shortest time.

The ca­pa­bil­ities that I want to achieve are:

i. To be able to add functionality to my Tumblr blog (where I run a writing prompt) by either using a custom theme + the Tumblr API, or extracting posts via the API and using them to render my blog on a separate website. Node.js is definitely not needed here; rather, this is the simplest case of doing something useful that I need web technologies for, and Node.js is my web technology of choice.

ii. To hack on Un­dum, a client-side hy­per­text in­ter­ac­tive fic­tion frame­work. My thoughts on why I think Un­dum and IF are cool are here.

• To port fea­tures from one ver­sion of Un­dum to an­other and cre­ate a ver­sion of Un­dum that is able to run all ex­ist­ing games (about 5 of them)

• To ab­stract away Un­dum’s in­ter­nal game rep­re­sen­ta­tion and state so that they can be loaded and saved ex­ter­nally, over a network

• To create a server part for Undum that controls the version of the book you’re allowed to read (allows you to read one new chapter a day, remembers the branch you’re reading, remembers whether you’ve read to the end, etc.)

• To cre­ate a web­site that works as a YouTube and an ed­i­tor for Un­dum games

iii. To cre­ate new ex­per­i­ments that uti­lize mod­ern web tech­nolo­gies to in­ter­est­ing and novel effect. I know that this sounds re­ally vague, but the point is that some­times you never know what can be done un­til you learn the rele­vant skills. One ex­am­ple of the kind of thing that I think about is what this pa­per is talk­ing about:

Friend’s ad­vice: Skype Premium + Drop­box + Piratepad + Slide­share + Doo­dle should be enough. What do you think?

Want to join? Ques­tions? Sugges­tions for bet­ter video­con­ferenc­ing soft­ware than Skype?

• I would suggest using AngularJS instead, since it can be purely client-side code; you don’t need to deal with anything server-side.

There are also some nice online development environments like Codenvy that can provide a pretty rich environment, and I believe they have some collaborative features too (instead of using Dropbox, Doodle, and Slideshare, maybe).

If all those tech­nolo­gies seem in­timi­dat­ing, some strate­gies:

• Focus on a subset, e.g. only HTML and CSS

• Use Anki a lot. I’ve used Anki to memorize git commands, AngularJS concepts, and CSS tricks, so that even if I wasn’t actively working on a project using them, they’d stay at the back of my mind.

• EDIT: This par­tic­u­lar site does mar­gin trad­ing differ­ently to how I thought mar­gin trad­ing nor­mally works. So… dis­re­gard ev­ery­thing I just said?

Bit­coin econ­omy and a pos­si­ble vi­o­la­tion of the effi­cent mar­ket hy­poso­sis. With the grow­ing ma­tu­rity of the Bit­coin ecosys­tem, there has ap­peared a web­site which al­lows lev­er­aged trad­ing, mean­ing that peo­ple who think they know which way the price is go­ing can bor­row money to in­crease their prof­its. At the time of writ­ing, the bid-ask spread for the rates offered is 0.27% − 0.17% per day, which is 166% − 86% per an­num. De­pos­i­tors are not ac­tu­ally trad­ing them­selves, so the only way failure modes I can see is if the ex­change takes the money and runs, if there is a catas­trophic failure of the trad­ing en­g­ine, or if they get hacked. I Gw­ern es­ti­mates that a Bit­coin ex­change has a 1% chance of failure per month based upon past perfor­mance, but that was writ­ten some time ago, and the in­creased le­gal recog­ni­tion of Bit­coin plus peo­ple learn­ing from mis­takes should de­crease this prob­a­bil­ity. On the other hand the biggest ex­change MtGox froze with­drawals a few days ago, but note that they claim that this is a tem­po­rary tech­ni­cal fault. As ad­di­tional in­for­ma­tion, Bit­finex’s web­site states “The com­pany is in­cor­po­rated in Hong Kong as a Limited Li­a­bil­ity Cor­po­ra­tion.”, which would seem to de­crease the like­li­hood of the com­pany steal­ing the money. In con­clu­sion, even as­sum­ing a pes­simistic 1% chance of failure per month I reach a con­ser­va­tive es­ti­mate of 65% APR ex­pected re­turns (as­sum­ing that the in­ter­est is con­stant at the lower 0.17% figure) . So why aren’t peo­ple flock­ing to the web­site, start­ing a bid­ding war to drive the in­ter­est rate down to a tenth of its cur­rent value? Un­less there is some­thing wrong with my pre­vi­ous calcu­la­tions, the best ex­pla­na­tion I can think of is that it sim­ply has not gen­er­ated enough pub­lic­ity. 
Per­haps also ev­ery­one in the Bit­coin com­mu­nity is as­sum­ing the price is go­ing to in­crease by 10000%, or they are look­ing for the next big alt­coin, or they are day­trad­ing, but ei­ther way a bor­ing but safe op­tion doesn’t seem so in­ter­est­ing. In con­clu­sion, this seems to be an ex­am­ple where the effi­cent mar­ket hy­potho­sis does not hold, due to in­suffi­cent prop­a­ga­tion of in­for­ma­tion.
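The arithmetic behind those figures can be checked directly (rates as quoted above; a sketch of the estimate, not financial advice):

```javascript
// Reproducing the comment's estimate: 0.17%/day compounded, minus a
// pessimistic 1%/month chance that the exchange fails entirely.
const dailyRate = 0.0017;                          // lower bid-ask figure
const grossAnnual = Math.pow(1 + dailyRate, 365);  // ≈ 1.86, i.e. ~86% APR
const survival = Math.pow(0.99, 12);               // ≈ 0.886 chance it lasts a year
const expectedAPR = grossAnnual * survival - 1;    // ≈ 0.65, i.e. ~65%

console.log(((grossAnnual - 1) * 100).toFixed(0) + '% APR before failure risk');
console.log((expectedAPR * 100).toFixed(0) + '% expected APR');
```

Note that the 86% and 166% annual figures come from daily compounding, not simple multiplication by 365.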

Disclaimers: I don’t have shares in Bitfinex, and I hope this doesn’t look like spam. This is a theoretical discussion of the EMH, not financial advice, and if you lose your money I am not responsible. I’m not sure whether this deserves its own post outside of Discussion – please let me know.

• the only failure modes I can see are if the exchange takes the money and runs, if there is a catastrophic failure of the trading engine, or if they get hacked.

The exchange can just fail in a large variety of ways and close (go bankrupt). If you’re not “insured”, you are exposed to the trading risk, and insurance costs what, about 30%? And, of course, it doesn’t help you with the exchange counterparty risk.

• 30% per annum? Even if this were true (and it sounds quite high, as I mentioned with Gwern’s 1% per month estimate), then providing liquidity with them would still be +EV (86% increase vs 30% risk).

• Um, did you make your post with­out ac­tu­ally read­ing the Bit­finex site about how it works..?

• Upvoted for pointing out my stupid mistake (I assumed it works in a certain way, and skipped reading the vital bit).

• Ahh, oops. I think I missed the last line… I thought if some­one ex­ceeded their mar­gin, they were forced to close their po­si­tion so that no money was lost.

• Depositors are not actually trading themselves, so the only failure modes I can see are if the exchange takes the money and runs, if there is a catastrophic failure of the trading engine, or if they get hacked.

There is risk baked in from the fact that depositors are on the hook if trades cannot be unwound quickly enough, and because this is Bitcoin, where volatility is crazy, there is even more of this risk.

For example, assume you lend money for some trader to go long, and now say that suddenly prices drop so quickly that it puts the trader beyond a margin call; in fact it puts him at liquidation. Uh oh... the trader’s margin wallet is now depleted. Who makes up the balance? The lenders. They actually do mention this on their website. But they don’t tell you what the margin call policy is. This is a really important part of the risk. If they allow a trader to put up only \$50 of a \$100 position and call him in when his portion hits 25%, that would be normal for something like index equities but pretty insane for something like Bitcoin.

• How does solipsism change one’s pattern of behavior, compared to believing that other beings are alive? I noticed that when you take enlightened self-interest into account, it seems that many behaviors don’t change regardless of whether the people around you are sentient or not.

For ex­am­ple, if you steal from your neigh­bor, you can ob­serve that you run the risk of him catch­ing you, and thus you hav­ing to deal with con­se­quences that will be painful or un­pleas­ant. Similarly, as­sum­ing you’re a healthy per­son, you have a con­science that makes you feel bad about cer­tain things, even when you get away with them.

Do you think your con­science would cease to bother you if you could know for a fact that there were no other liv­ing crea­tures feel­ing pain around you? In what other cases does a true solip­sis­tic world make your be­hav­ior dis­tinct from a non-solip­sis­tic one?

• I’m certainly comfortable with violent fantasy when the roles are acted out. This suggests to me that if I were convinced that certain person-seeming things were not alive, not conscious, not what they seemed, this might tip me into some violent behaviors. I think at minimum I would experiment with it, try a slap here, a punch there. And where I went from there would depend on how it felt, I suppose.

Also I would al­most cer­tainly steal more stuff if I was con­vinced that ev­ery­thing was land­scape.

• In fantasies you’re in total control. The same applies to video games, for example. The risk of severe retaliation isn’t real.

• Well, the ob­vi­ous differ­ence would be that non-solip­sists might care about what hap­pens af­ter they die, and act ac­cord­ingly.

• I noticed that when you take enlightened self-interest into account, it seems that many behaviors don’t change regardless of whether the people around you are sentient or not.

When I was younger and studying analytical philosophy, I noticed the same thing. Unless solipsism morphs into apathy, there are still ‘representations’ you can’t control and that you can care about. Unless it alters your values, there should be no difference in behaviour either.

• If I didn’t care about other peo­ple, I wouldn’t worry about donat­ing to char­i­ties that ac­tu­ally help peo­ple. I’d donate a lit­tle to char­i­ties that make me look good, and if I’m feel­ing guilty and dis­tract­ing my­self doesn’t seem to be cost-effec­tive, I’d donate to char­i­ties that make me feel good. I would still keep quite a bit of my money for my­self, or at least work less.

As it is, I’ve figured that other peo­ple mat­ter, and some of them are a lot cheaper to make happy than me, so I de­cided that I’m go­ing to donate pretty much ev­ery­thing I can to the best char­ity I can find.

• If there were no other be­ings that could con­sciously suffer, I would prob­a­bly adopt a moral­ity that would be ut­terly hor­rible in the real world. Video games might hint at how solip­sism would make you be­have.

• Has any­one else had one of those odd mo­ments when you’ve ac­ci­den­tally con­firmed re­duc­tion­ism (of a sort) by un­know­ingly re­spond­ing to a situ­a­tion al­most iden­ti­cally to the last time or times you en­coun­tered it? For my part, I once gave the same con­dolences to an ac­quain­tance who was liv­ing with some­one we both knew to be very un­pleas­ant, and also just at­tempted to add the word for “tomato” in Lo­jban to my list of words af­ter see­ing the Po­modoro tech­nique men­tioned.

• A freaky thing I once saw… when my daughter was about 3 there were certain things she responded to verbally. I can’t remember exactly what it was in this example, but something like me asking her “who is your rabbit?” and her replying “Kisses” (which was the name of her rabbit).

I had videoed some of this exchange and was playing it on a TV with her in the room. I was appalled to hear her responding “Kisses” upon hearing me on the TV saying “who is your favorite rabbit.” Her response was extremely similar to her response on the video, with tremendous overlap in timing, tone, and inflection. Maybe 20 to 50 ms off in timing (it almost sounded like unison).

I re­ally had the sense that she was a ma­chine and it did not feel good.

• After a brain surgery, my father developed anterograde amnesia. Think Memento by Christopher Nolan. His reactions to different comments/situations were always identical. If I were to mention a certain word, it would always invoke the same joke. Seeing his wife wearing a certain dress always produced the same witty comment. He was also equally amused by his wittiness every time.

For several months after the surgery he had to be kept on tight watch, as he was prone to just doing something that was routine pre-op, so we found a joke he found extremely funny and which he hadn’t heard before the surgery, and we would tell it every time we wanted him to forget where he was going. He would laugh for a good while, get completely disoriented, and go back to his sofa.

For a long while, we were unable to convince him that he had a problem, or even that he had had the surgery (he would explain the scar away through some fantasy). And even when we managed, it lasted only a minute or two. Since then, I’ve developed several signals I would use if I found myself in an isomorphic situation. I had already read HPMoR by that time, but had discarded Harry’s lip-biting as mostly pointless in real life.

• Th­ese are both pretty much ex­actly what I’m think­ing of! The feel­ing that some­one (or you!) is/​are a ter­rify­ingly pre­dictable black box.

• My goal in life is to be­come some­one so pre­dictable that you can figure out what I’ll do just by calcu­lat­ing what choice would max­i­mize util­ity.

• That seems em­i­nently ex­ploitable and con­se­quently ex­tremely dan­ger­ous. Safety and un­ex­pected delight lie in un­pre­dictabil­ity.

• This doesn’t seem re­lated to re­duc­tion­ism to me, ex­cept in that most re­duc­tion­ists don’t be­lieve in Knigh­tian free will.

• Sort of, in the sense of human minds being more like fixed black boxes than one might like to think. What’s Knightian free will, though?

• Knigh­tian un­cer­tainty is un­cer­tainty where prob­a­bil­ities can’t even be ap­plied. I’m not con­vinced it ex­ists. Some peo­ple seem to think free will is res­cued by it; that the hu­man mind could be un­pre­dictable even in the­ory, and this some­how means it’s “you” “mak­ing choices”. This seems like deep con­fu­sion to me, and so I’m prob­a­bly not ex­press­ing their po­si­tion cor­rectly.

Re­duc­tion­ism could be con­sis­tent with that, though, if you ex­plained the mind’s work­ings in terms of the sim­plest Knigh­tian atomic thin­gies you could.

• Can you give me some ex­am­ples of what some peo­ple think con­sti­tutes Knigh­tian un­cer­tainty? Also: what do they mean by “you”? They seem to be pos­tu­lat­ing some­thing su­per­nat­u­ral.

• Again, I’m not a good choice for an explainer of this stuff, but you could try http://www.scottaaronson.com/blog/?p=1438

• Thanks! I’ll have a read through this.

• I de­cided I should ac­tu­ally read the pa­per my­self, and… as of page 7, it sure looks like I was mis­rep­re­sent­ing Aaron­son’s po­si­tion, at least. (I had only skimmed a cou­ple Less Wrong threads on his pa­per.)

• In my case, it seems more likely that the other per­son will re­mem­ber that I’d said the same thing be­fore.

• In mine, too, at least for the first few sec­onds. Other­wise, know­ing I had already re­sponded a cer­tain way, I would prob­a­bly re­spond differ­ently.

• I participated in an economics experiment a few days ago, and one of the tasks was as follows. Choose one of the following gambles, where each outcome has 50% probability:

Option 1: \$4 definitely
Option 2: \$6 or \$3
Option 3: \$8 or \$2
Option 4: \$10 or \$1
Option 5: \$12 or \$0

I chose option 5, as it has the highest expected value. Asymptotically this is the best option, but for a single trial, is it still the best option?
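A quick sketch of the expected values, plus one common way of modeling risk aversion (log utility over total wealth; the \$100 wealth baseline here is an arbitrary assumption of mine, not part of the experiment):

```python
import math

# Expected dollar value of each 50/50 gamble above, plus an illustrative
# risk-averse comparison using log utility over total wealth.

gambles = {1: (4, 4), 2: (6, 3), 3: (8, 2), 4: (10, 1), 5: (12, 0)}
wealth = 100  # assumed background wealth; log utility needs a positive floor

ev = {opt: 0.5 * hi + 0.5 * lo for opt, (hi, lo) in gambles.items()}
log_eu = {opt: 0.5 * math.log(wealth + hi) + 0.5 * math.log(wealth + lo)
          for opt, (hi, lo) in gambles.items()}

print(max(ev, key=ev.get))          # -> 5 (highest EV: $6.00)
print(max(log_eu, key=log_eu.get))  # -> 5 (even a log-utility agent agrees)
```

At stakes this small relative to background wealth, even the concave utility picks option 5, which is why the replies below say the function is "close to linear" here.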

• Tech­ni­cally, it de­pends on your util­ity func­tion. How­ever, even with­out know­ing your util­ity func­tion, I can say that for such a low amount of money, your util­ity func­tion is very close to lin­ear, and op­tion 5 is the best.

• Here’s one in­ter­est­ing way of view­ing it that I once read:

Sup­pose that the op­tion you chose, rather than be­ing a sin­gle trial, were ac­tu­ally 1,000 tri­als. Then, risk averse or not, Op­tion 5 is clearly the best ap­proach. The only difficulty, then, is that we’re con­sid­er­ing a sin­gle trial in iso­la­tion. How­ever, when you con­sider all such risks you might en­counter in a long pe­riod of time (e.g. your life), then the situ­a­tion be­comes much closer to the 1,000 trial case, and so you should always take the high­est ex­pected value op­tion (un­less the amounts in­volved are ab­solutely huge, as oth­ers have pointed out).

• That de­pends on your util­ity func­tion, speci­fi­cally your risk tol­er­ance. If you’re risk-neu­tral, op­tion 5 has the high­est value, oth­er­wise it de­pends.

• As a poker player, the idea we always bat­ted back and forth was that Ex­pected Value doesn’t change over shorter sam­ple sizes, in­clud­ing a sin­gle trial. How­ever you may have a risk of ruin or some ex­ter­nal fac­tor (like if you’re poor and given the op­tion of be­ing handed \$1,000,000 or flip­ping a coin to win \$2,000,001).

Bar­ring that, if you’re only in­ter­ested in max­i­miz­ing your re­sult, you should fol­low EV. Even in a sin­gle trial.

• Clearly option 5 has the highest mean outcome. If you value money linearly (that is, \$12 is exactly 3 times as good as \$4) and there’s no special utility threshold along the way (or disutility at \$0), it’s the best option.

For larger val­ues, your value for money may be non­lin­ear (mean­ing: the differ­ence be­tween \$0 and \$50k may be much much larger than the differ­ence be­tween \$500k and \$550k to your hap­piness), and then you’ll need to con­vert the pay­outs to sub­jec­tive value be­fore do­ing the calcu­la­tion. Like­wise if you’re in a spe­cial cir­cum­stance where there’s a thresh­old value that has spe­cial value to you—if you need \$3 for bus fare home, then op­tion 1 or 2 be­come much more at­trac­tive.

• That de­pends on the amount of back­ground money and ran­dom­ness you have.

Although I can’t re­ally see any case where I wouldn’t pick op­tion five. Even if that’s all the money I will ever have, my lifes­pan, and by ex­ten­sion my hap­piness, will be ap­prox­i­mately lin­ear with time.

If you spec­ify that I get that much money each day for the rest of my life, and that’s all I get, then I’d go for some­thing lower risk.

• In gen­eral, pick­ing the high­est EV op­tion makes sense, but in the con­text of what sounds like a stupid/​lazy eco­nomics ex­per­i­ment, you have a moral duty to do the wrong thing. Per­haps you could have flipped a coin twice to choose among the first 4 op­tions? That way you are pro­vid­ing crappy/​use­less data and they have to pay you for it!

• Why do I have a moral duty to do the wrong thing? Shouldn’t I act in my own self-interest to maximise the amount of money I make?

• An Iter­ated Pri­soner’s Dilemma var­i­ant I’ve been think­ing about —

There is a pool of play­ers, who may be run­ning var­i­ous strate­gies. The num­ber of rounds played is ran­domly de­ter­mined. On each round, play­ers are matched ran­domly, and play a one-shot PD. On the sec­ond and sub­se­quent rounds, each player is in­formed of its op­po­nent’s pre­vi­ous moves; but play­ers have no in­for­ma­tion about what move was played against them last round, nor whether they have played the same op­po­nent be­fore.

In other words, as a player you know your cur­rent op­po­nent’s move his­tory — but you don’t know whom they were play­ing those moves against; and you don’t know what your score is look­ing like, ei­ther.

If you’re play­ing with a pool of TFT bots, it’s go­ing to seem the same as if you were play­ing against a sin­gle TFT bot. TFT judges you on your pre­vi­ous move, re­gard­less of whom you were play­ing.

But defect­ing against Co­op­er­ateBot or Defec­tBot doesn’t look so good if your next op­po­nent pre­dicts you based on your defec­tion, and doesn’t know you were up against a bot.
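A minimal simulation can make the dynamics concrete. The payoff values (T=5, R=3, P=1, S=0), the strategy set, and the pool composition below are my own assumptions for illustration, not part of the original description:

```python
import random

# Sketch of the variant above: players are paired at random each round,
# and each strategy sees only its current opponent's public move history,
# never the context those moves were played in.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tft(opp_history):
    # "Tit for tat by reputation": copy whatever the opponent did last,
    # against anyone -- the only information this game makes available.
    return opp_history[-1] if opp_history else "C"

def cooperate_bot(opp_history):
    return "C"

def defect_bot(opp_history):
    return "D"

def play(strategies, rounds, seed=0):
    rng = random.Random(seed)
    histories = [[] for _ in strategies]  # public move records, one per player
    scores = [0] * len(strategies)
    for _ in range(rounds):
        order = list(range(len(strategies)))
        rng.shuffle(order)
        for i, j in zip(order[::2], order[1::2]):  # random pairing
            mi = strategies[i](histories[j])
            mj = strategies[j](histories[i])
            pi, pj = PAYOFF[(mi, mj)]
            scores[i] += pi
            scores[j] += pj
            histories[i].append(mi)
            histories[j].append(mj)
    return scores

pool = [tft] * 6 + [cooperate_bot] + [defect_bot]
scores = play(pool, rounds=200)
print("TFT avg:", sum(scores[:6]) / 6)
print("CooperateBot:", scores[6], "DefectBot:", scores[7])
```

Note how a defection against DefectBot goes into the defector’s public record and gets punished by its next TFT opponent, which is exactly the reputational effect described above.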

• Would just like to make sure ev­ery­one here is aware of LessWrong.txt

• Why?

• Crit­i­cism’s well and good, but 140 char­ac­ters or less of out-of-con­text quo­ta­tion doesn’t lend it­self to in­tel­li­gent crit­i­cism. From the looks of that feed, about half of it is in­fer­en­tial dis­tance prob­lems and the other half is sa­cred cows, and nei­ther one’s very in­ter­est­ing.

If we can get any­thing from it, it’s a re­minder that kil­ling sa­cred cows has so­cial con­se­quences. But I’m frankly tired of beat­ing that par­tic­u­lar drum.

• Things like this merely mean that you ex­ist and some­one else has no­ticed it.

• Self-driving cars had better use (some approximation of) some form of acausal decision theory, even more so than a singleton AI, because the former will interact in PD-like and Chicken-like ways with other instantiations of the same algorithm.

• Self-driving cars have very complex goal metrics, along the lines of getting to the destination while disrupting the traffic the least (still grossly oversimplifying).

The manufacturer is interested in every one of his cars getting to the destination in the least time, so the cars are programmed to optimize for the sake of all cars. They’re also interested in getting human drivers to buy their cars, which also makes not driving like a jerk a goal. PD is problematic when agents are selfish, not when agents entirely share a goal. Think of 2 people in a PD played for money, who both want to donate all proceeds to the same charity. This changes the payoffs to the point where it’s not a PD any more.
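The charity point can be made concrete with a tiny payoff transform. The specific payoff numbers (5, 3, 1, 0) are the standard illustrative PD values, not anything from the comment:

```python
# If both players donate all proceeds to the same charity, each player
# effectively values the *sum* of both payoffs, and the dilemma disappears.

pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# Shared-charity version: each player's payoff becomes the joint total.
shared = {moves: (a + b, a + b) for moves, (a, b) in pd.items()}

# In the selfish game, D strictly dominates C for the row player:
assert pd[("D", "C")][0] > pd[("C", "C")][0]  # 5 > 3
assert pd[("D", "D")][0] > pd[("C", "D")][0]  # 1 > 0

# With shared payoffs, C dominates instead -- no longer a PD:
assert shared[("C", "C")][0] > shared[("D", "C")][0]  # 6 > 5
assert shared[("C", "D")][0] > shared[("D", "D")][0]  # 5 > 2
print(shared)
```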

• They’re also in­ter­ested in get­ting hu­man drivers to buy their cars, which also makes not driv­ing like a jerk a goal.

Depends on who those hu­mans are. For a large frac­tion of low-IQ young males...

• I dunno, having a self-driving jerk car takes away whatever machismo one could have about driving… there’s something about a car where you can go macho and drive manual to be a jerk.

I don’t think it’d help sales at all if self-driving cars were causing accidents while themselves evading the collision entirely.

• Already de­ployed is a bet­ter ex­am­ple: com­puter net­work pro­to­cols.

• Or differ­ent al­gorithms. How long af­ter wide re­lease will it be be­fore some­one mod­ifies their car’s code to drive ag­gres­sively, on the as­sump­tion that cars run­ning the stan­dard al­gorithm will move out of the way to avoid an ac­ci­dent?

(I call this “driv­ing like a New Yorker.” New York­ers will know what I mean.)

• That’s like driv­ing with­out a li­cense. Ob­vi­ously the driver (soft­ware) has to be li­censed to drive the car, just as per­sons are. Soft­ware that op­er­ates deadly ma­chin­ery has to be de­vel­oped in spe­cific ways, cer­tified, and so on and so forth, for how many decades already? (Quite a few)

• I have been re­view­ing FUE hair trans­plants, and I would like LWers’ opinion. I’m ac­tu­ally sur­prised this isn’t cov­ered, as it seems rele­vant to many users.

As far as I can tell, the down­sides are:

• Mild scar­ring on the back of the head

• Doesn’t pre­vent con­tinued hair loss, so if you get e.g. a bald spot filled in, then you will in a few years have a spot of hair in an oasis

• Cost

• Mild pain/​has­sle in the ini­tial weeks.

• Pos­si­bil­ity of find­ing a dodgy surgeon

The scarring is basically covered if you have a few days’ hair growth there, and I am fine with that as a long-term solution. The continued hair loss is potentially dealt with by a repeated transplant, and more certainly dealt with by getting the initial transplant “all over”, i.e. thickening hair, rather than just moving the hairline forward. But it is the area I am most uncertain about. I should add that I am 29 with male pattern baldness on both sides of my family, Norwood level 4, and have seen hair loss stabilised (I have been taking Propecia for the last year).

Ig­nor­ing the cost, my ques­tions are:

• Is any­one aware of any other prob­lems be­sides these?

• Do you think this solu­tion works?

• Any ideas on how to pick the right sur­geon (us­ing some­one in Sin­ga­pore most prob­a­bly)?

• This is quite far down the page, even though I posted it a few hours ago. Is that an in­tended effect of the up­vot­ing/​down­vot­ing sys­tem? (it may well be—I don’t un­der­stand how the al­gorithm as­signs com­ment rank­ings)

• Just below and to the right of the post there’s a choice of which algorithm to use for sorting comments. I don’t remember what the default is, but I do know that at least some of them sort by votes (possibly with other factors). I normally use the sorting “Old” (i.e. oldest first), and then your comment is near the bottom of the page since so many were posted before it.

• The al­gorithm is a com­pli­cated mix of re­cency and score, but on an open thread that only lasts a week, re­cency is fairly uniform, so it’s pretty much just score.

• I’m look­ing into Bayesian Rea­son­ing and try­ing to get a ba­sic han­dle on it and how it differs from tra­di­tional think­ing. When I read about how it (ap­par­ently) takes into ac­count var­i­ous ex­pla­na­tions for ob­served things once they are ob­served, I was im­me­di­ately re­minded of Richard Feyn­man’s opinion of Fly­ing Saucers. Is Feyn­man giv­ing an ex­am­ple of proper Bayesian think­ing here?

• It’s certainly in the right spirit. He’s reasoning backwards in the same way Bayesian reasoning does: here’s what I see; here’s what I know about possible mechanisms for how that could be observed, and their prior probabilities; so here’s what I think is most likely to be really going on.

• Since peo­ple were pretty en­courag­ing about the quest to do one’s part to help hu­man­ity, I have a fol­low-up ques­tion. (Hope it’s okay to post twice on the same open thread...)

Per­haps this is a false di­chotomy. If so, just let me know. I’m ba­si­cally won­der­ing if it’s more worth­while to work on tran­si­tion­ing to al­ter­na­tive/​re­new­able en­ergy sources (i.e. we need to de­velop so­lar power or what­ever else be­fore all the oil and coal run out, and to avoid any po­ten­tial dis­as­trous cli­mate change effects) or to work on chang­ing hu­man na­ture it­self to bet­ter ad­dress the afore­men­tioned en­ergy prob­lem in terms of bet­ter judg­ment and de­ci­sion-mak­ing. Ba­si­cally, it seems like hu­man­ity may de­stroy it­self (if not via cli­mate change, then some­thing else) if it doesn’t first ad­dress its defi­cien­cies.

However, since energy/climate issues seem pretty pressing and changing human judgment is almost purely speculative (I know CFAR is working on that sort of thing, but I’m talking about more genetic or neurological changes), civilization may become too unstable before it can take advantage of any gains from cognitive enhancement and such. On the other hand, climate change/energy issues may not end up being that big of a deal, so it’s better to just focus on improving humanity to address other horrible issues as well, like inequality, psychopathic behavior, etc.

Of course, so­ciety as a whole should (and does) work on both of these things. But one in­di­vi­d­ual can re­ally only pick one to make a siz­able im­pact—or at the very least, one at a time. Which do you guys think may be more effec­tive to work on?

[NOTE: I’m perfectly will­ing to ad­mit that I may be com­pletely wrong about cli­mate change and en­ergy is­sues, and that col­lec­tive hu­man judg­ment is in fact as good as it needs to be, and so I’m wor­ry­ing about noth­ing and can rest easy donat­ing to malaria char­i­ties or what­ever.]

• Of course, so­ciety as a whole should (and does) work on both of these things. But one in­di­vi­d­ual can re­ally only pick one to make a siz­able im­pact—or at the very least, one at a time. Which do you guys think may be more effec­tive to work on?

The core ques­tion is: “What kind of im­pact do you ex­pect to make if you work on ei­ther is­sue?”

Do you think there is work to be done in the space of solar power development that people other than yourself aren’t effectively doing? Do you think there is work to be done in terms of better judgment and decision-making that other people aren’t already doing?

we need to de­velop so­lar power or what­ever else be­fore all the oil and coal run out,

The problem with coal isn’t that it’s going to run out but that it kills hundreds of thousands of people via pollution and that it creates climate change.

I know CFAR is working on that sort of thing, but I’m talking about more genetic or neurological changes

Why? To me it seems much more effec­tive to fo­cus on more cog­ni­tive is­sues when you want to im­prove hu­man judg­ment. Devel­op­ing train­ing to help peo­ple cal­ibrate them­selves against un­cer­tainty seems to have a much higher re­turn than try­ing to do fMRI stud­ies or brain im­plants.

• The core ques­tion is: “What kind of im­pact do you ex­pect to make if you work on ei­ther is­sue?”

Do you think there is work to be done in the space of solar power development that people other than yourself aren’t effectively doing? Do you think there is work to be done in terms of better judgment and decision-making that other people aren’t already doing?

I’m fa­mil­iar with ques­tions like these (speci­fi­cally, from 80000 hours), and I think it’s fair to say that I prob­a­bly wouldn’t make a sub­stan­tive con­tri­bu­tion to any field, those in­cluded. Given that like­li­hood, I’m re­ally just try­ing to de­ter­mine what I feel is most im­por­tant so I can feel like I’m work­ing on some­thing im­por­tant, even if I only end up tak­ing a job over some­one else who could have done it equally well.

That said, I would hope to lo­cate a “gap” where some­thing was not be­ing done that should be, and then try to fill that gap, such as vol­un­teer­ing my time for some­thing. But there’s no ba­sis for me to sur­mise at this point which is­sue I would be able to con­tribute more to (for in­stance, I’m not a so­lar en­g­ineer).

To me it seems much more effec­tive to fo­cus on more cog­ni­tive is­sues when you want to im­prove hu­man judg­ment. Devel­op­ing train­ing to help peo­ple cal­ibrate them­selves against un­cer­tainty seems to have a much higher re­turn than try­ing to do fMRI stud­ies or brain im­plants.

At the mo­ment, yes, but it seems like it has limited po­ten­tial. I think of it a bit like boot­strap­ping: a judg­ment-im­paired per­son (or an en­tire so­ciety) will likely make er­rors in de­ter­min­ing how to im­prove their judg­ment, and the im­prove­ment seems slight and tem­po­rary com­pared to more fun­da­men­tal, per­ma­nent changes in neu­ro­chem­istry. I also think of it a bit like peo­ple’s at­tempts to lose weight and stay fit. Yes, there are a lot of cog­ni­tive and be­hav­ioral changes peo­ple can make to fa­cil­i­tate that, but for many (most?) peo­ple, it re­mains a con­stant strug­gle—one that many peo­ple are los­ing. But if we could hack things like that, “temp­ta­tion” or “slip­ping” wouldn’t be an is­sue.

The prob­lem with coal isn’t that it’s go­ing to run out but that it kills hun­dred of thou­sands of peo­ple via pol­lu­tion and that it cre­ates cli­mate change.

From what I’ve gath­ered from my read­ing, the jury is kind of out on how dis­as­trous cli­mate change is go­ing to be. Es­ti­mates seem to range from catas­trophic to even slightly benefi­cial. You seem to think it will definitely be catas­trophic. What have you come across that is cer­tain about this?

• The econ­omy is quite ca­pa­ble of deal­ing with finite re­sources. If you have land with oil on it, you will only drill if the price of oil is in­creas­ing more slowly than in­ter­est. If this is the case, then drilling for oil and us­ing the value gen­er­ated by it for some kind of in­vest­ment is more helpful than just sav­ing the oil.

Cli­mate change is still an is­sue of course. The econ­omy will only work that out if we tax en­ergy in pro­por­tion to its ex­ter­nal­ities.

We should still keep in mind that cli­mate change is a prob­lem that will hap­pen in the fu­ture, and we need to look at the much lower pre­sent value of the cost. If we have to spend 10% of our econ­omy on mak­ing it twice as good a hun­dred years from now, it’s most likely not worth it.
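How much the verdict above depends on the discount rate can be sketched in a few lines; the comment doesn’t name a rate, so the rates below are my own assumptions, as is reading “twice as good” as a benefit worth 100% of today’s economy delivered in 100 years:

```python
# Present value of a benefit worth 100% of today's economy, received
# 100 years from now, versus a cost of 10% paid today.

cost_now = 0.10
benefit_future = 1.00
years = 100

for rate in (0.01, 0.03, 0.05):
    present_value = benefit_future / (1 + rate) ** years
    verdict = "worth it" if present_value > cost_now else "not worth it"
    print(f"discount rate {rate:.0%}: PV = {present_value:.3f} -> {verdict}")
```

At a 1% discount rate the benefit is worth roughly 37% of today’s economy and the spending passes; at 3% or more it fails, which is the comment’s “most likely not worth it” case.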

• I am not sure if this deserves its own post. I figured I would post here and then add it to discussion if there is sufficient interest.

I recently started reading Learn You a Haskell for Great Good. This is the first time I have attempted to learn a functional language, and I am only a beginner in imperative languages (Java). I am looking for some exercises that could go along with the e-book. Ideally, the exercises would encourage learning new material in a similar order to how the book presents it. I am happy to substitute/complement with a different resource as well, if it contains problems that allow one to practice structurally. If you know of any such exercises, I would appreciate a link to them. I am aware that Project Euler is often advised; does it effectively teach programming skills, or just problem solving? (Then again, I am not entirely sure if there is a difference at this point in my education.)

Thanks for the help!

• Awe­some, thanks so much! If you were to recom­mend one of these re­sources to be­gin with, which would it be?

• Awe­some, thanks so much!

Happy to help!

If you were to recom­mend one of these re­sources to be­gin with, which would it be?

I like both Pro­ject Euler and 99 Haskell prob­lems a lot. They’re great for build­ing suc­cess spirals.

• Why are you committed to that book? SICP is a well-tested introductory textbook with extensive exercises. Added: I meant to say that it is functional.

• I’m not. The rea­son I picked it up was be­cause it hap­pens to be the book recom­mended in MIRI’s course sug­ges­tions, but I am not par­tic­u­larly at­tached to it. Look­ing again, it seems they do ac­tu­ally recom­mend SICP on less­wrong, and Learny­oua­haskell on in­tel­li­gence.org.

Thanks for the sug­ges­tion.

• Would you pre­fer that one per­son be hor­ribly tor­tured for eter­nity with­out hope or rest, or that 3^^^3 peo­ple die?

• One person being horribly tortured for eternity is equivalent to that one person being copied infinitely many times and having each copy tortured for the rest of their life. Death is better than a lifetime of horrible torture, and 3^^^3, despite being bigger than a whole lot of numbers, is still smaller than infinity.

• What if the 3^^^3 peo­ple were one im­mor­tal per­son?

• Well then the answer is still obviously death, and that fact has become more immediately intuitive: probably even those who disagreed with my assessment of the original question would agree with my choice given the scenario “an immortal person is tortured forever or an otherwise-immortal person dies”.

• Be­ing hor­ribly tor­tured is worse than death, so I’d pick death.

• I would so­licit bids from the two groups. I imag­ine that the 3^^^3 peo­ple would be able to pay more to save their lives than the 1 per­son would be able to pay to avoid in­finite tor­ture. Plus, once I make the de­ci­sion, if I sen­tence the 1 per­son to in­finite tor­ture I only have to worry about their friends/​fam­ily and I have 3^^^3 al­lies who will help defend me against re­tri­bu­tion. Other­wise, the situ­a­tion is re­versed and I think its likely I’ll be mur­dered or im­pris­oned if I kill that many peo­ple. Of course, if the sce­nario is differ­ent, like the 3^^^3 peo­ple are in a differ­ent galaxy (not that that many peo­ple could fit in a galaxy) and the 1 per­son is my wife, I’ll definitely wipe out all those ass­holes to save my wife. I’d even let them all suffer in­finite tor­ture just to keep my wife from ex­pe­rienc­ing a dust speck in her eye. It is valen­tine’s day af­ter all!

• Modafinil is pre­scrip­tion-only in the US, so to get it you have to do ille­gal things. How­ever, I note that (pre­sum­ably due to some leg­is­la­tive over­sight?) the re­lated drug Adrafinil is un­reg­u­lated, you can buy it right off Ama­zon. Does any­one know how Adrafinil and Modafinil com­pare in terms of effec­tive­ness and safety?

• No, you don’t have to do ille­gal things. Another op­tion is to con­vince your doc­tor to give you a pre­scrip­tion. I think peo­ple on LW greatly over­es­ti­mate the difficulty of this.

• Some info on getting a prescription here: http://www.bulletproofexec.com/q-a-why-i-use-modafinil-provigil/

I think ADD/​ADHD will likely be a harder sell; my im­pres­sion is that peo­ple are already falsely claiming that in or­der to get Ad­der­all etc.

• I don’t even mean to sug­gest ly­ing. I mean some­thing sim­ple like “I think this drug might help me con­cen­trate.”

A for­mal di­ag­no­sis of ADD or nar­colepsy is carte blanche for am­phetamine pre­scrip­tion. Be­cause it is highly sched­uled and, more­over, has a big black mar­ket, doc­tors guard this di­ag­no­sis care­fully. Whereas, modafinil is lightly sched­uled and doesn’t have a black mar­ket (not driven by pre­scrip­tions), so they are less ner­vous about giv­ing it out in ADD-ish situ­a­tions.

But doc­tors very much do not like it when a new pa­tient comes in ask­ing for a spe­cific drug.

• Adrafinil has ad­di­tional down­stream metabo­lites be­sides just modafinil, but I don’t know ex­actly what they are. Some claim it is harder on the liver im­ply­ing some of the metabo­lites are mildly toxic, but that’s not re­ally say­ing much. Lots of stuff we eat is mildly toxic. Adrafinil is gen­er­ally well tol­er­ated and if your goal is find­ing out the effects of modafinil on your sys­tem and you can’t get modafinil it­self I would say go for it. If you then de­cided to take moda long term I would say do more re­search.

IANAD. Re­search thor­oughly and con­sult with a doc­tor if you have any med­i­cal con­di­tions or are tak­ing any med­i­ca­tions.

• Andy Weir’s “The Mar­tian” is ab­solutely fuck­ing brilli­ant ra­tio­nal­ist fic­tion, and it was pub­lished in pa­per book for­mat a few days ago.

I pre-or­dered it be­cause I love his short story The Egg, not know­ing I’d get a su­per-ra­tio­nal­ist pro­tag­o­nist in a rad­i­cal piece of sci­ence porn that down­right wor­ships space travel. Also, fart jokes. I love it, and if you’re an LW type of guy, you prob­a­bly will too.