# Stupid Questions December 2014

This thread is for ask­ing any ques­tions that might seem ob­vi­ous, tan­gen­tial, silly or what-have-you. Don’t be shy, ev­ery­one has holes in their knowl­edge, though the fewer and the smaller we can make them, the bet­ter.

Please be re­spect­ful of other peo­ple’s ad­mit­ting ig­no­rance and don’t mock them for it, as they’re do­ing a no­ble thing.

• It seems like we suck at using scales “from one to ten”. Video game reviews nearly always give a 7-10 rating. Competitions with scores from judges seem to always give numbers between eight and ten, unless you crash or fall and get a five or six. If I tell someone my mood is a 5/10, they seem to think I’m having a bad day. That is, we seem to compress things into the last few numbers of the scale. Does anybody know why this happens? Possible explanations that come to mind include:

• Peo­ple are scor­ing with refer­ence to the high end, where “noth­ing is wrong”, and they do not want to la­bel things as more than two or three points worse than perfect

• Peo­ple are think­ing in terms of grades, where 75% is a C. Peo­ple think most things are not worse than a C grade (or maybe this is just an­other ex­am­ple of the pat­tern I’m see­ing)

• I’m suc­cumb­ing to con­fir­ma­tion bias and this isn’t a real pattern

• I’m suc­cumb­ing to con­fir­ma­tion bias and this isn’t a real pattern

No, this is definitely a real pat­tern. YouTube switched from a 5-star rat­ing sys­tem to a like/​dis­like sys­tem when they no­ticed, and videogames are no­to­ri­ous for rank in­fla­tion.

• RottenTomatoes has much broader ratings. The current box office hits range from 7% to 94%. This is because they aggregate binary “positive” and “negative” reviews. As jaime2000 notes, YouTube has switched to a similar rating system and it seems to keep things very sensitive.

• Par­tial ex­pla­na­tion: we in­ter­pret these scales as go­ing from worst pos­si­ble to best pos­si­ble, and

• games that get as far as be­ing on sale and get­ting re­views are usu­ally at least pretty good be­cause oth­er­wise there’d be no point sel­l­ing them and no point re­view­ing them

• peo­ple en­ter­ing com­pe­ti­tions are usu­ally at least pretty good be­cause oth­er­wise they wouldn’t be there

• a typ­i­cal day is ac­tu­ally quite a bit closer to best pos­si­ble than worst pos­si­ble, be­cause there are so many at-least-kinda-plau­si­ble ways for it to go badly

One rea­son why this is only a par­tial ex­pla­na­tion is that “pos­si­ble” ob­vi­ously re­ally means some­thing like “at least semi-plau­si­ble” and what’s at least semi-plau­si­ble de­pends on con­text and whim. But, e.g., sup­pose we take it to mean some­thing like: take past his­tory, dis­card out­liers at both ends, and ex­pand the range slightly. Then I bet what you find is that

• most games that go on sale and at­tract enough at­ten­tion to get re­viewed are broadly of com­pa­rable qual­ity

• but a non-neg­ligible frac­tion are quite a lot worse be­cause of some se­ri­ous failing in de­sign or man­age­ment or something

• most perfor­mances in com­pe­ti­tions at a given level are broadly of com­pa­rable qual­ity

• but a non-neg­ligible frac­tion are quite a lot worse be­cause the com­peti­tor made a mis­take of some kind

• most of a given per­son’s days are roughly equally satis­fac­tory

• but a non-neg­ligible frac­tion are quite a lot worse be­cause of ill­ness, work stress, ar­gu­ment with a fam­ily mem­ber, etc.

so that in or­der for a scale to be able to cover (say) 99% of cases it needs to ex­tend quite a bit fur­ther down­ward than up­ward rel­a­tive to the me­dian case.
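The clustering argument above can be illustrated with a toy simulation. All distributions and numbers here are invented purely for illustration: most items broadly comparable, a minority much worse, and a 1-10 scale stretched to cover 99% of cases. The median item then lands near the top of the scale.

```python
import random

random.seed(0)

# Toy model: 85% of items are "broadly comparable", 15% are much worse
# because of some serious failure. (Made-up numbers for illustration.)
qualities = []
for _ in range(10_000):
    if random.random() < 0.85:
        q = random.gauss(70, 5)      # the broadly comparable majority
    else:
        q = random.uniform(10, 60)   # the long tail of serious failures
    qualities.append(q)

qualities.sort()
# A 1-10 scale covering 99% of cases: anchor 1 at the 0.5th percentile
# and 10 at the 99.5th, linear in between.
lo = qualities[int(0.005 * len(qualities))]
hi = qualities[int(0.995 * len(qualities))]

def rating(q):
    r = 1 + 9 * (q - lo) / (hi - lo)
    return max(1, min(10, round(r)))

median = qualities[len(qualities) // 2]
print(rating(median))  # the median item scores near the top of the scale
```

Because the scale must reach far down to cover the bad tail, the typical item gets pushed into the 7-10 range even though nothing about it is exceptional.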

• a typ­i­cal day is ac­tu­ally quite a bit closer to best pos­si­ble than worst pos­si­ble, be­cause there are so many at-least-kinda-plau­si­ble ways for it to go badly

Think about it in terms of probability space. If something is basically functional, then there are a near-infinite number of ways for it to be worse, but a finite number of ways for it to get better.

http://xkcd.com/883/

• Peo­ple are think­ing in terms of grades, where 75% is a C. Peo­ple think most things are not worse than a C grade (or maybe this is just an­other ex­am­ple of the pat­tern I’m see­ing)

I don’t think it’s this. Belgium doesn’t use let­ter-grad­ing and still suc­cumbs to the prob­lem you men­tioned in ar­eas out­side the class­room.

• What do they use in­stead?

• Points out of a maximum. The teacher is supposed to decide in advance how many points a test will be worth (5, 10, 20 and 25 being common options, but I’ve also had tests where I scored 17,26/27) and then decides how many points a question will be worth. You need to get half of the maximum or more for a passing grade.

That’s in high school. In uni­ver­sity ev­ery­thing is scored out of a max­i­mum of 20 points.

• Peo­ple are think­ing in terms of grades

That’s not an explanation, just a symptom of the problem. People of mediocre talent and high talent both get an A—that’s part of the reason why we have to use standardized tests with a higher ceiling.

My in­tu­ition is that the top few notches are satis­fic­ing, whereas all lower rat­ings are vary­ing de­grees of non-satis­fic­ing. The de­gree to which ev­ery­thing tends to cluster at the top rep­re­sents the de­gree to which ev­ery­thing is satis­fac­tory for prac­ti­cal pur­poses. In situ­a­tions where the ma­jor­ity of the rated things are not satis­fac­tory (like the Put­nam—noth­ing less than a cor­rect proof is truly satis­fac­tory), the rat­ings will cluster near the bot­tom.

For ex­am­ple, com­pare mo­tels to ho­tels. Mo­tels always have fewer stars, be­cause mo­tels in gen­eral are worse. Whereas, say, video games will tend to cluster at the top be­cause video games in gen­eral are satis­fac­to­rily fun.

Or, think Hu­man­i­ties vs. Eng­ineer­ing grades. Hu­man­i­ties stu­dents in gen­eral satisfy the re­quire­ments to be his­to­ri­ans and writ­ers or liberal-arts-ed­u­cated-white-col­lar work­ers more than Eng­ineer­ing stu­dents satisfy the re­quire­ments to be en­g­ineers.

• That’s not an ex­pla­na­tion, just a symp­tom of the prob­lem.

This is what I was try­ing to con­vey when I said it might be an­other ex­am­ple of the prob­lem.

I think it’s rea­son­able, in many con­texts, to say that achiev­ing 75% of the high­est pos­si­ble score on an exam should earn you what most peo­ple think of as a C grade (that is, good enough to pro­ceed with the next part of your ed­u­ca­tion, but not good enough to be com­pet­i­tive).

I would say that games are different. There is not, as far as I know, a quantitative rubric for scoring a game. A 6/10 rating on a game does not indicate that the game meets 60% of the requirements for a perfect game. It really just means that it’s similar in quality to other games that have received the same score, and usually a 6/10 game is pretty lousy. I found a histogram of scores on Metacritic:

http://www.giantbomb.com/profile/dry_carton/blog/metacritic-score-distribution-graphs/82409/

The peak of the dis­tri­bu­tions seems to be around 80%, while I’d eye­ball the me­dian to be around 70-75%. There is a long tail of bad games. You may be right that this dis­tri­bu­tion does, in some sense, re­flect the ac­tual dis­tri­bu­tion of game qual­ity. My com­plaint is that this scor­ing sys­tem is good at re­solv­ing bad games from truly awful games from com­i­cally ter­rible games, but it is bad at re­solv­ing a good game from a mediocre game.

What I think it should be is a per­centile-based score, like Lu­mifer de­scribes:

Con­sider this ex­am­ple: I come up to you and ask “So, how was the movie?”. You an­swer “I give it a 6 out of 10″. Fine. I have some vague idea of what you mean. Now we wave a magic wand and bifur­cate re­al­ity.

In branch 1 you then add “The distribution of my ratings follows the distribution of movie quality, savvy?” and let’s say I’m sufficiently statistically savvy to understand that. But… does it help me? I don’t know the distribution of movie quality. It’s probably bell-shaped, maybe, but not quite normal if only because it has to be bounded; I have no idea if it’s skewed, etc.

In branch 2 you then add “The rat­ing of 6 means I rate the movie to be in the sixth decile”. Ah, that’s much bet­ter. I now know that out of 10 movies that you’ve seen five were prob­a­bly worse and three were prob­a­bly bet­ter. That, to me, is a more use­ful piece of in­for­ma­tion.

Then again, maybe it’s difficult to dis­cern a differ­ence in qual­ity be­tween a 60th per­centile game and an 80th per­centile game.

• This is what I was try­ing to con­vey when I said it might be an­other ex­am­ple of the prob­lem.

Oh right, I didn’t read carefully, sorry.

• Math com­pe­ti­tions of­ten have the op­po­site prob­lem. The Put­nam com­pe­ti­tion, for ex­am­ple, of­ten has a me­dian score of 0 or 1 out of 120.

I’m not sure this is a good thing. Par­ti­ci­pat­ing in a math com­pe­ti­tion and get­ting 0 points is pretty dis­cour­ag­ing, in a field where self-es­teem is already an is­sue.

• In­ter­est­ingly enough, the scores on in­di­vi­d­ual ques­tions are ex­tremely bi­modal. They’re the­o­ret­i­cally out of 10 but the num­bers be­tween 3 and 7 are never used.

• This is exactly why in my family we use +2/-2. 0 really does feel like average in a way 5-6/10 or 3/5 doesn’t.

• Well, here is an article by Megan McArdle that talks about how insider-outsider dynamics can lead to this kind of rank inflation.

• I’ve noticed the same thing. Part of it might be that reviewers are reluctant to alienate fans of [thing being reviewed]. Another explanation is that they are intuitively norming against a wider range of things than they actually review. For example, I was buying a smartphone recently, and a lot of lower-end devices I was considering had few reviews, but famous high-end models (the iPhone, Galaxy S, etc.) are reviewed by pretty much everyone.

Play­ing devil’s ad­vo­cate, it might be that there are more per­ceiv­able de­grees of bad­ness/​more ways to fail than there are of good­ness, so we need a wider range of num­bers to de­scribe and fairly rank the failures.

• In medicine we try to make peo­ple rate their symp­toms, like pain, from one to ten. It’s pretty much never un­der 5. Of course there’s a se­lec­tion effect and peo­ple don’t like to look like whin­ers but I’m not con­vinced these fully ex­plain the situ­a­tion.

In Fin­land the low­est grade you can get from pri­mary ed­u­ca­tion to high school is 4 so that prob­a­bly af­fects the situ­a­tion too.

• In medicine we try to make peo­ple rate their symp­toms, like pain, from one to ten. It’s pretty much never un­der 5.

How do you then in­ter­pret their re­sponses? Do you com­pare only the re­sponses of the same per­son at differ­ent times, or be­tween per­sons (or to guide ini­tial treat­ment)? Do you have a refer­ence scale that trans­lates self-re­ported pain to some­thing with an ob­jec­tive refer­ent?

• Do you com­pare only the re­sponses of the same per­son at differ­ent times

Yes. There’s too much variation between persons. I also think there’s variation between types of pain and variation depending on whether there are other symptoms. There are no objective specific referents, but people who are in actual serious pain usually look like it, are tachycardic, hypertensive, aggressive, sweating, writhing or very still, depending on what type of pain we’re talking about. Real pain is also aggravated by relevant manual examinations.

• In medicine we try to make peo­ple rate their symp­toms, like pain, from one to ten. It’s pretty much never un­der 5.

This is ac­tu­ally what ini­tially got me think­ing about this. I read a half-satire thing about peo­ple mi­sus­ing pain scales. Since my only source for the claim that peo­ple do this was a some­what satiri­cal ar­ti­cle, I didn’t bring it up ini­tially.

I was sur­prised when I heard that peo­ple do this, be­cause I figured most peo­ple get­ting asked that ques­tion aren’t in near as much pain as they could be, and they don’t have much to gain by in­flat­ing their an­swer. When I’ve been asked to give an an­swer on the pain scale, I’ve al­most always felt like I’m much closer to no pain than to “the worst pain I can imag­ine” (which is what I was told a ten is), and I can imag­ine be­ing in such awful pain that I can’t an­swer the ques­tion. I think I an­swered seven one time when I had a bone stick­ing through my skin (which ac­tu­ally hurt less than I might have thought).

• most peo­ple get­ting asked that ques­tion aren’t in near as much pain as they could be, and they don’t have much to gain by in­flat­ing their an­swer.

Maybe they think that by in­flat­ing their an­swer they gain, on the mar­gin, bet­ter /​ more in­ten­sive /​ more prompt med­i­cal ser­vice. Espe­cially in an ER set­ting where they may in­tuit them­selves to be com­pet­ing against other pa­tients be­ing triaged and asked the same ques­tion, they might per­ceive them­selves (con­sciously or not) to be in an arms race where the per­son who claims to be ex­pe­rienc­ing the most pain gets treated first.

• I tried to change out the 1-10 rating for a z-score rating in my own conversations. It failed due to my social circles not being familiar with the normal bell curve.

• If you wanted to maximize the informational content of your ratings, wouldn’t you try to mimic a uniform distribution?

• The in­tent was to com­mu­ni­cate one piece of in­for­ma­tion with­out con­fu­sion: where on the mea­sure­ment spec­trum the item fits rel­a­tive to oth­ers in its group. As op­posed to de­liv­er­ing as much in­for­ma­tion as pos­si­ble, for which there are more nu­anced sys­tems.

Most things I am rating do not have a uniform distribution; I tried to follow a normal distribution because it would fit the greater majority of cases. We lose information and make assumptions when we measure data on the wrong distribution. Did you fit to uniform by volume or by value? It was another source of confusion.

As mentioned, this method did fail. I changed my method to saying ‘better than 90% of the items in its grouping’ and had moderate success. While solving the uniform/normal/chi-squared distribution problem, it is still too long-winded for my tastes.

• Most things I am rat­ing do not have a uniform distribution

The dis­tri­bu­tion of your rat­ings does not need to fol­low the dis­tri­bu­tion of what you are rat­ing. For max­i­mum in­for­ma­tion your (in­te­ger) rat­ing should point to a quan­tile—e.g. if you’re rat­ing on a 1-10 scale your rat­ing should match the decile into which the thing be­ing rated falls. And if your rat­ings cor­re­spond to quan­tiles, the rat­ings them­selves are uniformly dis­tributed.
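A minimal sketch of this decile idea (the sorted-history bookkeeping is my own illustration, not anything specified above): if each rating reports the decile of the item among everything previously rated, the ratings themselves come out uniformly distributed regardless of the shape of the underlying quality distribution.

```python
import bisect
import random
from collections import Counter

random.seed(1)
# Past raw scores on any scale, with any distribution (here, bell-shaped).
history = sorted(random.gauss(70, 10) for _ in range(1000))

def decile_rating(score, history):
    """Return 1-10: the decile of `score` among previously rated items."""
    rank = bisect.bisect_left(history, score)        # count of strictly worse items
    return min(10, 1 + (10 * rank) // len(history))

ratings = [decile_rating(s, history) for s in history]
# Each rating 1..10 now covers ~10% of the items: a uniform distribution
# of ratings, even though the raw scores were bell-shaped.
print(Counter(ratings))
```

The bell-shaped input is irrelevant to the output: the rank transform flattens any distribution, which is exactly the "maximum information" property being discussed.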

• We have different goals. I want my rating to reflect the item’s relative position in its group; you want a rating to reflect the item’s value independent of the group.

Is this ac­cu­rate?

• Doesn’t seem so. If you rate by quin­tiles your rat­ing effec­tively in­di­cates the rank of the bucket to which the thing-be­ing-rated be­longs. This re­flects “the item’s rel­a­tive po­si­tion in its group”.

If you want your rating to reflect not a rank but something external, you can set up a variety of systems, but I would expect that for max information your rating would have to point to a quintile of that external measure of the “value independent of the group”.

• Try­ing to stab at the heart of the is­sue: I want the dis­tri­bu­tion of the rat­ings to fol­low the dis­tri­bu­tion of the rated be­cause when look­ing at the group this pro­vides an ad­di­tional piece of in­for­ma­tion.

• Well, at this point the is­sue be­comes who’s look­ing at your rat­ing. This “ad­di­tional piece of in­for­ma­tion” ex­ists only for peo­ple who have a suffi­ciently large sam­ple of your pre­vi­ous rat­ings so they un­der­stand where the lat­est rat­ing fits in the over­all shape of all your rat­ings.

Con­sider this ex­am­ple: I come up to you and ask “So, how was the movie?”. You an­swer “I give it a 6 out of 10″. Fine. I have some vague idea of what you mean. Now we wave a magic wand and bifur­cate re­al­ity.

In branch 1 you then add “The distribution of my ratings follows the distribution of movie quality, savvy?” and let’s say I’m sufficiently statistically savvy to understand that. But… does it help me? I don’t know the distribution of movie quality. It’s probably bell-shaped, maybe, but not quite normal if only because it has to be bounded; I have no idea if it’s skewed, etc.

In branch 2 you then add “The rat­ing of 6 means I rate the movie to be in the sixth decile”. Ah, that’s much bet­ter. I now know that out of 10 movies that you’ve seen five were prob­a­bly worse and three were prob­a­bly bet­ter. That, to me, is a more use­ful piece of in­for­ma­tion.

• I un­der­stand and con­cede to the bet­ter logic. This pro­vides greater in­sight on why the origi­nal at­tempt to use these rat­ings failed.

• Quite often the spread within the top 10 percent is larger than the spread between the 45th and 55th percentiles.

IQ scales have more peo­ple in the mid­dle than on the edges.

• As far as I remember, IQs are normalized ranks, so to answer the question of which 10% is “wider” you need to define by which measure.

• I think it’s the C thing. I have no ev­i­dence for this.

• Is there any plau­si­ble way the earth could be moved away from the sun and into an or­bit which would keep the earth hab­it­able when the sun be­comes a red gi­ant?

• According to http://arxiv.org/abs/astro-ph/0503520 we would need to be able to boost our current orbital radius to about 7 AU.

This would correspond to a change in specific orbital energy from −μ/(2 · 1 AU) to −μ/(2 · 7 AU), where μ = 1.32712440018 × 10^20 m³/s² is the standard gravitational parameter of the sun. That change is about 3.8 × 10^8 J/kg, or about 2.3 × 10^33 J when we multiply by the mass of the earth (approximating the reduced mass of the earth/sun system as just the mass of the earth).

That is roughly a fifth of the total energy released by the sun in one year.

Or, if you like, it’s equivalent to the total mass-energy of ~2.5 × 10^16 kg of matter (about 0.01% of the mass of the asteroid Vesta).

So until we’re able to harness and control energy on the order of the sun’s total output over months, we won’t be able to do this.
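The energy budget can be checked in a few lines with textbook constants (the figures below are my own computation from the vis-viva/specific-energy relation, so treat them as a sanity check rather than gospel):

```python
# Rough check of the orbit-raising energy budget (1 AU -> 7 AU).
MU_SUN = 1.32712440018e20   # sun's standard gravitational parameter, m^3/s^2
M_EARTH = 5.97e24           # kg
AU = 1.496e11               # m
C = 2.998e8                 # speed of light, m/s
L_SUN = 3.828e26            # solar luminosity, W
SECONDS_PER_YEAR = 3.156e7

# Specific orbital energy is -mu/(2a); raising the orbit from 1 AU to 7 AU adds:
d_eps = MU_SUN / (2 * AU) - MU_SUN / (2 * 7 * AU)    # J/kg
dE = d_eps * M_EARTH                                 # J, total

years_of_sunlight = dE / (L_SUN * SECONDS_PER_YEAR)  # fraction of a year of solar output
equivalent_mass_kg = dE / C**2                       # mass-energy equivalent
print(d_eps, dE, years_of_sunlight, equivalent_mass_kg)
```

This lands at roughly 3.8 × 10^8 J/kg and 2.3 × 10^33 J total, i.e. about a fifth of a year of total solar output.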

There might be an ex­ceed­ingly clever way to do this by play­ing with or­bits of nearby as­ter­oids to per­turb the or­bit of the earth over long timescales, but the change in en­ergy we’re talk­ing about here is pretty huge.

• I think you have some­thing there. You could de­sign a com­plex, but at least metastable or­bit for an as­ter­oid sized ob­ject that, in each pe­riod, would fly by both Earth and, say, Jupiter. Be­cause it is metastable, only very small course cor­rec­tions would be nec­es­sary to keep it go­ing, and it could be ar­ranged such that at ev­ery pass Earth gets pushed out just a lit­tle bit, and Jupiter pul­led in. With the right sized as­ter­oid, it seems fea­si­ble that this pro­cess could yield the de­sired re­sults af­ter billions of years.

• Ignoring the question of “can we apply that much delta-V to a planet?”, I’d be interested to know whether it’s believed that there exists a “Goldilocks zone” suitable for life at all stages of a star’s life. Intuitively it seems like there should be, but I’m not sure.

Of course, it should be pointed out that the com­mon un­der­stand­ing of “when the sun be­comes a red gi­ant” may be a bit flawed; the sun will cool and ex­pand, then col­lapse. On a hu­man time scale, it will spend a lot of that time as a red gi­ant, but if you sim­ply took the Earth when its or­bit started to be crowded by the in­ner edge of the Goldilocks zone and put it in a new or­bit, that new or­bit wouldn’t be any­where close to an eter­nally safe one. In­deed, I sus­pect that the out­er­most of the or­bits re­quired for the gi­ant-stage sun would be too far from the sun at the time we’d first need to move the Earth.

• The sun’s luminosity will rise by around 300X as it turns into a giant. If we wish to keep the same energy flux onto the earth at that point, we must increase the earth’s orbital radius by a factor of sqrt(300) ≈ 17X. The total energy of the earth’s current orbit is 2.65E33 J. We must reduce this to 1/17 of its current value, i.e. reduce it by (16/17)*2.65E33 J = 2.5E33 J. The current total annual energy production in the world is about 5E20 J. The sun will be a red giant in about 7.6E9 years. So we would need several hundred times current global energy production running full time into rocket motors to push the earth out to a safe orbit by the time the sun has expanded.

But it is worse than that. The Sun actually expands over a scant 5 million years near the end of that 7.6E9 years. So to avoid freezing for billions of years because we have started moving away from the sun too soon, we essentially will need about a million times current energy production running into rocket engines for those 5 million years of solar expansion. But the good news is we have 7.6E9 years to figure out how to do that.

If we use plasma rock­ets which push re­ac­tion mass out at 1% the speed of light, then we will need a to­tal of about 6E16 kg re­ac­tion mass, or about 0.000001% of the earth’s to­tal mass. The to­tal mass of wa­ter on the earth is about 1E21 kg so we could do all of this us­ing wa­ter as re­ac­tion mass and still have 99.99% of the wa­ter left when we are done.
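The luminosity-to-orbit scaling and the power requirement above can be checked directly (standard constants; the 5-million-year expansion window is taken from the comment itself):

```python
import math

L_FACTOR = 300              # luminosity increase as the sun becomes a giant
MU_SUN = 1.32712440018e20   # m^3/s^2
M_EARTH = 5.97e24           # kg
AU = 1.496e11               # m
SECONDS_PER_YEAR = 3.156e7

# Flux goes as L / r^2, so constant flux needs r scaled by sqrt(L_FACTOR).
r_factor = math.sqrt(L_FACTOR)               # ~17

# Magnitude of the earth's current orbital energy, -mu*m/(2a):
E_now = MU_SUN * M_EARTH / (2 * AU)          # ~2.65e33 J
dE = E_now * (1 - 1 / r_factor)              # energy to add, ~2.5e33 J

# Average power if the push happens during the ~5 Myr of solar expansion:
power_5myr = dE / (5e6 * SECONDS_PER_YEAR)   # W
print(r_factor, dE, power_5myr)
```

The sustained power comes out near 1.6 × 10^19 W over those 5 million years, which is what makes the comparison to current global energy production so lopsided.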

• I won­der what the ex­haust plume of an en­g­ine like that would look like, and how far away from it you’d have to be stand­ing to still be ca­pa­ble of look­ing at any­thing af­ter a sec­ond or two.

• Not “when the sun becomes a red giant”, because red giants are variable on a much too short time scale, but, as others mentioned, we can probably keep the earth in a habitable zone for another 5 billion years or so. We have more than enough hydrogen on earth to provide the necessary potential energy increase with fusion-based propulsion, though building something like a 100-petawatt engine is problematic at this point (for comparison, that is a significant fraction of the total solar radiation hitting the earth).

EDIT: I sus­pect that ter­raform­ing Mars (and/​or cool­ing down the Earth more effi­ciently when the Sun gets brighter) would re­quire less en­ergy than mov­ing the Earth to the Mars or­bit. My calcu­la­tions could be off, though, hope­fully some­one can do them in­de­pen­dently.

• Only ma­jor prob­lem I know of with ter­raform­ing Mars is how to give it a mag­netic field. We’d have to some­how re-melt the in­te­rior of the planet. Other­wise, we could just put up with con­stant in­tense so­lar ra­di­a­tion, and at­mo­sphere off-gassing into space. Maybe if we built a big fu­sion re­ac­tor in the mid­dle of the planet...?

• I recall estimating the power required to run an equatorial superconducting ring a few meters thick, 1 km or so under the Mars surface, with enough current to simulate an Earth-like magnetic field. If I recall correctly, it would require about the current level of power generation on Earth to ramp it up over a century or so to the desired level. Then whatever is required to maintain it (mostly cooling the ring), which is very little. Of course, an accident interrupting the current flow would be an epic disaster.

• Wouldn’t it be more efficient to use that energy to destroy Mars and start building a Dyson swarm from the debris?

• Let’s do a quick estimate. Destroying a Mars-like planet requires expending the equivalent of its gravitational self-energy, ~GM^2/R, which is about 10^32 J (which we could easily obtain from a comet 10 km in radius… consisting of antimatter!) For comparison, the Earth’s magnetic field has about 10^26 J of energy, a million times less. I leave it to you to draw the conclusions.
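A back-of-the-envelope version of this, using the uniform-sphere self-energy (3/5)GM²/R with standard constants, comes out somewhat below the round 10^32 J quoted above, but the conclusion is unchanged either way: the energy dwarfs the ~10^26 J stored in Earth's magnetic field.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_MARS = 6.417e23    # kg
R_MARS = 3.3895e6    # m
C = 2.998e8          # m/s

# Gravitational self-energy of a uniform sphere: (3/5) G M^2 / R.
self_energy = 0.6 * G * M_MARS**2 / R_MARS

# Annihilating m kg of antimatter with m kg of Mars releases 2*m*c^2,
# so the antimatter needed is E / (2 c^2).
antimatter_kg = self_energy / (2 * C**2)
print(self_energy, antimatter_kg)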

• I’m cu­ri­ous about the thought pro­cess that led to this be­ing asked in the “stupid ques­tions” thread rather than the “very ad­vanced the­o­ret­i­cal spec­u­la­tion of fu­ture tech­nol­ogy” thread. =P

As a more se­ri­ous an­swer: Any­thing that would effec­tively give us a means to al­ter mass and/​or the effects of grav­ity in some way (if there turns out to be a differ­ence) would help a lot.

• I wasn’t sure there was a way to do it within cur­rent physics.

Now we get to the hard ques­tion: sup­pos­ing we (broadly in­ter­preted, it will prob­a­bly be a suc­ces­sor species) want to move the earth out­wards us­ing those lit­tle grav­i­ta­tional nudges, how do we get civ­i­liza­tions with a suffi­ciently long at­ten­tion span?

• [...] how do we get civ­i­liza­tions with a suffi­ciently long at­ten­tion span?

I heard Ri­talin has a solu­tion. Couldn’t pay at­ten­tion long enough to ver­ify. ba-dum tish

On a se­ri­ous note, isn’t the whole kil­ling-the-Earth-for-our-chil­dren thing a rather in­ter­est­ing sce­nario? I’ve never seen it men­tioned in my game the­ory-re­lated read­ing, and I find that to be some­what sad. I’m pretty sure a proper mod­el­ing of the game sce­nario would cover both cli­mate change and eaten-by-red-gi­ant.

• I don’t see the con­nec­tion to kil­ling the earth for our chil­dren. Mov­ing the earth out­wards is an effort to save the earth for our far fu­ture selves and our chil­dren.

• I think “for our chil­dren” means “as far as our chil­dren are con­cerned” and failing to move the earth’s or­bit so it doesn’t get eaten by the sun (de­spite be­ing able to do it) would qual­ify as “kil­ling the earth for our chil­dren”. (The more usual refer­ents be­ing things like re­source de­ple­tion and pol­lu­tion with po­ten­tially dis­as­trous long-term effects.)

• Thanks. That makes sense.

• If we haven’t got­ten one by then, we’re doomed. Or at least, we don’t get a very good planet. We could still have space-sta­tions or live on planets where we have to bring our own at­mo­sphere.

• Yes, I saw an article a few years ago with a back-of-the-envelope estimate suggesting this would be doable if one could turn mass on the moon more or less directly into energy and use the moon as a gravitational tug to slowly move Earth out of the way. You can change mass almost directly into energy by feeding it into a few smallish black holes.

• How do they propose to move the black holes? Nothing can touch a black hole, right?

• Black holes feel grav­ity just like any other mas­sive body. And they can be elec­tri­cally charged. So you can move them around with strong enough grav­i­ta­tional and/​or elec­tric fields.

• It can, as long as you don’t mind that you won’t get it back when you’re done. You have to con­stantly fuel the black hole any­way. Just throw the fuel in from the op­po­site di­rec­tion that you want the black hole to go.

• Throwing mass into a black hole is harder than it sounds. Conveniently sized black holes that you would actually have a chance at moving around are extremely small, much smaller than atoms, I believe. I think they would just sit there without eating much, despite strenuous efforts at feeding them. The cross-section is way too small.

To make mat­ters worse, such holes would emit a lot of Hawk­ing ra­di­a­tion, which would a) in­terfere with try­ing to feed them, and b) quickly evap­o­rate them end­ing in an in­tense flash of gamma rays.

• The prob­lem is throw­ing mass into other mass hard enough to make a black hole in the first place.

Hawk­ing ra­di­a­tion isn’t a big deal. In fact, the prob­lem is mak­ing a black hole small enough to get a sig­nifi­cant amount of it. An atom-sized black hole has around a tenth of a watt of Hawk­ing ra­di­a­tion. I think it might be pos­si­ble to get ex­tra en­ergy from it. From what I un­der­stand, Hawk­ing ra­di­a­tion is just what doesn’t fall back in. If you en­close the black hole, you might be able to ab­sorb some of this en­ergy.

• Yes, making them would be incredibly hard, and because of their relatively short lifetimes, it would be extremely surprising to find any lying around somewhere. Atom-sized black holes would be very heavy and not produce much Hawking radiation, as you say. Smaller ones would produce more Hawking radiation, be even harder to feed, and evaporate much faster.
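The tenth-of-a-watt figure mentioned upthread checks out against the standard Hawking luminosity formula (constants and formula are textbook; "atom-sized" is taken here to mean a Schwarzschild radius of 10^-10 m, my own reading of the phrase):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8              # speed of light, m/s

def hole_mass_for_radius(r):
    """Mass whose Schwarzschild radius is r: M = r c^2 / (2G)."""
    return r * C**2 / (2 * G)

def hawking_power(m):
    """Hawking luminosity: P = hbar c^6 / (15360 pi G^2 M^2)."""
    return HBAR * C**6 / (15360 * math.pi * G**2 * m**2)

m = hole_mass_for_radius(1e-10)   # an "atom-sized" hole, ~7e16 kg
print(hawking_power(m))           # ~0.08 W: about a tenth of a watt
```

Since luminosity falls as 1/M², smaller holes radiate far more, which is exactly why only the tiny ones have the short lifetimes discussed above.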

• I don’t re­ally know if it’s plau­si­ble, but Larry Niven’s far-fu­ture fic­tion A World Out of Time (the novel, not the origi­nal short story of the same name) deals with ex­actly this prob­lem.

His solu­tion is a “fu­sion can­dle”: build a huge dou­ble-ended fu­sion tube, put it in the at­mo­sphere of a gas gi­ant, and light it up. The thrust down­wards keeps the tube float­ing in the at­mo­sphere. The thrust up­wards pro­vides an en­g­ine to push the gas gi­ant around. In the book, they pushed Uranus to Earth, and then moved it out­wards again, grav­i­ta­tion­ally pul­ling the Earth along.

• This is a fas­ci­nat­ing ques­tion. Very spec­u­la­tively, I could imag­ine some­how us­ing en­ergy gained by push­ing other ob­jects closer to the Sun, to move the Earth away from the Sun. Like some sort of im­mense elas­tic band stretch­ing be­tween Mars and Earth, pul­ling Earth “up” and Mars “down”.

• That is es­sen­tially what would hap­pen if you used grav­i­ta­tional as­sis­tance and or­bited as­ter­oids be­tween Mars and Earth.

• Would it be pos­si­ble to slow down or stop the rise of sea level (due to global warm­ing) by pump­ing wa­ter out of the oceans and onto the con­ti­nents?

• We could really use a new Aral Sea, but intuitively I’d have expected this to be a tiny dent in the depth of the oceans. So, to the maths:

Wikipe­dia claims that from 1960 to 1998 the vol­ume of the Aral sea dropped from its 1960 amount of 1,100 km^3 by 80%.

I’m going to give that another 5% for more loss since then, as the South Aral Sea has now lost its eastern half entirely.

This gives ~1100 * .85 = 935km^3 of wa­ter that we’re look­ing to re­place.

The Earth is ~500m km^2 in sur­face area, ap­prox. 70% of which is wa­ter = 350m km^2 in wa­ter.

935 km^3 over an area of 350m km^2 comes to a depth of 2.6 mm.

This is massively larger than I would have predicted, and it gets better. The current salinity of the Aral Sea is 100 g/l, which is way higher than that of seawater at 35 g/l, so we could pretty much pump the seawater straight in with a net environmental gain. In fact this is a solution to the crisis that has been previously proposed, although it looks like most people would rather dilute the seawater first.

To achieve the desired result of a 1-inch drop in sea level, we only need to find 9 equivalent projects around the world. Sadly, the only other one I know of is Lake Chad, which is significantly smaller than the Aral Sea. However, since the loss of the Aral Sea is due to over-intensive use of the water for farming, this gives us an idea of how much water can be contained on land in plants: I would expect that we might be able to get this amount again if we undertook a desalination/irrigation program in the Sahara.
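The arithmetic above in a few lines (all inputs are the figures already quoted in the comment):

```python
ARAL_1960_KM3 = 1100            # Aral Sea volume in 1960, km^3 (Wikipedia figure above)
FRACTION_LOST = 0.85            # 80% lost by 1998, plus ~5% more since
OCEAN_AREA_KM2 = 0.70 * 510e6   # ~70% of Earth's ~510 million km^2 surface

missing_km3 = ARAL_1960_KM3 * FRACTION_LOST     # ~935 km^3 to replace
drop_mm = missing_km3 / OCEAN_AREA_KM2 * 1e6    # km of depth -> mm
print(drop_mm)                                  # ~2.6 mm of sea level per refilled Aral Sea

projects_for_one_inch = 25.4 / drop_mm          # 1 inch = 25.4 mm
print(projects_for_one_inch)                    # ~10 Aral-sized projects per inch
```

So one refilled Aral Sea buys about 2.6 mm, and roughly ten such projects buy an inch, matching the "find 9 equivalent projects" conclusion.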

• Dead Sea and Sal­ton Sea leap to mind as good pro­jects.

Also could we store more wa­ter in the at­mo­sphere? If we just poured wa­ter into a desert like the Sa­hara, most of it would evap­o­rate be­fore it flowed back to the sea. This would seem to raise the av­er­age mois­ture con­tent of the at­mo­sphere. Sure even­tu­ally it gets rained back down, but this would seem to be a fea­ture more than a bug for a world that keeps look­ing for more fresh wa­ter. In­deed my mind is cur­rently in­vent­ing in­ter­est­ing meth­ods for mov­ing the wa­ter around us­ing purely the heat from the sun as an en­ergy source.

• However, since the loss of the Aral Sea is due to over-intensive use of the water for farming, this gives us an idea of how much water can be contained on land in plants

Isn’t it more of an in­di­ca­tion of how much wa­ter can be con­tained in the Aral Sea basin? The plants don’t need to con­tain all of the miss­ing Aral Sea wa­ter at once, they just need to be wa­tered faster than the Sea is be­ing re­filled by rain­fall. How much wa­ter does rain­fall sup­ply ev­ery year, as a per­centage of the Sea’s to­tal vol­ume?

• I recom­mend googling “geo­eng­ineer­ing global warm­ing” and read­ing some of the top hits. There are nu­mer­ous pro­pos­als for re­duc­ing or re­vers­ing global warm­ing which are as­tound­ingly less ex­pen­sive than re­duc­ing car­bon diox­ide emis­sions, and also much more likely to be effec­tive.

To your di­rect ques­tion about stor­ing more wa­ter on land, this would be a geo­eng­ineer­ing pro­ject. Some straight­for­ward ap­proaches to do­ing it:

Use rainfall as your “pump”, to save having to build massive, energy-hungry water pumps. Without any effort on our part, nature naturally lifts water a km or more above sea level and then drops it, much of it onto land. That water is generally funneled back to the ocean in rivers. With just the construction of walls, some rivers might be prevented from draining into the ocean. Large areas would be flooded by the river, storing water somewhere other than the ocean.

Use gravity as your pump. There are many large locations on Earth that are below sea level. Aqueducts that take no net pumping energy could be built to essentially gravity-feed ocean water into these areas. These areas can be hundreds of metres below sea level, so if even 1% of the Earth’s surface is 100 m below sea level, then the oceans could be lowered by a bit more than 1 m by filling these depressions with ocean water.
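The “a bit more than 1 m” figure checks out under the same 70%-ocean assumption used earlier in the thread:

```python
# Rough check: fill hypothetical depressions covering 1% of Earth's surface
# to a depth of 100 m with ocean water, and see how far sea level falls.
earth_surface_km2 = 500e6            # total surface area, as used above
ocean_fraction = 0.70

depression_area_km2 = 0.01 * earth_surface_km2   # 1% of the surface
depression_depth_km = 0.1                        # 100 m

stored_volume_km3 = depression_area_km2 * depression_depth_km
sea_level_drop_m = stored_volume_km3 / (earth_surface_km2 * ocean_fraction) * 1000
print(f"sea level drop ≈ {sea_level_drop_m:.2f} m")   # a bit more than 1 m
```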

Of course, either of these approaches will cause massive other changes, although probably in a positive direction as far as climate is concerned. More water surface on the planet should mean more evaporation, which creates more clouds, which reflect more energy from the sun, lowering the heating of the Earth. But of course a non-trivial analysis might yield a rich detail of effects worth pondering.

In the past, features like the Salton Sea and the Dead Sea have been filled by fresh-water rivers, essentially meaning that rain was used as the pump to fill them. The demand for fresh water has stopped these features from being refilled. It seems to me that an aqueduct to refill these features with salt water from the ocean would be relatively benign in impact, since in nature these features have been full of salt water in the past, and so the impact of that water might be blessed by humanity as “natural” instead of cursed by humanity as “man made.”

• Where does the water go? Assuming you want to reduce sea level by a half inch using this mechanism, you have to do the equivalent of covering the entire land area of earth (ETA: land, not total, area) in a full inch of water (what’s worse, seawater; you’d want to desalinate it). Even assuming you can find room on land for all this water and the pump capacity to displace it all, what’s to stop it from washing right back out to sea? Some of it can be used to refill aquifers, but the capacity of those is trivial next to that of the oceans. Some of it can be stored as ice and snow, but global warming will reduce (actually, has already quite visibly reduced) land glaciation; even if you can somehow induce the water to freeze, the heat you extract from it will have to go somewhere, and unless you can dump it out of the atmosphere entirely it will just contribute to the warming. The rest of the water will just flood the existing rivers in its mad rush to do what nearly all continental water is always doing anyhow: flowing to sea.
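The conversion between a drop in sea level and water stored on land is just the ocean-to-land area ratio, assuming the ~70/30 split used elsewhere in the thread:

```python
# Each unit of sea-level drop must be stored in a correspondingly
# deeper layer over the (smaller) land area.
ocean_fraction = 0.70
land_fraction = 1 - ocean_fraction

ocean_to_land_ratio = ocean_fraction / land_fraction   # roughly 2.3
print(f"ocean/land area ratio ≈ {ocean_to_land_ratio:.2f}")

# e.g. a half-inch drop in sea level is roughly an inch over all land:
depth_over_land_in = 0.5 * ocean_to_land_ratio
print(f"0.5 inch of ocean ≈ {depth_over_land_in:.1f} inches over all land")
```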

• Clearly, the solu­tion is to build a space ele­va­tor and ship wa­ter into or­bit. We lower the sea lev­els, the wa­ter is there if we need it later, and in the mean­time we get to en­joy the pretty rings.

(No, I’m not se­ri­ous.)

• in the mean­time we get to en­joy the pretty rings.

Now I’m cu­ri­ous how much en­ergy it would take to set up a sta­ble ring or­bit made of ice crys­tals for Earth, or if that would be im­pos­si­ble with­out sta­tion­keep­ing cor­rec­tions.

• How long will ice sur­vive in Earth’s or­bit, any­way?

• I think it would depend on the orbit? Obviously it would need to be in an orbit that does not collide with our artificial satellites, and it would need to be high enough to make atmospheric drag negligible, but that leaves a lot of potential orbits. I can’t think of any reason ice would go away with any particular haste from any of them, but I’m not an expert in this area.

Or­bital de­cay aside, why might ice (once placed into an at-the-time sta­ble or­bit) not sur­vive?

• why might ice (once placed into an at-the-time sta­ble or­bit) not sur­vive?

Sun.

Solar radiation at 1 AU is about 1.3 kW/m^2. Ice that is not permanently in the shade will disappear rather rapidly, I would think.

• I would think it would lose heat to space fast enough, but maybe not. I know heat dis­si­pa­tion is a ma­jor con­cern for space­craft, but those are usu­ally gen­er­at­ing their own heat rather than just try­ing to dump what they pick up from the sun. What would hap­pen to the ice /​ wa­ter? It’s not like it can just evap­o­rate into the at­mo­sphere...

• It’s not like it can just evap­o­rate into the at­mo­sphere...

Vapour doesn’t need an at­mo­sphere to take it up. Empty space does just as well.

So, how long would a snow­ball in high or­bit last? Sounds like a ques­tion for xkcd. A brief at­tempt at a lower bound that is prob­a­bly a sub­stan­tial un­der­es­ti­mate:

How much energy has to be pumped in per kilogram to turn ice at whatever the “temperature” is in orbit into water vapour? Call that E. Let S be the solar insolation of 1.3 kW/m^2. Imagine the ice is a spherical cow, er, a rectangular block directly facing the sun. According to Wikipedia the albedo of sea ice is in the range 0.5 to 0.7. Take that as 0.6, so the fraction of energy retained is A = 0.4. The density of ice is D = 916.7 kg/m^3. Ignore radiative cooling, conduction to the cold side of the iceberg, and time spent in the Earth’s shadow, and assume that the water vapour instantly vanishes. Then the surface will ablate at a rate of SA/(ED) m/s, or equivalently take ED/(86400·SA) days per metre.

For sim­plic­ity I’ll take the ice to be at freez­ing point. Then:

E = 334 kJ/​kg to melt + 420 kJ/​kg to reach boiling point + 2260 kJ/​kg to boil = 3014 kJ/​kg.

For a lower start­ing tem­per­a­ture, in­crease E ac­cord­ingly.

3014 × 916.7 / (86400 × 1.3 × 0.4) ≈ 61 days per metre. Not all that long, but meanwhile, you’ve created a hazard for space flight and for the skyhook.

I sus­pect that ig­nor­ing ra­di­a­tive cool­ing will be the largest source of er­ror here, but this isn’t a black body, so I don’t know how closely the Ste­fan-Boltz­mann law will ap­ply, and I haven’t calcu­lated the re­sults if it did. (ETA: The black body tem­per­a­ture of the Moon is just un­der freez­ing.)

(ETA: fixed an er­ror in the calcu­la­tion of E, whereby I had 4200 in­stead of 420 kJ/​kg to reach boiling point. Also, past­ing in all the sig­nifi­cant figures from the sources doesn’t mean this is claimed to be any­thing more than a rough es­ti­mate.)
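The estimate above can be reproduced directly. The replies point out that in vacuum the ice sublimates rather than melting and boiling; swapping in the enthalpy of sublimation of ice (~2.8 MJ/kg) happens to give a similar figure:

```python
# Reproducing the back-of-envelope ablation estimate: rate = S*A/(E*D).
S = 1.3e3        # solar insolation at 1 AU, W/m^2
A = 0.4          # fraction of energy absorbed (albedo 0.6)
D = 916.7        # density of ice, kg/m^3
E_boil = 3014e3  # J/kg: 334 kJ melt + 420 kJ heat to 100 C + 2260 kJ boil

def days_per_metre(E):
    ablation_m_per_s = S * A / (E * D)     # metres of ice lost per second
    return 1 / (ablation_m_per_s * 86400)

print(days_per_metre(E_boil))      # roughly 61 days per metre, as above

# Using the enthalpy of sublimation instead (roughly 2834 kJ/kg at 0 C)
# gives an answer in the same ballpark.
print(days_per_metre(2834e3))
```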

• to reach boiling point

This is vac­uum—all liquid wa­ter will boil im­me­di­ately, at zero Cel­sius. Be­sides I’m sure there will be some sub­li­ma­tion of ice di­rectly to wa­ter va­por.

In fact, look­ing at wa­ter’s phase di­a­gram, in high vac­uum liquid wa­ter just doesn’t ex­ist so I think ice will sim­ply sub­li­mate with­out the in­ter­me­di­ate liquid stage.

• Right, I for­got the effect of pres­sure. So E will be differ­ent, per­haps very differ­ent. What will it be?

• Here is the proper math. This is ex­pressed in terms of ice tem­per­a­ture, though, so we’ll need to figure out how much the so­lar flux would heat the outer layer of ice first.

• One pos­si­bil­ity would be to re­place the ice caps by hand. Run a heated pipeline from the ocean to the ice­caps, pump wa­ter there, and let it freeze on its own. I don’t know how well that would work, and I sus­pect you’re bet­ter off just let­ting sea lev­els rise. If you need the land that bad, just make float­ing plat­forms.

Edit: Replace “ice caps” with “Antarctica”. Adding ice to the northern icecap, or even the southern one out where it’s floating, won’t alter the sea level, since floating objects displace their mass in water.

• This should be a what if ques­tion. I’d like to see what Ran­dall would do with it.

• Well, this is not pump­ing, but it might be much more effi­cient: As I un­der­stand, the po­lar ice caps are in an equil­ibrium be­tween snow­fall and runoff. If you could some­how wall in a large por­tion of po­lar ice, such that it can­not flow away, it might rise to a much higher level and se­quester enough wa­ter to make a differ­ence in sea lev­els. A su­per-large ver­sion of a hy­dro­elec­tric dam, in effect, for ice.

It might also help to have a very high wall around the patch to keep air from cir­cu­lat­ing, keep­ing the cold po­lar air where it is and re­duce evap­o­ra­tion/​sub­li­ma­tion.

• Is there a causal link be­tween be­ing rel­a­tively lonely and iso­lated dur­ing school years and (higher chance of) end­ing up a more in­tel­li­gent, less shal­low, more suc­cess­ful adult?

Imagine that you have a pre-school child who has socialization problems, finds it difficult to do anything in a group of other kids, to acquire friends, etc., but cognitively the kid’s fine. If nothing changes, the kid is looking at being shunned or mocked as weird throughout school. You work hard on overcoming the social issues, maybe you go with the kid to a therapist, you arrange play-dates, you play-act social scenarios with them…

Then your friend comes up to have a heart-to-heart talk with you. Look, your friend says. You were a nerd at school. I was a nerd at school. We each had one or two friends at best and never hung out with pop­u­lar kids. We were never part of any crowd. In­stead we read books un­der our desks dur­ing les­sons and read SF nov­els dur­ing the breaks and read sci­ence en­cy­clo­pe­dias dur­ing din­ner at home, and started pro­gram­ming at 10, and and and. Now you’re work­ing so hard to give your kid a full so­cial life. You barely had any, are you sure now you’d rather you had it oth­er­wise? Let me be frank. You have a smart kid. It’s nor­mal for a smart kid to be kind of lonely through­out school, and never hang out with lots of other kids, and read books in­stead. It builds sub­stance. Hav­ing a lousy so­cial life is not the failure sce­nario. The failure sce­nario is to have a very full and happy school ex­pe­rience and end up a ditzy ado­les­cent. You should worry about that much much more, and dis­tribute your efforts ac­cord­ingly.

Is your friend com­pletely as­i­nine, or do they have a point?

• Seems to me that very high intelligence can cause problems with socialization: you are different from your peers, so it is more difficult for you to model them, and for them to model you. You see each other as “weird”. (There is a similar problem for very low intelligence.) Intelligence causes loneliness, not the other way round.

But this depends on the environment. If you are a highly intelligent person surrounded by enough highly intelligent people, then you do have the company of intellectual peers, and you will not feel alone.

I am not sure about the re­la­tion be­tween read­ing many books and be­ing “less shal­low”. Do in­tel­li­gent kids sur­rounded by in­tel­li­gent kids also read a lot?

• All of this is very true (for me, any­way—typ­i­cal mind fal­lacy and all that). High in­tel­li­gence does seem to cause so­cial iso­la­tion in most situ­a­tions. How­ever, I also agree with this:

But this depends on the environment. If you are a highly intelligent person surrounded by enough highly intelligent people, then you do have the company of intellectual peers, and you will not feel alone.

High in­tel­li­gence does not in­trin­si­cally have a nega­tive effect on your so­cial skills. Rather, I feel that it’s the lack of peers that does that. Lack of peers leads to lack of re­lata­bil­ity leads to lack of so­cial­iza­tion leads to lack of prac­tice leads to (even­tu­ally) poor so­cial skills. Worse yet, even­tu­ally that starts feel­ing like the norm to you; it no longer feels strange to be the only one with­out any real friends. When you do find a suit­able so­cial group, on the other hand, I can tes­tify from ex­pe­rience that the feel­ing is ab­solutely ex­hil­arat­ing. That’s pretty much the main rea­son I’m glad I found Less Wrong.

• It is not true that people cannot—or do not—interact successfully with people who are less intelligent than they are. Many children get along well with their younger siblings. Many adults love being kindergarten teachers, or feel highly engaged working in the dementia wing of the rest home. Many people of all intelligence levels love having very dumb pets. These are not people (or beings) that you relate to because of their “relatability” in the sense that they are like you, but because they are meaningful to you. And interacting with people builds social skills appropriate to those people—which may not be very generalizable when you are practicing on kindergarten students, but is certainly a useful skill when you are interacting with average people.

I per­son­ally would think that the prob­lem un­der dis­cus­sion is not re­lated to in­tel­li­gence, but in try­ing to help an in­tro­vert iden­tify the most fulfilling in­ter­per­sonal bonds with­out mak­ing them more so­cial in a gen­eral sense. How­ever, I don’t know the kid in ques­tion, so I can’t say.

• My friend isn’t ob­vi­ously-to-me wrong, but their ar­gu­ment is un­con­vinc­ing to me.

It’s nor­mal for a smart kid to be kind of lonely—if true, that’s sad, and by de­fault we should try to fix it.

It builds substance—citation needed. It seems like it could just as easily build insecurity, resentment, etc.

Lousy so­cial life—this is a failure mode. It might not be the worst one, but it seems like the most likely one, so de­serv­ing of at­ten­tion.

FWIW, I’m an adult who was kind of lonely as a kid, and on the mar­gin I think that hav­ing a more ac­tive so­cial life then would have had pos­i­tive effects on me now.

• It’s nor­mal for a smart kid to be kind of lonely—if true, that’s sad, and by de­fault we should try to fix it.

True, but it may be one of those prob­lems that’s just not fix­able with­out se­ri­ously re­struc­tur­ing the school sys­tem, es­pe­cially if some­thing like Villiam_Bur’s the­ory is true.

It builds substance—citation needed. It seems like it could just as easily build insecurity, resentment, etc.

Speak­ing from ex­pe­rience, I can tell you that I know a lot more than any of my peers (I’m 16), and prac­ti­cally all of that is due to the read­ing I did and am still do­ing. That read­ing was a di­rect re­sult of my iso­la­tion and would likely not have oc­curred had I been more so­cially ac­cepted. I should add that I have never once felt re­sent­ment or in­se­cu­rity due to this, though I have de­vel­oped a slight sense of su­pe­ri­or­ity. (That last part is some­thing I am work­ing to fix.)

Lousy so­cial life—this is a failure mode. It might not be the worst one, but it seems like the most likely one, so de­serv­ing of at­ten­tion.

I sup­pose this one de­pends on how you define a “failure mode”. I have never viewed my lack of so­cial life as a bad thing or even a hin­drance, and it doesn’t seem like it will have many long-term effects ei­ther—it’s not like I’ll be reg­u­larly in­ter­act­ing with my cur­rent peers for the rest of my life.

Again, this de­pends on how you define “ditzy”. Based on my ob­ser­va­tions of a typ­i­cal high school stu­dent at my age, I would not hes­i­tate to clas­sify over 90% of them as “ditzy”, if by “ditzy” you mean “play­ing so­cial sta­tus games that will have lit­tle im­pact later on in life”. I shud­der at the thought of ever be­com­ing like that, which to me sounds like a much worse prospect than not hav­ing much of a so­cial life.

FWIW, I’m an adult who was kind of lonely as a kid, and on the mar­gin I think that hav­ing a more ac­tive so­cial life then would have had pos­i­tive effects on me now.

I see. Well, to each his own. I my­self can­not imag­ine grow­ing up with any­thing other than the child­hood I did, but that may just be lack of imag­i­na­tion on my part. Who knows; maybe I would have turned out bet­ter than I did if I had had more so­cial in­ter­ac­tion dur­ing child­hood. Then again, I might not have. Without con­crete data, it’s re­ally hard to say.

• It builds substance—citation needed. It seems like it could just as easily build insecurity, resentment, etc.

Speak­ing from ex­pe­rience, I can tell you that I know a lot more than any of my peers (I’m 16), and prac­ti­cally all of that is due to the read­ing I did and am still do­ing. That read­ing was a di­rect re­sult of my iso­la­tion and would likely not have oc­curred had I been more so­cially ac­cepted. I should add that I have never once felt re­sent­ment or in­se­cu­rity due to this, though I have de­vel­oped a slight sense of su­pe­ri­or­ity. (That last part is some­thing I am work­ing to fix.)

Read­ing a ton as a teen was very helpful to me also, but I think I would have still done it if I had a rich so­cial life of peo­ple who were also smart and en­joyed read­ing. Ul­ti­mately be­ing around peers who challenge me is more mo­ti­vat­ing than be­ing iso­lated; I don’t want to be the one drag­ging be­hind.

I do feel that I had to learn a fair amount of ba­sic so­cial skills through de­liber­ately watch­ing and tak­ing apart, rather than just learn­ing through do­ing—mak­ing me some­what the so­cial equiv­a­lent of some­one who has learned a for­eign lan­guage through study rather than by grow­ing up a na­tive speaker; I have the pat­tern of strengths and weak­nesses as­so­ci­ated with the differ­ent ap­proach.

• There may be a choice be­tween a lot of time think­ing/​learn­ing vs. a lot of time so­cial­iz­ing.

It seems to me that a lot of fa­mous cre­ative peo­ple were child­hood in­val­ids, though I haven’t heard of any such from re­cent decades. It may be that the right level of in­val­idism isn’t com­mon any more.

• Here is Paul Gra­ham’s es­say on the sub­ject.

• I think I re­mem­ber read­ing that fa­mous in­ven­tors were likely to be iso­lated due to ill­ness as chil­dren. I think it’s un­likely that in­tel­li­gence is de­creased by be­ing well-so­cial­ized, but it seems pos­si­ble to me that peo­ple who are very well-so­cial­ized might find them­selves think­ing of fewer origi­nal ideas.

• Can anyone link a deep discussion of what would be involved in interstellar travel, including energy and time requirements, issues with spaceship shielding from radiation and collisions, etc.? I ask because I am wondering whether this is substantially more difficult than we often imagine, and perhaps a bottleneck in the Drake Equation.

• tl;dr: It is definitely more difficult than most people think, because most people’s thoughts (even scientifically educated ones) are heavily influenced by sci-fi, which is almost invariably premised on having easy interstellar transport. Even authors like Clarke with difficult interstellar transport assume that the obvious problems (e.g., lightspeed) remain, but the non-obvious problems (e.g., what happens when something breaks when you’re two light-years from the nearest macroscopic object) disappear.

• Some com­ments on this from Charles Stross. Not op­ti­mistic about the prospects. Some­what quan­ti­ta­tive, at the back-of-en­velope level of de­tail.

• Pro­ject Icarus seems like a de­cent place to start.

• You might want to check out Cen­tauri Dreams, best blog ever and ded­i­cated to this is­sue.

• A fair bit of this is ei­ther cited or calcu­lated within “Eter­nity in six hours.” See also my in­ter­view with one of its au­thors, and this re­view by Nick Beck­stead.

• Are there any good trust, value, or rep­u­ta­tion met­rics in the open source space? I’ve re­cently es­tab­lished a small in­ter­nal-use Dis­course fo­rum and been rather ap­palled by the limi­ta­tions of what is in­tended to be a next-gen­er­a­tion sys­tem (sta­tus flag, num­ber of posts, tag­ging), and from a quick overview most com­peti­tors don’t seem to be much stronger. Even fairly spe­cial­ist fora only seem marginally more ca­pa­ble.

This is ob­vi­ously a re­ally hard prob­lem and con­flux of many other hard prob­lems, but it seems odd that there are so many ob­vi­ous im­prove­ments available.

((In­spired some­what by my frus­tra­tion with Karma, but I’m hon­estly more in­ter­ested in its rele­vance for out­side situ­a­tions.))

• Tan­gen­tially, is it pos­si­ble for a good rep­u­ta­tion met­ric to sur­vive at­tacks in real life?

Imag­ine that you be­come e.g. a fa­mous com­puter pro­gram­mer. But al­though you are a celebrity among free soft­ware peo­ple, you fail to con­vert this fame to money. So must keep a day job at a com­puter com­pany which pro­duces shitty soft­ware.

One day your boss will realize that you have high prestige in the given metric while the company has low prestige. So the boss will ask you to “recommend” the company on your social network page (which would increase the company’s prestige and hopefully its profit, and might decrease your prestige as a side effect). Maybe this would be illegal, but let’s suppose it isn’t, or that you are not in a position to refuse. Or you could imagine a more dramatic situation: you are a widely respected political or economic expert, it is 12 hours before the election, and a political party has kidnapped your family and threatens to kill them unless you “recommend” this party, which according to their model would help them win the election.

In other words, even a digi­tal sys­tem that works well could be vuln­er­a­ble to at­tacks from out­side of the sys­tem, where oth­er­wise trust­wor­thy peo­ple are forced to act against their will. A pos­si­ble defense would be if peo­ple could some­how hide their votes; e.g. your boss might know that you have high pres­tige and the com­pany has low pres­tige, but has no meth­ods to ver­ify whether you have “recom­mended” the com­pany or not (so you could just lie that you did). But if we make ev­ery­thing se­cret, is there a way to ver­ify whether the sys­tem is re­ally work­ing as de­scribed? (The owner of the sys­tem could just add 9000 trust points to his fa­vorite poli­ti­cal party and no one would ever find out.)

I sus­pect this is all con­fused and I am ask­ing a wrong ques­tion. So feel free to an­swer to ques­tion I should have asked.

• There are si­mul­ta­neously a large num­ber of laws pro­hibit­ing em­ploy­ers from re­tal­i­at­ing against per­sons for vot­ing, and a num­ber of ac­cu­sa­tions of re­tal­i­a­tion for vot­ing. So this isn’t a the­o­ret­i­cal is­sue. I’m not sure it’s dis­tinct from other meth­ods of com­pro­mis­ing trusted users—the effects are similar whether the com­pro­mised node was beaten with a wrench, got brain-eaten, or just trusted Microsoft with their Cer­tifi­cates—but it’s a good demon­stra­tion that you sim­ply can’t trust any node in­side a net­work.

(There’s some in­ter­est­ing over­lap with MIRI’s value sta­bil­ity ques­tions, but they’re prob­a­bly out­side the scope of this thread and pos­si­bly only metaphor-level.)

Interestingly, there are some security systems designed with the assumption that some number of their nodes will be compromised, and with some resistance to such attacks. I’ve not seen this expanded to reputation metrics, though, and there are technical limitations. TOR, for example, can only resist about a third of its nodes being compromised, and possibly fewer than that. Other setups have higher theoretical resistance, but depend on central high-value nodes that trade that resistance for vulnerability to spoofing.

It seems like there’s some value in clos­ing the gap be­tween car­rier wave and sig­nal in rep­u­ta­tion sys­tems, rather than a dis­crete rep­u­ta­tion sys­tem, but my sketched out im­ple­men­ta­tions be­come com­pu­ta­tion­ally in­tractable quickly.

• I don’t have a solu­tion for you, but a re­lated prob­a­bly-un­solv­able prob­lem is what some friends of mine call “cash­ing in your rep­u­ta­tion cap­i­tal”: hav­ing done the work to build up a rep­u­ta­tion (for trust­wor­thi­ness, in par­tic­u­lar), you be­tray it in a prof­itable way and run.

… oth­er­wise trust­wor­thy peo­ple are forced to act against their will. … But if we make ev­ery­thing se­cret, is there a way to ver­ify whether the sys­tem is re­ally work­ing as de­scribed?

This is a prob­lem in elec­tions. In the US (I be­lieve de­pend­ing on state) there are rules which are in­tended to pre­vent some­one from be­ing able to provide proof that they have voted a par­tic­u­lar way (to make co­er­cion fu­tile), and the ques­tion then is whether the vote count­ing is ac­cu­rate. I would sug­gest that the topic of de­sign­ing fair elec­tions con­tains the an­swer to your ques­tion in­so­far as an an­swer ex­ists.

• In the US (I be­lieve de­pend­ing on state) there are rules which are in­tended to pre­vent some­one from be­ing able to provide proof that they have voted a par­tic­u­lar way (to make co­er­cion fu­tile),

And then there are ab­sen­tee bal­lots which po­ten­tially make said laws a joke.

• Are there any good trust, value, or rep­u­ta­tion metrics

The first prob­lem is defin­ing what do you want to mea­sure. “Trust” and “rep­u­ta­tion” are two-ar­gu­ment func­tions and “value” is no­to­ri­ously vague.

• For clar­ity, I meant “trust” and “rep­u­ta­tion” in the tech­ni­cal senses, where “trust” is au­then­ti­ca­tion, and where “rep­u­ta­tion” is an as­sess­ment or group of as­sess­ments for (ideally trusted) user rat­ings of an­other user.

But good point, es­pe­cially for value sys­tems.

• I am still con­fused. When you say that trust is au­then­ti­ca­tion, what is it that you au­then­ti­cate? Do you mean trust in the same sense as “web of trust” in PGP-type crypto sys­tems?

For rep­u­ta­tion as an as­sess­ment of user rat­ings, you can ob­vi­ously build a bunch of var­i­ous met­rics, but the real ques­tion is which one is the best. And that ques­tion im­plies an­other one: Best for what?

Note that weed­ing out idiots, sock­pup­pets, and trolls is much eas­ier than con­struct­ing a use­ful-for-ev­ery­one rank­ing of le­gi­t­i­mate users. Differ­ent peo­ple will ex­pect and want your rank­ings to do differ­ent things.

• what is it that you au­then­ti­cate? Do you mean trust in the same sense as “web of trust” in PGP-type crypto sys­tems?

For starters, a system to be sure that a user or service is the same user or service it was previously. A web of trust /or/ a central authority would work, but honestly we run into limits even before the gap between electronic worlds and meatspace. PGP would be nice, but PGP itself is closed-source, and neither PGP nor OpenPGP/GPG is user-accessible enough to even survive in the e-mail sphere it was originally intended to operate in. SSL allows for server authentication (ignoring the technical issues), but isn’t great for user authentication.

I’m not aware of any gen­er­al­ized im­ple­men­ta­tion for other use, and the clos­est pre­cur­sors (key­chain man­age­ment in Mur­mur/​Mum­ble server con­trol?) are both limited and in­tended to be ap­pli­ca­tion-spe­cific. But at the same time, I rec­og­nize that I don’t fol­low the se­cu­rity or open-source wor­lds as much as I should.

For rep­u­ta­tion as an as­sess­ment of user rat­ings, you can ob­vi­ously build a bunch of var­i­ous met­rics, but the real ques­tion is which one is the best. And that ques­tion im­plies an­other one: Best for what?

Oh, yeah. Right, it’s not an easy problem to solve.

I’m more interested in whether anyone’s trying to solve it. I can see a lot of issues with user-based reputation even beyond the obvious limitations and tradeoffs that fubarobfusco provides—a visible metric is more prone to being gamed, but obscuring the metric reduces its utility as feedback for “good” posting; value drift without a defined root versus possible closure with one; and so on.

What surprises me is that there are so few attempts to improve the system beyond the basics. IP.Board, vBulletin, and phpBoard plugins are usually pretty similar—the best I’ve seen merely lets you disable them on a per-subforum basis rather than globally, and they otherwise use a single point score. Reddit uses the same karma system whether you’re answering a complex scientific question or making a bad joke. LessWrong improves on that only by allowing users to see how contentious a comment’s scoring is. Discourse uses post counts and tags, almost embarrassingly minimalistic. I’ve seen a few systems that make moderator and admin “likes” count for more. I think that’s about the fanciest.

I don’t expect them to have an implementation that matches my desires, but I’m really surprised that there are no attempts to run multi-dimensional reputation systems, or to weigh votes by length of post or age of poster, or by spellcheck or capitalization thresholds. These might even be /bad/ decisions, but usually you see someone making them.
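As a toy sketch of what such a multi-dimensional weighting might look like (the field names, weights, and thresholds here are all hypothetical, not a description of any existing forum system):

```python
# Toy multi-dimensional vote weighting: votes count for more when cast by
# older accounts, and votes on very short posts are discounted.
from dataclasses import dataclass

@dataclass
class Vote:
    direction: int                 # +1 (upvote) or -1 (downvote)
    voter_account_age_days: int

def weighted_score(votes: list[Vote], post_length_chars: int) -> float:
    score = 0.0
    for v in votes:
        # accounts gain weight with age, capped at 2x a brand-new account
        age_weight = min(1.0 + v.voter_account_age_days / 365.0, 2.0)
        score += v.direction * age_weight
    # discount votes on one-liners relative to substantial posts
    length_weight = 1.0 if post_length_chars >= 200 else 0.5
    return score * length_weight

# A year-old account's upvote outweighs a brand-new account's downvote:
print(weighted_score([Vote(+1, 365), Vote(-1, 0)], post_length_chars=500))  # prints 1.0
```

Whether such weights are /good/ design is exactly the open question above; the point is only that the mechanics are simple to build.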

I expect Twitter or Facebook have something complex underneath the hood, but if they do, they’re not talking about the specifics and not doing a very good job. Maybe it’s their dominance in the social development community, but I dunno.

• For starters, a sys­tem to be sure that a user or ser­vice is the same user or ser­vice it was pre­vi­ously.

That seems to be pretty triv­ial. What’s wrong with a user­name/​pass­word combo (be­sides all the usual things) or, if you want to get a bit more so­phis­ti­cated, with hav­ing the user gen­er­ate a pri­vate key for him­self?

You don’t need a web of trust or any cen­tral au­thor­ity to ver­ify that the user named X is in pos­ses­sion of a pri­vate key which the user named X had be­fore.

I’m more in­ter­ested in if any­one’s try­ing to solve it.

Well, again, the crit­i­cal ques­tion is: What are you re­ally try­ing to achieve?

If you want the on­line equiv­a­lent of the meatspace rep­u­ta­tion, well, first meatspace rep­u­ta­tion does not ex­ist as one con­ve­nient num­ber, and sec­ond it’s still a two-ar­gu­ment func­tion.

there are no attempts to run multi-dimensional reputation systems, or to weigh votes by length of post or age of poster, or by spellcheck or capitalization thresholds.

Once again, with feel­ing :-D—to which pur­pose? Gen­er­ally speak­ing, if you run a fo­rum all you need is a way to filter out idiots and trolls. Your reg­u­lar users will figure out rep­u­ta­tion on their own and their con­clu­sions will be all differ­ent. You can build an au­to­mated sys­tem to suit your fancy, but there’s no guaran­tee (and, ac­tu­ally, a pretty solid bet) that it won’t suit other peo­ple well.

I ex­pect Twit­ter or FaceBook have some­thing com­plex un­der­neath the hood

Why would Twit­ter or FB bother as­sign­ing rep­u­ta­tion to users? They want to filter out bad ac­tors and max­i­mize their eye­balls and their rev­enue which gen­er­ally means keep­ing users suffi­ciently happy and well-mea­sured.

• That seems to be pretty triv­ial. What’s wrong with a user­name/​pass­word combo (be­sides all the usual things)

“All the usual things” are many, and some of them are quite wrong in­deed.

If you need solid long-term au­then­ti­ca­tion, out­source it to some­one whose busi­ness de­pends on do­ing it right. Google for in­stance is re­ally quite good at de­tect­ing unau­tho­rized use of an ac­count (i.e. your Gmail get­ting hacked). It’s bet­ter (for a num­ber of rea­sons) not to be be­holden to a sin­gle au­then­ti­ca­tion provider, though, which is why there are things like OpenID Con­nect that let users au­then­ti­cate us­ing Google, Face­book, or var­i­ous other sources.

On the other hand, if you need au­tho­riza­tion with­out (much) au­then­ti­ca­tion — for in­stance, to let anony­mous users delete their own posts, but not other peo­ple’s — maybe you want trip­codes.
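A tripcode boils down to something like the following (classic imageboard tripcodes use a DES-crypt variant rather than SHA-256; this is just an illustrative sketch with an invented salt):

```python
import hashlib

def tripcode(secret: str, salt: str = "forum-wide-salt") -> str:
    """Derive a short public identity tag from a user-held secret.
    Anyone can see the tag; only the holder of the secret can
    reproduce it, so an anonymous poster can later show that an
    earlier post was theirs (or delete it) without registering."""
    digest = hashlib.sha256((salt + secret).encode()).hexdigest()
    return digest[:10]

# Same secret, same tag; different secret, different tag.
assert tripcode("hunter2") == tripcode("hunter2")
assert tripcode("hunter2") != tripcode("hunter3")
```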

And if you need to de­tect sock pup­pets (one per­son pre­tend­ing to be sev­eral peo­ple), you may have an easy time or you may be in hard ma­chine-learn­ing ter­ri­tory. (See the ob­vi­ous re­cent thread for more.) Some ser­vices — like Wikipe­dia — seem to at­tract some re­ally ded­i­cated pup­peteers.

• What’s wrong with a user­name/​pass­word combo (be­sides all the usual things) or, if you want to get a bit more so­phis­ti­cated, with hav­ing the user gen­er­ate a pri­vate key for him­self?

In addition to the usual problems, which are pretty serious to start with, you’re relying on the client. To borrow from information security: the client is in the hands of the enemy. Sockpuppet attacks (“sybil” attacks, in trust-network terms), where one entity pretends to be many different users, and impersonation attacks, where a user pretends to be someone they are not, are both well-documented and exceptionally common. Every forum package I can find relies on social taboos or simply ignores the problem, falling back on direct human administrator intervention, and most don’t even make administrator intervention easy.

There are also very few sites with integrated support for private-key-like technologies, and most forum packages aren’t readily compatible with even all password managers.

This isn’t a problem that can be solved perfectly, true. But right now it hasn’t even got bandaids.

Once again, with feel­ing :-D—to which pur­pose? Gen­er­ally speak­ing, if you run a fo­rum all you need is a way to filter out idiots and trolls. Your reg­u­lar users will figure out rep­u­ta­tion on their own and their con­clu­sions will be all differ­ent.

“Normal” social reputation runs into pretty significant issues once your group grows past even a fairly small size: I can imagine folks who could handle a couple thousand names, but it’s common for a site to have orders of magnitude more users. These systems can provide useful tools for noticing and handling matters that are much more evident in raw data than in “expert judgments”. But these are relatively minor benefits.

At a deeper level, a well-formed rep­u­ta­tion sys­tem should en­courage ‘good’ post­ing (post­ing that matches the ex­pressed de­sires of the fo­rum com­mu­nity) and dis­cour­age ‘bad’ posts (post­ing that goes against the ex­pressed de­sires of the fo­rum com­mu­nity), as well as re­duce in­cen­tives to­ward me-too or this-is-wrong-stop re­sponses.

This isn’t without trade-offs: you’ll implicitly make the forum’s culture drift more slowly, and encourage surviving dissenters to be contrarians for whom the reputation system doesn’t matter. But the existing reputation systems don’t let you make that trade-off; instead you have to decide whether to use a far more naive system that is very vulnerable to attack.

You can build an automated system to suit your fancy, but there’s no guarantee that it will suit other people well (and, actually, a pretty solid bet that it won’t).

To some extent: spell-check and capitalization expectations for a writing community will be different from those of a video game or chemistry forum, and help forums will expect shorter-lifespan users than the median community. But a sizable number of these aspects are common to nearly all communities.

Why would Twit­ter or FB bother as­sign­ing rep­u­ta­tion to users? They want to filter out bad ac­tors and max­i­mize their eye­balls and their rev­enue which gen­er­ally means keep­ing users suffi­ciently happy and well-mea­sured.

They have in­cen­tives to­ward keep­ing users. “Bad” posters are tau­tolog­i­cally a dis­in­cen­tive for most users (ex­cep­tions: some folk do show re­vealed prefer­ences for hear­ing from ter­rible peo­ple).

• the client is in the hands of the enemy

Yes, of course, but if we start to talk in these terms, the first in line is the stan­dard ques­tion: What is your threat model?

I also don’t think there’s a good solu­tion to sock­pup­petry short of manda­tory bio­met­rics.

But the ex­ist­ing rep­u­ta­tion sys­tems don’t let you make that trade-off

Why not? The trade-off is in the details of how much reputation matters. There is a large space between reputation being just a number that’s not used anywhere and reputation determining what, how, and when you can post.

very vuln­er­a­ble to attack

“Bad” posters are tau­tolog­i­cally a dis­in­cen­tive for most users

Not if you can trivially block/ignore them, which is the case for Twitter and FB.

• What is your threat model?

An at­tacker cre­ates a large num­ber of nodes and over­whelms any sig­nal in the ini­tial sys­tem.

For the spe­cific ex­am­ple of a red­dit-based fo­rum, it’s triv­ial for an at­tacker to make up a siz­able pro­por­tion of as­signed rep­u­ta­tion points through the use of sock­pup­pets. It is only mod­er­ately difficult for an at­tacker to au­to­mate the time-con­sum­ing por­tions of this pro­cess.

I also don’t think there’s a good solu­tion to sock­pup­petry short of manda­tory bio­met­rics.

10% of the problem is hard. That does not explain the small amount of work done on the other 90%. The vast majority of sockpuppets aren’t that complicated: most don’t use VPNs or anonymizers, most don’t show large stylistic variation, and many even use the same browser from one persona to the next. It’s also common for sockpuppets to share certain network attributes with their original persona. Full authorship analysis has both structural (primarily training bias) and pragmatic (CPU time) limitations that would make it infeasible for large forums...

But there are a number of fairly simple anti-sockpuppet checks that computers handle better than humans, yet which today still require often-unpleasant manual work.
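As an illustration, the “same network attributes, same browser” case needs nothing fancier than grouping accounts by a crude fingerprint; the field names and sample data below are hypothetical:

```python
from collections import defaultdict
from itertools import combinations

def sockpuppet_candidates(accounts):
    """Flag account pairs that share crude network/browser attributes.
    `accounts` maps name -> dict with 'ip24' (first three IP octets)
    and 'user_agent'; both fields are illustrative, not from any
    real forum package."""
    by_fingerprint = defaultdict(list)
    for name, attrs in accounts.items():
        by_fingerprint[(attrs["ip24"], attrs["user_agent"])].append(name)
    pairs = []
    for names in by_fingerprint.values():
        # Any two accounts behind the same fingerprint get flagged
        # for (still human) review.
        pairs.extend(combinations(sorted(names), 2))
    return pairs

accounts = {
    "earnest_user": {"ip24": "203.0.113", "user_agent": "Firefox/34"},
    "puppet_a":     {"ip24": "198.51.100", "user_agent": "Chrome/39"},
    "puppet_b":     {"ip24": "198.51.100", "user_agent": "Chrome/39"},
}
print(sockpuppet_candidates(accounts))  # [('puppet_a', 'puppet_b')]
```

This catches only the lazy majority, of course; the determined 10% who rotate IPs and browsers need heavier machinery.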

Why not? The trade-off is in the de­tails of how much rep­u­ta­tion mat­ters. There is a large space be­tween rep­u­ta­tion be­ing just a num­ber that’s not used any­where and rep­u­ta­tion de­ter­min­ing what, how, and when can you post.

Yes, but there are no documented open-source systems that do these things beyond the most basic level. At most, there are simple reputation systems where a small amount of reputation has an impact on site functionality, such as this site. But Reddit’s codebase does not allow upvotes to be limited or weighted based on the age of an account, and would require pretty significant work to change any of these attributes. (The main site at least acts against some of the more overt mass-downvoting by discounting downvotes applied to the profile page, but this doesn’t seem present here?)

Not if you can triv­ially easy block/​ig­nore them which is the case for Twit­ter and FB.

If a large enough percentage of outside user content is “bad”, users begin to treat that space as advertising and ignore it. Many forums also don’t make it easy to block users (see here), and almost none handle blocking even the most overt sockpuppets well.

• An at­tacker cre­ates a large num­ber of nodes and over­whelms any sig­nal in the ini­tial sys­tem.

For the spe­cific ex­am­ple of a red­dit-based fo­rum, it’s triv­ial for an at­tacker to make up a siz­able pro­por­tion of as­signed rep­u­ta­tion points through the use of sock­pup­pets. It is only mod­er­ately difficult for an at­tacker to au­to­mate the time-con­sum­ing por­tions of this pro­cess.

Limit the abil­ity of low karma users to up­vote.
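That suggestion could be as small as a weighting function; the karma and account-age thresholds below are invented for illustration, not taken from any real forum:

```python
def vote_weight(karma: int, account_age_days: int) -> float:
    """Hypothetical thresholds: fresh or low-karma accounts
    contribute nothing, mid-karma accounts count for half."""
    if karma < 10 or account_age_days < 7:
        return 0.0
    if karma < 100:
        return 0.5
    return 1.0

def post_score(votes):
    """votes: list of (karma, account_age_days), one per upvoter."""
    return sum(vote_weight(k, a) for k, a in votes)

# A 50-puppet army of fresh accounts moves the score not at all:
puppets = [(0, 1)] * 50
regulars = [(500, 900), (120, 400), (40, 90)]
print(post_score(puppets))   # 0.0
print(post_score(regulars))  # 2.5
```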

• You seem to want to build a mas­sive sledge­ham­mer-wield­ing mech to solve the prob­lem of fruit flies on a ba­nana.

So the attacker expends a not inconsiderable amount of effort to build his sockpuppet army and achieves sky-high karma on a forum. And..? It’s not like you can sell karma, or even gain respect for your posts from anyone but newbies. What would be the point?

Not to mention that there is a lot of empirical evidence out there: formal reputation systems on forums go back at least as far as early Slashdot and, y’know, they kinda work. They don’t achieve anything spectacular, but they also tend not to have massive failure modes. Once the sockpuppet general gains the attention of an admin or at least a moderator, his army is useless.

You want to write a library which will attempt to identify sockpuppets through some kind of multifactor analysis? Sure, that would be a nice thing to have, as long as it’s reasonable about things. One of the problems with automated defense mechanisms is that they can often be used as DOS tools if the admin is not careful.

If a large enough per­centage of out­side user con­tent is “bad”

That still ac­tu­ally is the case for Twit­ter and FB.

• Inre: Face­book/​Twit­ter:

TL;DR: I think Twitter, Facebook, et al. do have something complex, but it is outside the hood rather than under it. (I guess they could have both.)

The “friending” system takes advantage of humans’ built-in reputation system. When I look at X’s user page, it tells me that W, Y, and Z also follow/“friended” X. Then when I make my judgment of X, X leaches some amount of “free” “reputation points” from W’s, Y’s, and Z’s “reputations”. Of course, if W, Y, and Z all have bad reputations, that is reflected too. Maybe W and Z have good reputations, but Y does not; now I’m not sure what X’s reputation should be like and need to look at X more closely.

Of course, this doesn’t scale be­yond a cou­ple hun­dred peo­ple.
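A toy version of that “leaching” estimate, with made-up names and numbers:

```python
def inferred_reputation(target, followers_of, known_reputation):
    """Estimate an unknown user's reputation as the mean of the
    known reputations of the people who follow them. A deliberately
    flat model of the heuristic described above; all inputs are
    hypothetical."""
    reps = [known_reputation[f] for f in followers_of[target]
            if f in known_reputation]
    if not reps:
        return None  # no basis for judgment: look at the user directly
    return sum(reps) / len(reps)

followers_of = {"X": ["W", "Y", "Z"]}
known_reputation = {"W": 0.9, "Y": 0.2, "Z": 0.8}
print(inferred_reputation("X", followers_of, known_reputation))
```

The mixed result for X (good W and Z, bad Y) lands in the ambiguous middle, which matches the “need to look at X more closely” case above.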

• I don’t know of one. I doubt that ev­ery­one wants the same sort of thing out of such a met­ric. Just off the top of my head, some pos­si­ble con­flicts:

• Is a post good be­cause it at­tracts a lot of re­sponses? Then a flame­bait post that riles peo­ple into an un­pro­duc­tive squab­ble is a good post.

• Is a post good be­cause it leads to in­creased read­er­ship? Then spam­ming other fo­rums to pro­mote a post makes it a bet­ter post, and post­ing porn (or some­thing else ir­rele­vant that at­tracts at­ten­tion) is re­ally very good.

• Is a post good be­cause a lot of users up­vote it? Then peo­ple who cre­ate sock-pup­pet ac­counts to up­vote them­selves are bet­ter posters; as are peo­ple who re­cruit their friends to mass-up­vote their posts.

• Is a post good be­cause the mod­er­a­tor ap­proves of it? Then as the fo­rum be­comes more pop­u­lar, if the mod­er­a­tor has no ad­di­tional time to re­view posts, a diminish­ing frac­tion of posts are good.

The old wiki-oid site Every­thing2 ex­plic­itly as­signs “lev­els” to users, based on how pop­u­lar their posts are. Users who have proven them­selves have the abil­ity to sig­nal-boost posts they like with a su­per-up­vote.

It seems to me that some­thing analo­gous to PageRank would be an in­ter­est­ing ap­proach: the es­ti­mated qual­ity of a post is speci­fi­cally an es­ti­mate of how likely a high-qual­ity fo­rum mem­ber is to ap­pre­ci­ate that post. Long-term high-qual­ity posters’ up­votes should prob­a­bly count for a lot more than new­com­ers’ votes. And mod­er­a­tors or other cen­tral, core-team users should prob­a­bly be able to man­u­ally ad­just a poster’s qual­ity score to com­pen­sate for things like a formerly-good poster go­ing off the deep end, the rev­e­la­tion that some­one is a troll or sabo­teur, or (in the pos­i­tive di­rec­tion) some­one of known-good offline rep­u­ta­tion join­ing the fo­rum.
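A rough sketch of such a PageRank-flavored scheme, in which a vote’s weight is the voter’s own current quality score (the damping constant and vote graph below are invented for illustration):

```python
def quality_scores(votes, n_iters=20, damping=0.85):
    """votes: dict author -> set of users who upvoted that author.
    Iteratively recompute scores so that upvotes from high-quality
    posters count for more, in rough analogy to PageRank."""
    users = set(votes) | {v for vs in votes.values() for v in vs}
    score = {u: 1.0 for u in users}
    for _ in range(n_iters):
        new = {}
        for u in users:
            incoming = sum(score[v] for v in votes.get(u, ()))
            new[u] = (1 - damping) + damping * incoming
        # Normalize so scores stay comparable across iterations.
        total = sum(new.values())
        score = {u: s * len(users) / total for u, s in new.items()}
    return score

votes = {
    "veteran": {"mod", "regular"},
    "newbie": {"sock1", "sock2", "sock3"},  # three zero-history puppets
    "mod": {"veteran", "regular"},
    "regular": {"veteran", "mod"},
}
s = quality_scores(votes)
# Two endorsements from established posters outweigh three puppet votes:
print(s["veteran"] > s["newbie"])
```

The manual-adjustment idea above would just be a moderator overwriting an entry in `score` before the next iteration.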

• You may be interested in the new system called Dissent.

• Can anybody give me a good description of the term “metaphysical” or “metaphysics” in a way that is likely to stick in my head and be applicable to future contemplations and conversations? I have tried to read a few definitions and descriptions, but I’ve never been able to really grok any of them, and even when I thought I had a working definition it slipped out of my head when I tried to use it later. Right now its default function in my brain is, when uttered, to raise a flag that signifies “I can’t tell if this person is speaking at a level significantly above my comprehension or is just spouting bullshit, but either way I’m not likely to make sense of what they’re saying”, and it therefore tends to kill the mental process that was trying to follow what somebody was saying to me / what I was reading.

Given how often it comes up, and often from people I respect, I’m pretty sure that’s not the correct behavior. Figured it’s worth asking here. In case it wasn’t obvious, I have virtually no background in philosophy (though I’ve been looking to change that).

• Me­ta­physics: what’s out there? Episte­mol­ogy: how do I learn about it? Ethics: what should I do with it?

Ba­si­cally, think of any ques­tions that are of the form “what’s there in the world”, “what is the world made of”, and now take away ac­tual sci­ence. What’s left is meta­physics. “Is the world real or a fig­ment of my imag­i­na­tion?”, “is there such a thing as a soul?”, “is there such a thing as the color blue, as op­posed to ob­jects that are blue or not blue?”, “is there life af­ter death?”, “are there higher be­ings?”, “can in­finity ex­ist?”, etc. etc.

Note that “meta­phys­i­cal” also tends to be used as a feel-good word, mean­ing some­thing like “nobly philo­soph­i­cal, con­cerned with ques­tions of a higher na­ture than the ev­ery­day and the mun­dane”.

• Me­ta­physics: what’s out there?

Isn’t that on­tol­ogy? What’s the differ­ence?

• “On­tol­ogy” is firmly ded­i­cated to “ex­ist or doesn’t ex­ist”. Me­ta­physics is more broadly “what’s the world like?” and in­cludes on­tol­ogy as a cen­tral sub­field.

Whether there is free will is a meta­phys­i­cal ques­tion, but not, I think, an on­tolog­i­cal one (at least not nec­es­sar­ily). “Free will” is not a thing or a cat­e­gory or a prop­erty, it’s a claim that in some broad as­pects the world is like this and not like that.

Whether such things as de­sires or in­ten­tions ex­ist or are made-up fic­tions is an on­tolog­i­cal ques­tion.

• Thanks! I’ve seen many times the state­ment that on­tol­ogy is strictly in­cluded in meta­physics, but this is the first time I’ve seen an ex­am­ple of some­thing that’s in the set-the­o­retic differ­ence.

• On­tol­ogy is a sub­dis­ci­pline of meta­physics.

“Is the many-worlds hypothesis true?” might be a metaphysical question that is not directly ontological.

• A confusion of mine: how is epistemology a separate thing? Or is that just a flag for “we’re going to go meta-level”, applied to some particular topic?

E.g. I read a bit of Kant about ex­pe­rience, which I sup­pose is meta­physics (right?) but it seems like if he’s mak­ing any pos­i­tive claim, the de­bate about the claim is go­ing to be about the ar­gu­ments for the claim, which is set­tled via episte­mol­ogy?

• Hmm, I would dis­agree. If you have a meta­phys­i­cal claim, then ar­gu­ments for or against this claim are not nor­mally episte­molog­i­cal; they’re just ar­gu­ments.

Think of episte­mol­ogy as “be­ing meta about knowl­edge, all the time, and noth­ing else”.

What does it mean to know some­thing? How can we know some­thing? What’s the differ­ence be­tween “know­ing” a defi­ni­tion and “know­ing” a the­o­rem? Are there state­ments such that to know them true, you need no in­put from the out­side world at all? (Kant’s an­a­lytic vs syn­thetic dis­tinc­tion). Is 2+2=4 one such? If you know some­thing is true, but it turns out later it was false, did you ac­tu­ally “know” it? (many mil­lions of words have been writ­ten on this ques­tion alone).

Now, take some meta­phys­i­cal claim, and let’s take an es­pe­cially grand one, say “God is in­finite and om­nip­re­sent” or some­thing. You could ar­gue for or against that claim with­out ever go­ing into episte­mol­ogy. You could maybe ar­gue that the idea of God as ab­solute perfec­tion more or less re­quires Him to be pre­sent ev­ery­where, in the small­est atom and the re­motest star, at all times be­cause oth­er­wise it would be short of perfec­tion, or some­thing like this. Or you could say that if God is pre­sent ev­ery­where, that’s the same as if He was pre­sent nowhere, be­cause pres­ence man­i­fests by the differ­ence be­tween pres­ence and ab­sence.

But of course if you are a mod­ern per­son and es­pe­cially one in­clined to sci­en­tific think­ing, you would likely re­spond to all this “Hey, what does it even mean to say all this or for me to ar­gue this? How would I know if God is om­nip­re­sent or not om­nip­re­sent, what would change in the world for me to per­ceive it? Without some sort of episte­molog­i­cal un­der­pin­ning to this claim, what’s the differ­ence be­tween it and a string of empty words?”

And then you would be proceeding in the tradition started by Descartes, who arguably moved the center of philosophical thinking from metaphysics to epistemology in what’s called the “epistemological turn”, later boosted in the 20th century by the “linguistic turn” (attributed among others to Wittgenstein).

Me­ta­physics: X, amirite? Episte­molog­i­cal turn: What does it even mean to know X? Lin­guis­tic turn: What does it even mean to say X?

• Thanks. That’s still not even a lit­tle in­tu­itive to me, but it’s a Mon­day and I had to be up ab­surdly early, so if it makes any sense to me right now (and it does), I have hope that I’ll be able to in­ter­nal­ize it even if I always need to think about it a bit. We’ll see, prob­a­bly no sooner than to­mor­row though (sleeeeeeeeeep...).

I sus­pect that part of my prob­lem is that I keep try­ing to de­com­pose “meta­physics” into “physics about/​de­scribing/​in the area of physics” and my brain helpfully points out that not only is it ques­tion­able whether that makes any sense to be­gin with, it al­most never makes any sense what­so­ever in con­text. If I just need to in­stall a lin­guis­tic over­ride for that word, I can do it, but I want to know what the over­ride is sup­posed to be be­fore I go to the effort.

The feel-good-word mean­ing seems likely to be a close rel­a­tive of the flag-state­ment-as-bul­lshit mean­ing. That feels like a men­tal trap, though. The prob­lem is, at least half the “con­crete” ex­am­ples that I’ve seen in this thread also seem likely to have lit­tle to no util­ity (cer­tainly not enough to jus­tify think­ing about it for any length of time). Episte­mol­ogy and ethics have ob­vi­ous value, but it seems meta­physics comes up all the time in philo­soph­i­cal dis­cus­sion too.

• This is in no way an an­swer to your ac­tual ques­tion (Ana­toly’s is good) but it might amuse you.

“Meta” in Greek means some­thing like “af­ter” (but also “beside”, “among”, and var­i­ous other things). So there is a

Common misapprehension: metaphysics is so called because it goes beyond physics—it’s more abstract, more subtle, more elevated, more fundamental, etc.

This turns out not to be quite where the word comes from, so there is a

Common response: actually, it’s all because Aristotle wrote a book called “Physics” and another, for which he left no title, that was commonly shelved after the “Physics” (meta ta Phusika) and was commonly called the “Metaphysics”. And the topics treated in that book came to be called by that name. So the “meta” in the name really has nothing at all to do with the relationship between the subjects.

But ac­tu­ally it’s a bit more com­pli­cated than that; here’s the

Truth (so far as I un­der­stand it): in­deed Aris­to­tle wrote those books, and in­deed the “Me­ta­physics” is con­cerned with, well, meta­physics, and in­deed the “Me­ta­physics” is called that be­cause it comes “af­ter the Physics”. But the ear­liest sources we have sug­gest that the rea­son why the Me­ta­physics came af­ter the Physics is that Aris­to­tle thought it was im­por­tant for physics to be taught first. So ac­tu­ally it’s not far off to say that meta­physics is so called be­cause it goes be­yond physics, at least in the sense of be­ing a more ad­vanced topic (in Aris­to­tle’s time).

• In my ex­pe­rience peo­ple use “meta­physics” to re­fer to philo­soph­i­cal ex­plo­ra­tion of what kinds of things ex­ist and what the na­ture, be­hav­ior, etc. of those things is.

This is usu­ally treated as dis­tinct from sci­en­tific/​ex­per­i­men­tal ex­plo­ra­tion of what kinds of things ex­ist and what the na­ture, be­hav­ior, etc. of those things is, al­though those lines are blurry. So, for ex­am­ple, when Yud­kowsky cites Bar­bour dis­cussing the con­figu­ra­tion spaces un­der­ly­ing ex­pe­rienced re­al­ity, there will be some dis­agree­ment/​con­fu­sion about whether this is a con­ver­sa­tion about physics or meta­physics, and it’s not clear that there’s a fact of the mat­ter.

This is also usu­ally treated as dis­tinct from ex­plo­ra­tion of ob­jects and ex­pe­riences that pre­sent them­selves to our senses and our in­tu­itive rea­son­ing… e.g. shoes and ducks and choco­late cake. As a con­se­quence, de­scribing a thought or wor­ld­view or other cog­ni­tive act as “meta­phys­i­cal” can be­come a sta­tus ma­neu­ver… a way of dis­t­in­guish­ing it from ob­ject-level cog­ni­tion in an im­plied con­text where more ob­ject-level (aka “su­perfi­cial”) cog­ni­tion is seen as less so­phis­ti­cated or deep or oth­er­wise less valuable.

Some peo­ple also use “meta­phys­i­cal” to re­fer to a class of events also some­times referred to as “mys­ti­cal,” “oc­cult,” “su­per­nat­u­ral,” etc. Some­times this us­age is con­sis­tent with the above—that is, some­times peo­ple are ar­tic­u­lat­ing a model of the world in which those events can best be un­der­stood by un­der­stand­ing the re­al­ity which un­der­lies our ex­pe­rience of the world.

Other times it’s at best metaphor­i­cal, or just out­right bul­lshit.

As far as cor­rect be­hav­ior goes… ask­ing peo­ple to taboo “meta­phys­i­cal” is of­ten helpful.

• The ra­tio­nal­ist taboo is one of the tools I have most en­joyed learn­ing and found most use­ful in face-to-face con­ver­sa­tions since dis­cov­er­ing the Se­quences. Un­for­tu­nately, it’s not prac­ti­cal when deal­ing with mass-broad­cast or time-shifted ma­te­rial, which makes it of limited use in deal­ing with the sce­nar­ios where I most fre­quently en­counter the con­cept of meta­physics.

I tend to (over)re­act poorly to sta­tus ma­neu­vers, which is prob­a­bly part of why I’ve had a hard time with the word; it gets used in an in­for­ma­tion-free way suffi­ciently of­ten that I’m tempted to just always shelve it there, and that in turn leads me to dis­count or even ig­nore the en­tire thought which con­tained it. This is a bias I’m ac­tively try­ing to brain­hack away, and I’m now tempted to go find some of my philo­soph­i­cally-in­clined so­cial cir­cle and see if I can avoid that au­to­matic re­ac­tion at least where this spe­cific word is con­cerned (and then taboo it any­how, for the sake of com­mu­ni­ca­tion be­ing in­for­ma­tive).

I still haven’t fully in­ter­nal­ized the con­cept, but I’m get­ting closer. “The kinds of things that ex­ist, and their na­tures” is some­thing I can see a use for, and hope­fully I can make it stick in my head this time.

• it gets used in an in­for­ma­tion-free way suffi­ciently of­ten that I’m tempted to just always shelve it there, and that in turn leads me to dis­count or even ig­nore the en­tire thought which con­tained it.

This seems like a broader con­cern, and one worth ad­dress­ing. Peo­ple drop con­tent-free words into their speech/​writ­ing all the time, ei­ther as filler or as “lef­tovers” from pre­cur­sor sen­tences.

What hap­pens if you treat it as an empty mod­ifier, like “re­ally” or “to­tally”?

• Leav­ing aside the fact that, by de­fault, I don’t con­sider “to­tally” to be con­tent-free (I’m aware a lot of peo­ple use it that way, but I still of­ten need to con­sciously dis­card the word when I en­counter it), that still seems like at best it only works when used as a mod­ifier. It doesn’t help if some­body is ac­tu­ally talk­ing about meta­physics. I’ll keep it in mind as a backup op­tion, though; “if I can’t pro­cess that sen­tence when I in­clude all the words they said, and one of them is ‘meta­phys­i­cal’, what hap­pens if I drop that word?”

• True, false, or nei­ther?: It is cur­rently an open/​con­tro­ver­sial/​spec­u­la­tive ques­tion in physics whether time is dis­cretized.

• The Wikipe­dia ar­ti­cle on Planck time says:

The­o­ret­i­cally, this is the small­est time mea­sure­ment that will ever be pos­si­ble, roughly 10^−43 sec­onds. Within the frame­work of the laws of physics as we un­der­stand them to­day, for times less than one Planck time apart, we can nei­ther mea­sure nor de­tect any change.

How­ever, the ar­ti­cle on Chronon says:

The Planck time is a the­o­ret­i­cal lower-bound on the length of time that could ex­ist be­tween two con­nected events, but it is not a quan­ti­za­tion of time it­self since there is no re­quire­ment that the time be­tween two events be sep­a­rated by a dis­crete num­ber of Planck times.

• So, if I un­der­stand this rightly-

Any two events must take place at least one Planck time apart. But so long as they do, the separation can be any number of Planck times, even, say, pi. Right?

• Many things in our best models of physics are discrete, but as far as I know, our coordinates (time, space, or four-dimensional space-time coordinates) are never discrete. Even something like quantum field theory, which treats things in a non-intuitively discrete way, does not do this. For example, we might view the process of an electron scattering off another electron as an exchange of many discrete photons between the two electrons, but it is all written in terms of integrals or derivatives, rather than differences or sums.

• I have a con­stant im­pres­sion that ev­ery­one around me is more com­pe­tent than me at ev­ery­thing. Does it ac­tu­ally mean that I am, or is there some sort of strong psy­cholog­i­cal effect that can cre­ate that im­pres­sion, even if it is not ac­tu­ally true? If there is, is it a prob­lem you should see your ther­a­pist about?

• Re­minds me of some­thing Scott said once:

And when I tried to analyze my certainty that – even despite the whole multiple intelligences thing – I couldn’t possibly be as good as them, it boiled down to something like this: they were talented at hard things, but I was only talented at easy things.

It took me about ten years to figure out the flaw in this ar­gu­ment, by the way.

Let’s say that you and I de­cide to play pool. We agree to play eight-ball, best of five games. Our per­cep­tion is that what fol­lows is a con­test to see who will do some­thing called win­ning.

But I don’t see it that way. I always imag­ine the out­come of eight-ball to be pre­de­ter­mined, to about 95% cer­tainty, based on who has prac­ticed that spe­cific skill the most over his life­time. The re­main­ing 5% is mostly luck, and play­ing a best of five se­ries elimi­nates most of the luck too.

I’ve spent a ridicu­lous num­ber of hours play­ing pool, mostly as a kid. I’m not proud of that fact. Al­most any other ac­tivity would have been more use­ful. As a re­sult of my wasted youth, years later I can beat 99% of the pub­lic at eight-ball. But I can’t en­joy that sort of so-called vic­tory. It doesn’t feel like “win­ning” any­thing.

It feels as mean­ingful as if my op­po­nent and I had kept logs of the hours we each had spent play­ing pool over our life­times and sim­ply com­pared. It feels re­dun­dant to play the ac­tual games.

• This reminds me of my criterion for learning: “You have understood something when it appears to be easy.” Mathematicians call this state ‘trivial’. It has become easy because you trained on the topic until the key aspects became part of your unconscious competence. Then it appears easy to you—because you no longer need to think about it.

• De­spite ex­ter­nal ev­i­dence of their com­pe­tence, those with the syn­drome re­main con­vinced that they are frauds and do not de­serve the suc­cess they have achieved. Proof of suc­cess is dis­missed as luck, timing, or as a re­sult of de­ceiv­ing oth­ers into think­ing they are more in­tel­li­gent and com­pe­tent than they be­lieve them­selves to be.

Psy­cholog­i­cal re­search done in the early 1980s es­ti­mated that two out of five suc­cess­ful peo­ple con­sider them­selves frauds and other stud­ies have found that 70 per­cent of all peo­ple feel like im­pos­tors at one time or an­other. It is not con­sid­ered a psy­cholog­i­cal di­s­or­der, and is not among the con­di­tions de­scribed in the Di­ag­nos­tic and Statis­ti­cal Man­ual of Men­tal Di­sor­ders.

• Err, that’s not it. I am no more suc­cess­ful than them. Or, at least, I kinda feel that ev­ery­one else is more suc­cess­ful than me as well.

• Consider that maybe you might be wrong about the impostor syndrome. As a person without it, it’s hard to know how you think/feel and how you concluded that you couldn’t have it. But maybe it’s worth asking: how would someone convince you to change your mind on this topic?

• By en­ter­ing some im­por­tant situ­a­tion where my and his com­par­a­tive ad­van­tage in some sort of com­pe­tence comes into play, and los­ing.

• What if you developed a few bad heuristics along the way, telling you that other successful people were not inherently more successful but just got lucky (or had success granted externally in some other way), whereas your own successes were hard-earned, personally achieved, and due to personal skill?

It’s probably possible to see a therapist about it, but I would suggest you can work your own way around it (consider it a challenge that can be overcome with the correct growth mindset).

• I think peo­ple are quick to challenge this type of im­pres­sion be­cause it pat­tern matches to known cog­ni­tive dis­tor­tions in­volved in things like de­pres­sion, or known in­se­cu­ri­ties in cer­tain com­pet­i­tive situ­a­tions.

For ex­am­ple, con­sider that most ev­ery­one will struc­ture their lives such that their weak­nesses are down­played and their pos­i­tive fea­tures are more promi­nent. This can hap­pen ei­ther by choice of ac­tivity (e.g. the stereo­typ­i­cal geek avoids so­cial games) or by more overt com­mu­ni­ca­tion fil­ter­ing (e.g. most peo­ple don’t talk about their anger prob­lems). Ac­cord­ingly, it’s never hard to find in­for­ma­tion that con­firms your own rel­a­tive in­com­pe­tence, if there’s some emo­tional ten­dency to look for it.

Aside from that, a great ques­tion is “to what ends am I mak­ing this com­par­i­son?” I find it un­likely that you have a purely aca­demic in­ter­est in the ques­tion of your rel­a­tive com­pe­tence.

First, it can often be useful to know your relative competence in a specific competitive domain. But even here, this information is only one part of your decision process: you may be okay with, e.g., choosing a lower expected rank in one career over a higher rank in another because you enjoy the work more, or find it more compatible with your values, or because it pays better, or leaves more time for your family, or you’re risk averse, or it’s more altruistic, etc. But knowing your likely rank along some dimension will tell you a bit about the likely pay-offs of competing along that dimension.

But what is the use of mak­ing an across-the-board self-com­par­i­son?

Suppose you constructed some general measure of competence across all domains. Suppose you found out you were below average (or even above average). Then what? It seems you’re still in the same situation as before: you still must choose how to spend your time. The general self-comparison measure is nothing more than the aggregate of your expected relative ranks on specific sub-domains, which are more relevant to any specific choice. And as I said above, your expected rank in some area is far from the only bit of information you care about.

As an aside, a pos­i­tive use for a self-com­par­i­son is to provide a role-model. If you find your­self un­fa­vor­ably com­pared to al­most ev­ery­one, con­sider your­self lucky that you have so many role-mod­els to choose from! Since you are prob­a­bly like other peo­ple in most re­spects, you can ex­pect to find low-hang­ing fruit in many ar­eas where you have poor rel­a­tive perfor­mance.

But if you find (as many peo­ple will) that you’ve hit the point of diminish­ing re­turns re­gard­ing the time you spend com­par­ing your­self to oth­ers, per­haps you can sim­ply rec­og­nize this and re­al­ize that it’s nei­ther cow­ardly nor avoidant to spend your men­tal en­ergy el­se­where.

• Pos­si­bly par­allel—I’ve had a feel­ing for a long time that some­thing bad was about to hap­pen. Rel­a­tively re­cently, I’ve come to be­lieve that this isn’t nec­es­sar­ily an ac­cu­rate in­tu­ition about the world, it’s mus­cle tight­ness in my ab­domen. It’s prob­a­bly part of a larger pat­tern, since just let­ting go in the area where I feel it doesn’t make much differ­ence.

I be­lieve that pat­terns of mus­cle ten­sion and emo­tions are re­lated and tend to main­tain each other.

It’s ex­tremely un­likely that ev­ery­one is more com­pe­tent than you at ev­ery­thing. If noth­ing else, your writ­ing is bet­ter than that of a high pro­por­tion of peo­ple on the in­ter­net. Also, a lot of peo­ple have painful men­tal habits and have no idea that they have a prob­lem.

More gen­er­ally, you could ex­plore the idea of ev­ery­one be­ing more com­pe­tent than you at ev­ery­thing. Is there ev­i­dence for this? Ev­i­dence against it? Is it likely that you’re at the bot­tom of abil­ity at ev­ery­thing?

This sounds to me like some­thing worth tak­ing to a ther­a­pist, bear­ing in mind that you may have to try more than one ther­a­pist to find one that’s a good fit.

I believe there’s a strong psychological effect which can create that impression: growing up around people who expect you to be incompetent. Now that I think about it, there may be a genetic vulnerability involved, too.

• Possibly worth exploring: free monthly Feldenkrais exercises. These are patterns of gentle movement which produce deep relaxation and easier movement. The reason I think you can get some evidence about your situation by trying Feldenkrais is that, if you find your belief about other people being more competent at everything goes away, even briefly, then you have some evidence that the belief is habitual.

Nancy, I believe you are describing anxiety: that you are anxious, and that if you went to a psychologist for therapy and were covered by insurance, they would list your diagnosis on the reimbursement form as “generalized anxiety disorder.”

I say this not as a psy­chol­o­gist but as some­one who was anx­ious much of his life. For me it was worth do­ing reg­u­lar talk­ing ther­apy and (it seems to me) hack­ing my anx­iety lev­els slowly down­ward through di­rected in­tro­spec­tion. I am still more timid than I would like in situ­a­tions where, for ex­am­ple, I might be very di­rect tel­ling a woman (of the ap­pro­pri­ate sex) I love her, or putting my own ideas for­ward force­fully at work. But all of these things I do bet­ter now than I did in the past, and I don’t con­sider my self-ad­just­ment to be finished yet.

Anyway, if you haven’t named what is happening to you as “anxiety,” it might be helpful to consider that some of what has been learned about anxiety over time might be interesting to you, and that people who are discussing anxiety may often be discussing something relevant to you.

• If noth­ing else, your writ­ing is bet­ter than that of a high pro­por­tion of peo­ple on the in­ter­net.

Do you know me?

More gen­er­ally, you could ex­plore the idea of ev­ery­one be­ing more com­pe­tent than you at ev­ery­thing. Is there ev­i­dence for this? Ev­i­dence against it? Is it likely that you’re at the bot­tom of abil­ity at ev­ery­thing?

I find a lot of evidence for it, but I am not sure I am not being selective. For example, I am the only one in my peer group who never did any extra-curricular activities at school. While everyone had something like sports or hobbies, I seemed to only study at school and waste all my other time surfing the internet and playing the same video games over and over.

• The idea that play­ing an in­stru­ment is a hobby while play­ing a video game isn’t is com­pletely cul­tural. It says some­thing about val­ues but lit­tle about com­pe­tence.

• One im­por­tant differ­ence is that video games are op­ti­mized to be fun while mu­si­cal in­stru­ments aren’t. There­fore, play­ing an in­stru­ment can sig­nal dis­ci­pline in a way that play­ing a game can’t.

• I’m not sure that’s true. There’s selection pressure on musical instruments to make them fun to use. Most of the corresponding training also isn’t optimised for learning but for fun.

• There’s also se­lec­tion pres­sure on in­stru­ments to make them pleas­ant to listen to. There’s no cor­re­spond­ing con­straint on video games.

• There’s no cor­re­spond­ing con­straint on video games.

In an age of eS­ports I’m not sure that’s true. Quite a lot of games are not bal­anced to make them fun for the av­er­age player but bal­anced for high level tour­na­ment play.

• Hav­ing a back­ground be­lief that you’re worse than ev­ery­one at ev­ery­thing prob­a­bly low­ered your ini­ti­a­tive.

• I seemed to only study at school and waste all my other time surfing the internet and playing the same video games over and over.

Ob­vi­ous ques­tion: Are you bet­ter at those games than other peo­ple? (On av­er­age, don’t com­pare your­self to the elite.)

How easy did study­ing come to you?

• At THOSE games? Yes. I can complete about half of American McGee’s Alice blindfolded. Other games? General gaming? No. Or, okay, I am better than non-gamers, but my kinda-gamer peers are curb-stomping me at multiplayer in every game.

Studying—very easy. Now that I am a university student—quite hard.

• Studying—very easy. Now that I am a university student—quite hard.

Seems like you fell prey to the clas­sic sce­nario of “be­ing in­tel­li­gent enough to breeze through high school and all I ended up with is a crappy work ethic.”

University is as good a place as any to fix this problem. First of all, I encourage you to do all the things people tell you you should do, but most people don’t: read up before classes, review after classes, read the extra material, ask your professors questions or ask for help, schedule periodic review sessions of the stuff you’re supposed to know… You’ll regret not doing those things when you get your degree but don’t feel very competent about your knowledge. Try to make a habit out of this and it’ll get easier in other aspects of your life.

And try new things. This is prob­a­bly a cliché in the LW-sphere by now, but re­ally try a lot of new things.

• Thanks. Still, should I take it as “yes, you are less com­pe­tent than peo­ple around you”?

• Maybe just less dis­ci­plined than you need to be. “Less com­pe­tent” is too con­fus­ingly rel­a­tive to mean any­thing solid.

• Well, here’s a confusing part. I didn’t tell the whole truth in the parent post; there are actually two areas where I am probably more competent than my peers, in which others openly envy me instead of the other way around. One is the ability to speak English (a foreign language; most of my peers wouldn’t be able to ask this question here), another is discipline. Everyone actually envies me for almost never procrastinating, never forgetting anything, etc. Are we talking about different disciplines here?

• If you already have dis­ci­pline, what ex­actly is the difficulty you’re find­ing to study now as com­pared to pre­vi­ous years?

• Sometimes, I just have trouble understanding the subject areas. I am going to take MathiasZaman’s advice: I always used my discipline to complete, on time and with quality, what needed to be completed, but never put it into anything extra. Mostly, though, it is (social) anxiety—I can’t approach a professor with anything unless I have a pack of companions backing me up, and can’t start a project unless a friend confirms that I correctly understand what it is that has to be done. And my companions have awful discipline, the worst of anyone I have ever worked with (which is not many). So I end up, for example, preparing all assignments on time, but handing them in only long after they are due, once a friend has prepared his own. I am working on that problem, and it becomes less severe as time goes on.

• I agree; group as­sign­ments are the worst. Is there any way you can get the uni­ver­sity to let you take unique tests for the themes you already mas­ter?

• First of all: I don’t agree that group as­sign­ments are bad. Those prob­lems are my prob­lems, and most com­plex tasks in real life re­ally benefit from, or re­quire, col­lab­o­ra­tion. I think that uni­ver­si­ties should have more group as­sign­ments and pro­jects, even if it would mean I’ll drop out.

Se­cond, I wasn’t talk­ing about group as­sign­ments in my post. I was talk­ing about be­ing too anx­ious to work on your own per­sonal as­sign­ment, un­less a friend has already done it and can provide con­fir­ma­tion.

• So it seems like you can solve the prob­lems… but then you are some­how frozen by fear that maybe your solu­tion is not cor­rect. Un­til some­one else con­firms that it is cor­rect, and then you are able to con­tinue. Solv­ing the prob­lem is not a prob­lem; giv­ing it to the teacher is.

On the in­tel­lec­tual level, you should up­date the prior prob­a­bil­ity that your solu­tions are cor­rect.

On the emo­tional level… what ex­actly is this hor­rible out­come your imag­i­na­tion shows you if you would give the pro­fes­sor a wrong solu­tion?

It is prob­a­bly some­thing that feels stupid if you try to ex­plain it. (Maybe you imag­ine the pro­fes­sor scream­ing at you loudly, and the whole uni­ver­sity laugh­ing at you. It’s not re­al­is­tic, but it may feel so.) But that’s ex­actly the point. On some level, some­thing stupid hap­pens in your mind, be­cause oth­er­wise you wouldn’t have this ir­ra­tional prob­lem. It doesn’t make sense, but it’s there in your head, in­fluenc­ing your emo­tions and ac­tions. So the proper way is to de­scribe your silent hor­rible vi­sion ex­plic­itly, as speci­fi­cally as you can (bring it from the dark­ness to light), un­til your own mind fi­nally no­tices that it re­ally was stupid.

• I have no trouble imagining all the horrible outcomes, because I did get into trouble several times in similar scenarios, where getting confirmation from a friend would have saved me. For example, a couple of hours after giving my work to a teacher, I remembered that my friend hadn’t been there, even though his work was ready. I asked him about it, and it then turned out that I had given it to the wrong teacher, and getting all my hand-crafted drawings back ended up being a very time- and effort-consuming task.

• Reading that, it sounds like your core issue is low self-confidence.

Taking an IQ test might help to dispel the idea that you are below average. You might be under the LW average IQ of 140, but you are probably well above 100, which is the average in society.

• I can guess that my IQ has three digits. It’s just that it doesn’t en­able me to do things bet­ter than oth­ers. Ex­cept solv­ing iq tests, I guess.

• It seems that you have a decent IQ. Additionally, you seem to be conscientious and can avoid procrastination, which is a very, very valuable characteristic.

On the other hand you have is­sues with self es­teem. As far as I un­der­stand IQ test­ing gets used by real psy­chol­o­gists in cases like this.

Taking David Burns’s CBT book, “The Feeling Good Handbook,” and doing the exercises every day for 15 minutes would likely do a lot for you, especially if you can get yourself to do the exercises regularly.

I also sup­port Nancy’s sug­ges­tion of Fel­denkrais.

• Another stupid ques­tion to boot: will all this make me more con­tent with my cur­rent situ­a­tion? While not be­ing a pleas­ant feel­ing, my dis­con­tent with my com­pe­tence does serve as a mo­ti­va­tor to ac­tu­ally study. I wouldn’t have asked this ques­tion here and wouldn’t re­ceive all the ad­vice if I were less com­pe­tent than ev­ery­one else and okay with it.

• That’s a re­ally in­ter­est­ing ques­tion, and I don’t have an an­swer to it. Do you have any ideas about how your life might be differ­ent in pos­i­tive ways if you didn’t think you were less com­pe­tent than ev­ery­one about ev­ery­thing? Is there any­thing you’d like to do just be­cause it’s im­por­tant to you?

• Do you have any ideas about how your life might be differ­ent in pos­i­tive ways if you didn’t think you were less com­pe­tent than ev­ery­one about ev­ery­thing?

Not any­thing spe­cific.

Is there any­thing you’d like to do just be­cause it’s im­por­tant to you?

I have goals and val­ues be­yond be­ing con­tent or happy, but they are more than a cou­ple of in­fer­en­tial steps away from my day-to-day rou­tine, and I don’t have that in­ner fire thingy that would bridge the gap. So, more of­ten than not, they are not the main com­po­nent of my ac­tual mo­ti­va­tion. Also, I am afraid of pos­si­bil­ity of hav­ing my val­ues changed.

• I don’t think I know you, but I’m not that great at re­mem­ber­ing peo­ple. I made the claim about your writ­ing be­cause I’ve spent a lot of time on­line.

I’m sure you’re be­ing se­lec­tive about the peo­ple you’re com­par­ing your­self to.

• There are two sep­a­rate is­sues: morale man­age­ment and be­ing cal­ibrated about your own abil­ities.

I think the best way to be well-calibrated is to approximate PageRank: to get a sense of your competence, don’t ask yourself; instead, average the opinions of others who are considered competent and have no incentive to mislead you (this last bit is tricky, and the extraction process may have to be slightly indirect).
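The PageRank-flavored idea above can be sketched in a few lines. This is only an illustration: the names, the 0-to-1 ratings, and the iteration count are all made up, and it is a toy fixed-point loop rather than real PageRank. The point is the structure: a person’s score is never their self-rating, only others’ ratings weighted by how credible those raters currently look.

```python
# Toy sketch: estimate each person's competence from peer ratings,
# weighting each rater by the current estimate of the rater's own competence.
# ratings[a][b] = how competent rater `a` judges person `b` to be (0..1).
# All names and numbers are hypothetical.
ratings = {
    "alice": {"bob": 0.8, "carol": 0.6},
    "bob":   {"alice": 0.7, "carol": 0.9},
    "carol": {"alice": 0.6, "bob": 0.5},
}

people = list(ratings)
# Start by assuming everyone is equally credible.
competence = {p: 1.0 for p in people}

# Iterate to a fixed point: your score is the weighted average of what
# others say about you, never what you say about yourself.
for _ in range(20):
    new = {}
    for p in people:
        pairs = [(competence[r], ratings[r][p])
                 for r in people if r != p and p in ratings[r]]
        total_weight = sum(w for w, _ in pairs)
        new[p] = (sum(w * score for w, score in pairs) / total_weight
                  if total_weight else competence[p])
    competence = new

print(competence)
```

With these toy numbers the scores settle after a few iterations, and carol (rated highly by both peers) ends up on top; the recursion matters because a high rating from a low-credibility rater should count for less.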

Mo­rale is hard, and per­son spe­cific. My ex­pe­rience is that in long term pro­jects/​goals, morale be­comes a se­ri­ous prob­lem long be­fore the situ­a­tion ac­tu­ally be­comes bad. I think hav­ing “wolver­ine morale” (“You know what Mr. Griz­zly? You look like a wuss, I can to­tally take you!”) is a huge chunk of suc­cess, big­ger than raw abil­ity.

• I think having “wolverine morale” (“You know what Mr. Grizzly? You look like a wuss, I can totally take you!”) is a huge chunk of success, bigger than raw ability.

Is Zucker­berg’s “Move fast, break things” similar/​re­lated?

• Look up impostor syndrome. And try not to automatically say, “I don’t have it because I never did anything noteworthy.”

---Oh dang; some­one else got to it first.

How has your opinion of impostor syndrome changed now?

• I some­times have a similar ex­pe­rience, and when I do, it is al­most always sim­ply an effect of my own stan­dards of com­pe­tence be­ing higher than those around me.

Imagine some sort of problem arises in the presence of a small group. The members of that group look at each other, and whoever signals the most confidence gets first crack at the problem. But this more-confident person then does not reveal any knowledge or skill that the others do not possess, because said confidence was entirely due to a higher willingness to potentially make the problem worse through trial and error.

So, in this scenario, feeling less competent does not mean you are less competent; it means you are more risk-averse. Do you have a generalized paralyzing fear of making the problem worse? If so, welcome to the club. If not, never mind.

• I personally am a fan of talking therapy. If you are thinking something is worth asking a therapist about, it is worth asking a therapist about. But beyond the generalities, thinking you are not good enough is squarely among the kinds of things it can be helpful to discuss with a therapist.

Consider the propositions: 1) everyone is more competent than you at everything, and 2) you can carry on a coherent conversation on LessWrong. I am pretty sure that these are mutually exclusive propositions. I’m pretty sure just from reading some of your comments that you are more competent than plenty of other people at a reasonable range of intellectual pursuits.

Any­thing you can talk to a ther­a­pist about you can talk to your friends about. Do they think you are less com­pe­tent than ev­ery­body else? They might point out to you in a dis­cus­sion some fairly ob­vi­ous ev­i­dence for or against this propo­si­tion that you are over­look­ing.

• I asked my friends around. Most were unable to point out a single thing I am good at, except speaking English very well for a foreign language and having good willpower. One said “hmmm, maybe math?” (as it turned out, he was fast-talked by the math babble that had been auraing around me for some time after I read Gödel, Escher, Bach), and several pointed out that I am handsome (while a nice perk, I don’t want that to be my defining proficiency).

• Origi­nally you ex­pressed con­cern that all other peo­ple were bet­ter than you at all the things you might do.

But here you find out from your friends that for each thing you do there are other peo­ple around you who do it bet­ter.

In a world with seven billion people, essentially every one of us can find people who are better at what we are good at than we are. So join the club. What works is to take some pleasure in doing things.

Only you can im­prove your un­der­stand­ing of the world, for in­stance. No one in the world is bet­ter at in­creas­ing your un­der­stand­ing of the world than you are. I read com­ments here and post “an­swers” here to in­crease my un­der­stand­ing of the world. It doesn’t mat­ter that other peo­ple here are bet­ter at an­swer­ing these ques­tions, or that other peo­ple here have a bet­ter un­der­stand­ing of the world than I do. I want to in­crease my un­der­stand­ing of the world and I am the only per­son in the world who can do that.

I also wish to understand taking pleasure and joy from the world, and I work to increase my pleasure and joy in the world. No one can do that for me better than I can. You might take more joy than me in kissing that girl over there. Still, I will kiss her if I can, because having you kiss her gives me much less joy and pleasure than kissing her myself, even if I am getting less joy from kissing her than you would get for yourself if you kissed her.

The concern you express—to only participate in things where you are better than everybody else—is just a result of your evolution as a human being. The genes that make you think being better than others around you matters have, in the past, caused your ancestors to find effective and capable mates, able to keep their children alive and able to produce children who would find effective and capable mates. But your genes are just your genes; they are not the “truth of the world.” You can make the choice to do things because you want the experience of doing them, and you will find you are better than anybody else in the world by far at giving yourself experiences.

• Pos­si­ble, but un­likely. We’re all just wing­ing it and as oth­ers have pointed out, im­pos­tor syn­drome is a thing.

• I suppose that the problem emerged only because you communicate only with people of your own sort and level of awareness. Try going on a trip to some rural village, or start conversations with taxi drivers, dishwashers, janitors, cooks, security guards, etc.

• Is that ba­si­cally a self-con­fi­dence prob­lem?

• Is it? I don’t know.

• Well, does it im­pact what you are will­ing to do or try? Or it’s just an ab­stract “I wish I were as cool” feel­ing?

If you imag­ine your­self lack­ing that per­cep­tion (e.g. imag­ine ev­ery­one’s IQ—ex­cept yours—drop­ping by 20 points), would the things you do in life change?

• Guesses here. I would be tak­ing up more risks in ar­eas where suc­cess de­pends on com­pe­ti­tion. I would be­come less con­form­ing, more ar­ro­gant and cyn­i­cal. I would care less about pro­duc­ing good art, and good things in gen­eral. I would try less to im­prove my so­cial skills, em­pa­thy and net­work­ing, and fo­cus more on self-suffi­ciency. I wouldn’t have asked this ques­tion here, on LW.

• I fre­quently feel similar and I haven’t found a good way to deal with those feel­ings, but it’s im­plau­si­ble that ev­ery­one around you is more com­pe­tent at ev­ery­thing. Some things to take into ac­count:

• Who are you com­par­ing your­self to? Peers? Every­one you meet? Suc­cess­ful peo­ple?

• What traits are you com­par­ing? It’s un­likely that some­one who is, for ex­am­ple, bet­ter at math than you are is also su­pe­rior in ev­ery other area.

• Maybe you haven’t found your ad­van­tage or a way to ex­ploit this.

• Maybe you haven’t spent enough time on one thing to get really good at it.

Long shot: Do you think you might have ADHD?.pdf) (pdf warn­ing) Alter­na­tively, go over the di­ag­nos­tic criteria

• Your link is bro­ken be­cause it has paren­the­ses in the URL. Es­cape them with back­slashes to un­break it.
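To illustrate the fix (the URL below is a hypothetical stand-in, not the actual link from the parent comment): Markdown’s `[text](url)` syntax ends at the first unmatched closing parenthesis, so a bare `(` or `)` inside the URL truncates the link. Backslash-escaping the parentheses keeps the link whole:

```python
# Hypothetical example URL; many real URLs (e.g. Wikipedia pages) contain parentheses.
url = "http://example.com/papers/ADHD_(review).pdf"

# A raw ")" inside the URL would end the Markdown link early:
broken_link = "[diagnostic criteria](" + url + ")"

# Escaping "(" and ")" with backslashes keeps the whole URL inside the link.
escaped = url.replace("(", "\\(").replace(")", "\\)")
fixed_link = "[diagnostic criteria](" + escaped + ")"
print(fixed_link)
```

Percent-encoding the parentheses as `%28` and `%29` also works and tends to survive more Markdown renderers.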

• Who are you com­par­ing your­self to? Peers? Every­one you meet? Suc­cess­ful peo­ple?

Peers.

What traits are you com­par­ing? It’s un­likely that some­one who is, for ex­am­ple, bet­ter at math than you are is also su­pe­rior in ev­ery other area.

It be­ing un­likely and still seem­ing to hap­pen is the rea­son I asked this ques­tion.

Maybe you haven’t found your ad­van­tage or a way to ex­ploit this.

Maybe you haven’t spent enough time on one thing to get really good at it.

• Maybe. And everyone else did, thus denying me a competitive advantage?

• OK, I have one meta-level super-stupid question. Would it be possible to improve some aspects of the LessWrong webpage? Like making it more readable on mobile devices? Every time I read LW in the tram on the way to work, I go insane trying to hit super-small links on the website. As I work in web development/UI design, I would volunteer to work on this. I think that in general the LW website is a bit outdated in terms of both design and functionality, but I presume that this is not considered a priority. However, better readability on mobile screens would be a positive contribution to its purpose.

• In dietary and health ar­ti­cles they of­ten speak about “pro­cessed food”. What ex­actly is pro­cessed food and what is un­pro­cessed food?

• Defi­ni­tions will vary de­pend­ing on the pu­rity ob­ses­sion of the speaker :-) but as a rough guide, most things in cans, jars, boxes, bot­tles, and car­tons will be pro­cessed. Things that are, more or less, just raw plants and an­i­mals (or parts of them) will be un­pro­cessed.

There are bound­ary cases about which peo­ple ar­gue—e.g. is pas­teur­ized milk a pro­cessed food? -- but for most things in a food store it’s pretty clear what’s what.

• Thanks! That does make sense.

• Anything that you could have picked from the plant yourself (a pear, a carrot, a berry) AND that has not been sprinkled with preservatives/pesticides/shiny gloss is unprocessed. If it comes in a package and looks nothing like what nature gives (noodles, cookies, jell-o), it’s been processed.

Raw milk also counts as un­pro­cessed, but in the 21st cen­tury there’s no ex­cuse to be drink­ing raw milk.

• in the 21st cen­tury there’s no ex­cuse to be drink­ing raw milk

That’s de­bat­able—some peo­ple be­lieve raw milk to be very benefi­cial.

• Do you have any sources that quan­tify the risk?

• Oh, I’m sure the gov­ern­ment wants you to be­lieve raw milk is the devil :-)

In re­al­ity I think it de­pends, in par­tic­u­lar on how good your im­mune sys­tem is. If you’re im­muno­com­pro­mised, it’s prob­a­bly wise to avoid raw milk (as well as, say, raw let­tuce in sal­ads). On the other hand, if your im­mune sys­tem is ca­pa­ble, I’ve seen no data that raw milk pre­sents an un­ac­cept­able risk—of course how much risk is un­ac­cept­able varies by per­son.

• More rele­vant may be your sup­ply chain. If you have given your cow all re­quired shots and drink the milk within a day—and with­out mix­ing it with the milk of dozens of other cows—you are go­ing to be a lot bet­ter off than if you stop off at a ran­dom road­side stand and buy a gal­lon of raw milk.

• So, it doesn’t make sense to talk about pro­cessed meats, if you can’t pick them from plants?

If I roast my car­rot, does it be­come pro­cessed?

• I’m assuming you value your health and thus don’t eat any raw meat, so all of it is going to be processed—if only in your own kitchen.

By the same stan­dard, a roasted car­rot is, tech­ni­cally speak­ing, “pro­cessed.” How­ever, what food geeks usu­ally think of when they say “pro­cessed” in­volves a mas­sive in­dus­trial plant where your food is filled with ad­di­tives to com­pen­sate for all the vi­tam­ins it loses af­ter be­ing crushed and de­hy­drated. Too of­ten it ends up with an in­hu­man amount of salt and/​or sugar added to it, too.

• Is it pos­si­ble even in prin­ci­ple to perform a “con­scious­ness trans­fer” from one hu­man body to an­other? On the same prin­ci­ple as mind up­load­ing, only the mind ends up in an­other biolog­i­cal body rather than a com­puter. Can you trans­fer “soft­ware” from one brain to an­other in a purely in­for­ma­tional way, while pre­serv­ing the anatom­i­cal in­tegrity of the sec­ond or­ganism? If so, would the re­cip­i­ent or­ganism come from a fully al­ive and func­tional hu­man who would be ba­si­cally kil­led for this pur­pose? Or bred for this pur­pose? Or would it re­quire a com­plete brain trans­plant? (If so, how would neu­ral struc­tures found in the sec­ond body heal & con­nect with the trans­planted brain so that a func­tional cen­tral ner­vous sys­tem re­sults?) Wouldn’t the per­son whose con­scious­ness is be­ing trans­ferred ex­pe­rience some sort of per­son­al­ity change due to “in­hab­it­ing” a struc­turally differ­ent brain or body?

Is this whole hypothesis just an artifact of residual, introjected mind-body dualism, incompatible with modern science? Does the science world even know enough about consciousness and the brain to be able to answer this question?

I’m ask­ing this be­cause ever since I found out about ems and mind up­load­ing, hav­ing minds moved to bod­ies rather than com­put­ers seemed to me a more ap­peal­ing hy­po­thet­i­cal solu­tion to the prob­lem of death/​mor­tal­ity. Un­for­tu­nately, I lack the nec­es­sary back­ground knowl­edge to think co­her­ently about this idea, so I figured there are many peo­ple on LW who don’t, and could ex­plain to me whether this whole idea makes sense.

• I don’t think any­body has hard ev­i­dence of an­swers to any of those ques­tions yet (though I’d be fas­ci­nated to learn oth­er­wise) but I can offer some con­jec­tures:

Possible in principle? Yes. I see no evidence that sentience and identity are anything other than information stored in the nervous system, and in theory the cognitive portion of a nervous system is an organ and could be transplanted like any other.

Pre­serv­ing anatom­i­cal in­tegrity? Not with any­thing like cur­rent sci­ence. We can take non-in­tru­sive brain scans, but they’re pretty low-re­s­olu­tion and (so far as I know) strictly read-only. Even sim­ply stim­u­lat­ing parts of the brain isn’t enough to ba­si­cally re-write it in such a way that it be­comes an­other per­son’s brain.

Need to kill donors? To the best of my knowledge, it’s theoretically possible to basically mature a human body, including a potentially-functional brain, while keeping that brain in a vegetative state the entire time. Of course, that’s still a potential human—the vegetativeness needs to be reversible for this to be useful—so the ethics are still highly questionable. It’s probably possible to do it without a full brain at all, which seems less evil if you can somehow do it by some mechanism other than what amounts to a pre-natal full lobotomy, but that would require the physical brain transplant option for transference.

Nerves connecting and healing? Nerves can repair themselves, though it’s usually extremely slow. Stem cell therapies have potential here, though. Connecting the brain to the rest of the body is a lot of nerves, but they’re pretty much all sensory and motor nerves so far as I know; the brain itself is fairly self-contained.

Per­son­al­ity change? That de­pends on how differ­ent the new body is from the old, I would guess. The ob­vi­ously-prefer­able body is a clone, for many rea­sons in­clud­ing avoid­ing the need to avoid im­mune sys­tem re­jec­tion of the new brain. Per­son­al­ity is always go­ing to be some­what ex­ter­nally-driven, so I wouldn’t ex­pect some­body trans­ferred from a 90-year-old body to a 20-year-old one to have the same per­son­al­ity re­gard­less of any other in­for­ma­tion be­cause the body will just be younger. On the other hand, if you use a clone body that’s the same age as the trans­feree, it wouldn’t shock me if the per­son­al­ity didn’t ac­tu­ally change sig­nifi­cantly; it should ba­si­cally feel like go­ing un­der for surgery and then com­ing out again with noth­ing changed.

Now, mind you, I’m no brain surgeon (or medical professional of any sort), nor have I studied any significant amount of psychology. Nor am I a philosopher (see my question above). However, I don’t really see how the mind could be anything except a characteristic of the body. Altering (intentionally or otherwise) the part of the body responsible for thought alters the mind. Our current attempted maps of the mind don’t come close to fully representing the territory, but I firmly believe it is mappable. Whether an existing one is re-mappable I can’t say, but the idea of transplanting a brain has been explored in science fiction for decades, and in theory I see no logical reason why it couldn’t work.

• To the best of my knowl­edge, it’s the­o­ret­i­cally pos­si­ble to ba­si­cally ma­ture a hu­man body in­clud­ing a po­ten­tially-func­tional brain, while keep­ing that brain in a veg­e­ta­tive state the en­tire time.

I don’t think this is currently possible. The body just wouldn’t work. A large part of the ‘wiring’ during infancy and childhood is connecting body parts and functions with higher and higher level concepts. Think about toilet training. You aren’t even aware of how it works, but it nonetheless somehow connects large-scale planning (how urgent is it, when and where are toilets) to the actual control of the organs. Considering how different minds (including the connection to the body) are, I think the minimum requirement (short of singularity-level interventions) is an identical twin.

That said, I think the existing techniques for transferring motion from one brain to another, combined with advanced hypnosis and drugs, could conceivably be developed to a point where it is possible to transfer noticeable parts of your identity over to another body—at least over an extended period of time in which the new brain ‘learns’ to be you. To also transfer memory is comparably easy. Whether the result can be called ‘you’, or is sufficiently alike to you, is another question.

• Need to kill donors? To the best of my knowl­edge, it’s the­o­ret­i­cally pos­si­ble to ba­si­cally ma­ture a hu­man body in­clud­ing a po­ten­tially-func­tional brain, while keep­ing that brain in a veg­e­ta­tive state the en­tire time. Of course, that’s still a po­ten­tial hu­man—the veg­e­ta­tive­ness needs to be re­versible for this to be use­ful—so the ethics are still highly ques­tion­able.

That’s how I pic­tured it, yes. At this point I wouldn’t con­cern my­self with the ethics of it, be­cause, if our tech­nol­ogy ad­vances this much, then sim­ply the fact that hu­man­ity can perform such a feat is an ex­tremely pos­i­tive thing, and prob­a­bly the end of death as we know it. What wor­ries me more is that this wouldn’t re­sult in a func­tional ma­ture in­di­vi­d­ual. For in­stance: in or­der to de­velop the mus­cu­lar sys­tem, the body’s skele­tal mus­cles would have to ex­pe­rience some sort of stress, i.e. be used. If you grow the or­ganism in a jar from birth to con­scious­ness trans­fer (as is prob­a­bly most eth­i­cal), it wouldn’t have moved at all its en­tire life up to that point, and would there­fore have ex­tremely weak mus­cu­la­ture. What to do in the mean­time, elec­tri­cally stim­u­late the mus­cles? Maybe, but it prob­a­bly wouldn’t have re­sults com­pa­rable to nat­u­ral us­age. Be­sides, there are prob­a­bly many other body sub­sys­tems that would suffer similarly with­out much you could do about it. See Gun­nar Zarncke’s com­ment be­low.

On the other hand, if you use a clone body that’s the same age as the trans­feree, it wouldn’t shock me if the per­son­al­ity didn’t ac­tu­ally change sig­nifi­cantly; it should ba­si­cally feel like go­ing un­der for surgery and then com­ing out again with noth­ing changed.

Yes, but I imag­ine most uses to be re­lated to re­ju­ve­na­tion. It would mean that the ge­netic info re­quired for clon­ing would have to be gath­ered ba­si­cally at birth (and the clon­ing pro­cess be­gun shortly there­after), and there would still be a 9-month age differ­ence. There’s lit­tle point in grow­ing a backup clone for an or­ganism so soon af­ter birth. An age differ­ence of 20 years be­tween per­son and clone seems more rea­son­able.

• Can you trans­fer “soft­ware” from one brain to an­other in a purely in­for­ma­tional way, while pre­serv­ing the anatom­i­cal in­tegrity of the sec­ond or­ganism?

This can already be done via the senses. This also trans­fers con­scious­ness of the con­tent that is be­ing trans­ferred. What would con­scious­ness with­out con­tent look like?

• In order to provide a definite answer to this question, we’d need to know how the brain produces consciousness and personality, as well as the exact mechanism of the upload (e.g., can it rewire synapses?).

• Not ex­actly true; we prob­a­bly don’t need to know how con­scious­ness arises. We would cer­tainly have to rewire synapses to match the origi­nal brain, and it is likely that if we ex­actly repli­cate brain struc­ture neu­ron by neu­ron, synapse by synapse, we would still not know where con­scious­ness lies, but would have a con­scious du­pli­cate of the origi­nal.

Alter­na­tively you could hy­poth­e­size a soul, but that seems like worry for worry’s sake.

The flip side to this is that there is no mea­surable differ­ence be­tween ‘some­one who is you and feels con­scious’ and ‘some­one who is ex­actly like you in ev­ery way but does not feel con­scious (but will con­tinue to claim that e does)’. Even if you iden­ti­fied a men­tal state on a brain scan that you felt cer­tain that was caus­ing the ex­pe­rience of con­scious­ness, in or­der to ap­prox­i­mate a proof of this you would have to be able to mea­sure a group of sub­jects that are nearly iden­ti­cal ex­cept not ex­pe­rienc­ing con­scious­ness, a group that has not yet been found in na­ture.

• There’s no such thing as “purely informational” when it comes to brains.

I’m ask­ing this be­cause ever since I found out about ems and mind up­load­ing, hav­ing minds moved to bod­ies rather than com­put­ers seemed to me a more ap­peal­ing hy­po­thet­i­cal solu­tion to the prob­lem of death/​mor­tal­ity.

If you want to fo­cus on that prob­lem it’s likely eas­ier to sim­ply fix up what­ever is wrong in the body you are start­ing with than do­ing com­plex up­load­ing.

• There’s no such thing as “purely informational” when it comes to brains.

It’s good to know, but can you elab­o­rate more on this in the con­text of the grand­par­ent com­ment? Per­haps with an anal­ogy to com­put­ers.

If you want to fo­cus on that prob­lem it’s likely eas­ier to sim­ply fix up what­ever is wrong in the body you are start­ing with than do­ing com­plex up­load­ing.

It occurred to me too, but I’m not sure this is the definite conclusion. Fully healing an aging organism suffering from at least one severe disease, while admittedly closer to current medical technology, wouldn’t leave the patient in as good a state as simply moving to a 20-year-old body.

• It’s good to know, but can you elab­o­rate more on this in the con­text of the grand­par­ent com­ment? Per­haps with an anal­ogy to com­put­ers.

Brains are not computers.

Fully healing an aging organism suffering from at least one severe disease, while admittedly closer to current medical technology, wouldn’t leave the patient in as good a state as simply moving to a 20-year-old body.

Of course you wouldn’t only heal one severe disease. You would also lengthen telomeres and do all sorts of other things that reduce aging effects.

• Sup­pose all the mem­o­ries in one per­son were wiped and re­placed with your mem­o­ries. I be­lieve the new body would claim to be you. It would in­tro­spect as you might now, and find your mem­o­ries as its own, and say “I am Dahlen in a new body.”

But would it be you? If the copy­ing had been non-de­struc­tive, then Dahlen in the old body still ex­ists and would “know” on meet­ing Dahlen in the new body that Dahlen in the new body was re­ally some­one else who just got all Dahlen’s mem­o­ries up to that point.

Mean­while, Dahlen in the new body would have ca­pa­bil­ities, moods, re­ac­tions, which would de­pend on the sub­strate more than the mem­o­ries. The func­tional parts of the brain, the wiring-other-than-mem­o­ries as it were, would be differ­ent in the new body. Dahlen in the new body would prob­a­bly be­have in ways that were similar to how the old body with its old mem­o­ries be­haved. It would still think it was Dahlen, but as Dahlen in the old body might think, that would just be its opinion and ob­vi­ously it is mis­taken.

As to up­load­ing, it is more than the brain that needs to be em­u­lated. We have hor­monal sys­tems that me­di­ate fear and joy and prob­a­bly a broad range of other feel­ings. I have a sense of my body that I am in some sense con­stantly aware of which would have to be simu­lated and would prob­a­bly be differ­ent in an em of me than it is in me, just as it would be differ­ent if my mem­o­ries were put in an­other body.

Would any­body other than Dahlen in the old body have a rea­son to doubt that Dahlen in the new body wasn’t re­ally Dahlen? I don’t think so, and es­pe­cially Dahlen in the new body would prob­a­bly be pretty sure it was Dahlen, even if it claimed to ra­tio­nally un­der­stand how it might not be. It would know it was some­body, and wouldn’t be able to come up with any other com­pel­ling idea for who it was other than Dahlen.

• I un­der­stand all this. And it’s pre­cisely the sort of per­son­al­ity preser­va­tion that I find largely use­less and would like to avoid. I’m not talk­ing about copy­ing mem­o­ries from one brain to an­other; I’m talk­ing about pre­serv­ing the sense of self in such a way that the per­son un­der­go­ing this pro­ce­dure would have the fol­low­ing sub­jec­tive ex­pe­rience: be anes­thetized (prob­a­bly), un­dergo surgery (be­cause I pic­ture it as some form of surgery), “wake up in new body”. (The old body would likely get buried, be­cause the whole pur­pose of perform­ing such a trans­fer would be to save dy­ing—very old or ter­mi­nally ill—peo­ple’s lives.) There would be only one ex­tant copy of that per­son’s mem­o­ries, and yet they wouldn’t “die”; there would be the same sort of con­ti­nu­ity of self ex­pe­rienced by peo­ple be­fore and af­ter go­ing to sleep. The one who would “die” is tech­ni­cally the per­son in the body which con­sti­tutes the re­cip­i­ent of the trans­fer (who may have been grown just for this pur­pose and kept un­con­scious its whole life). That’s what I mean. Think of it as more or less what hap­pens to the main char­ac­ter in the movie Avatar.

I re­al­ize the whole thing doesn’t sound very sci­en­tific, but have I man­aged to get my point across?

As to up­load­ing, it is more than the brain that needs to be em­u­lated. We have hor­monal sys­tems that me­di­ate fear and joy and prob­a­bly a broad range of other feel­ings. I have a sense of my body that I am in some sense con­stantly aware of which would have to be simu­lated and would prob­a­bly be differ­ent in an em of me than it is in me, just as it would be differ­ent if my mem­o­ries were put in an­other body.

Yes, but… Every­body’s phys­iolog­i­cal ba­sis for feel­ings is more or less the same; granted, there are struc­tural differ­ences that cause vari­a­tion in in­nate per­son­al­ity traits and other men­tal func­tions, and a differ­ent brain might em­ploy the body’s neu­ro­trans­mit­ter re­serve in differ­ent ways (I think), but the whole sys­tem is suffi­ciently similar from hu­man to hu­man that we can re­late to each other’s ex­pe­riences. There would be differ­ences, and the differ­ences would cause the per­son to be­have differ­ently in the “new body” than it did in the “old body”, but I don’t think one would have to move the glands or lim­bic sys­tem or what-have-you in ad­di­tion to just the brain.

• I un­der­stand what you are go­ing for. And I pre­sent the fol­low­ing prob­lem with it.

Dahlen A is rendered unconscious. While A is unconscious, memories are completely copied to unconscious body B. Dahlen B is woken up. Your scenario is fulfilled: Dahlen B has entirely the memories of being put to sleep in body A and waking up in body B. Dahlen B examines his memories and sees no gap in his existence other than the “normal” one of the anesthesia used to render Dahlen A unconscious. Your desires for a transfer scenario are fulfilled!

Scenario 1: Dahlen A is killed while unconscious and his body disposed of. Nothing ever interferes with the perception of Dahlen B and everyone around that there has been a transfer of consciousness from Dahlen A to Dahlen B.

Sce­nario 2: A few days later Dahlen A is wo­ken up. Dahlen A of course has the sense of con­tin­u­ous con­scious­ness just as he would if he had un­der­gone a gall blad­der surgery. Dahlen A and Dahlen B are brought to­gether with other friends of Dahlen. Dahlen A is in­tro­spec­tively sure that he is the “real” Dahlen and no trans­fer ever took place. Dahlen B is in­tro­spec­tively sure that he is the “real” Dahlen and that a trans­fer did take place.

Your sce­nario as­sumes that there can be only one Dahlen. That the essence of Dahlen is a unique thing in the uni­verse, and that it can­not be copied so that there are two. I think this as­sump­tion is false. I think if you make a “good enough” copy of Dahlen that you will have two essences of Dahlen, and that at no point does a sin­gle essence of Dahlen ex­ist, and move from one body to an­other.

Fur­ther, if I am right and the essence of Dahlen can be copied, mul­ti­plied, and each pos­ses­sor of a copy has the com­plete in­tro­spec­tive prop­erty of see­ing that it is in fact Dahlen, then it is un­scien­tific to think that in the ab­sence of copy­ing, that your day to day ex­is­tence is any­thing more than this. That each day you wake up, each mo­ment you ex­pe­rience, your “con­ti­nu­ity” is some­thing you ex­pe­rience sub­jec­tively as a cur­rent state due to your ex­am­i­na­tion of your mem­o­ries. More im­por­tant, your con­ti­nu­ity is NOT some­thing “real,” not some­thing which ei­ther other ob­servers, or even your­self and your copies in­tro­spect­ing from within the brain of Dahlen A, B, C etc. can ever dis­t­in­guish from “real” con­ti­nu­ity vs just the sense of con­ti­nu­ity which fol­lows from a good qual­ity mem­ory copy.

That there is a sin­gle essence of Dahlen which nor­mally stays in one body, but which can be moved from one body to an­other, or into a ma­chine, I be­lieve is a false as­sump­tion, and that it is falsified by these thought ex­per­i­ments. As much as you and I might like to be­lieve there is an es­sen­tial con­ti­nu­ity which we pre­serve as long as we stay al­ive, a ra­tio­nal ex­am­i­na­tion of how we ex­pe­rience that con­ti­nu­ity shows that it is not a real con­ti­nu­ity, that copies could be cre­ated which would ex­pe­rience that con­ti­nu­ity in as real a sense as the origi­nal whether or not the origi­nal is kept around.

• By this rea­son­ing, isn’t it okay to kill some­one (or at least to kill them in their sleep)? After all, if ev­ery­one’s life is a con­stant se­quence of differ­ent en­tities, what you’re kil­ling would have ceased ex­ist­ing any­way. You’re just pre­vent­ing a new en­tity from com­ing into ex­is­tence. But pre­vent­ing a new en­tity from com­ing into ex­is­tence isn’t mur­der, even if the new en­tity re­sem­bles a pre­vi­ous one.

• By this rea­son­ing, isn’t it okay to kill some­one (or at least to kill them in their sleep)?

You tell me.

If you don’t like the moral im­pli­ca­tions of a cer­tain hy­poth­e­sis, this should have pre­cisely zero effect on your es­ti­ma­tion of the prob­a­bil­ity that this hy­poth­e­sis is cor­rect. The en­tire his­tory of the grow­ing ac­cep­tance of evolu­tion as a “true” the­ory fol­lows pre­cisely this course. Many peo­ple HATED the im­pli­ca­tion that man is just an­other an­i­mal. That a sen­ti­ment for moral­ity evolved be­cause groups in which that sen­ti­ment ex­isted were able to out-com­pete groups in which that sen­ti­ment was weaker. That the statue of David or the the­ory of Gen­eral Rel­a­tivity, or the love you feel for your mother or your dog arise as a con­se­quence, ul­ti­mately, of mind­less ran­dom vari­a­tions pro­duc­ing pop­u­la­tions from which some do bet­ter than oth­ers and pass down the vari­a­tions they have to the next gen­er­a­tion.

So if the im­pli­ca­tions of the con­ti­nu­ity of con­scious­ness are morally dis­taste­ful to you, do not make the mis­take of think­ing that makes them any less likely to be true. A study of sci­ence and sci­en­tific progress should cure you of this very hu­man ten­dency.

• If your rea­son­ing im­plies ~X, then X im­plies that your rea­son­ing is wrong. And if X im­plies that your rea­son­ing is wrong, then ev­i­dence for X is ev­i­dence against your rea­son­ing.

In other words, you have no idea what you are talk­ing about. The fact that some­thing has “dis­taste­ful im­pli­ca­tions” (that is, that it im­plies ~X, and there is ev­i­dence for X) does mean it is less likely to be true.

The fact that some­thing has dis­taste­ful im­pli­ca­tions means it is less likely to be true.

[pol­lid:802]

• His­tor­i­cally, the hy­poth­e­sis that the earth or­bited the sun had the dis­taste­ful im­pli­ca­tions that we were not the cen­ter of the uni­verse. Gal­ileo was pros­e­cuted for this be­lief and re­canted it un­der threat. I am sur­prised that you think the dis­taste­ful im­pli­ca­tions for this be­lief were ev­i­dence that the earth did not in fact or­bit the sun.

His­tor­i­cally the hy­poth­e­sis that hu­mans evolved from non-hu­man an­i­mals had the dis­taste­ful im­pli­ca­tions that hu­mans had not been cre­ated by god in his image and pro­vided with im­mor­tal souls by god. I am sur­prised that you con­sider this dis­taste to be ev­i­dence that evolu­tion is an in­cor­rect the­ory of the ori­gin of species, in­clud­ing our own.

This is a ra­tio­nal­ity mes­sage board, de­voted to, among other things, list­ing the com­mon mis­takes that hu­mans make in try­ing to de­ter­mine the truth. I would have bet dol­lars against donuts that re­ject­ing the truth of a hy­poth­e­sis be­cause its im­pli­ca­tions were dis­taste­ful would have been an ob­vi­ous can­di­date for that list, and I would have ap­par­ently lost.

• If you had rea­son to be­lieve that the Earth is the cen­ter of the uni­verse, the fact that or­bit­ing the sun con­tra­dicts that is ev­i­dence against the Earth or­bit­ing the sun. It is re­lated to proof by con­tra­dic­tion; if your premises lead you to a con­tra­dic­tory con­clu­sion, then one of your premises is bad. And if one of your premises is some­thing in which you are jus­tified in hav­ing ex­tremely high con­fi­dence, such as “there is such a thing as mur­der”, it’s prob­a­bly the other premise that needs to be dis­carded.

I am sur­prised that you con­sider this dis­taste to be ev­i­dence that evolu­tion is an in­cor­rect the­ory of the ori­gin of species

If you have rea­son to be­lieve that hu­mans have souls, and evolu­tion im­plies that they don’t, that is ev­i­dence against evolu­tion. Of course, how good that is as ev­i­dence against evolu­tion de­pends on how good your rea­son is to be­lieve that hu­mans have souls. In the case of souls, that isn’t re­ally very good.

• Ev­i­dence that kil­ling is wrong is cer­tainly pos­si­ble, but your state­ment “I think that kil­ling is wrong” is such weak ev­i­dence that it is fair for us to dis­miss it. You may provide rea­sons why we should think kil­ling is wrong, and maybe we will ac­cept your rea­sons, but so far you have not given us any­thing worth con­sid­er­ing.

I think that you are also equiv­o­cat­ing on the word ‘im­ply’, sug­gest­ing that ‘dis­taste­ful im­pli­ca­tions’ means some­thing like ‘log­i­cal im­pli­ca­tions’.

• The task you de­scribe, at least the part where no whole brain trans­plant is in­volved, can be di­vided into two parts: 1) ex­tract­ing the es­sen­tial in­for­ma­tion about your mind from your brain, and 2) im­plant­ing that same in­for­ma­tion back into an­other brain.

Either of these could be achieved in two rad­i­cally differ­ent ways: a) psy­cholog­i­cally, i.e. by in­ter­view or mem­oir writ­ing on the ex­trac­tion side and “brain-wash­ing” on the im­plant­ing side, or b) tech­nolog­i­cally, i.e. by func­tional MRI, elec­tro-en­cephalog­ra­phy, etc on the ex­trac­tion side. It is hard for me to en­vi­sion a tech­nolog­i­cal im­plan­ta­tion method.

Either way, it seems to me that once we un­der­stand the mind enough to do any of this, it will turn out the eas­iest to just do the ex­trac­tion part and then simu­late the mind on a com­puter, in­stead of im­plant­ing it into a new body. Elimi­nate the wet­ware, and gain the benefit of reg­u­lar back­ups, co­pi­ous copies, and Moore’s law for in­creas­ing effec­tive­ness. Also, this would be eth­i­cally much more tractable.

It seems to me this could also be the solu­tion to the un­friendly AI prob­lem. What if the AI are us? Then yield­ing the world to them would not be so much of a prob­lem, sud­denly.

• psy­cholog­i­cally, i.e. by in­ter­view or mem­oir writ­ing on the ex­trac­tion side and “brain-wash­ing” on the im­plant­ing side,

I would expect recreating a mind from interviews and memoirs to be about as accurate as building a car based on interviews and memoirs written by someone who had driven cars. Which is to say, the part of our mind that talks and writes is not noted for its brilliant and detailed insight into how the vast majority of the mind works.

• Good point.

I sup­pose it boils down to what you in­clude when you say “mind”. I think the part of our mind that talks and writes is not very differ­ent from the part that thinks. So, if you nar­rowly, but rea­son­ably, define the “mind” as only the con­scious, think­ing part of our per­son­al­ity, it might not be so far­fetched to think a rea­son­able re­con­struc­tion of it from writ­ings is pos­si­ble.

Thought and lan­guage are closely re­lated. Ask your­self: How many of my thoughts could I put into lan­guage, given a good effort? My gut feel­ing is “most of them”, but I could be wrong. The same goes for mem­o­ries. If a mem­ory can not be ex­pressed, can it even be called a mem­ory?

• Is “how do I get bet­ter at sex?” a solved prob­lem?

Is it just a matter of getting a partner who will give you feedback and practicing?

• I think “how do you get bet­ter”, mostly yes, but “how do you get to be very very good”, mostly no.

• Ok. Is there a trick to that one or do you just need to have got­ten the lucky genes?

• “No”, as in “not a solved prob­lem” im­plies that no one knows :-)

Whether you need lucky genes is hard to tell. Maybe all you need is lack of un­lucky ones :-/​

• Is it a prob­lem that any­one has put sig­nifi­cant effort into? What’s the state of the ev­i­dence?

Now that I think about it, I’m a little surprised there isn’t a subculture of people trying to excel at sex, sort of the way pickup artists try to excel at getting sex.

Is this because there is no technique for doing sex well? Because most people think there’s no technique for doing sex well? Because sex is good enough already? Because sex is actually more about status than pleasure? Because such a subculture exists and I’m ignorant of it?

• Be­cause sex is good enough already?

Data suggest that a fair number of women don’t get orgasms during sex, but the literature suggests that they could, given the proper environment. Squirting in women seems to happen seldom enough that the UK bans it in their porn for being abnormal. But of course sex is about more than just orgasm length and intensity ;)

Because such a subculture exists and I’m ignorant of it?

Yes. In general, one of the things that distinguishes the pickup artist community is that it’s full of people who would rather sit in front of their computer and talk about techniques than interact face to face. That means you find a lot of information about it on the internet. Many of the people who are very kinesthetic don’t spend much time on the net.

But that doesn’t mean there’s no information available on the internet.

Get­ting ideas about how sex is sup­posed to work from porn is very bad. Porn is cre­ated to please the viewer, not the ac­tors. Porn pro­duc­ers have to worry about is­sues like cam­era an­gles. Sen­sual touch can cre­ate feel­ings with­out look­ing good on the cam­era. Porn of­ten ig­nores the state of mind of the ac­tors.

Books on the other hand do provide some knowledge, even when they alone aren’t enough. Tim Ferriss has two chapters about the subject in his book “The 4-Hour Body”, including a basic anatomy lesson on how the g-spot works. Apart from that I’m not familiar with English literature on the subject, but Tim Ferriss suggests, among others, http://www.tinynibbles.com/ for further reading.

The com­mu­nity in which I would ex­pect the most knowl­edge are polyamorous peo­ple who speak very openly with each other.

Us­ing our cher­ished ra­tio­nal­ity skills we can start to break the skill down into sub­ar­eas:

1) Everybody is different. Don’t assume that every man or woman wants the same thing.

2) Consent: Don’t do something that your partner doesn’t want you to do to them. When in doubt, ask.

3) Mindset: Lack of confidence and feeling pressure to perform can get in the way of being present. Various forms of “sex is bad” beliefs can reduce enjoyment.

Authentic expression, doing in every moment what feels right, is a good frame. If you need something to occupy your mind, think in terms of investigation. Be curious about the effects of your own actions. What happens in your own body? What happens in the body of your partner? How does it feel? Always be open to the present.

If you want to learn to be in that frame, classes in “Move­ment Science” (in dance stu­dios) or con­tact im­pro­vi­sa­tion can teach you to ac­cess that state of mind. In Ber­lin where I live that com­mu­nity also over­laps with the poly crowd.

4) Dominance: Higher testosterone and the behavior that it produces mean better sex.

5) Open communication: Creating a space where desires can be expressed without any fear of judgement is a skill that most people don’t have.

6) Fine con­trol over your own body. There are many ways to train those skills.

7) Per­cep­tions of the part­ner.

• I’m a lit­tle sur­prised there isn’t a sub­cul­ture of peo­ple try­ing to ex­cel at sex.

I’m sure there is, but I don’t think it would want to be very… pub­lic about it. For one thing, I wouldn’t be sur­prised if com­pe­tent pro­fes­sion­als were very good (and very ex­pen­sive).

Given Christianity’s prudishness (thank you, St. Augustine), you may also want to search outside of the Western world—Asia, including India, sounds promising.

But as usual, one of the first ques­tions is what do you want to op­ti­mize for. And don’t for­get that men and women start from quite differ­ent po­si­tions.

• And don’t for­get that men and women start from quite differ­ent po­si­tions.

I don’t know what you mean by this.

• The phys­iol­ogy of men and women is sig­nifi­cantly differ­ent.

• A question about Löb’s theorem: assume not(provable(X)). Then, by the rules of if-then statements, “if provable(X) then X” is provable. But then, by Löb’s theorem, provable(X), which is a contradiction. What am I missing here?

• I’m not sure how you’re getting from not(provable(X)) to provable(provable(X) → X), and I think you might be mixing meta levels. If you could prove not(provable(X)), then I think you could prove (provable(X) → X), which then gives you provable(X). Perhaps the solution is that you can never prove not(provable(X))? I’m not sure about this though.

• I forget the formal name for the theorem, but isn’t (if X then Y) iff (not-X or Y) provable in PA? Because I was pretty sure that’s a fundamental theorem in first-order logic. Your solution is the one that looked best, but it still feels wrong. Here’s why: Say P is provable. Then not-P is provably false. Then not(provable(not-P)) is provable. Not being able to prove not(provable(X)) means nothing is provable.

• You’re right that (if X then Y) is just fancy no­ta­tion for (not(X) or Y). How­ever, I think you’re mix­ing up lev­els of where things are be­ing proved. For the pur­poses of the rest of this com­ment, I’ll use prov­able(X) to mean that PA or what­ever proves X, and not that we can prove X. Now, sup­pose prov­able(P). Then prov­able(not(not(P))) is deriv­able in PA. You then claim that not(prov­able(not(P))) fol­lows in PA, that is to say, that prov­able(not(Q)) → not(prov­able(Q)). How­ever, this is pre­cisely the state­ment that PA is con­sis­tent, which is not prov­able in PA. There­fore, even though we can go on to prove not(prov­able(not(P))), PA can’t, so that last step doesn’t work.
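For reference, the distinction being drawn here can be written in standard provability-logic notation, with □X abbreviating provable(X), i.e. “PA proves X”:

```latex
% Löb's theorem (meta-level form): if PA proves that P's provability
% implies P, then PA already proves P.
\text{If } \mathrm{PA} \vdash (\Box P \to P), \text{ then } \mathrm{PA} \vdash P.

% Formalized (object-level) form, provable in PA itself:
\mathrm{PA} \vdash \Box(\Box P \to P) \to \Box P.

% Gödel's second incompleteness theorem: PA cannot prove its own
% consistency, where consistency says "falsity is not provable":
\mathrm{Con}(\mathrm{PA}) \equiv \neg\Box\bot, \qquad
\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}).

% The step used above, provable(not(Q)) -> not(provable(Q)), is
% equivalent over PA to Con(PA): if both Q and not(Q) were provable,
% falsity would be too.
\Box\neg Q \to \neg\Box Q \quad\Longleftrightarrow\quad \mathrm{Con}(\mathrm{PA}).
```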

• Wait. Not(prov­able(con­sis­tency)) is prov­able in PA? Then run that through the above.

• Not(prov­able(con­sis­tency)) is prov­able in PA?

I’m not sure that this is true. I can’t find any­thing that says ei­ther way, but there’s a sec­tion on Godel’s sec­ond in­com­plete­ness the­o­rem in the book “Set the­ory and the con­tinuum hy­poth­e­sis” by Paul Co­hen that im­plies that the the­o­rem is not prov­able in the the­ory that it ap­plies to.

• I’ll rephrase it this way:
For all C: Either provable(C) or not(provable(C)).
If provable(C), then provable(C).
If not(provable(C)), then use the above logic to prove provable(C).
Therefore all C are provable.

• Which “above logic” are you refer­ring to? If you mean your OP, I don’t think that the logic holds, for rea­sons that I’ve ex­plained in my replies.

• Your reasons were that not(provable(C)) isn’t provable in PA, right? If so, then I will rebut thusly: the setup in my comment immediately above (i.e. either provable(C) or not(provable(C))) gets rid of that.

• I’m not claiming that there is no propo­si­tion C such that not(prov­able(C)), I’m say­ing that there is no propo­si­tion C such that prov­able(not(prov­able(C))) (again, where all of these ‘prov­able’s are with re­spect to PA, not our whole abil­ity to prove things). I’m not see­ing how you’re get­ting from not(prov­able(not(prov­able(C)))) to prov­able(C), un­less you’re com­mut­ing ‘not’s and ’prov­able’s, which I don’t think you can do for rea­sons that I’ve stated in an an­ces­tor to this com­ment.

• Well, there is, unless I misunderstand what meta level provable(not(provable(consistency))) is on.

• I think you do mi­s­un­der­stand that, and that the proof of not(prov­able(con­sis­tency(PA))) is not in fact in PA (re­mem­ber that the “prov­able()” func­tion refers to prov­abil­ity in PA). Fur­ther­more, re­gard­ing your com­ment be­fore the one that I am re­spond­ing to now, just be­cause not(prov­able(C)) isn’t prov­able in PA, doesn’t mean that prov­able(C) is prov­able in PA: there are lots of state­ments P such that nei­ther prov­able(P) nor prov­able(not(P)), since PA is in­com­plete (be­cause it’s con­sis­tent).

• That doesn’t ac­tu­ally an­swer my origi­nal ques­tion—I’ll try writ­ing out the full proof.

Premises:

1. P or not-P is true in PA

2. Also, because of that, if (p → q) and (not(p) → q), then q—by the rules of distribution over and/or

So:

1: provable(P) or not(provable(P)), by premise 1

2: If provable(P), then provable(P): switch “if p then p” to “not(p) or p” (premise 1)

3: If not(provable(P)), then provable(if provable(P) then P): since “if p then q” = “not(p) or q” and not(not(p)) = p

4: Therefore, if not(provable(P)), then provable(P): by 3 and Löb’s theorem

5: Therefore provable(P): by premise 2, line 2, and line 4

Where’s the flaw? Is it between lines 3 and 4?

• I think step 3 is wrong. Expanding out your logic, you are saying that if not(provable(P)), then (if provable(P) then P), and then provable(if provable(P) then P). The second step in this chain is wrong, because there are true facts about PA that we can prove but that PA cannot prove.
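Spelled out, the gap in step 3 is the difference between an implication being true and its being provable; Löb’s theorem only accepts the boxed (provable-in-PA) form:

```latex
% From the truth of not(provable(P)), the implication follows as a
% propositional tautology (false antecedent):
\neg\Box P \;\Longrightarrow\; (\Box P \to P)

% But step 3 needs the boxed version, which does not follow: that would
% require PA itself to prove not(provable(P)), not merely for it to be true.
\neg\Box P \;\not\Longrightarrow\; \Box(\Box P \to P)

% Löb's theorem fires only on the boxed form:
\Box(\Box P \to P) \to \Box P
```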

• So the state­ment (if not(p) then (if p then q)) is not prov­able in PA? Doesn’t it fol­low im­me­di­ately from the defi­ni­tion of if-then in PA?

• (if not(p) then (if p then q)) is prov­able. What I’m claiming isn’t nec­es­sar­ily prov­able is (if not(p) then prov­able(if prov­able(p) then q)), which is a differ­ent state­ment.

• Oh, that’s what I’ve been failing to get across.

I’m not say­ing if not(p) then (if prov­able(p) then q). I’m say­ing if not prov­able(p) then (if prov­able(p) then q)

• I’m say­ing if not prov­able(p) then (if prov­able(p) then q)

You aren’t say­ing that though. In the post where you num­bered your ar­gu­ments, you said (bold­ing mine)

if not(prov­able(P)) then prov­able(if prov­able(P) then P)

which is differ­ent, be­cause it has an ex­tra ‘prov­able’.

• So then here’s a smaller lemma, for all x and all q:

If not(x)

Then provable(if x then q): by definition of if-then

So replace x by provable(P) and q by P.

Where’s the flaw?

• The flaw is that you are correctly noticing that provable(if not(x) then (if x then q)), and incorrectly concluding if not(x) then provable(if x then q). It is true that if not(x) then (if x then q), but not(x) is not necessarily provable, so (if x then q) is also not necessarily provable.

• Is x or not x provable? Then use my proof structure again.

• The whole point of this discussion is that I don’t think that your proof structure is valid. To be honest, I’m not sure where your confusion lies here. Do you think that all statements that are true in PA are provable in PA? If not, how are you deriving provable(if x then q) from (if x then q)?

In regards to your above comment, just because you have provable(x or not(x)) doesn’t mean you have provable(not(x)), which is what you need to deduce provable(if x then q).

• To answer the below: I’m not saying that provable(X or not X) implies provable(not X). I’m saying… I’ll just put it in lemma form (where P(x) means provable(x)):

If P(if x then Q) and P(if not x then Q)

Then P(not x or Q) and P(x or Q): by the rules of if-then

Then P((x and not x) or Q): by the rules of distribution

Then P(Q): by the rules of or-statements

So my proof structure is as follows: prove that both provable(P) and not provable(P) imply provable(P). Then, by the above lemma, provable(P). I don’t need to prove provable(not(provable(P))); that’s not required by the lemma. All I need to prove is that the logical operations that lead from not(provable(P)) to provable(P) are truth- and provability-preserving.

• Breaking my no-comment commitment because I think I might know what you were thinking that I didn’t realise that you were thinking (won’t comment after this, though): if you start with (provable(provable(P)) or provable(not(provable(P)))), then you can get your desired result, and indeed, provable(provable(P) or not(provable(P))). However, provable(Q or not(Q)) does not imply provable(Q) or provable(not(Q)), since there are undecidable questions in PA.

• Ohhh, thanks. That explains it. I feel like there should exist things for which provable(not(p)), but I can’t think of any offhand, so that’ll do for now.

• I agree that if you could prove that (if not(provable(P)) then provable(P)), then you could prove provable(P). That being said, I don’t think that you can actually prove (if not(provable(P)) then provable(P)). A few times in this thread, I’ve shown what I think the problem is with your attempted proof: the second half of step 3 does not follow from the first half. You are assuming X, proving Y, then concluding provable(Y), which is invalid, because X itself might not have been provable. I am really tired of this thread, and will no longer comment.

• Ok, thanks for clearing that up.

• As far as I know, that is actually the solution. If you could prove “not provable(X)”, then in particular you have proven that the proof system you’re working in is consistent (an inconsistent system proves everything by explosion). But Gödel.
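The provability rules this subthread keeps invoking can be stated compactly; this is a summary of the standard modal provability logic GL, not anything claimed by the commenters themselves:

```latex
% Provability logic GL, reading \Box A as "A is provable in PA".
\begin{align*}
&\text{(Necessitation)} & &\text{from } \vdash A \text{ infer } \vdash \Box A\\
&\text{(K)}             & &\Box(A \to B) \to (\Box A \to \Box B)\\
&\text{(L\"ob)}         & &\Box(\Box A \to A) \to \Box A
\end{align*}
% The flaw in step 3 above: necessitation applies only to theorems,
% not to assumptions. From the assumption \neg\Box P one may derive
% \Box P \to P, but not \Box(\Box P \to P). Likewise
% \vdash \Box(A \lor \neg A) holds for every A, while
% \Box A \lor \Box\neg A fails whenever A is undecidable.
```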

• Maneki Neko is a short story about an AI that manages a kind of gift economy. It’s an enjoyable read.

I’ve been curious about this ‘class’ of systems for a while now, but I don’t think I know enough about economics to ask the questions well. For example, the story supplies a superintelligence to function as a competent central manager, but could such a gift network theoretically exist without being centrally managed (and without trivially reducing to modern forms of currency exchange)? Could a variant of Watson be used to automate the distribution of capital in the same way that it makes a medical diagnosis? And so on.

In particular, I’m looking for the intellectual tools that would be used to ask these questions in a more rigorous way; it would be great if I had better ways of figuring out which of these questions are obviously stupid and which are not. Specific disciplines in economics or game theory, perhaps. Things along the lines of LW’s Mechanism Design sequence would be fantastic. Can anyone give me a few pointers?

• My in­tu­ition is ev­ery good al­lo­ca­tion sys­tem will use prices some­where, whether the users see them or not. The main perk of the story’s econ­omy is get­ting things you need with­out hav­ing to ex­plic­itly de­cide to buy them (ie the down-on-his-luck guy un­ex­pect­edly gifted his fa­vorite coffee), and that could be im­ple­mented through in­di­vi­d­ual AI agents rather than a cen­tral AI.

Flesh­ing out how this might play out, if I’m feel­ing sick, my AI agent no­tices and broad­casts a bid for hot soup. The agents of peo­ple nearby re­spond with offers. The low­est offer might come from some­one already in a soup shop who lives next door to me since they’ll hardly have to go out of their way. Their agent would no­tify them to buy some­thing ex­tra and de­liver it to me. Once the task is fulfilled, my agent would send the agreed-upon pay­ment. As long as the agents are well-cal­ibrated to our needs and costs, it’d feel like a great gift even if there are auc­tions and pay­ments be­hind the scenes.
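The soup scenario above can be sketched as a tiny procurement auction. Everything here (the names, the first-price payment rule) is illustrative, not a real protocol:

```python
# Hypothetical sketch of the "gift" flow described above: a needer's agent
# broadcasts a bid, nearby agents respond with offers (their all-in cost
# to help), and the cheapest viable offer wins.

from dataclasses import dataclass

@dataclass
class Offer:
    helper: str
    cost: float  # helper's all-in cost to fulfil the request

def run_gift_auction(request: str, offers: list, max_price: float):
    """Pick the cheapest offer at or below the bidder's reservation price."""
    viable = [o for o in offers if o.cost <= max_price]
    if not viable:
        return None  # no one can help cheaply enough; no gift happens
    winner = min(viable, key=lambda o: o.cost)
    # Pay the winner their stated cost (a first-price rule; a second-price
    # rule would pay the runner-up's cost, improving truthful bidding).
    return winner.helper, winner.cost

offers = [
    Offer("stranger_across_town", 9.0),
    Offer("neighbour_in_soup_shop", 2.5),  # barely out of their way
    Offer("friend_at_work", 6.0),
]
print(run_gift_auction("hot soup", offers, max_price=8.0))
```

With a well-calibrated agent setting `max_price`, the recipient never sees the auction, only the soup arriving.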

For pointers, general equilibrium theory studies how to allocate all the goods in an economy. Depending on how you squint at the model, it could be studying centralized or decentralized markets, based on money or pure exchange. A Toolbox for Economic Design is a fairly accessible textbook on mechanism design that covers lots of allocation topics.

• This looks very useful. Thanks!

Another one of those interesting questions is whether the pricing system must be equivalent to currency exchange. To what extent are the traditional modes of transaction a legacy of the limitations behind physical coinage, and what degrees of freedom are offered by ubiquitous computation and connectivity? Etc. (I have a lot of questions.)

• Results like the Second Welfare Theorem (every efficient allocation can be implemented via competitive equilibrium after some lump-sum transfers) suggest it must be equivalent in theory.

Eric Budish has done some interesting work changing the course allocation system at Wharton to use general equilibrium theory behind the scenes. In the previous system, courses were allocated via a fake-money auction where students had to actually make bids. In the new system, students submit preferences and the allocation is computed as the equilibrium starting from “equal incomes”.
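As a toy illustration of computing an allocation from “equal incomes”: the sketch below is a simplified Cobb-Douglas exchange model, not Budish’s actual course-match algorithm, and all the numbers are made up:

```python
# Two agents with equal fake-money incomes and Cobb-Douglas preferences.
# Agent i spends weights[i][g] * incomes[i] on good g, so market clearing
# pins down each price as (total spending on g) / (supply of g).

def clearing_prices(weights, incomes, supply):
    prices = []
    for g in range(len(supply)):
        spending = sum(w[g] * m for w, m in zip(weights, incomes))
        prices.append(spending / supply[g])
    return prices

def allocation(weights, incomes, prices):
    # Cobb-Douglas demand: quantity = budget share / price.
    return [[w[g] * m / prices[g] for g in range(len(prices))]
            for w, m in zip(weights, incomes)]

# Opposite tastes over two "courses", equal incomes of 1 unit each.
weights = [[0.8, 0.2], [0.2, 0.8]]
incomes = [1.0, 1.0]
supply = [2.0, 2.0]

p = clearing_prices(weights, incomes, supply)
alloc = allocation(weights, incomes, p)
print(p, alloc)  # each column of alloc sums to the supply of that good
```

Each agent ends up with more of the course they care about, no one exceeds their budget, and both markets clear, which is the equilibrium property the “equal incomes” approach relies on.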

What benefits do you think a different system might provide, or what problems does monetary exchange have that you’re trying to avoid? Extra computation and connectivity should just open opportunities for new markets and dynamic pricing, rather than suggest we need something new.

• I’m looking for the intellectual tools that would be used to ask these questions in a more rigorous way

The field of study that deals with this is called economics. Any reason an intro textbook won’t suit you?

• Could a variant of Watson be used to automate the distribution of capital in the same way that it makes a medical diagnosis?

The stock market has a lot of capable AIs that manage capital allocation.

• Fair point. It’s my understanding that this is limited to rapid day trades, with implications for the price of a stock but not cash-on-hand for the actual company. I was imagining something more like a helper algorithm for venture capital or angel investors, comparable to the PGMs underpinning the insurance industry.

• Is it a LessWrongian faux pas to comment only to agree with someone? Here’s the context:

That’s the kind of person that goes on to join LW and tell you. There are also people who read a sequence post or two because they followed a link from somewhere, weren’t shocked at all, maybe learned something, and left. In fact I’d expect they’re the vast majority.

I was going to say that I agree and that I had not considered my observation as an effect of survivorship bias.

I guess I thought it might be useful to explicitly relate what he said to a bias. Maybe that’s just stating the obvious here? Maybe I should do it anyway because it might help someone?

• What prerequisite knowledge is necessary to read and understand Nick Bostrom’s Superintelligence?

• Here I be, looking at a decade-old Kurzweil book, and I want to know whether the trends he’s graphing hold up in later years. I have no inkling of where on earth one GETs these kinds of factoids, except by some mystical voodoo powers of Research bestowed by Higher Education. It’s not just guesstimation… probably.

Bits per Second per Dollar for wireless devices? Smallest DRAM Half Pitches? Rates of adoption for pre-industrial inventions? From whence do all these numbers come, and how does one get more recent collections of numbers?

• LW user Stuart Armstrong did a number of posts assessing Kurzweil’s predictions: Here, here, here, and here.

• Mostly just out of curiosity:

What happens karma-wise when you submit a post to Discussion, it gets some up/downvotes, you resubmit it to Main, and it gets up/downvotes there? Does the post’s score transfer, or does it start from 0?

• The post’s score transfers, but I think that the votes that were applied when it was in Discussion don’t get the x10 karma multiplier that posts in Main otherwise do.

• Thanks!

• How do I improve my ability to simulate/guess other people’s internal states and future behaviors? I can, just barely, read emotions, but I make the average human look like a telepath.

• It’s mostly trial and error: paying attention to other people doing well or making mistakes, and getting honest feedback from a skilled and trusted friend. Learning social skills is like learning to ride a bike; reading about it doesn’t give you much of an advantage.

The younger you are, the less it costs to make mistakes. I think a social job is a good way to learn, because customers are way less forgiving than other people you randomly meet. You could volunteer for some social tasks too.

If your native hardware is somehow socially limited, then you might benefit from reading a little bit more, and you might have to develop workarounds to use what you’ve got to read people. It’s difficult to learn from mistakes if you don’t know you’re making them.

One thing I’ve learned about the average human looking like a telepath is that most people are way too certain about their particular assumption when there are actually multiple possible ways to understand a situation. People generally aren’t as great at reading each other as they think they are.

• My native hardware is definitely limited: I’m autistic.

The standard quick-and-dirty method of predicting others seems to be “model them as slightly modified versions of you”, but when other people’s minds are more similar to each other than they are to yours, the method works far better for them than it does for you.

My realtime modeling isn’t that much worse than other people’s, but other people can do a lot more with a couple of minutes and no distractions than I can.

Thanks a bunch for the suggestions!

• The standard quick-and-dirty method of predicting others seems to be “model them as slightly modified versions of you”

It certainly doesn’t feel that way to me, but I might have inherited some autistic characteristics, since there are a couple of autistic people in my extended family. Now that I’ve worked with people more, it’s more like I have several basic models of people like “rational”, “emotional”, “aggressive”, “submissive”, “assertive”, “polite”, “stupid”, “smart”, and then modify those first impressions according to additional information.

I definitely try not to model other people based on my own preferences, since they’re pretty unusual, and I hate it when other people try to model me based on their own preferences, especially if they’re emotional and extroverted. I find that kind of empathy very limited, and these days I think I can model a wider variety of people than many natural extroverts can, in the limited types of situations where I need to.

• Thanks! Your personality archetypes/stereotypes sound like a quick-and-dirty modeling system that I can actually use, but one that I shouldn’t explain to the people who know me by my true name.

That probably explains why I hadn’t heard about it already: if it were less offensive-sounding, then someone would have told me about it. Instead, we get the really-nice-sounding but not very practical suggestions about putting yourself in other people’s shoes, which is better for basic* morality than it is for prediction.

*By “basic”, I mean “stuff all currently used ethical systems would agree on”, like ‘don’t hit someone in order to acquire their toys.’

• Assuming for a moment that Everett’s interpretation is correct, that there will eventually be a way to very confidently deduce this, and that time, identity and consciousness work pretty much as described by Drescher (IIRC: there is no continuation of consciousness, just memories, and nothing meaningful separates your identity from your copies):

Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise worse off than they plausibly could be? Committing to give up in case things go awry would lessen the impact of setbacks and increase the proportion of branches where everything is stellar, just due to good luck. Keep the best worlds, discard the rest, avoid a lot of hassle.

This is obviously not applicable to e.g. humanity as it is, where self-destruction on any level is inconvenient, if at all possible, and generally not a nice thing to do. But would it theoretically make sense for intelligences like this to develop, and maybe even have an overwhelming tendency to develop in the long term? What if this is one of the vast number of branches where everyone in the observable universe pretty much failed to have a good enough time and a bright enough future and just offed themselves before interstellar travel etc., because a sufficiently advanced civilization sees it’s just not a big deal in an Everett multiverse?

(There’s probably a lot that I’ve missed here, as I have no deep knowledge regarding the MWI, and my reading history so far only touches on this kind of stuff in general, but yay stupid questions thread.)

• Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise worse off than they plausibly could be?

Not really. If you’re in a suboptimal branch, but still doing better than if you didn’t exist at all, then you aren’t making the world better off by self-destructing, regardless of whether other branches exist.

Committing to give up in case things go awry would lessen the impact of setbacks and increase the proportion of branches where everything is stellar, just due to good luck. Keep the best worlds, discard the rest, avoid a lot of hassle.

It would not increase the proportion (technically, you want to be talking about measure here, but the distinction isn’t important for this particular discussion) of branches where everything is stellar, just the proportion of branches where everything is stellar out of the total proportion of branches where you are alive, which isn’t so important. To see this, imagine you have two branches, one where things are going poorly and one where things are going great. The proportion of branches where things are going stellar is 1/2. Now suppose that the being/society/system that is going poorly self-destructs. The proportion of branches where things are going stellar is still 1/2, but now you have a branch where, instead of having a being/society/system that is going poorly, you have no being/society/system at all.
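The two-branch example can be restated numerically; the amplitudes here are illustrative:

```python
# Born measure of a branch is |amplitude|^2. Deleting yourself from the
# bad branch does not move any measure into the good branch.

import math

good, bad = 1 / math.sqrt(2), 1 / math.sqrt(2)  # two equal-weight branches
measure_good = good ** 2   # ~0.5
measure_bad = bad ** 2     # ~0.5

# "Self-destruct in the bad branch": the branch keeps its measure; it
# just no longer contains you. The good branch's measure is unchanged.
measure_good_after = measure_good

# What does change is the share of *you-containing* branches that are
# stellar, which jumps to 100%.
share_of_alive_branches = measure_good_after / measure_good_after

print(measure_good, measure_good_after, share_of_alive_branches)
```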

• Thanks! Ah, I’m probably just typical-minding like there’s no tomorrow, but I find it inconceivable to place much value on the amount of branches you exist in. The perceived continuation of your consciousness will still go on as long as there are beings with your memories in some branch: in general, it seems to me that if you say you “want to keep living”, you mean you want there to be copies of you in some of the possible futures, waking up the next morning doing stuff present-you would have done, recalling what present-you thought yesterday, and so on (in addition, you will probably want a low probability for this future to include significant suffering). Likewise, if you say you “want to see humanity flourish indefinitely”, you want a future that includes your biological or cultural peers and offspring colonizing space and all that, remembering and cherishing many of the values you once had (sans significant suffering). To me it seems impossible to assign value to the amount of MWI-copies of you, not least because there is no way you could even conceive their number, or usually make meaningful ethical decisions where you weigh their amounts.* Instead, what matters overwhelmingly more is the probability of any given copy living a high-quality life.

just the proportion of branches where everything is stellar out of the total proportion of branches where you are alive

Yes, this is obvious, of course. What I meant was exactly this, because from the point of view of a set of observers, eliminating the set of observers from a branch ⇔ rendering the branch irrelevant, pretty much.

which isn’t so important.

To me it did feel like this is obviously what’s important, and the branches where you don’t exist simply don’t matter: there’s no one there to observe anything after all, or to judge the lack of you to be a loss or morally bad (again, not applicable to individual humans).

If I learned today that I have a 1% chance of developing a maybe-terminal, certainly suffering-causing cancer tomorrow, and I could press a button to just eliminate the branches where that happens, I would not have thought I am committing a moral atrocity. I would not feel like I am killing myself just because part of my future copies never get to exist, nor would I feel bad for the copies of the rest of all people: no one would ever notice anything, vast amounts of future copies of current people would wake up just like they thought they would the next morning, and carry on with their lives and aspirations. But this is certainly something I should learn to understand better before anyone gives me a world-destroying cancer-cure button.

*Which is one main difference when comparing this to regular old population ethics, I suppose.

• To me it seems impossible to assign value to the amount of MWI-copies of you, not least because there is no way you could even conceive their number, or usually make meaningful ethical decisions where you weigh their amounts.

As it happens, you totally can (it’s called the Born measure, and it’s the same number as what people used to think was the probability of different branches occurring), and agents that satisfy sane decision-theoretic criteria weight branches by their Born measure; see this paper for the details.

I would not feel like I am killing myself just because part of my future copies never get to exist, nor would I feel bad for the copies of the rest of all people: no one would ever notice anything, vast amounts of future copies of current people would wake up just like they thought they would the next morning, and carry on with their lives and aspirations.

This is a good place to strengthen intuition, since if you replace “killing myself” with “torturing myself”, it’s still true that none of your future selves who remain alive/untortured “would ever notice anything, vast amounts of future copies of [yourself] would wake up just like they thought they would the next morning, and carry on with their lives and aspirations”. If you arrange for yourself to be tortured in some branches and not others, you wake up just as normal and live an ordinary, fulfilling life, but you also wake up and get tortured. Similarly, if you arrange for yourself to be killed in some branches and not others, you wake up just as normal and live an ordinary, fulfilling life, but you also get killed (which is presumably a bad thing, even or especially if everybody else also dies).

One way to intuitively see that this way of thinking is going to get you in trouble is to note that your preferences, as stated, aren’t continuous as a function of reality. You’re saying that universes where (1-x) proportion of branches feature you being dead and x proportion of branches feature you being alive are all equally fine for all x > 0, but that a universe where you are dead with proportion 1 and alive with proportion 0 would be awful (well, you didn’t actually say that, but otherwise you would be fine with killing some of your possible future selves in a classical universe). However, there is basically no difference between a universe where (1-epsilon) proportion of branches feature you being dead and epsilon proportion of branches feature you being alive, and a universe where 1 proportion of branches feature you being dead and 0 proportion of branches feature you being alive (since, don’t forget, MWI looks like a superposition of waves, not a collection of separate universes). This is the sort of thing which is liable to lead to crazy behaviour.

• I’m sorry, but “sort of thing which is liable to lead to crazy behaviour” won’t cut it. Could you give an example of crazy behaviour with this preference ordering? I still think this approach (not counting measure as long as some of me exists) feels right and is what I want. I’m not too worried about discontinuity at only x=0 (and if you look at larger multiverses, x probably never equals 0).

To argue over a specific example: if I set up something that chooses a number randomly with quantum noise, then buys a lottery ticket, then kills me (in my sleep) only if the ticket doesn’t win, then I assign positive utility to turning the machine on. (Assuming I don’t give a damn about the rest of the world, who will have to manage without me.) Can you turn this into either an incoherent preference, or an obviously wrong preference?
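One way to make the disagreement concrete is to compare the lottery machine’s expected utility under the two rules being argued over; all the numbers below are made up for illustration:

```python
# Expected utility of the "quantum lottery" machine under (a) standard
# Born-weighted expected utility, where branches in which you die count
# negatively, and (b) the "discard branches where I'm dead" rule, which
# renormalizes over surviving branches only.

P_WIN = 1e-7          # Born measure of the winning branch
U_WIN = 1_000_000.0   # utility of surviving with the jackpot
U_DEAD = -100_000.0   # utility assigned to a branch where you die
U_STATUS_QUO = 0.0    # utility of never turning the machine on

# (a) Born-weighted: the huge mass of death branches dominates.
eu_born = P_WIN * U_WIN + (1 - P_WIN) * U_DEAD

# (b) Discard rule: the only branch that "counts" is the jackpot one,
# so the machine looks like a guaranteed win.
eu_discard = U_WIN

print(eu_born < U_STATUS_QUO)     # Born-weighting rejects the machine
print(eu_discard > U_STATUS_QUO)  # the discard rule accepts it
```

The two rules give opposite recommendations for any death-utility that isn’t exactly zero, which is precisely the point of contention in this exchange.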

(Personally, I’ve thought about the TDT argument for not doing that: you don’t want everyone else to do it and create worlds in which only one person who would do it is left in each. But I’m not convinced that there are a significant number of people who would follow my decision on this. If I ever meet someone like that, I might team up with them to ensure we’d both end up in the same world. I haven’t seen any analysis of TDT/anthropics applied to this problem, perhaps because other people care more about the world?)

• Another way to look at it is this: imagine you wake up after the bet, and don’t yet know whether you are going to quickly be killed or whether you are about to receive a large cash prize. It turns out that your subjective credence for which branch you are in is given by the Born measure. Therefore (assuming that not taking the bet maximises expected utility in the single-world case), you’re going to wish that you hadn’t taken the bet immediately after taking it, without learning anything new or changing your mind about anything. Thus, your preferences as stated either involve weird time inconsistencies, or care about whether there’s a tiny sliver of time between the worlds branching off and you being killed. At any rate, in any practical situation, that tiny sliver of time is going to exist, so if you don’t want to immediately regret your decision, you should maximise expected utility with respect to the Born measure, and not discount worlds where you die.

• Your preference already feels “obviously wrong” to me, and I’ll try to explain why. If we imagine that only one world exists, but we don’t know how it will evolve, I wouldn’t take the analogue of your lottery-ticket example, and I suspect that you wouldn’t either. The reason that I wouldn’t do this is that I care about the possible future worlds where I would die, despite the fact that I wouldn’t exist there (after very long). I’m not sure what other reason there would be to reject this bet in the single-world case. However, you are saying that you don’t care about the actual future worlds where you die in the many-worlds case, which seems bizarre and inconsistent with what I imagine your preferences would be in the single-world case. It’s possible that I’m wrong about what your preferences would be in the single-world case, but then you’re acting according to the Born rule anyway, and whether the MWI is true doesn’t enter into it.

(EDIT: that last sentence is wrong; you aren’t acting according to the Born rule anyway.)

In regards to my point about discontinuity, it’s worth knowing that to know whether x = 0 or x > 0, you need infinitely precise knowledge of the wave function. It strikes me as unreasonable and off-putting that no finite amount of information about the state of the universe can discern between one universe which you think is totally fantastic and another universe which you think is terrible and awful. That being said, I can imagine someone being unpersuaded by this argument. If you are willing to accept discontinuity, then you get a theory where you are still maximising expected utility with respect to the Born rule, but your utilities can be infinite or infinitesimal.

On a slightly different note, I would highly recommend reading the paper which I linked (most of which I think is comprehensible without a huge amount of technical background), which motivates the axioms you need for the Born rule to work, and dismotivates other decision rules.

EDIT: Also, I’m sorry about the “sort of thing which is liable to lead to crazy behaviour” thing; it was a long comment and my computer had already crashed once in the middle of composing it, so I really didn’t want to write more.

• I downloaded the paper you linked to and will read it shortly. I’m totally sympathetic to the “didn’t want to make a long comment longer” excuse, having felt that way many times myself.

I agree that in the single-world case, I wouldn’t want to do it. That’s not because I care about the single world without me per se (as in caring for the people in the world), but because I care about myself, who would not exist with ~1 probability. In a multiverse, I still exist with ~1 probability. You can argue that I can’t know for sure that I live in a multiverse, which is one of the reasons I’m still alive in your world (the main reason being it’s not practical for me right now, and I’m not really confident enough to bother researching and setting something like that up). However, you also don’t know that anything you do is safe, by which I mean things like driving, walking outside, etc. (I’d say those things are far more rational in a multiverse anyway, but even people who believe in a single world still do these things.)

Another reason I don’t have a problem with discontinuity is that the whole problem seems only to arise when you have an infinite number of worlds, and I just don’t feel like that argument is convincing.

I don’t think you need infinite knowledge to know whether x=0 or x>0, especially if you give some probability to higher-level multiverses. You don’t need to know for sure that x>0 (as you can’t know anyway), but you can have 99.9% confidence that x>0 rather easily, conditional on MWI being true. As I explained, that is enough to take risks.

If I wake up after, in the case that I laid out, that would mean that I won, as I specified I would be killed while asleep. I could even specify that the entire lotto picking, noise generation, and checking is done while I sleep, so I don’t have to worry about it. That said, I don’t think the question of my subjective expectation of no longer existing is well-defined, because I don’t have a subjective experience if I no longer exist. If I am cloned, then told one of me is going to be vaporized without any further notice, and it happens fast enough not to have them feel anything, then my subjective expectation is 100% to survive. That’s different from the torture case you mentioned above, where I expect to survive and have subjective experiences. I think we do have some more fundamental disagreement about anthropics, which I don’t want to argue over until I hash out my viewpoint more. (Incidentally, it seemed to me that Eliezer agrees with me at least partly, from what he writes in http://lesswrong.com/lw/14h/the_hero_with_a_thousand_chances/:

“What would happen if the Dust won?” asked the hero. “Would the whole world be destroyed in a single breath?”

Aerhien’s brow quirked ever so slightly. “No,” she said serenely. Then, because the question was strange enough to demand a longer answer: “The Dust expands slowly, using territory before destroying it; it enslaves people to its service, before slaying them. The Dust is patient in its will to destruction.”

The hero flinched, then bowed his head. “I suppose that was too much to hope for; there wasn’t really any reason to hope, except hope… it’s not required by the logic of the situation, alas...”

I interpreted that as saying that you can only rely on the anthropic principle (and super quantum psychic powers) if you die without pain.)

• I’m actually planning to write a post about Big Worlds, anthropics, and some other topics, but I’ve got other things going on and am continuously putting it off. Eventually. I’d ideally like to finish some anthropics books and papers, including Bostrom’s, first.

• Another, more concise way of putting my troubles with discontinuity: I think that your utility function over universes should be a computable function, and computable functions are continuous.

Also: what, you have better things to do with your time than read long academic papers about philosophy of physics right now, just because an internet stranger told you to?!

• In the single-world case, I wouldn’t want to do it. That’s not because I care about the single world without me per se (as in caring for the people in the world), but because I care about myself, who would not exist with ~1 probability.

Here’s the thing: you obviously think that you dying is a bad thing. You apparently like living. Even if the probability were 20-80 of you dying, I imagine you still wouldn’t take the bet (in the single-world case) if the reward were only a few dollars, even though you would likely survive. This indicates that you care about possible futures where you don’t exist: not in the sense that you care about people in those futures, but in that you count those futures in your decision algorithm, and weigh them negatively. By analogy, I think you should care about branches where you die: not in the sense that you care about the welfare of the people in them, but in that you should take those branches into account in your decision algorithm, and weigh them negatively.

Another reason I don’t have a problem with discontinuity is that the whole problem seems only to arise when you have an infinite number of worlds, and I just don’t feel like that argument is convincing.

I’m not sure what you can mean by this comment, especially “the whole problem”. My arguments against discontinuity still apply even if you only have a superposition of two worlds, one with amplitude sqrt(x) and another with amplitude sqrt(1-x).

I don’t think you need infinite knowledge to know whether x=0 or x>0, especially if you give some probability to higher-level multiverses.

… I promise that you aren’t going to be able to perform a test on a qubit $\alpha |0\rangle + \beta |1\rangle$ that you can expect to tell you with 100% certainty that $\alpha = 0$, even if you have multiple identical qubits.
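A toy calculation makes the point concrete: the chance that n identical measurements all come out |1⟩ varies continuously with |α|², so α = 0 and a tiny nonzero α produce nearly identical statistics. (A minimal sketch; the function name is mine, not standard.)

```python
# Chance that n shots of the qubit a|0> + b|1> yield no |0> outcome at all:
# each measurement gives |0> with probability |a|^2, independently.
def prob_all_ones(alpha_sq: float, n: int) -> float:
    """Probability that n independent measurements all come out |1>."""
    return (1.0 - alpha_sq) ** n

# With alpha exactly zero, seeing only |1>s is certain...
assert prob_all_ones(0.0, 10**6) == 1.0

# ...but a tiny nonzero alpha gives nearly the same statistics, so no
# finite experiment can certify alpha = 0 with 100% certainty.
print(prob_all_ones(1e-9, 10**3))  # approximately 0.999999
```

No finite number of all-|1⟩ outcomes rules out a sufficiently small α; it only pushes your posterior toward α = 0.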

You don’t need to know for sure that x>0 (as you can’t know any­way), but you can have 99.9% con­fi­dence that x>0 rather eas­ily, con­di­tional on MWI be­ing true. As I ex­plained, that is enough to take risks.

This wasn’t my point. My point was that your prefer­ences make huge value dis­tinc­tions be­tween uni­verses that are al­most iden­ti­cal (and in fact ar­bi­trar­ily close to iden­ti­cal). Even though your value func­tion is tech­ni­cally a func­tion of the phys­i­cal state of the uni­verse, it’s like it may as well not be, be­cause ar­bi­trary amounts of knowl­edge about the phys­i­cal state of the uni­verse still can’t dis­t­in­guish be­tween types of uni­verses which you value very differ­ent amounts. This in­tu­itively seems ir­ra­tional and crazy to me in and of it­self, but YMMV.

If I wake up after, in the case that I laid out, that would mean that I won, as I specified I would be killed while asleep. I could even specify that the entire lotto picking, noise generation, and checking is done while I sleep, so I don’t have to worry about it.

I find it highly implausible that this should make a difference for your decision algorithm. Imagine that you could extend your life in all branches by a few seconds in which you are totally blissful. I imagine that this would be a pleasant change, and therefore preferable. You can then contemplate what will happen next in your pleasant state, and if my arguments go through, this would mean that your original decision was bad. So, we have a situation where you used to prefer taking the bet to not taking the bet, but when we made the bet sweeter, you now prefer not taking the bet. This seems irrational.

That said, I don’t think the ques­tion of my sub­jec­tive ex­pec­ta­tion of no longer ex­ist­ing is well-defined, be­cause I don’t have a sub­jec­tive ex­pe­rience if I no longer ex­ist.

I think it is actually well-defined? Right now, even if I were told that no multiverse exists, I would be pretty sure that I would continue living, even though I wouldn’t be having experiences if I were dead. I think the problem here is that you are confusing my invocation of subjective probabilities (while you’re pondering what will happen next in your branch) about what will objectively happen next with a statement about subjective experiences later.

I think we do have some more fun­da­men­tal dis­agree­ment about an­throp­ics, which I don’t want to ar­gue over un­til I hash out my view­point more.

I would be in­ter­ested in read­ing your view­points about an­throp­ics, should you pub­lish them. That be­ing said, given that you don’t take the suicide bet in the sin­gle-world case, I think that we prob­a­bly don’t.

• Looking for some people to refute this harebrained idea I recently came up with.

The time period from the advent of the industrial revolution to the so-called digital revolution was about 150-200 years. Even though computers were being used around WWII, widespread computer use didn’t start to shake things up until 1990 or so. I would imagine that AI would constitute a similar fundamental shift in how we live our lives. So would it be a reasonable extrapolation to think that widespread AI would arrive about 150-200 years after the beginning of the information age?

• By what prin­ci­ple would such an ex­trap­o­la­tion be rea­son­able?

• If you are do­ing refer­ence class fore­cast­ing, you need at least a few mem­bers in your refer­ence class and a few out­side of it, to­gether with the rea­sons why some are in and oth­ers out. If you are gen­er­al­iz­ing from one ex­am­ple, then, well...

• I’m a first-year AI student, and we are currently in the middle of exploring AI ‘history’. Of course I don’t know a lot about AI yet, but the interesting part about learning the history of AI is that in some sense the climax of AI research is already behind us. People got very interested in AI after the Dartmouth conference ( http://en.wikipedia.org/wiki/Dartmouth_Conferences ) and were so optimistic that they thought they could make an artificially intelligent system in 20 years. And here we are, still struggling with the seemingly simplest things, such as computer vision.

The problem is they came across some hard problems which they can’t really ignore. One of them is the frame problem ( http://www-formal.stanford.edu/leora/fp.pdf ); another is the common-sense problem.

Solutions to many of them (I believe) are either 1) huge brute-force power or 2) machine learning. And machine learning is a thing we can’t seem to get very far with. Programming a computer to program itself: I can understand why that must be quite difficult to accomplish. So since the 80s AI researchers have mainly focused on building expert systems: systems which can do a certain task much better than humans. But they lack many things that are very easy for humans (which is apparently called Moravec’s paradox).

Anyway, the point I’m trying to get across, and I’m interested in hearing whether you agree or not, is that AI was/is very overrated. I doubt we can ever make a truly intelligent artificial agent unless we can solve the machine learning problem for real. And I doubt whether that is ever truly possible.

• And ma­chine learn­ing is a thing which we can’t seem to get very far with.

Stan­dard vanilla su­per­vised ma­chine learn­ing (e.g. back­prop neu­ral net­works and SVMs) is not go­ing any­where fast, but deep learn­ing is re­ally a new thing un­der the sun.

• but deep learn­ing is re­ally a new thing un­der the sun.

On the contrary, the idea of making deeper nets is nearly as old as ordinary two-layer neural nets; successful implementations date back to the late ’90s in the form of convolutional neural nets, and they had another burst of popularity in 2006.

Ad­vances in hard­ware, data availa­bil­ity, heuris­tics about ar­chi­tec­ture and train­ing, and large-scale cor­po­rate at­ten­tion have al­lowed the cur­rent burst of rapid progress.

This is both heart­en­ing, be­cause the foun­da­tions of its suc­cess are deep, and tem­per­ing, be­cause the limi­ta­tions that have held it back be­fore could re­sur­face to some de­gree.

• And I doubt whether that is ever truly pos­si­ble.

It’s pos­si­ble. We’re an ex­am­ple of that. The ques­tion is if it’s hu­manly pos­si­ble.

There’s a common idea of an AI being able to make another AI twice as smart as itself, which could make another twice as smart as itself, etc., causing an exponential increase in intelligence. But it seems just as likely that an AI could only make one half as smart as itself, in which case we’ll never even be able to get the first human-level AI.

• The ex­am­ple you give to prove plau­si­bil­ity is also a coun­terex­am­ple to the ar­gu­ment you make im­me­di­ately af­ter­wards. We know that less-in­tel­li­gent or even non-in­tel­li­gent things can pro­duce greater in­tel­li­gence be­cause hu­mans evolved, and evolu­tion is not in­tel­li­gent.

It’s more a mat­ter of whether we have enough time to drudge some­thing rea­son­able out of the prob­lem space. If we were smarter we could search it faster.

• Evolu­tion is an op­ti­miza­tion pro­cess. It might not be “in­tel­li­gent” de­pend­ing on your defi­ni­tion, but it’s good enough for this. Of course, that just means that a rather pow­er­ful op­ti­miza­tion pro­cess oc­curred just by chance. The real prob­lem is, as you said, it’s ex­tremely slow. We could prob­a­bly search it faster, but that doesn’t mean that we can search it fast.

• Did or­ga­nized Ob­jec­tivist ac­tivism, at least in some of its nut­tier phases, offer to turn its ad­her­ents who get it right into a kind of su­per­hu­man en­tity? I guess you could call such en­hanced peo­ple “Oper­at­ing Ob­jec­tivists,” analo­gous to the en­hanced state promised by an­other cult.

Interestingly enough, Rand seems to make a disclaimer about that in her novel Atlas Shrugged. The philosophy professor character Hugh Akston says of his star students, Ragnar Danneskjold, John Galt, and Francisco d’Anconia:

“Don’t be as­ton­ished, Miss Tag­gart,” said Dr. Ak­ston, smil­ing, “and don’t make the mis­take of think­ing that these three pupils of mine are some sort of su­per­hu­man crea­tures. They’re some­thing much greater and more as­tound­ing than that: they’re nor­mal men—a thing the world has never seen—and their feat is that they man­aged to sur­vive as such. It does take an ex­cep­tional mind and a still more ex­cep­tional in­tegrity to re­main un­touched by the brain-de­stroy­ing in­fluences of the world’s doc­trines, the ac­cu­mu­lated evil of cen­turies—to re­main hu­man, since the hu­man is the ra­tio­nal.”

But then look at what Rand shows these allegedly “nor­mal men” can do as Oper­at­ing Ob­jec­tivists:

Hank Rear­den, a kind of self-trained Oper­at­ing Ob­jec­tivist who never stud­ied un­der Ak­ston, can de­sign a new kind of railroad bridge in his mind which ex­ploits the char­ac­ter­is­tics of his new al­loy, even though he has never built a bridge be­fore.

Francisco d’Anconia can deceive the whole world as he depletes his inherited fortune while making everyone believe that he spends his days as a playboy pickup artist, when in fact he has lived without sex since his youthful sexual relationship with Dagny.

John Galt can build a motor which violates the conservation of energy and the laws of thermodynamics. Oh, and he can also confidently master Dagny’s unexpected intrusion into Galt’s Gulch despite his secret crush on her, his implied adult virginity, and his lack of an adult man’s skill set for handling women. (You need life experience for that, not education in philosophy.) On top of that, he can survive torture without suffering from post-traumatic stress symptoms.

So de­spite Rand’s dis­claimer, if you view At­las Shrugged as “ad­ver­tis­ing” for the abil­ities Rand’s philos­o­phy promises as it un­locks your po­ten­tials as a “nor­mal man,” then the Ob­jec­tivist or­ga­ni­za­tions which work with this idea im­plic­itly do seem to offer to turn you into a “su­per­hu­man crea­ture.”

• On top of that, he can sur­vive tor­ture with­out suffer­ing from post-trau­matic stress symp­toms.

PTSS al­most seems like a cul­ture-bound syn­drome of the mod­ern West. In par­tic­u­lar there don’t seem to be any refer­ences to it be­fore WWI and even there (and in sub­se­quent wars) all the refer­ences seem to be from the west­ern al­lies. Fur­ther­more, the re­ac­tion to “shell shock”, as it was then called, dur­ing WWI sug­gests that this was some­thing new that the es­tab­lished struc­tures didn’t know how to deal with.

• Not ev­ery­one who’s had trau­matic ex­pe­riences has PTSD.

The sci­en­tists have a the­ory, and it has to do with the root causes of PTSD, pre­vi­ously un­doc­u­mented. As com­pared with the re­silient Dan­ish sol­diers, all those who de­vel­oped PTSD were much more likely to have suffered emo­tional prob­lems and trau­matic events prior to de­ploy­ment. In fact, the on­set of PTSD was not pre­dicted by trau­matic war ex­pe­riences but rather by child­hood ex­pe­riences of vi­o­lence, es­pe­cially pun­ish­ment se­vere enough to cause bruises, cuts, burns and bro­ken bones. PTSD suffer­ers were also more likely to have wit­nessed fam­ily vi­o­lence and to have ex­pe­rienced phys­i­cal at­tacks, stalk­ing or death threats by a spouse. They also more of­ten had past ex­pe­riences that they could not, or would not, talk about.

• PTSS al­most seems like a cul­ture-bound syn­drome of the mod­ern West.

There are sig­nifi­cant con­founders here, as mod­ern sci­ence-based psy­chol­ogy got started around the same time—and WWI re­ally was very differ­ent from ear­lier con­flicts, not least in its sheer scale. But the idea is nonethe­less in­trigu­ing; the West re­ally is quite differ­ent from tra­di­tional so­cieties, along lines that could plau­si­bly make folks more vuln­er­a­ble to trau­matic shock.

• Seems to me that Rand’s model is similar to LessWrong’s “ra­tio­nal­ity as non-self-de­struc­tion”.

Ob­jec­tivism in the nov­els doesn’t give the heroes any pos­i­tive pow­ers. It merely helps them avoid some harm­ful be­liefs and be­hav­iors, which are ex­tremely com­mon. Not bur­dened by these nega­tive be­liefs and be­hav­iors, these “nor­mal men” can fully fo­cus on what they are good at, and if they have high in­tel­li­gence and make the right choices, they can achieve im­pres­sive re­sults.

(The harm­ful be­liefs and be­hav­iors in­clude: feel­ing guilty for be­ing good at some­thing, fo­cus­ing on ex­ploit­ing other peo­ple in­stead of de­vel­op­ing one’s own skills.)

Hank Rearden’s design of a new railroad bridge was completely unrelated to his political beliefs. It was a consequence of his natural talent and hard work, perhaps some luck. The political beliefs only influenced his decision of what to do with the invented technology. I don’t remember exactly what his options were, but I think one of them was “archive the technology, to prevent changes in the industry, to preserve the existing social order”, and as a consequence of his beliefs he refused to consider this option. And even this was before he became a full Objectivist. (The only perfect Objectivist in the novel is Galt; and perhaps the people who later accept Galt’s views.)

Fran­cisco d’An­co­nia’s for­tune, as you wrote, was in­her­ited. That’s a ran­dom fac­tor, un­re­lated to Ob­jec­tivism.

John Galt’s “mag­i­cal” mo­tor was also a re­sult of his nat­u­ral tal­ent and hard work, plus some luck. The poli­ti­cal be­liefs only in­fluenced his de­ci­sion to hide the mo­tor from pub­lic, us­ing a pri­vate in­vestor and a se­cret place.

Vio­lat­ing the law of ther­mo­dy­nam­ics, and sur­viv­ing the tor­ture with­out dam­age… that’s fairy-tale stuff. But I think none of them is an in-uni­verse con­se­quence of Ob­jec­tivism.

So, what ex­actly does Ob­jec­tivism (or Hank Rear­den’s be­liefs, which are par­tial Ob­jec­tivism plus some com­part­men­tal­iza­tion) cause, in-uni­verse?

It makes the heroes focus on their technical skills, and the more enlightened heroes on keeping their technical inventions for themselves, as opposed to attempting a political career or serving the existing political powers. Instead of networking, Rearden focuses on studying metal. Instead of donating the magical machine to the government, Galt keeps it secret. Instead of having his fortune taken by the government, d’Anconia destroys it… probably for lack of a smarter alternative (or maybe he somehow secretly preserves a part of his fortune, and ostentatiously destroys the rest to draw away attention; I don’t remember the details here).

Without Ob­jec­tivism, the heroes would most likely be­come clue­less nerds serv­ing the elite, be­cause they couldn’t win at the poli­ti­cal fight (re­quires a com­pletely differ­ent set of skills that peo­ple like Mouch are ex­perts in), but they also wouldn’t un­der­stand that the sys­tem is in­ten­tion­ally de­signed against them, so they would spend their en­ergy in a fu­tile fight, win­ning a few bat­tles but los­ing the war.

Un­der­stand­ing the sys­tem al­lows one to fo­cus on find­ing an “out of the box” solu­tion. John Galt’s vic­tory is his abil­ity to use his nat­u­ral tal­ent and work to de­vise a solu­tion where he can live with­out poli­ti­cal mas­ters. He is eco­nom­i­cally in­de­pen­dent, thanks to his mag­i­cal mo­tor, but also men­tally in­de­pen­dent. (If we re­moved the magic, his vic­tory would be un­der­stand­ing the sys­tem, and the abil­ity to re­sist its emo­tional black­mail and op­ti­mize for him­self.)

The lack of this un­der­stand­ing made Rear­den vuln­er­a­ble to black­mail from his wife, and in a way cost Ed­die Willers his life. (And James Tag­gart his san­ity, if I re­mem­ber cor­rectly.)

tl;dr: (Ac­cord­ing to Rand) Ob­jec­tivism makes you able to un­der­stand how the sys­tem works, so you can more re­al­is­ti­cally op­ti­mize for your val­ues. Ob­jec­tivism doesn’t give you tal­ent, skills, or luck; but it gives you a chance to use them more effi­ciently, in­stead of wast­ing them in a fight you can­not win.

EDIT: In real life, I expect that an Objectivist training could make people more aware of their goals and make them negotiate harder. Maybe improve their work ethic.

• Did or­ga­nized Ob­jec­tivist ac­tivism, at least in some of its nut­tier phases, offer to turn its ad­her­ents who get it right into a kind of su­per­hu­man en­tity? I guess you could call such en­hanced peo­ple “Oper­at­ing Ob­jec­tivists,” analo­gous to the en­hanced state promised by an­other cult.

Not that I’m aware of, but you might also be in­ter­ested in A. E. Van Vogt’s “Null-A” nov­els, which at­tempted to do this for a fic­tion­al­ized ver­sion of Korzyb­ski’s Gen­eral Se­man­tics.

(Van Vogt later did be­come in­volved in Scien­tol­ogy, as did his (and Hub­bard’s) ed­i­tor John W. Camp­bell.)

• For what it’s worth, Rand was an unusually capable person in her specialty (she wrote two popular and somewhat politically influential novels in her second language), but still not in the same class as her heroes.

I’m not sure you’ve got the bit about Rearden right. I don’t think there’s any evidence that he came up with the final design for the bridge. There’s a mention that he worked with a team to discover Rearden Metal, and presumably he also had an engineering team. The point was that he (presumably) knew enough engineering to come up with something plausible, and that he was fascinated enough by producing great things to be distracted from something major going wrong that I don’t remember.

I have no idea whether Rand knew Galt’s en­g­ine was phys­i­cally im­pos­si­ble, though I think she should have, con­sid­er­ing that other parts of the book were well-re­searched. Dagny’s situ­a­tion at Tag­gart Transcon­ti­nen­tal was prob­a­bly typ­i­cal for an Oper­a­tions vice-pres­i­dent in a fam­ily owned busi­ness. The de­scrip­tion of her do­ing ce­mentless ma­sonry matched with a book on the sub­ject. At­las Shrugged was the only place I saw the pos­si­bil­ity of shale oil men­tioned un­til, decades later, it turned out to be a pos­si­ble tech­nol­ogy.

• The research fail that jumped out at me hardest in Atlas Shrugged was the idea that so many people would consider a metal both stronger and lighter than steel physically impossible. By the time the book was published, not only was titanium fairly well understood, it was also being widely used for military and (some; what could be spared from Cold War efforts) commercial purposes. Its properties don’t exactly match Rearden Metal (even ignoring the color and other mostly-unimportant characteristics), but they’re close enough that it should be obvious that such materials are completely possible. Of course, that part of the book also talks about making steel rails last longer by making them denser, which seems completely bizarre to me; there are ways to increase the hardness of steel, but they involve things like heat-treating it.

TL;DR: I’m not sure I’d call the book “well-re­searched” as a whole, though some parts may well have been.

• The book ex­ists in a de­liber­ately time­less set­ting—it has el­e­ments of ev­ery­thing from about a cen­tury of span. Railroads weren’t ex­actly build­ing mas­sive new lines in 1957, ei­ther.

• The three people Akston was talking about didn’t include Rearden. They were d’Anconia, Galt, and Danneskjold (the mostly off-stage pirate). I feel as though I’ve lost not just geek points but Objectivist points, both for forgetting something from the book and because I went along with everyone else who got it wrong.

The re­mark­able thing about Galt and tor­ture isn’t that he didn’t get PTSD, it’s that he com­pletely kept his head, and over-awed his tor­tur­ers. He broke James Tag­gart’s mind, not that Tag­gart’s mind was in such great shape to be­gin with.

• A number of these matters seem more like narrative or genre conveniences: Francisco acts as a playboy in the same way Bruce Wayne does; Rearden’s bridge development passes a lot of work to his specialist engineers (similarly to Rearden Metal having a team of scientists skeptically helping him) while pretending that the man is still a one-man designer (among other handwaves). At the same time, Batman is not described as a superhuman engineer or playboy, nor would he act as those types of heroes do. I’m also not sure we can know the long-term negative repercussions John Galt experiences given the length of the book; not all people who experience torture display clinically relevant post-traumatic stress symptoms, and many who do show them only sporadically. His engine is based on now-debunked theories of physics that weren’t so obviously thermodynamics-violating at the time, similarly to Project Xylophone.

These men are intended to be of top-of-field capability from the perspective of a post-Soviet writer who knew little about their fields and could easily research less. Many of the people who show up under Galt’s tutelage are similarly exceptionally skilled, but even more are not so hugely capable.

On the other hand, the abil­ity of her pro­tag­o­nists to per­suade oth­ers and eval­u­ate the risk of get­ting shot starts at su­per­hu­man and quickly be­comes ridicu­lous.

On the grip­ping hand, I’m a lit­tle cau­tious about em­pha­siz­ing fic­tional char­ac­ters and ac­knowl­edgedly Heroic abil­ities as ev­i­dence, es­pe­cially when the au­thor wrote a num­ber of non-fic­tion philos­o­phy texts re­lated to this topic.

• Not to my knowledge, but they should have! PM me.

• Not quite in the spirit of ad­mit­ting ig­no­rance, but since it’s in this thread, I’ll an­swer it.

Did or­ga­nized Ob­jec­tivist ac­tivism, at least in some of its nut­tier phases, offer to turn its ad­her­ents who get it right into a kind of su­per­hu­man en­tity? …
an­other cult

No.

So de­spite Rand’s dis­claimer, if you view....

So de­spite what Rand or any Ob­jec­tivist ever said or did, if you choose to view Ob­jec­tivism as a nutty cult, you can.

If you were ac­tu­ally in­ter­ested in why Rand’s char­ac­ters are the way they are, you could read her book on art, “The Ro­man­tic Man­i­festo”. Prob­a­bly a quick google search on the book would give you your an­swer.

• These aren’t so much “stupid” questions as ones which have no clear answer, and I’m curious what people here have to say about this.

-Why should (or shouldn’t) one as­pire to be “good” in the sense of proso­cial, al­tru­is­tic etc.?

-Why should (or shouldn’t) one at­tempt to be as hon­est as pos­si­ble in their day to day lives?

I have strong altruistic inclinations because that’s how I’m predisposed to be, and often because it coincides with my values; other people’s suffering upsets me and I would prefer to live in a world in which people are kind and supportive of each other. I want to be nice, but I don’t want to want to be nice; I can’t find strong rational reasons to be altruistic.

I’m honest with people I voluntarily interact with, but ambivalent about lying in general. For example, I’m currently on a sort of intermittent fasting regimen, and if someone I’m not particularly familiar with offers food, I tend to say “I’ve already eaten” rather than giving my real reason for abstaining. I’ve seen it argued that lying to others will make you more likely to lie to yourself, but I’m unconvinced this is the case.

• I have a vague no­tion from read­ing sci­ence fic­tion sto­ries that black holes may be ex­tremely use­ful for highly ad­vanced (as in, post-sin­gu­lar­ity/​space-far­ing) civ­i­liza­tions. For ex­am­ple, IIRC, in John C. Wright’s Golden Age se­ries, a colony formed near a black hole be­came fan­tas­ti­cally wealthy.

I did some googling, but all I found was that they would be great at cool­ing com­puter sys­tems in space. That seems use­ful, but I was ex­pect­ing some­thing more dra­matic. Am I miss­ing some­thing?

• I did some googling, but all I found was that they would be great at cool­ing com­puter sys­tems in space.

When you’re sufficiently advanced, cooling your systems (technically, disposing of entropy) is one of the main limiting constraints on your system. Also, if you throw matter into a black hole just right, you can get its equivalent (or half its equivalent, I forget which) out in energy.

Edit: think­ing about it, it is half the mass.

• Also if you throw mat­ter into a black hole just right you can get its equiv­a­lent (or half its equiv­a­lent I for­got which) out in en­ergy.

Not in use­ful en­ergy, if you’re think­ing of us­ing Hawk­ing ra­di­a­tion; it comes out in very high-en­tropy form. I was so sad when I re­al­ized that the “Hawk­ing re­ac­tor” I’d in­vented in fifth grade would vi­o­late the Se­cond Law of Ther­mo­dy­nam­ics.

• I wasn’t talking about Hawking radiation. If I throw matter in a black hole just right, I can get half the mass to come out in low-entropy photons. That’s why the brightest objects in the universe are black holes that are currently eating something.
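For scale, a back-of-envelope sketch of the arithmetic. The efficiency figure is an assumption on my part: roughly 6% of rest mass for accretion onto a non-rotating hole, up to about 40% for a maximally spinning one, which is the “half the mass” ballpark.

```python
# Back-of-envelope accretion energy: E = eta * m * c^2, where eta is the
# fraction of rest mass radiated away before the matter crosses the horizon.
C = 299_792_458.0  # speed of light in m/s

def accretion_energy(mass_kg: float, eta: float) -> float:
    """Joules released when mass_kg is accreted with efficiency eta."""
    return eta * mass_kg * C ** 2

# One kilogram at 40% efficiency releases about 3.6e16 J, several
# megatons of TNT equivalent (fusion only converts about 0.7% of mass).
print(accretion_energy(1.0, 0.4))
```

Even at the conservative non-rotating figure, accretion beats fusion by nearly an order of magnitude per unit mass, which is why it powers the quasars mentioned below.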

• Ah, cool! For­got about how quasars are hy­poth­e­sized to work.

• It is usable if you use small black holes. You don’t need to be able to use all of the energy for lots of purposes, since a tiny bit of mass leads to so much energy.

• They make awe­some garbage dis­posal units :-)

• [Meta]

In the last ‘stupid questions’ thread, I suggested that I write a post called “Non-Snappy Answers to Stupid Questions”, which would be a summary post listing the most popular stupid questions asked, or stupid questions with popular answers. That is, I’m taking how many upvotes each question-and-answer pair got as an indicator of how many people care about them, or at least how many thought the answer to a question was a good one. I’m doing this so there will be a single spot where interesting answers can be found, rather than members of LessWrong having to dig through hundreds of comments on multiple threads to discover useful answers to simple questions.

I’ll pub­lish this post at the end of De­cem­ber, or be­gin­ning of Jan­uary, when this thread is com­plete. It could be up­dated in the fu­ture, but, by that point, it will in­clude ques­tions asked from ten sep­a­rate threads over the course of more than a year, which is a lot. It will in­clude this thread, which will be the most re­cent.

My ques­tion is: how should I or­ga­nize it? Should I sort ques­tions by topic? By how pop­u­lar the ques­tion was? By how pop­u­lar the an­swer was? By some other means? Leave your feed­back be­low.

• 15 Dec 2014 15:56 UTC
0 points

Back in 2010, Will New­some posted this as a joke:

Sure, ev­ery­thing you [said] made sense within your frame of refer­ence, but there are no priv­ileged frames of refer­ence. In­deed, prov­ing that there are priv­ileged frames of refer­ence re­quires a priv­ileged frame of refer­ence and is thus an im­pos­si­ble philo­soph­i­cal act. I can’t prove any­thing I just said, which proves my point, de­pend­ing on whether you think it did or not.

But isn’t it ac­tu­ally true?

• What would I do differ­ently if I be­lieved it was true, or wasn’t?
What ex­pec­ta­tions about fu­ture events would I have in one case, that I wouldn’t have in the other?
What be­liefs about past events would I have in one case, that I wouldn’t have in the other?

• I un­der­stand that this has no de­ci­sion-mak­ing value. I’m only in­ter­ested in the philo­soph­i­cal mean­ing of this point.

• Hm.
Can you say more about what you’re try­ing to con­vey by “philo­soph­i­cal mean­ing”?

For ex­am­ple, what is the philo­soph­i­cal mean­ing of your ques­tion?

• That if we are to be com­pletely in­tel­lec­tu­ally hon­est and rigor­ous, we must ac­cept com­plete skep­ti­cism.

• Hm.
OK. Thanks for re­ply­ing, tap­ping out here.

• Maybe we could honestly accept that impossible demands of rigor are indeed impossible. And focus on what is possible.

You can’t con­vince a rock to agree with you on some­thing. There is still some chance with hu­mans.

• The Tor­toise’s mind needs the dy­namic of adding Y to the be­lief pool when X and (X→Y) are pre­vi­ously in the be­lief pool. If this dy­namic is not pre­sent—a rock, for ex­am­ple, lacks it—then you can go on adding in X and (X→Y) and (X⋀(X→Y))→Y un­til the end of eter­nity, with­out ever get­ting to Y.

This ap­pears to be a cir­cu­lar ar­gu­ment.

Maybe we could honestly accept that impossible demands of rigor are indeed impossible. And focus on what is possible.

This is why I wrote this:

I un­der­stand that this has no de­ci­sion-mak­ing value.
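The belief-pool dynamic quoted above is easy to state as code, which may make the point concrete: the inference step is a procedure the system runs, not just another premise sitting in the pool. (A toy sketch; the names are mine.)

```python
# Toy forward chaining: beliefs are strings, implications are
# (premise, conclusion) pairs.  The while-loop itself is the dynamic the
# Tortoise lacks; adding more premises never substitutes for running it.
def forward_chain(beliefs: set, implications: set) -> set:
    pool = set(beliefs)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise in pool and conclusion not in pool:
                pool.add(conclusion)  # the step a rock never takes
                changed = True
    return pool

print(forward_chain({"X"}, {("X", "Y"), ("Y", "Z")}))  # the set {X, Y, Z}
```

Delete the loop and no quantity of extra implications in the pool will ever produce Y, which is exactly Carroll’s regress.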

• It means you should learn to like learn­ing other lan­guages/​ways of think­ing.

• If the Bay Area has such a high con­cen­tra­tion of ra­tio­nal­ists, shouldn’t it have more-ra­tio­nal-than-av­er­age hous­ing, trans­porta­tion and leg­is­la­tion?

Sadly, I know the stupid answers to this stupid question. I just want to vent a bit.

• The Bay Area has a high con­cen­tra­tion of ra­tio­nal­ists com­pared to most places, but I don’t think it’s very high com­pared to the lo­cal pop­u­la­tion. How many ra­tio­nal­ists are we talk­ing about?

• Are ra­tio­nal­ists more or less likely than non-ra­tio­nal­ists to par­ti­ci­pate in lo­cal gov­ern­ment?

• It is mostly rational in the sense of generating advantage for people with political pull and power.