# Open thread, Oct. 12 - Oct. 18, 2015

If it’s worth say­ing, but not worth its own post (even in Dis­cus­sion), then it goes here.

Notes for fu­ture OT posters:

1. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

2. Open Threads should be posted in Discussion, and not Main.

3. Open Threads should start on Monday, and end on Sunday.

• Hi! I’d like the min­i­mum amount of karma needed to make a post about the Bay Area Sols­tice. [Edit: Now that I have it, dis­re­gard that mes­sage. Thank you, and that’s why this post has this amount of karma.]

Get tick­ets on Eventbrite

• If you want more karma you can post it in Discussion and then use that karma to promote it to Main. Also you could add comments giving a summary here.

• There might be an alien civ­i­liza­tion build­ing stuff in its so­lar sys­tem.

If this turns out to be aliens rather than a low-prob­a­bil­ity as­tro­nom­i­cal event, does it im­ply that get­ting out into space is a lot harder than it sounds?

• I’ve read the original paper. http://arxiv.org/pdf/1509.03622v1.pdf

There is no in­frared ex­cess—that is the weirdest part of the whole thing. It means that there isn’t a large sys­tem-span­ning amount of ma­te­rial heated by the star and ra­di­at­ing in the in­frared, that we are just see­ing a small frac­tion of as it hap­pens to pass in front of the star from our an­gle. In­stead, there must be only a small amount of ma­te­rial that we are see­ing a rea­son­able frac­tion of each time it oc­cults the star. An in­frared ex­cess does not de­pend on the type of ma­te­rial, merely its sur­face area.

This and the irregular, deep nature of the occultations are very strange. Large, deep occultations mean the matter has to be diffuse rather than something like a planet; irregularity means there are probably multiple clumps; but the lack of infrared excess means we have to be seeing a pretty good fraction of it. The brightness of the star also wiggles a little bit on timescales of ~20 days for part of the dataset, in a manner they don’t know how to interpret.

1 - Dust clumps gen­er­ated from a gi­ant im­pact be­tween two planets, spread around the or­bital range of that planet. Should be some in­frared ex­cess in that case though, and the odds of hap­pen­ing to see that in a sys­tem that isn’t ac­tively form­ing are ridicu­lously tiny.

2 - Ex­o­comet storm in which large, icy dusty ob­jects rain down prac­ti­cally on top of the star and poof into dust, then zoom back out into the outer sys­tem, pos­si­bly with one large an­ces­tral ob­ject break­ing up into mul­ti­ple ones that share an or­bit and pass next to the star at ir­reg­u­lar in­ter­vals like our own so­lar sys­tem’s Kreutz sun­graz­ers. In this case large amounts of dust would be ir­reg­u­larly gen­er­ated in close prox­im­ity to the star where we are much more likely to see them pass in front. When they went back and looked at the star with other in­stru­ments, they found a pass­ing red dwarf star only about 1,000 AUs out which could definitely dis­turb the far outer sys­tem and an Oort cloud equiv­a­lent.

3 - Something new: some kind of semi-stable clumpy low-mass dust belt, or a new form of chaotic variable star. Or the astrometry that ruled out certain classes of explanation could be wrong.

• Any­one want to take bets on whether or not this will turn out in ten years to be nat­u­ral?

All this said, though, it does seem kind of nat­u­ral for a civ­i­liza­tion to put most of its effort into sur­viv­ing in its own so­lar sys­tem—where en­ergy is plen­tiful and com­mu­ni­ca­tion is rapid—rather than spread­ing out­ward into ten­u­ous space where the chances of sur­vival are very low. It’s not ob­vi­ous to me why a civ­i­liza­tion should choose to colonize other so­lar sys­tems. That said, if a civ­i­liza­tion chose to do that and was suc­cess­ful in do­ing that, it would quickly be­come very pop­u­lous, but it re­quires an ini­tial im­pe­tus.

• But how often does that have to happen? They only looked at about 150,000 stars. There are hundreds of billions in our galaxy alone, and if an alien civilization developed even 1% earlier than ours, they’d have had time to colonize the entire Virgo supercluster, so long as they started near the center.

• I’d say that at this point we are largely ig­no­rant of the odds of in­tel­li­gent life ex­ist­ing in a so­lar sys­tem. While at least some ba­sic forms of life ought to be plen­tiful in the galaxy, the con­di­tions for evolu­tion from sim­ple life to in­tel­li­gent life (that is, civ­i­liza­tion-build­ing life) just aren’t un­der­stood to the level that would be re­quired for ANY prob­a­bil­ity es­ti­mate to be given. Note that I’m not say­ing in­tel­li­gent life is rare; I’m just say­ing that both scarcity and abun­dance of in­tel­li­gent life are con­sis­tent with our cur­rent state of knowl­edge.

• But that’s just the prior prob­a­bil­ity. I can still say that we have strong ev­i­dence that the prob­a­bil­ity of a given so­lar sys­tem hav­ing in­tel­li­gent life is much, much lower than one in 150,000.
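One back-of-the-envelope way to make the “much lower than one in 150,000” claim quantitative is the statistical rule of three: with zero detections in n independent trials, 3/n is an approximate 95% upper confidence bound on the per-trial probability. A minimal sketch; the 150,000-star figure is from the comment above, and treating surveyed stars as independent trials with a guaranteed-detectable signature is a strong simplifying assumption:

```python
# Rule of three: if an event occurred 0 times in n independent trials,
# an approximate 95% upper confidence bound on its per-trial probability
# is 3/n (from solving (1 - p)^n = 0.05 for small p).
n = 150_000  # stars surveyed, per the thread
p_upper = 3 / n
print(f"95% upper bound on per-star probability: {p_upper:.1e}")  # → 2.0e-05
```

This bounds only civilizations that visibly modify their home system, which is the caveat raised two comments down.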

• Or at least in­tel­li­gent life that mod­ifies its home sys­tem in a way that is visi­ble from thou­sands of light years away.

• I ad­mit that a Dyson sphere seems like an ar­bi­trary place to stop, but I think my ba­sic ar­gu­ment stands ei­ther way. If any in­tel­li­gent life was that com­mon, some of it would spread.

• does it im­ply that get­ting out into space is a lot harder than it sounds?

It forces a statement along the lines of “these aliens got space travel recently, or getting out into space is a lot harder than it sounds.” That’s weak evidence, at least, for that claim.

• But if those are aliens, then aliens must be com­mon. And if aliens are com­mon, then there should have been tons of them that got to the space travel point long enough ago to have reached us by now.

• But if those are aliens, then aliens must be com­mon.

Given that the uni­verse started a finite amount of time ago, and sup­pos­ing there is easy space travel, then there is an in­ter­val dur­ing which the first colon­ists have in­trastel­lar space travel but have not visi­bly done in­ter­stel­lar space travel, and we can es­ti­mate how long that in­ter­val is. They’re in that in­ter­val, or there isn’t easy space travel.

We cannot argue “because there is one, there must have been a previous one”; you can’t do that sort of induction on the natural numbers, since eventually you hit the first one. We can argue it’s unlikely, sure, and we weigh that unlikelihood against the unlikelihood that interstellar travel is hard in order to determine what our posterior ends up being.

• They’re in that in­ter­val, or there isn’t easy space travel.

But that’s a lot of information. It’s a very short interval. Since it’s so unlikely for them to be in that interval, this is strong evidence against easy space travel.

We can ar­gue it’s un­likely, sure

It’s a prob­a­bil­is­tic ar­gu­ment. But what isn’t? There’s no ar­gu­ment that al­lows in­finite cer­tainty. At least, I’m pretty sure there isn’t.

• But that’s a lot of information. It’s a very short interval. Since it’s so unlikely for them to be in that interval, this is strong evidence against easy space travel.

I agree that it’s a lot of in­for­ma­tion. But it’s also the case that we have a lot of in­for­ma­tion about physics, such that in­ter­stel­lar space travel be­ing difficult is also un­likely. Which un­like­li­hood is larger? That’s the ques­tion we need to ask and an­swer, not “the left side of the bal­ance is very heavy.”

• And that’s why my con­clu­sion is “that wasn’t made by aliens.”

• The gen­eral lack of space-go­ing aliens sug­gests that get­ting into space is harder than it sounds.

• The gen­eral lack of space-go­ing aliens sug­gests that get­ting into space is harder than it sounds.

Sure, but we already knew there was a gen­eral lack of space-go­ing aliens. Pre­sum­ing this is aliens, this moves us from “are we the first? Really?” to “are we only shortly af­ter the first? Really?”

• The gen­eral lack of space-go­ing aliens sug­gests that get­ting into space is harder than it sounds.

That’s one ex­pla­na­tion, the other be­ing “in­tel­li­gent life is harder than it sounds” and an­other be­ing “any life is harder than it sounds”.

• Both of those fall un­der “are we the first? Really?”, or the re­lated hy­poth­e­sis that we’re shortly af­ter the first. Or did you mean to re­spond to Nan­cyLe­bovitz?

• Or there are fewer civilizations than we expect, or something is wiping out civilizations once they go to space, or most species for whatever reason decide not to go to space, or we are living in an ancestor simulation which only does a detailed simulation of our solar system. (I agree that all of these are essentially wanting; your interpretation makes the most sense. These examples are listed more for completeness than anything else.)

• How can they get a mess of ob­jects whirling around a star with­out get­ting into space?

• I prob­a­bly should have used more ex­act lan­guage. The Fermi Para­dox isn’t mostly about species put­ter­ing around in their home so­lar sys­tem—it’s about filling a galaxy.

• Drats, foiled again!

KUALA LUMPUR: The po­lice have de­clared the in­ter­na­tional “Love and Sex with Robots” con­fer­ence, sched­uled to be held in Iskan­dar Malaysia, as ille­gal.

In­spec­tor-Gen­eral of Po­lice Tan Sri Khalid Abu Bakar said the or­ganiser of the event has been warned not to pro­ceed with the event that was sup­posed to be held from Nov 16 to 19.

“It’s already an offence in Malaysia to have anal sex, what more in­ter­course with robots. Don’t try to be ridicu­lous,” he said at the press con­fer­ence at the Sime Darby Con­ven­tion Cen­tre on Tues­day.

He added that there was “noth­ing sci­en­tific about hav­ing sex with ma­chines.”

• Ac­cord­ing to Wikipe­dia, in Malaysia sale and im­por­ta­tion of sex toys is ille­gal, but it doesn’t sound like there’s any law against us­ing a vibra­tor you made your­self.

• noth­ing sci­en­tific about hav­ing sex with machines

Sex­ol­ogy is not a sci­ence?

Would it be more sci­en­tific to make an in­ter­dis­ci­pline be­tween sex­ol­ogy and, uhm, com­puter sci­ence? Oh wait...

• Any tips on eliciting good, honest personal feedback? I just got a rejection from a position I wanted and will have a call with the headhunter tomorrow. I’d like to extract some useful information out of it. Any tips on good question formulations?

E.g. in a survey, instead of asking “Do you use X?” I ask “In the past 3 months, how many times did you use X?” to get a less biased answer.

Any good ques­tions/​ideas?

The first answer here is pretty good, though it doesn’t quite apply to my situation: https://www.quora.com/Whats-the-best-way-to-ask-for-personal-feedback-from-friends-and-coworkers-on-your-strengths-and-weaknesses

Thank you!

• Thanks, tried that. Not sure it worked, as I didn’t learn anything concrete. We spent 30 mins in discussion though (which he didn’t need to do, as there was no further value he could extract from me).

Oh well, such is life...

• Thanks, tried that. Not sure it worked, as I didn’t learn anything concrete. We spent 30 mins in discussion though (which he didn’t need to do, as there was no further value he could extract from me).

If he’s a headhunter, then he might value the relationship with you, so he can call you up when he has another job.

• Maybe, but I’ve rarely got­ten more than one offer from a given head­hunter—ac­tu­ally, I’ve got­ten mul­ti­ple offers from one com­pany more of­ten than through one head­hunt­ing agency. Read­ing be­tween the lines, I get the im­pres­sion that most of them have a library of open­ings and look in real time for can­di­dates match­ing them, rarely go­ing into their back cat­a­log.

Mul­ti­ple offers might be more com­mon for peo­ple with less spe­cial­ized skil­lsets than mine, though.

• Read­ing be­tween the lines, I get the im­pres­sion that most of them have a library of open­ings and look in real time for can­di­dates match­ing them, rarely go­ing into their back cat­a­log.

This is true… but you should be get­ting back in touch with the head­hunter ev­ery three months or so, to make sure you’re in the front of the cat­a­log in­stead of the back :).

• I was just reread­ing Three Wor­lds Col­lide to­day and no­ticed that my feel­ings about the end­ing have changed over the last few years. It used to be ob­vi­ous to me that the “sta­tus quo” end­ing was bet­ter. Now I feel that the “su­per happy” end­ing is bet­ter, and it’s not just a mat­ter of feel­ings—it’s some­how ax­io­mat­i­cally bet­ter, based on what I know about de­ci­sion the­ory.

Namely, the story says that the su­per hap­pies are smarter and un­der­stand hu­man­ity’s util­ity func­tion bet­ter, and also that they are moral and wouldn’t offer a deal un­less it was benefi­cial ac­cord­ing to both util­ity func­tions be­ing merged (not just ac­cord­ing to their value of hap­piness). Un­der these con­di­tions, ac­cept­ing the deal seems like the right thing to do.

• Does the story actually say the Superhappies really know humanity’s utility function better? As in, does an omniscient narrator tell it, or is it a Superhappy or one of the crew that says this? That changes a lot, to me. Of course the Superhappies would believe they know our utility function better than we do. Just like how the humans assumed they knew what was better for the Babyeaters.

Similarly, the Superhappies are moral, for their idea of morality. They were perfectly willing to use force (not physical, but force nonetheless) to encourage humans to see their point of view. They threatened humanity and were willing to forcibly change human children, even if the adults could continue to feel pain. While humans also employ threats and force to change behavior, in most cases we would be hard-pressed to call that “moral.”

From a meta-perspective, I’d find it odd if Yudkowsky wrote it like that. He’s not careless enough to make that mistake, and as far as I know, he thinks humanity’s utility function goes beyond mere bliss.

The only way I think you could see the Superhappies’ solution as acceptable is if you don’t think jokes or fiction (or other sorts of art involving “deception”) are something humans would value as part of their utility function. Which I personally would find very hard to understand.

• The only way I think you could see the Superhappies’ solution as acceptable is if you don’t think jokes or fiction (or other sorts of art involving “deception”) are something humans would value as part of their utility function.

Um, that’s the op­po­site of how util­ity func­tions work. They don’t have sa­cred com­po­nents. You can and should trade off one com­po­nent for a larger gain in an­other com­po­nent. That’s ex­actly what the su­per hap­pies were offer­ing.

• What I’m say­ing is that hu­mans aren’t wrong in trad­ing off some amount of com­fort so they can have jokes, fic­tion, art and ro­man­tic love.

• Wait, why would this be true? Utility functions don’t have to be linear; it could even be the case that I place no additional utility on happiness beyond a certain level.

• True, but the ques­tion in the story is whether to­tal cost of suffer­ing > to­tal benefit from be­ing able to suffer. Th­ese are the com­po­nents be­ing traded. When put this way, the ques­tion an­swers it­self. The only rea­son to re­ply “no” is sta­tus quo bias (men­tion­ing sa­cred com­po­nents of util­ity is an ex­am­ple of that). The stan­dard fix for that is the re­ver­sal test: do you think the cur­rent amount of suffer­ing is co­in­ci­den­tally ex­actly op­ti­mal, or would you pre­fer to add some more? That test is ac­tu­ally men­tioned in the story, the hu­mans ap­ply it to babyeaters, but for­get to ap­ply it to them­selves.

• the ques­tion in the story is whether to­tal cost of suffer­ing > to­tal benefit from be­ing able to suffer

The an­swer to this ques­tion is “No.”

do you think the cur­rent amount of suffer­ing is co­in­ci­den­tally ex­actly op­ti­mal, or would you pre­fer to add some more?

Some peo­ple could use more. Many oth­ers could use less.

The ques­tion you should ask first is whether be­ing able to suffer is a good thing or a bad thing. You start with the as­sump­tion that it is bad, that suffer­ing is bad. You do not suffi­ciently in­ves­ti­gate what the al­ter­na­tive is; you do not suffi­ciently con­sider that ex­pe­rience is sub­jec­tive, and sub­jec­tivity re­quires refer­ence points. To elimi­nate, in per­pe­tu­ity, that half of the axis be­low the cur­rent refer­ence point, is to elimi­nate the axis en­tirely.

• The an­swer to this ques­tion is “No.”

Do you have a proof for this? As far as I know, we have no uni­ver­sally agreed upon way to com­pare differ­ent ways of calcu­lat­ing util­ity.

• There’s no way of calcu­lat­ing util­ity, pe­riod. The is­sue is more sub­stan­tively that suffer­ing is rel­a­tive, and that the elimi­na­tion of suffer­ing is also the elimi­na­tion of hap­piness.

• the elimi­na­tion of suffer­ing is also the elimi­na­tion of happiness

Please ex­plain in more de­tail. The Bud­dhist part of my brain just had a spit-take upon read­ing that.

• Hap­piness and suffer­ing are the same thing—the ex­pe­rience of a di­ver­gence from the norm of your well-be­ing, your ground state. They just differ in di­rec­tion.

A long time ago, I ex­pe­rienced both. For most of my life, I ex­pe­rienced nei­ther—you think pain is a nega­tive ex­pe­rience, I found it to be an -in­ter­est­ing- ex­pe­rience, a di­ver­sion from the end­less gray. To­day, I ex­pe­rience… a very limited de­gree of both, as a re­sult of grad­u­ally ac­cept­ing that suffer­ing is the cost paid to ex­pe­rience hap­piness.

Equa­nim­ity, as it tran­spires, isn’t some­thing you can ex­pe­rience only with re­gard to those things you don’t want to di­rectly ex­pe­rience.

• True, the differ­ence is the di­rec­tion, but surely that counts for some­thing? Pain and plea­sure are chem­i­cally and neu­rolog­i­cally differ­ent phe­nom­ena. A ground state of “end­less gray” is not some­thing you’d re­ally want.

suffer­ing is the cost paid to ex­pe­rience happiness

I’m guess­ing you may be a Ro­man Catholic. In case you’re not, how did you come to see suffer­ing as hav­ing ex­change value?

I hope my com­ments are not taken as offen­sive. I know I some­times tend to dra­ma­tize my de­gree of sur­prise. I gen­uinely wish to un­der­stand your po­si­tion.

• True, the differ­ence is the di­rec­tion, but surely that counts for some­thing? Pain and plea­sure are chem­i­cally and neu­rolog­i­cally differ­ent phe­nom­ena.

• I still don’t experience “pleasure”, at least in the sense where I can say, “Yes, that sensation is positive in a way other sensations are not”. At best I can say I experience variety. Pain is just starting to be a negative thing; it’s difficult to accept it as suffering when it was one of the few things that offered any variety at all to my experience for many years. Pain isn’t pleasure, they’re different flavors, but they’re both spices.

A ground state of “end­less gray” is not some­thing you’d re­ally want.

This is very true.

I’m guess­ing you may be a Ro­man Catholic. In case you’re not, how did you come to see suffer­ing as hav­ing ex­change value?

I was raised, and remain, an atheist. And exchange value isn’t quite the same thing; it’s more that they’re the same variable, but different values. I lived for more than a decade without either suffering or happiness, and only started to experience happiness when I began to allow myself to experience suffering.

I regard suffering and happiness as sums, rather than independent variables; they’re composite emotions, perhaps better modeled as waves, created by summing up one’s current total mindstate. Each is the inverse of the other; being waves, rather than simple linear values, it’s possible to both be suffering and be happy, if one area of one’s life is going well and one area is going poorly. But they’re both invariably tied to one’s norm; if one has had a consistently good life, their life continuing to be consistently good isn’t going to provide any happiness, even though the same section of life, transplanted into somebody with a consistently bad life, would provide ecstasy. Likewise, a consistently bad life doesn’t translate into suffering; it’s the particularly bad parts of that life that are experienced as suffering, everything else is experienced as the norm.

This is backed up by stud­ies of self-re­ported hap­piness, which tracks a norm, and only rarely [ETA: per­ma­nently] de­vi­ates from that norm. This norm, this base level of self-re­ported hap­piness (which I dis­t­in­guish from ex­pe­rienced hap­piness), is the norm from which hap­piness and suffer­ing are ex­pe­rienced as de­vi­a­tions.

• this base level of self-re­ported hap­piness … is the norm from which hap­piness and suffer­ing are ex­pe­rienced as de­vi­a­tions.

True, but only partially true. The stable base level, as you know, varies. There are people with a high-happiness stable level and people with a low-happiness stable level. These people look and behave very differently in real life. The high-base people look and behave happy at their neutral setting; I don’t see any reason to believe that it’s just outward manifestations which do not reflect the internal state. The low-base people are, in contrast, much less happy at their neutral setting.

So yes, on the one hand hap­piness/​suffer­ing is rel­a­tive to your base state; but on the other hand there is an ab­solute scale as well and high-base peo­ple are hap­pier than low-base peo­ple.

• It’s hard to say what goes on in other peo­ple’s heads, but my self-re­ported hap­piness would be an as­sess­ment of my well-be­ing rel­a­tive to what I re­gard as my cul­tural norm, whereas my ex­pe­rienced hap­piness is a differ­ent value en­tirely.

I base my be­lief that this is the norm for hu­mans on the fact that life satis­fac­tion de­creases are cor­re­lated with suicide rates ir­re­spec­tive of the ab­solute value of life satis­fac­tion (al­though cer­tain fac­tors can have an in­hibitive effect); that is, wealthy na­tions, which gen­er­ally have higher self-re­ported hap­piness lev­els, also have high suicide rates. Their high base level of hap­piness, if this were the same vari­able as ex­pe­rienced hap­piness, should oth­er­wise offset the suffer­ing they ex­pe­rience, which does not ap­pear to hap­pen.

Peo­ple’s so­cial be­hav­ior is more pred­i­cated on their per­ceived re­la­tion­ship to the lo­cal/​cur­rent so­cial group than the state of their in­ter­nal vari­ables. I don’t base this on any study, but rather per­sonal ob­ser­va­tion.

• my self-re­ported happiness

I’m not talk­ing about eval­u­at­ing one’s own in­ter­nal state. I’m talk­ing about out­ward signs.

I know both high-base and low-base peo­ple from, more or less, the same cul­tural cir­cles. It’s not that they would an­swer the ques­tion “How happy are you?” differ­ently—I don’t know, I haven’t asked. It’s just that the high-base peo­ple smile and laugh a lot, are prone to en­gag­ing in spon­ta­neous fun, are gen­er­ally com­fortable with life. And the low-base peo­ple tend to have a char­ac­ter­is­tic dis­ap­prov­ing ex­pres­sion on their faces (which will ac­tu­ally mold their face by mid­dle age), whine and grum­ble a lot, and find life gen­er­ally un­pleas­ant.

Note that here I’m talk­ing about, ba­si­cally, long-term av­er­ages. In the short term high-base peo­ple can and will get un­happy and de­pressed; low-base peo­ple can and will get ex­cited and joyful. But both will re­vert to the mean—I’m not talk­ing about bipo­lar peo­ple who will os­cillate be­tween highs and lows, they are a sep­a­rate cat­e­gory.

• more than a decade with­out ei­ther suffer­ing or happiness

What hap­pened to you dur­ing those years? Feel free to de­cline to an­swer if I’m be­ing too in­tru­sive.

• At the start, I de­cided that emo­tions were hold­ing me back, and that logic was the more ap­pro­pri­ate path, and so sat down one day and de­stroyed my emo­tions.

Over the next few years? I grad­u­ated high school, then col­lege, got a cou­ple of low-level jobs, then a real job, which I’ve held since. Dated a few peo­ple, role-played a nor­mal per­son in the course of my in­ter­ac­tions with them.

My emo­tions weren’t com­pletely gone, over this pe­riod of time, but rather… re­mote, hap­pen­ing to some­body else. If they got par­tic­u­larly in­tense, I could ob­serve my body’s re­ac­tion to them—hands clench­ing in anger, for ex­am­ple—but I didn’t ac­tu­ally ex­pe­rience them. The emo­tions were there, but the con­nec­tion to my con­scious mind was sev­ered.

At some point in there, I read At­las Shrugged, which con­vinced me that emo­tions were not, in fact, use­less dis­trac­tions from pure logic. I still wasn’t ex­pe­rienc­ing them, but the ab­sence was no longer de­sir­able; at that point, it was neu­tral. Every­thing was neu­tral, re­ally. That be­gan the gray phase of my life.

I hon­estly don’t re­mem­ber much from that pe­riod of time. Noth­ing had any kind of sig­nifi­cance. I worked, I dated, read books, played games. None of it par­tic­u­larly mat­tered; ex­is­tence was a habit with­out im­por­tance. It wasn’t un­pleas­ant, be­cause un­pleas­ant­ness would have been some­thing. I was told I was de­pressed. If I was, if I wasn’t—didn’t par­tic­u­larly mat­ter.

• Then I tried LSD. And… I had a day that wasn’t gray. I appreciated the green color of the leaves on trees, the texture of the bark. *Shrug.* So I decided I would prefer to live like that all the time, and started permitting myself to experience life again. Started taking Vitamin D, which kick-started the process.

Which be­gan a rather dark pe­riod, as al­low­ing my­self to ex­pe­rience re­quired con­fronting all the suffer­ing I had avoided. The deaths of some peo­ple who had been close to me in my youth. An ex-girlfriend rap­ing me, and be­fore that with an­other part­ner, my first sex­ual en­coun­ters hav­ing been un­de­sired, but my hav­ing not re­fused be­cause I didn’t care enough to. How in­her­ently abu­sive many of the re­la­tion­ships I was in were, how dys­func­tional the situ­a­tion I had al­lowed my­self to get into was. Ad­mit­ting to my­self that much of the past decade of my life had been a failure.

• And then things got better, because the recognition that things were bad was the same as the recognition that things could get better, and so I started making things better. I got out of the situation, and have started working towards the next phase in my life.

• That’s an amaz­ing jour­ney of self-dis­cov­ery. I, too, had a pe­riod where I wanted to erase the parts of me that I found use­less, but I didn’t go as deeply Vul­can as you did. (You’re the first per­son I’ve met who be­came more sen­si­tive and over­all nicer be­cause of At­las Shrugged.) I’m sorry to hear that you went through so many dark places dur­ing your pro­cess, and I find your fi­nal med­i­ta­tions on the mean­ing of suffer­ing to be quite in­spiring. You have my ad­mira­tion.

• Pain and suffer­ing are not the same thing. One woman will suffer while giv­ing birth while the next doesn’t and en­joys the ex­pe­rience.

• I think what the “true” (status-quo) ending proves is that the Super-Happies did not accurately model humanity’s utility function at all. If they had, they would have proposed a deal where humanity gets rid of most of its pain, but still keeps some, especially those “grim” things that humans actually like (somewhat counter-intuitively). (And perhaps the Babyeaters’ thing would then be understood as one of these “grim” things by humans, as it clearly is for the Babyeaters themselves. It’s not clear if the Superhappies would be willing to acquire this value, though.) This is a deal that humans would indeed accept, since it agrees with their values. I think the true moral of this story is that getting human wants right for something like CEV is a hard problem, and making even small mistakes can have big consequences.

• My feel­ing is that many util­ity func­tions in the gen­eral class of util­ity func­tions that the su­per happy’s is drawn from would lie about how ad­van­ta­geous it is to merge. Weren’t the hu­mans go­ing to lie to the babyeaters?

• But it’s still a com­pro­mise. Is it part of hu­man­ity’s util­ity func­tion to value an­other species’ util­ity func­tion to such an ex­tent that they would ac­cept the trade­off of chang­ing hu­man­ity’s util­ity func­tion to pre­serve as much of the other species’ util­ity func­tion?

I don’t re­call any men­tion of hu­man­ity be­ing to­tal util­i­tar­i­ans in the story. Nei­ther did the com­pro­mise made by the su­per­hap­pies strike me as be­ing bet­ter for all par­ties than their origi­nal val­ues were, for each of them.

The only rea­son the com­pro­mise was sup­posed to be benefi­cial is be­cause the three species made con­tact and couldn’t eas­ily co­ex­ist to­gether from that point on. Also, be­cause the su­per­hap­pies were the stronger force and could there­fore eas­ily en­force their own solu­tion. Cut­ting off the link re­moves those as­sump­tions, and al­lows each species to pre­serve its util­ity func­tion, which I as­sume they have a prefer­ence for, at least hu­mans and baby-eaters.

• Cut­ting off the link (...) al­lows each species to pre­serve its util­ity func­tion, which I as­sume they have a prefer­ence for, at least hu­mans and baby-eaters.

There was an asymmetry in the story, if I remember correctly.

Babyeaters had a preference for other species eating their babies. Humans and superhappies had a preference for other species not eating their babies. This part was symmetrical. Superhappies also had a preference for other species never feeling any pain. But humans didn’t have a preference for other species feeling pain; they just wanted to more or less preserve their own biological status quo. They didn’t mind if superhappies remain… superhappy.

This is why cutting the link harms the superhappy utility function more than the human utility function. Humans will feel relief that babyeater children are still saved by superhappies, more quickly and reliably than humans could do it. On the other hand, superhappies will know that somewhere in the universe human babies are feeling pain and frustration, and there is nothing the superhappies can do about it.

The asymmetry was that superhappies didn’t seem ethically repulsive to humans. Well, apart from what they wanted to do with humans, which was successfully avoided.

• In the story the su­per­hap­pies pro­pose to self-mod­ify to ap­pre­ci­ate com­plex art, not just sim­ple porn, and they say that hu­mans and babyeaters will both think that is an im­prove­ment. So to some de­gree the su­per­hap­pies (with their very ugly space­ships) are re­pul­sive to hu­mans, al­though not as strongly re­pul­sive as the babyeaters.

• they are moral and wouldn’t offer a deal unless it was beneficial according to both utility functions being merged (not just according to their value of happiness).

I guess whether it is beneficial or not depends on what you compare to? They say,

The obvious starting point upon which to build further negotiations, is to combine and compromise the utility functions of the three species until we mutually satisfice, providing compensation for all changes demanded.

So they are aiming for satisficing rather than maximizing utility: according to all three before-the-change moralities, the post-change state of affairs should be acceptable, but not necessarily optimal. Consider these possibilities:

1) Baby-eaters are modified to no longer eat sentient babies; humans are unchanged; Superhappies like art.

2) Baby-eaters are modified to no longer eat sentient babies; humans are pain-free and eat babies; Superhappies like art.

3) Baby-eaters, humans, and Superhappies are all unchanged.

I think the intention of the author is that, according to pre-change human morality, (1) is the optimal choice, (2) is bad but acceptable, and (3) is unacceptable. The superhappies in the story claim that (2) is the only alternative that is acceptable to all three pre-change moralities. So the super-happy ending is beneficial in the sense that it avoids (3), but it’s a “bad” ending because it fails to get (1).

• Hmm, I guess I interpreted the superhappies’ proposal differently, as saying that humans get compensation for any downgrade from (1) to (2).

• http://arxiv.org/abs/1412.0348

Calculating Levenshtein distance may be unoptimizable: the paper argues that no strongly subquadratic algorithm exists unless the Strong Exponential Time Hypothesis (SETH) is false.
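For context, the textbook dynamic-programming algorithm already runs in quadratic time, and the paper’s result says (conditional on SETH) that this is essentially the best possible. A minimal sketch of that standard algorithm:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance via the classic O(len(a) * len(b)) dynamic program."""
    prev = list(range(len(b) + 1))  # distances from the empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to the empty prefix of b
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute (free if equal)
            ))
        prev = curr
    return prev[-1]
```

The nested loop is exactly the quadratic work the paper claims cannot be avoided in the worst case; e.g. `levenshtein("kitten", "sitting")` gives 3.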

• What are the consequences?

I guess it is bad news for bioinformatics (comparing two very long pieces of DNA), but maybe there are sufficiently useful approximations. Or if one string is fixed and only the other string varies, maybe you can precompute some data to make the comparison faster.

• I don’t think the Levenshtein distance between two chromosomes is useful. If a gene changes location it’s still for practical purposes mostly the same gene, but the Levenshtein distance is very different.

• Wetware basis for IQ. Abstract (emphasis mine):

Functional magnetic resonance imaging (fMRI) studies typically collapse data from many subjects, but brain functional organization varies between individuals. Here we establish that this individual variability is both robust and reliable, using data from the Human Connectome Project to demonstrate that functional connectivity profiles act as a ‘fingerprint’ that can accurately identify subjects from a large group. Identification was successful across scan sessions and even between task and rest conditions, indicating that an individual’s connectivity profile is intrinsic, and can be used to distinguish that individual regardless of how the brain is engaged during imaging. Characteristic connectivity patterns were distributed throughout the brain, but the frontoparietal network emerged as most distinctive. Furthermore, we show that connectivity profiles predict levels of fluid intelligence: the same networks that were most discriminating of individuals were also most predictive of cognitive behavior. Results indicate the potential to draw inferences about single subjects on the basis of functional connectivity fMRI.

• That seems to be pretty worthwhile without them saying how much of the variance they can “predict”.

• Look at the figures available outside of the paywall.

You also probably mean “worthless”.

• Hi! Semi-new lurker here. What is the current etiquette on necroing? I didn’t find any official etiquette guide.

• Necro to your heart’s content. It’s fine.

• Feel free to comment—since only the user you’re replying to (and anyone that has chosen to subscribe to updates for that specific post) is notified, you don’t need to fear being a distraction to masses of people who might no longer care.

• You might consider clicking on the username. The second number shows karma in the last 30 days, and if it is 0 you might not get answers.

• That’s a pretty good heuristic. OTOH, up until this week, my karma in the last 30 days was 0. Now that I’m starting the sequences soon (in the form of “Rationality: From AI to Zombies”), I suspect I’ll involve myself in the community some more. Then again, my account didn’t functionally exist until recently, mainly being there for the purpose of reserving the name.

• It does also show up on the Recent Comments view, which is one of the most common ways for people to jump into discussions. So it’ll be noticed by other people as well. (Which is good, if they want to also chime in.)

• I’m new here. Been lurking occasionally for a few weeks. I have finally signed up. On principle, should I avoid voting? (For the time being?)

• I’m contemplating a discussion post on this topic, but first I’ll float it here, since there’s a high chance that I’m just being really stupid.

I’m abysmally unsuccessful at using anything like Bayesian reasoning in real life.

I don’t think it’s because I’m doing anything fundamentally wrong. Maybe what I’m doing wrong is attempting to think of these things in a Bayesian way in the first place.

Let’s use a concrete example. I bought a house. My prior probability that any given household appliance or fixture will break and/or need maintenance in a given month is on the order of 5%, obviously with some variability depending on what appliance we’re talking about. This prior is an off-the-cuff intuitive figure based on decades of living in houses.

Within a month of buying this house, things immediately start breaking. The dishwasher breaks. Then the garbage disposal. The sump pump fails completely. The humidifier needs repair. The air conditioner unit needs to be entirely replaced. The siding needs to be repainted. A section of fence needs to be replaced. The sprinklers don’t work. This is all within roughly the first four months.

So, my prior was garbage, but the real issue for me is that Bayesian reasoning didn’t really help me. The dishwasher breaking didn’t cause me to shift my Background Probabilistic Breakage Rate much at all. One thing breaking within the first month is allowed for by my prior model. Then the second thing breaks—okay, maybe I need to adjust my BPBR a bit. Still, there’s little reason to expect that several more important things will break in short order. But that’s exactly what happened.

There is a causal story that explains everything (apparently) breaking at basically the same time, which is that the previous owners were not taking good care of the house, and various things were already subtly broken and limping along at passable functionality for a long time. The problem is that this causal story only becomes promoted to “hypothesis with significant probability mass” after two or three consecutive major appliance disasters.

What is annoying about all this is that my wife doesn’t attempt to use any kind of probabilistic reasoning, and she is basically right all the time. I was saying things like, “I really doubt the garbage disposal is really broken, we just had two other major things replaced, what are the odds that another thing would break so quickly?” and she would reply along the lines of, “I’m pretty sure it’s actually broken, and I can’t fathom why you keep talking about odds when your odds-based assessments are always wrong,” and I’m at the point of agreeing with her. Not to mention that she was the one who suggested the “prior owners didn’t maintain the house” hypothesis, while I was still grimly clinging to my initial model, increasingly bewildered by each new disaster.

I am probably a poster child for “doing probabilistic thinking wrong” in some obvious way that I am blind to. Please help me figure out how and where. I have my own thoughts, but I will wait for others to respond so as to avoid anchoring.

• I think you were basically doing okay, it’s just that as soon as you formulated your initial hypothesis you should have actively sought out a way to disprove it. How hard can I lean on my fence? Is scratching lazily sufficient to remove paint on the siding? Do I dare to wash the floor under the bookshelf?… After all, if you suddenly received lots of evidence to the contrary, you would a) update fast, and b) earn husband points.

In essence, you should always ask yourself: is this still the relevant question?

• “I really doubt the garbage disposal is really broken, we just had two other major things replaced, what are the odds that another thing would break so quickly?”

You have two hypotheses: the appliances breaking are not connected (independent); and the appliances breaking are connected (dependent).

In the first case you are saying the equivalent of “I tossed the coin twice and it came up heads both times, it’s really unlikely it will come up heads the third time as well,” which should be obviously wrong.

In the second case you should discard your model of independence alongside your original prior and consider that the breakages are connected.

I think the moral of the story is that life is complicated and simple models are often too simple to be useful. You should discard them faster when they show signs of not working.

And, of course, if you are wondering whether your garbage disposal is really broken, you should go look at your garbage disposal unit and not engage in pondering theoretical considerations.

• See my response to ChristianKl below for my clarification on my reasoning about “consecutive coin flips”, which could still be wrong but is hopefully less wrong than my original wording.

I agree that I should have discarded my model more quickly, but I don’t quite see how to generalize that observation. Sometimes the alternative hypothesis (e.g. the breakages are connected) is not apparent or obvious without more data—and the process of collecting data really just means continuing to make bad predictions as you go through life until something clicks and you notice the underlying structure.

My wife seems to think that making explicit model-based predictions in the first place is the problem. I have a lot of respect for System 1 and am sympathetic to this view. But System 2 really shouldn’t actively lead me astray.

• my reasoning about “consecutive coin flips”

Yes, and note that this part—“that I have to start considering that the die is loaded”—is key.

but I don’t quite see how to generalize that observation

Um, directly? All models which you are considering are much simpler than the real world. The relevant maxim is “All models are wrong, but some are useful”.

I think you got caught in the trap of “but I can’t change my prior because priors are not supposed to be changed”. That’s not exactly true. You can and (given sufficient evidence) should be willing to discard your entire model and the prior with it. Priors only make sense within a specified set of hypotheses. If your set of hypotheses changes, the old prior goes out of the window.

The naive Bayes approach sweeps a lot of complexity under the rug (e.g. hypothesis selection) which will bite you in the ass given the slightest opportunity.

Sometimes the alternative hypothesis (e.g. the breakages are connected) is not apparent or obvious

Yeah, well, welcome to the real world :-/

My wife seems to think that making explicit model-based predictions in the first place is the problem.

She is correct if your models are wrong. Getting right models is hard and you should not assume that the first model you came up with is going to be sufficiently correct to be useful.

System 2 really shouldn’t actively lead me astray.

I see absolutely no basis for this belief. To misquote someone from memory: “Logic is just a way of making errors with confidence” :-P

• Separate advice: look around at things and check if anything else is about to break and can be saved from expensive replacement via the process of repairs.

• I really doubt the garbage disposal is really broken, we just had two other major things replaced, what are the odds that another thing would break so quickly?

If they’re independent, the odds are exactly the same as if two other major things had not been recently replaced. If they’re dependent, the odds are higher. For this decision, you have a lot more evidence about the specific, so your base rate (unless it’s incredibly small) doesn’t matter, and the specific evidence of brokenness overwhelms it.

A better application is budgeting for next month. Compare how much you’re planning to set aside for repairs with how much your wife is. See who’s right. Update. Repeat.

• I am probably a poster child for “doing probabilistic thinking wrong” in some obvious way that I am blind to.

One possible mistake is assuming that problems will be independent and spread out evenly over time. That’s an extreme assumption. In real life there are always more reasons for problems to cluster than to anti-cluster (so to speak), so it doesn’t balance out at all. Also, problems will do more harm when clustered, because your ability to cope is reduced. So it makes sense to prepare for clustered problems. When two things go wrong, get ready for the third. That’s very obvious in software engineering: if you find ten bugs, chances are you haven’t found them all. But it’s true in real life too.

The more general problem is that you just seem to have less life experience than your wife. To fix that, go out and get experience. Fix stuff, haggle, make arrangements… It’ll improve your life in other ways as well.

• Some random thoughts:

As a bounded agent, you have to be aware that it’s physically impossible to consider all the hypotheses. When you encounter new evidence, you might think of a new hypothesis to promote that you hadn’t thought of before—in fact, this is an unavoidable part of being a good bounded agent. So don’t worry about coming up with the One True Prior ahead of time and then updating it—instead, try to plan for the most likely outcomes, but leave a “something else” category and be ready to change your mind.

And given that we’re biased, when we make plans we’re probably going to get some probabilities wrong—in this case, future events contain information about how one was biased. Try to learn about your own biases, which often means being more influenced by evidence than an unbiased agent.

If you still want to try reasoning probabilistically, I’d look into Tetlock’s Good Judgment Project and start planning how to practice my probability estimation. Oh, and check out the calibration game.

• I was saying things like, “I really doubt the garbage disposal is really broken, we just had two other major things replaced, what are the odds that another thing would break so quickly?” [...] I am probably a poster child for “doing probabilistic thinking wrong” in some obvious way that I am blind to.

You are indeed doing it very wrong. As far as probabilistic reasoning goes, the fact that one item broke doesn’t reduce the chances that a second item breaks at all.

• Yeah, okay, I worded that stupidly. It’s more like this:

“This 20-sided die just came up 20 twice in a row. The odds of three consecutive rolls of 20 is 0.0125%. I acknowledge that this next roll has a 1/20 chance of coming up 20, assuming the die is fair. However, if this next roll comes up 20, we are witnessing an extremely improbable sequence, so improbable that I have to start considering that the die is loaded.”

• However, if this next roll comes up 20, we are witnessing an extremely improbable sequence, so improbable that I have to start considering that the die is loaded.

The equivalent of “considering that the die is loaded” in your example is “the previous owners did a bad job of maintaining the house”. It indeed makes sense to come to that conclusion. That’s also basically what your wife did.

Apart from that, the difference between sequences picked by humans to look random and real random data is that real random data more frequently contains such improbable sequences.

• The “however” part seems irrelevant.

I mean, regardless of what were the previous two rolls—let’s call them “X” and “Y”—if the next roll comes up 20, we are witnessing a sequence “X, Y, 20”, which has a probability 0.0125%. That’s true even when “X” and “Y” are different than 20.

You could make the sequence even more improbable by saying “if this next roll comes up 20, we are witnessing an extremely improbable sequence—we are living in a universe whose conditions allow creation of matter, we happen to be on a planet where life exists, dinosaurs were killed by a comet, I decided to roll the 20-sided die three times, the first two rolls were 20… and now the third roll is also 20? Well, this all just seems very very unlikely.”

Or you could decide that the past is fixed—if you happen to be in some branch of the universe, you are already there—and you are only going to estimate the probability of future events.

Even better, what ChristianKl said. A better model would be that depending on the existing state of the house there is a probability P saying how frequently things will break. At the beginning there is some prior distribution of P, but when things start breaking too fast, you should update that P is probably greater than you originally thought… and now you should expect things to break faster than you expected originally.
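That updating scheme can be sketched with a Beta prior over the unknown monthly breakage probability P. The prior parameters below are illustrative (chosen to match the ~5% off-the-cuff figure), not anything from the thread:

```python
# Beta-binomial update of an unknown monthly breakage probability P.
# Beta(1, 19) has mean 1/20 = 5%, matching the off-the-cuff prior.
def posterior_mean(breaks: int, intact: int,
                   alpha: float = 1.0, beta: float = 19.0) -> float:
    """Posterior mean of P after observing `breaks` failures and
    `intact` non-failures (appliance-months without a breakage)."""
    return (alpha + breaks) / (alpha + beta + breaks + intact)

before = posterior_mean(0, 0)   # 0.05: the original 5% guess
after = posterior_mean(8, 12)   # 0.225: 8 breakages in ~20 appliance-months
```

Each breakage pulls the posterior mean for P upward, which is exactly the “expect things to break faster than you originally expected” adjustment.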

• regardless of what were the previous two rolls—let’s call them “X” and “Y”—if the next roll comes up 20, we are witnessing a sequence “X, Y, 20”, which has a probability 0.0125%. That’s true even when “X” and “Y” are different than 20.

Yes, all sequences X, Y, Z are equally (im)probable if the d20 is a fair one. But some sequences—in particular those with X=Y=Z, and in more-particular those with X=Y=Z=1 or X=Y=Z=20—are more likely if the die is unfair, because they’re relatively easy and/or relatively useful/amusing for a die-fixer to induce.

As you consider longer and longer sequences 20, 20, 20, … their probability conditional on a fair d20 goes down rapidly, whereas their probability conditional on a dishonest d20 goes down much less rapidly, because there’s some nonzero chance that someone’s made a d20 that almost always rolls 20s.
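To put rough numbers on that: suppose, purely for illustration, that a rigged d20 rolls 20 half the time. Then each additional 20 multiplies the likelihood ratio in favor of “loaded” by a constant factor:

```python
def likelihood_ratio(n_twenties: int, p_loaded: float = 0.5) -> float:
    """P(n consecutive 20s | loaded) / P(n consecutive 20s | fair d20),
    assuming (hypothetically) the loaded die shows 20 with prob p_loaded."""
    # A fair d20 rolls 20 with probability 1/20, so the per-roll ratio
    # is p_loaded / (1/20) = 20 * p_loaded.
    return (20 * p_loaded) ** n_twenties

# With p_loaded = 0.5, each extra 20 is a factor-of-10 update toward "loaded":
# likelihood_ratio(1) == 10.0, likelihood_ratio(3) == 1000.0
```

Whether that overturns “fair” then depends on the prior odds you give to anyone having rigged the die in the first place.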

• I can’t help but notice, in a slightly off-topic fugue, that the dishwasher, the garbage disposal, and probably the sump pump share a drainage system. You may wish to consider the possibility that these are not independent breakages, and that until you fix the underlying problem, you should expect further breakages (i.e., check your drains).

Also, the siding needing to be repainted and a section of fence needing to be replaced doesn’t really sound like “things breaking” (I could be wrong). Could you have been ignoring some important information right from the start?

• My prior probability that any given household appliance or fixture will break and/or need maintenance in a given month is on the order of 5%, obviously with some variability depending on what appliance we’re talking about.

There’s your problem right there. Note that this prior effectively assigns zero probability to the “prior owner didn’t maintain the house” hypothesis.

What you should have done is assign some (non-zero) probability to that hypothesis; then when something breaks, you update towards the “poor maintenance” hypothesis.
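As a toy illustration of that update (all numbers made up, not from the thread): give “poorly maintained” a 10% prior and assume things break four times as often per month in a poorly maintained house.

```python
def p_poorly_maintained(prior: float, n_breaks: int,
                        p_break_poor: float = 0.20,
                        p_break_good: float = 0.05) -> float:
    """Posterior probability of the 'poor maintenance' hypothesis after
    n independent monthly breakages (illustrative likelihoods only)."""
    num = prior * p_break_poor ** n_breaks
    denom = num + (1 - prior) * p_break_good ** n_breaks
    return num / denom

# Three breakages push a 10% prior to roughly 88%:
# p_poorly_maintained(0.10, 3) is about 0.877
```

The point is how fast a small prior climbs once the alternative hypothesis predicts the data four times better per observation.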

• Blowing the whistle on the UC Berkeley mathematics department

This remark that I should align more with department standards has been the resounding theme of my time at Berkeley, and Arthur Ogus’s comment in the April 18th, 2014 memo was not an isolated slip. On September 22nd, 2013 he wrote in an email “But I do think it that it [sic] is very important that you not deviate too far from the department norms.” On November 12th, 2014 he wrote “I hope that, on the basis of our conversation, you can further adjust to the norms of our department.” This raises the question: What does it mean to adhere to department norms if one has the highest student evaluation scores in the department, students performing statistically significantly better in subsequent courses, and faculty observations universally reporting “extraordinary skills at lecturing, presentation, and engaging students”?

This question is one that I asked, and in response it was made very clear to me what is meant by the norms of the department. It means teach from the textbook. It means stop emailing students with encouragement, handwritten notes and homework problems, and instead assign problems from the textbook at the start of the semester. It means stop using evidence-based practices like formative assessment. It means micro-manage the Graduate Student Instructors rather than allowing them to use their own, considerable, talent and creativity. And most of all it means this: Stop motivating students to work hard and attend class by being engaging, encouraging and inspiring, by sharing with them a passion for the beauty and wonder of mathematics, but instead by forcing them into obedience with endless busywork in the form of GPA-affecting homework and quizzes and assessments, day after day, semester after semester.

In a nutshell: Stop making us look bad. If you don’t, we’ll fire you.

• Some people disagree with his version of events.

• “But I do think it that it [sic] is very important that you not deviate too far from the department norms.”

Yep.

Bureaucracies chew up and spit out people who deviate from norms. You apparently think that you are a better teacher. How relevant is that to your success in the bureaucracy? Is it necessarily beneficial? Do your students get a vote on whether you get tenure? Get a raise? Get a lab?

Some people at work work on the purported purpose of the bureaucracy. Others work the bureaucratic reward and punishment system.

Pournelle’s Iron Law of Bureaucracy states that “in any bureaucratic organization there will be two kinds of people”:

First, there will be those who are devoted to the goals of the organization. Examples are dedicated classroom teachers in an educational bureaucracy, many of the engineers and launch technicians and scientists at NASA, even some agricultural scientists and advisors in the former Soviet Union collective farming administration.

Secondly, there will be those dedicated to the organization itself. Examples are many of the administrators in the education system, many professors of education, many teachers union officials, much of the NASA headquarters staff, etc.

The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization.

• It’s also worth pointing out that conflicting institutional loyalties are a huge source of conflict. The “standard” practice in organizations is to collude with your direct management against their management—do things that favor your boss over your boss’s boss. Coward is doing things the ‘honest’ way, favoring his boss’s boss (i.e. the university as a whole) instead of his boss (the math department), which leads to both the conflict and his expectation that he’ll get support by making an ‘internal affair’ public.

But, of course, that also means he has lots of ready-made allies, regardless of the facts on the ground. We’ll see how this shakes out when more voices and details are added.

• favoring his boss’s boss (i.e. the university as a whole) instead of his boss (the math department)

Favoring the “goals” of the organization as an abstraction over the actual punishment/reward structure of the living, breathing, and interacting cogs of the organization.

I’ve come to look at bureaucracies as parasites on the host organization.

Aligning the goals of the bureaucracy with the goals of the org is actually a very hard, very interesting, and very important problem.

• Strange. Tenured professors get paid the same regardless of how many students they teach, so it helps them if another instructor attracts lots of students, thereby reducing the tenured professor’s teaching burden.

• Are we sure this man is telling the truth?

• The political fault lines he’s describing exist at every flagship state public university, and so I’m not at all surprised to hear that a quake has happened along those lines at Berkeley.

But also most performers have a flair for the dramatic, and Coward’s excellent student reviews seem to come in part from his talent at performance. So his interpretations are likely massaged in some form, and the object-level claims could be easily exaggerated.

But he claims that he and the department differ on a fairly simple statistical claim—how to estimate the effect of his courses on students’ future performance. The related email correspondence is here, and well worth reading, both to judge that specific matter yourself, and to get a sense of how defensive Coward can seem. (He’s definitely escalating emotionally, but whether justifiably is harder to know.)

My summary: In a report, Stark, a statistician, makes a three-way comparison between the three 1A classes (two of which were taught by Coward), and finds that they are not statistically significantly different. Coward asks why a three-way comparison is done, instead of comparing the Coward group to the non-Coward group. Stark replies that since the students were assigned non-randomly, we can’t separate the direct effect of instruction from any confounding variables.

Which is, of course, correct—it’s very likely that the students who got into the class with the instructor widely believed by students to be superior are more competent than the students who didn’t, and so should be expected to do better in future classes—but it is an equally valid point against the three-way comparison.

• What I expect: even if we find a naturally randomized subset of students (maybe they are forced into certain sections only due to scheduling conflicts), or even if we find things to adjust for, we will find no significant effect. It’s nothing about Coward himself, it’s just hard to find effects.

But I don’t know if UC uses that sort of reasoning anyway to figure out which contracts to renew; I think adjuncts are super mistreated in general. I often defend academia on LW, but I think the tenure-track/adjunct system is super dysfunctional and awful.

• As a more general observation, it’s hard to comment on this row without having some idea about the local office politics. These, of course, tend to be dominated by jockeying for power/status/prestige and not by discussions of effective teaching methods.

• In the short term, yes. In the long term, no, especially for ‘support’ departments. At most large state schools, engineering is king, and physics and math are both subsidized by engineering because they need a sufficient number of professors to teach non-major physics and math classes to engineers. This isn’t to say that there’d be no math or physics without engineering, but that there would be fewer positions for math and physics faculty.

The math and physics departments, typically, insist on being research faculty, i.e. independent departments subsidized by the university as a whole, rather than pure service organizations. Coward, as a full-time lecturer, is in the ‘pure service’ role, and as one would expect, the guy that’s specialized towards teaching does a much better job of teaching than the people specialized towards research. This is good for the engineering department but bad for the math department—instead of eight professors all teaching one non-major course each, you could have two lecturers teaching four non-major courses each, with the attendant loss of prestige, funding, and political clout for the department.

So his characterization of the department’s approach to him as “you’re making us look bad” seems probable to me, especially if the math department has been playing the “our job is hard, you need to fund us more so we can do better” card.

• This seems strange to me. Engineering departments should have faculty that are perfectly capable of teaching the math and physics that their students will need. And this happens to a limited extent. For example, at UC Berkeley, the computer science department offers its own discrete math course instead of telling students to take the roughly equivalent discrete math course offered by the math department. Is there something preventing this from becoming more widespread?

• This is good for the engineering department but bad for the math department—instead of eight professors all teaching one non-major course each, you could have two lecturers teaching four non-major courses each, with the attendant loss of prestige, funding, and political clout for the department.

Would the university really stop subsidizing the math and physics departments to the same degree if it weren’t for their “service” obligations? I don’t think this is right—I think the administration is broadly happy with the status quo, in terms of prestige, etc. If the department has two full-time lecturers, the only consequence is that they will also hire a bunch of full-time researchers to balance things out. By contrast, the “service” role is probably a lot more important politically for departments which teach lots of fluffy GenEd courses.

• Schools comparable to Berkeley have one of three common organizations of math teachers. One, Berkeley’s old structure, is to employ no lecturers. Another is to employ a lot of lecturers, whose job is simply to teach as well as possible.

But I think the most common organization is to employ a small number of lecturers who do a small amount of teaching, but whose real job is to handle the administrative details of teaching, such as placement of freshmen, curriculum design, and instructing graduate students in teaching. I think the complaints make most sense in the context of the department expecting him to grow into such a job.

• “Align more with department standards” sounds like shorthand for some more specific concerns. Coward doesn’t spell out what those concerns are.

• Three hunter-gatherer and hunter-farmer groups—the Hadza in Tanzania, the San in Namibia, and the Tsimane in Bolivia—who live roughly the same lifestyle humans did in the Paleolithic, were observed, and it was concluded that our ancient ancestors may not have slept nearly as much as we thought—despite being healthy.

Any ideas why these tribes might need less sleep?

It looks good, al­though only two groups were sam­pled. It is worth not­ing that the “sleep pe­riod” was from 6.9 to 8.5 hr—that is, while they were rest­ing in bed.

The ar­ti­cle is pretty clear on why this might be the case:

“In these so­cieties, elec­tric­ity and its as­so­ci­ated light­ing and en­ter­tain­ment dis­trac­tions are ab­sent, as are cool­ing and heat­ing sys­tems. In­di­vi­d­u­als are ex­posed, from birth, to sun­light and a con­tin­u­ous sea­sonal and daily vari­a­tion in tem­per­a­ture within the ther­moneu­tral range for much of the daylight pe­riod, but above ther­moneu­tral tem­per­a­tures in the af­ter­noon and be­low ther­moneu­tral­ity at night.”

“The Tsi­mane and San live far enough south of the equa­tor to have sub­stan­tial sea­sonal changes in day length and tem­per­a­ture.”

“Be­cause we no­ticed that the Hadza, Tsi­mane, and San did not ini­ti­ate sleep at sun­set and that their sleep was con­fined to the lat­ter por­tion of the dark pe­riod, we in­ves­ti­gated the role of tem­per­a­ture. We found that the noc­tur­nal sleep pe­riod in the Hadza was always ini­ti­ated dur­ing a pe­riod of fal­ling am­bi­ent tem­per­a­ture, and we saw a similar pat­tern in the Tsi­mane. There­fore, we pre­cisely mea­sured am­bi­ent tem­per­a­ture at the sleep­ing sites along with finger tem­per­a­ture and ab­dom­i­nal tem­per­a­ture in our stud­ies of the San. Figures 4 and S1 show that sleep in both the win­ter and sum­mer oc­curred dur­ing the pe­riod of de­creas­ing am­bi­ent tem­per­a­ture and that wake on­set oc­curred near the nadir of the daily tem­per­a­ture rhythm. A strong vaso­con­stric­tion oc­curred at wake on­set in both sum­mer and win­ter, pre­sum­ably func­tion­ing to aid ther­mo­ge­n­e­sis in rais­ing the brain and core tem­per­a­ture for wak­ing ac­tivity. ”

Edit: I don’t know how to link an ar­ti­cle with paren­the­ses in the URL.

Edit edit: now I do. Thank you, Gun­nar_Zarncke.

• I should add a TLDR for LessWrongers in­ter­ested in sleep pat­terns: if you are hav­ing trou­ble sleep­ing, you should con­sider tem­per­a­ture as a vari­able, per­haps more so than light. Nap­ping was not a sig­nifi­cant fac­tor, but was pre­sent. Seg­mented sleep was not ob­served in this study. Sleep times were longer in the win­ter than in the sum­mer by an av­er­age of 53 min­utes.

• Thank you very much for the ex­pla­na­tion.

You can escape the parentheses by replacing them with %28 and %29.

• I don’t have enough trouble with my own sleep to make the experiment very useful or decisive, but now that the nights are getting colder, it would be interesting to see what would happen if some LessWrongers experimented with space heaters on timers: set them to go off about 15 minutes before you want to wake up, and see if it helps.

• Mul­ling the Fermi para­dox and es­cape ve­loc­ity—the higher a species’ home planet’s es­cape ve­loc­ity, the harder it is to get off the planet. I think there’s an es­cape ve­loc­ity which is so high that chem­i­cal rocket fuels just don’t have enough en­ergy.

I have no idea whether there’s a plau­si­ble re­la­tion­ship be­tween the like­li­hood of tech­nolog­i­cal species and the es­cape ve­loc­ity of their planet, ex­cept that I doubt that there’d be in­tel­li­gent life on planets with­out at­mo­sphere. Or am I be­ing too parochial?

Thoughts about tech­nolog­i­cal species and es­cape ve­loc­ity?

• Highly spec­u­la­tive thoughts off the top of my head (only with what lit­tle I can re­mem­ber from my high school physics):

• The main fac­tor that de­ter­mines es­cape ve­loc­ity is the mass of the planet (there’s also at­mo­spheric drag, but it’s gen­er­ally man­age­able un­less the world is a per­pet­ual hur­ri­cane hell, in which case I doubt it has any civ­i­liza­tion). After a cer­tain mass thresh­old, the planet is like­lier to be gaseous than rocky. I don’t think Nep­tune-like or Jupiter-like wor­lds are suit­able for life (but their moons are an­other story). In gen­eral, I’d say if the world is too big to jump out of, it’s too gaseous for any­thing to have walked on it any­way. Edited to add: In­hab­ited moons of Jupiter-like wor­lds would also need to take into ac­count the planet’s es­cape ve­loc­ity, even if it’s lower where they are.

• If the planet is a big Earth (that is, quite mas­sive but still mostly rocky), the greater grav­ity will re­sult in a thicker and denser at­mo­sphere, but I don’t know enough aero­dy­nam­ics to tell how much, if any, this de­tail will add to the prob­lem of es­cape ve­loc­ity. But this differ­ence may change the rules as to which fuels will be solid, liquid or gaseous un­der that planet’s nor­mal con­di­tions.

• Another, re­lated prob­lem is pay­load. For ex­am­ple, if the planet’s in­tel­li­gent species is aquatic, the space­ship will need to be filled with wa­ter in­stead of air; this will in­crease the to­tal mass hor­ribly and re­quire a much more po­tent fuel (but all this is as­sum­ing that an aquatic species has had the op­por­tu­nity to dis­cover fire in the first place).

• In wor­lds too big to es­cape by propul­sion, peo­ple may come up with the idea of the space ele­va­tor, but the ex­tra grav­ity will re­quire tak­ing into ac­count the struc­ture’s weight. The coun­ter­weight at the up­per end will need to be heav­ier and/​or farther. Is­sues re­lated to which ma­te­rial is best suited for this build­ing sce­nario and whether there’s a limit to how big a space ele­va­tor you can build are be­yond my knowl­edge. Ac­cord­ing to Wikipe­dia, nan­otubes ap­pear to be a work­able choice on Earth.

• Some world out there may have a ridicu­lously tall moun­tain that ex­tends into the up­per at­mo­sphere. Grav­ity at the top will be lower, and if a launch plat­form can be built there, take­off will be eas­ier. Of course, this is an “if” big­ger than said moun­tain.

• In­dia has a huge coastline, but for myth­i­cal/​cul­tural rea­sons, Hin­duism used to have a taboo against sea travel. In the worst sce­nario, our heavy aliens may stay on ground, not be­cause they can’t, but be­cause they won’t; maybe their at­mo­sphere looks too scary or their planet at­tracts too many me­te­orites or it has sev­eral om­i­nous-look­ing moons or some­thing.

• Thank you. I’m also in­ter­ested in planets with less mass/​lower es­cape ve­loc­ity and non-chem­i­cal fuel meth­ods. Atomic or nu­clear fuel? Laser launch?

• The smallest planet that can maintain an atmosphere for gigayears of time is probably half to a third of an Earth mass (barring the effects of geology). That gives you an escape velocity between 70 and 80% that of here, given similar composition and no thousand-km-thick hot ice layers or anything.

EDIT: If you assume Earth’s escape velocity and a specific impulse similar to a Merlin engine, and ignore all gravity drag and atmosphere, the rocket equation says an SSTO to LEO requires a fuel to payload+structure mass ratio of at least 12.0. If you assume an escape velocity 75% that of Earth’s, it requires a mass ratio of at least 6.5. That probably doubles your mass to orbit per unit fuel. If you have an escape velocity 1.25x that of Earth’s, your SSTO requires a mass ratio of 22.4. Mars, by comparison, comes out to a mass ratio of 3.1 under these optimistic assumptions.

Of course staging improves all of these numbers and squishes them together some, as does using better fuel than kerosene, while dealing with an atmosphere, gravity drag, and propellants worse than kerosene makes things much worse. For a reality check, existing real multistage Earthly launch systems I just quickly looked up have mass ratios between ~35 and ~15 (though the 15 includes the total mass of the space shuttle, not just the payload, while the upper stage is not included in other, higher numbers for other systems).
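The mass ratios above can be re-derived from the Tsiolkovsky rocket equation. A minimal sketch, assuming a Merlin-like vacuum Isp of ~311 s and approximating the delta-v to low orbit as escape velocity divided by √2 (i.e., circular orbital speed, with gravity and drag losses ignored). These assumptions are mine, not the comment’s, so the outputs land near, not exactly on, its numbers; the fuel-to-(payload+structure) ratio is the total/dry mass ratio minus one.

```python
import math

def mass_ratio(delta_v_ms, isp_s, g0=9.81):
    """Tsiolkovsky rocket equation: total/dry mass ratio for a given delta-v."""
    return math.exp(delta_v_ms / (isp_s * g0))

ISP = 311.0             # assumed Merlin-like vacuum Isp, seconds
V_ESC_EARTH = 11_200.0  # Earth escape velocity, m/s

for scale in (0.75, 1.0, 1.25):
    # Treat delta-v to orbit as (escape velocity)/sqrt(2) = circular speed.
    dv = scale * V_ESC_EARTH / math.sqrt(2)
    r = mass_ratio(dv, ISP)
    print(f"escape velocity {scale:.2f}x Earth: total/dry {r:.1f}, fuel ratio {r - 1:.1f}")
```

The exponential is the whole story here: a 25% heavier gravity well roughly doubles the required fuel fraction.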

• As­sum­ing an ad­vanced civ­i­liza­tion, the main limit­ing fac­tor for the vi­able com­mer­cial use of nu­clear en­ergy would be the abun­dance of ra­dioac­tive el­e­ments in the planet. Dur­ing the for­ma­tion of the planet, its mass will have an effect on which el­e­ments get cap­tured. Un­for­tu­nately, Wikipe­dia isn’t helpful on the speci­fics of planet mass vs. planet com­po­si­tion, but we know it de­pends on the com­po­si­tion of the pro­to­plane­tary neb­ula, which de­pends on the type of star. Too many fac­tors.

• Nit­pick: It wouldn’t have to be com­mer­cial use of nu­clear en­ergy. Even if we’re limited to hu­man in­sti­tu­tions, it could be gov­ern­men­tal use, and I have a no­tion that re­li­gion might be the best sort of in­sti­tu­tion for get­ting peo­ple off the planet. Reli­gions have a po­ten­tial for big, long term pro­jects that don’t make prac­ti­cal sense.

Thanks for look­ing into the ques­tion of plane­tary mass and get­ting off the planet—once the ques­tion oc­curred to me, it ex­ploded into a lot of ad­di­tional ques­tions, and we haven’t even got­ten to whether plane­tary mass might have an effect on the evolu­tion of life.

• One ad­di­tional fac­tor: the amount of ra­dioac­tive el­e­ments still us­able (that is, not com­pletely de­cayed) vs. how many billion years it took to evolve from alien amoeba to alien tool-users.

• Gi­ant ca­pac­i­tor plates and you sud­denly re­move the in­su­la­tion?

• Good anal­y­sis! A few re­marks:

In prac­tice even for a planet with as thin an at­mo­sphere as Earth, get­ting past the at­mo­sphere is more difficult than ac­tu­ally reach­ing es­cape ve­loc­ity. One of the most com­mon times for a rocket to break up is near Max Q which is where max­i­mum aero­dy­namic stress oc­curs. This is gen­er­ally in the range of about 10 km to 20 km up.

In wor­lds too big to es­cape by propul­sion, peo­ple may come up with the idea of the space ele­va­tor, but the ex­tra grav­ity will re­quire tak­ing into ac­count the struc­ture’s weight.

Get­ting enough mass up there to build a space ele­va­tor is it­self a very tough prob­lem.

Some world out there may have a ridicu­lously tall moun­tain that ex­tends into the up­per at­mo­sphere. Grav­ity at the top will be lower, and if a launch plat­form can be built there, take­off will be eas­ier. Of course, this is an “if” big­ger than said moun­tain.

Whether grav­ity is stronger or weaker on top of a moun­tain is sur­pris­ingly com­pli­cated and de­pends a lot on the in­di­vi­d­ual planet’s makeup. How­ever, at least on Earth-like planets it is weaker. See here. Note though that if a planet is re­ally mas­sive it is less likely to have large moun­tains. You can more eas­ily get large moun­tains when a planet is small. (e.g. Olym­pus Mons on Mars).

In­dia has a huge coastline, but for myth­i­cal/​cul­tural rea­sons, Hin­duism used to have a taboo against sea travel. In the worst sce­nario, our heavy aliens may stay on ground, not be­cause they can’t, but be­cause they won’t; maybe their at­mo­sphere looks too scary or their planet at­tracts too many me­te­orites or it has sev­eral om­i­nous-look­ing moons or some­thing.

This would re­quire ev­ery­one on the planet to take this same at­ti­tude. This seems un­likely to be com­mon.

• You got me cu­ri­ous, and I read a bit more, and found this on Wikipe­dia:

A rocket mov­ing out of a grav­ity well does not ac­tu­ally need to at­tain es­cape ve­loc­ity to es­cape, but could achieve the same re­sult (es­cape) at any speed with a suit­able mode of propul­sion and suffi­cient pro­pel­lant to provide the ac­cel­er­at­ing force on the ob­ject to es­cape. Es­cape ve­loc­ity is only re­quired to send a bal­lis­tic ob­ject on a tra­jec­tory that will al­low the ob­ject to es­cape the grav­ity well of the mass M.

In lay terms, I guess this means that, un­like a can­non ball, which only gets one ini­tial “push”, a rocket is be­ing “pushed” con­tinu­ally and thus doesn’t need to worry about es­cape ve­loc­ity.

Be­cause of the at­mo­sphere it is not use­ful and hardly pos­si­ble to give an ob­ject near the sur­face of the Earth a speed of 11.2 km/​s (40,320 km/​h), as these speeds are too far in the hy­per­sonic regime for most prac­ti­cal propul­sion sys­tems and would cause most ob­jects to burn up due to aero­dy­namic heat­ing or be torn apart by at­mo­spheric drag. For an ac­tual es­cape or­bit a space­craft is first placed in low Earth or­bit (160–2,000 km) and then ac­cel­er­ated to the es­cape ve­loc­ity at that al­ti­tude, which is a lit­tle less — about 10.9 km/​s. The re­quired change in speed, how­ever, is far less be­cause from a low Earth or­bit the space­craft already has a speed of ap­prox­i­mately 8 km/​s (28,800 km/​h).

So first they get the rocket high enough to be safe from the air, and then they speed it up.
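The numbers in that quote can be recomputed from v_esc = √(2μ/r). A quick sketch using standard values for Earth’s gravitational parameter and radius; the 300 km altitude is just an illustrative pick from the quoted 160–2,000 km band:

```python
import math

MU_EARTH = 3.986e14  # standard gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean radius, m

def v_escape(r):
    """Escape velocity at distance r from Earth's center."""
    return math.sqrt(2 * MU_EARTH / r)

def v_circular(r):
    """Circular orbital speed at distance r."""
    return math.sqrt(MU_EARTH / r)

r_leo = R_EARTH + 300e3
print(f"surface escape velocity: {v_escape(R_EARTH) / 1000:.1f} km/s")   # ~11.2
print(f"escape velocity at LEO:  {v_escape(r_leo) / 1000:.1f} km/s")     # ~10.9
print(f"circular speed at LEO:   {v_circular(r_leo) / 1000:.1f} km/s")   # ~7.7
print(f"extra delta-v from LEO:  {(v_escape(r_leo) - v_circular(r_leo)) / 1000:.1f} km/s")
```

Note that escape velocity is always exactly √2 times circular speed at the same altitude, which is why a spacecraft already in orbit needs only ~3 km/s more.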

• Con­stant den­sity planet: Es­cape ve­loc­ity scales with the cube root of mass.

Real planets: Goes up faster than that since the in­side crunches down as mass in­creases. Also the ge­ol­ogy could start get­ting… in­ter­est­ing at large masses due to the whole square cube law thing and rapidly in­creas­ing pri­mor­dial heat of for­ma­tion.
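The constant-density claim follows from v_esc = √(2GM/R) with R ∝ M^(1/3), so v_esc ∝ M^(1/3). A quick numerical check, assuming an Earth-like mean density:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def v_escape_const_density(mass_kg, density=5514.0):
    """Escape velocity sqrt(2GM/R), with R implied by a uniform density."""
    radius = (3 * mass_kg / (4 * math.pi * density)) ** (1 / 3)
    return math.sqrt(2 * G * mass_kg / radius)

m_earth = 5.97e24
v1 = v_escape_const_density(m_earth)
v8 = v_escape_const_density(8 * m_earth)
print(f"Earth-mass ball: {v1 / 1000:.1f} km/s")
print(f"8x the mass:     {v8 / 1000:.1f} km/s, ratio {v8 / v1:.2f}")  # ratio 2.00
```

Eight times the mass gives exactly twice the escape velocity at fixed density; real compression of the interior, as the comment says, makes it climb faster than that.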

• Just found out: the “Real­is­tic World Build­ing” sec­tion of this ar­ti­cle cov­ers many of the top­ics you men­tion.

• Thank you.

• I don’t think “harder to get off the planet” means more than “spend an ad­di­tional 1000 years” de­vel­op­ing tech.

• The Flash player for the video of Max Teg­mark and Nick Bostrom speak­ing at the UN is su­per an­noy­ing. Any­one know how to ex­tract the raw video file so I can watch it more con­ve­niently? Thanks!

• For many purposes, but especially for video, it is useful to pretend to be an iphone. Just set your user-agent to iphone and it will give you the video rather than flash. That’s not as good as actually getting the video file. If you want to do that, start here.

Added: I’m us­ing a Mac, us­ing Sa­fari, which is ba­si­cally the same web browser as on an iphone, so pre­tend­ing to be an iphone works great for me. Also, Sa­fari has a user-agent switcher built-in, in the Devel­oper menu, which can be turned on in the Ad­vanced tab of Prefer­ences. I have not tried a user-agent switcher in Chrome, and maybe that would work. But I have failed to get the video to play di­rectly in Chrome, so maybe this is an Ap­ple stream­ing for­mat that Sa­fari im­ple­ments and Chrome doesn’t. In that case the flash player has an im­por­tant role of im­ple­ment­ing it.

• Looks like you can just aim youtube-dl at the URL and it’ll start down­load­ing.

• I’ve lost my curiosity. I have noticed that over the course of the last year, I have become significantly less curious. I no longer feel the need to know anything unless I need it; I don’t understand how it is possible to desire knowledge for the sake of knowledge (even though the past me definitely did); I generally find myself unable to empathize with knowledge-seekers and the virtue of curiosity. That worries me a lot, because if you had asked me two years earlier, I would have named curiosity as my main characteristic and the desire for knowledge my main driving force. Thinking over the last year, I can’t remember any life-changing experiences that would have warranted the change. Could it have been the foods I ate, or some neurological damage? I would have attributed it to brain aging, if I weren’t twenty. What happened? How can I reverse it? I find it crippling.

• Listen to your­self. You want to know what hap­pened to you. You’re still a cu­ri­ous per­son.

Even if you don’t feel like you want to learn in gen­eral, you want to want to learn. You’re on the path to switch­ing from undi­rected to di­rected, from chaotic to pur­pose­ful cu­ri­os­ity. You already know how to pur­sue a ques­tion; now you need to find what ques­tions mat­ter to you.

• The source of my wanting is conscience rather than passion, though. It’s a completely different thing, and learning is a tiring activity whose importance I realize, rather than something that empowers me or something I look forward to. That’s the problem.

• You could be de­pressed.

• I don’t feel depressed at all. On the contrary, I am quite motivated, agitated, and sort of happy.

• I’ve felt that lack of cu­ri­os­ity a fair amount over the past 5-10 years. I sus­pect the biggest change that re­duced my cu­ri­os­ity was be­com­ing fi­nan­cially se­cure. Or maybe some other changes which made me feel more se­cure.

I doubt that I ever sought knowl­edge for the sake of knowl­edge, even when it felt like I was do­ing that. It seems more plau­si­ble that I had hid­den mo­tives such as the de­sire to im­press peo­ple with the breadth or so­phis­ti­ca­tion of my knowl­edge.

LessWrong at­ti­tudes to­ward poli­tics may have re­duced some as­pects of my cu­ri­os­ity by mak­ing it clear that my cu­ri­os­ity in many ar­eas had been mo­ti­vated by a de­sire to sig­nal tribal mem­ber­ship. That hasn’t en­abled me to redi­rect cu­ri­os­ity to­ward more pro­duc­tive ar­eas, but I’m prob­a­bly bet­ter off with­out those as­pects of cu­ri­os­ity.

• I am definitely not bet­ter off with­out what I lost. Gen­uine cu­ri­os­ity had tremen­dously pow­er­ful effect on my learn­ing.

• Consider: exploration/exploitation. Maybe some part of you has decided that it’s time to stop exploring education and it’s time to exploit the knowledge you already have? Do you feel like you have a lot of knowledge now? Or that you know enough? Is your relationship to knowledge-seeking now in the form of “disinterest”, “too busy for it”, “sick of it”, or some other sentiment...

(also as Ar­tax­erxes said—de­pres­sion, or other brain chem­i­cal things that this could be a symp­tom of)

• In our college, students of the first four years were rumoured to be going through the exploration phase, and then—satiety and exploitation. It certainly felt that way to me, and anecdotally to a person a year younger, but of course it might be just because of the specific curriculum structure. (I am a botanist.)

• Maybe some part of you has decided that it’s time to stop exploring education and it’s time to exploit the knowledge you already have? Do you feel like you have a lot of knowledge now? Or that you know enough

No, I definitely didn’t learn everything I think I need. I am very much in need of learning a lot of things—desperately, in fact.

Is your re­la­tion­ship to knowl­edge seek­ing now in the form of “dis­in­ter­est”, “too busy for it”, “sick of it” or some other sen­ti­ment...

I still pursue knowledge from a pragmatic standpoint: “This is useful, this is not; therefore I need to learn this and can completely disregard that.” There is just no “drive” in it, no genuine force of curiosity that used to be so motivating. From a pragmatic standpoint, my ability to learn has suffered a great hit.

• Have you tried to look at any new ar­eas re­cently? Per­haps you are get­ting kind of “bored” by the rep­e­ti­tion.

• Sort of yes. Maybe not suffi­ciently new. I shall look into it.

• I had to con­sciously make my­self read ar­ti­cles on the topic of my PhD topic (and not un­re­lated stuff, so much more in­ter­est­ing), so you just might be lucky! Or even if you don’t think so, you can use this prop­erty, at least.

• Recently I sent a message to an old friend who had stopped talking to me a while ago. I asked if he was done ignoring me and he said something along the lines of ‘you’re temperamental, clearly delusional and gullible, which is something I can live without’. Now, I was wondering how I could improve with respect to the impressions I make socially, since I am currently doing well at managing them with respect to personal wellbeing. I’d like to step past how his comments are hurtful, and recognise better how my behaviour may have hurt and continues to hurt people I know, and what I can do to improve. All tips welcome.

• “Are you done ig­nor­ing me?” is at­tribut­ing a bad mo­tive to nor­mal hu­man be­hav­ior (peo­ple lose con­tact with old friends on a pretty reg­u­lar ba­sis). So that’s a very bad way to start such a con­ver­sa­tion, and may in­di­cate some­thing about why he re­sponded the way that he did.

• I guess that you per­son­ally would profit from more fil­ter­ing of your thoughts be­fore you ex­press them to other peo­ple. On LW you could eas­ily have a higher pos­i­tive karma rat­ing than 53% by think­ing more about how other peo­ple are likely to re­ceive your posts. LW karma isn’t perfect but it’s an easy sig­nal you can use as feed­back.

When it comes to face to face in­ter­ac­tion I think high feed­back work­shops are good. I would avoid PUA style train­ing that cen­ters around an­tag­o­nis­tic in­ter­ac­tions. If you want to speak with­out much fil­ter­ing Rad­i­cal Hon­esty work­shops and Authen­tic Re­lat­ing/​Cir­cling work­shops can help you to com­mu­ni­cate in a so­cially ac­cept­able way.

• Don’t gen­er­al­ize from a sam­ple of one. You should pay at­ten­tion to in­ter­ac­tions on a mo­ment to mo­ment ba­sis and keep track of out­comes. If you do find that peo­ple start to glaze over when you start talk­ing about alien ab­duc­tions, you might hy­poth­e­size that “delu­sional and gullible” is some­thing that mul­ti­ple peo­ple would agree to (or, al­ter­na­tively, that it is bor­ing sub­ject, which is also use­ful in­for­ma­tion). If peo­ple seem sur­prised when you ex­press an­noy­ance, this may in­di­cate that they would agree that you are tem­per­a­men­tal. If you don’t see this when in­ter­act­ing with other peo­ple, it is pos­si­ble that it is your old friend who is ac­tu­ally tem­per­a­men­tal and delu­sional.

• Now, I was wondering how I could improve with respect to the impressions I make socially, since I am currently doing well at managing them with respect to personal wellbeing.

Per­haps I’m mis­in­ter­pret­ing you, but I read the above to be ask­ing, “How can I im­prove at x even though im­prov­ing at x won’t in­crease my wellbe­ing?”

• Have you tried ask­ing him why he thought these things?

• I asked him prompted by this sug­ges­tion:

I asked him why he believes that, and he has now responded to say he’s been telling me for years why and it just goes right over my head, and he doesn’t want to be part of that anymore.

So not re­ally sure why...

• I’m rather frus­trated that there’s not a guide to be­ing gen­er­ally healthier that uses prob­a­bil­ities and pay­offs and such to con­vince read­ers that they should bother to do any spe­cific ac­tivity, or adopt any spe­cific in­ter­ven­tion to make them­selves healthier. Health in­for­ma­tion is so di­s­or­ga­nized—which is fine for the cut­ting edge stuff, but for stuff that many peo­ple get that we’ve known how to treat for a while, such as cav­i­ties, acid re­flux, and so on, I feel like it should be way the buck eas­ier to find de­tailed info on how much cer­tain ac­tivi­ties in­crease or de­crease your risk of get­ting that prob­lem by, and what the base rate is.

For ex­am­ple, a week ago, I would have guessed that maybe 5% of adults in the US had ever had a cav­ity, but a quick Google search sug­gests that the ac­tual num­ber is closer to 95%. I’ve gone from rarely floss­ing to floss­ing daily since find­ing this out!

• Agreed. I re­ally wish that there was a site like we­bMD that ac­tu­ally in­cluded rates of the dis­eases and the symp­toms. I don’t think it would be a big step to go from there to some­thing that would ac­tu­ally pro­pose cost-effec­tive tests for you based on your symp­toms.

e.g. You se­lect sore-throat and fever as symp­toms and it says that out of peo­ple with those symp­toms, 70% have a cold, 25% have a strep in­fec­tion and 5% have some­thing else (these num­bers are com­pletely made up). An even bet­ter sys­tem would then look at which tests you could do to bet­ter nail down the prob­a­bil­ities, which could be as sim­ple as ask­ing some ques­tions like “Do you have any visi­ble rashes?” or ask­ing for test re­sults like a quick strep test.
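The kind of update described can be sketched with Bayes’ rule. All the rates and likelihoods below are made up for illustration, just like the comment’s numbers; the rash question echoes the comment’s example.

```python
def posterior(priors, likelihoods):
    """Bayes' rule over a set of candidate diagnoses.

    priors: dict disease -> base rate among people presenting
    likelihoods: dict disease -> P(observed answer | disease)
    """
    unnorm = {d: priors[d] * likelihoods[d] for d in priors}
    total = sum(unnorm.values())
    return {d: p / total for d, p in unnorm.items()}

# Made-up base rates for sore throat + fever, mirroring the comment.
priors = {"cold": 0.70, "strep": 0.25, "other": 0.05}

# Hypothetical likelihoods of answering "yes" to "Do you have any visible
# rashes?" under each diagnosis -- illustrative numbers only.
rash_likelihood = {"cold": 0.02, "strep": 0.30, "other": 0.10}

print(posterior(priors, rash_likelihood))
```

A site like the one proposed would just be this loop run over a catalog of symptoms and tests, picking the next question by how much it is expected to shift the posterior.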

• rates of the diseases

There is a not in­sur­mountable but a pretty large prob­lem here. Rates for which groups? There are a LOT of rele­vant sub­groups (sex, age, eth­nic­ity, so­cial group, ge­o­graphic group, cur­rent med­i­cal con­di­tions, pre­vi­ous med­i­cal con­di­tions, diet, etc.).

Med­i­cal di­ag­nos­tic ex­pert sys­tems ex­ist and do rea­son­ably well, but they are not triv­ial.

On a prac­ti­cal note, the doc­tors’ guild is likely to take a lud­dite po­si­tion to­wards this X-/​

• what the base rate is

If you just want an overall picture, CDC publishes mortality and morbidity tables, I believe, which should supply you with some sort of base rates.

• Same with know-how about how society actually runs. School should tell you how to use social services and a lot of basic law.

• That is why we have the stupid ques­tions thread and the bor­ing ad­vice repos­i­tory.

• I’ve been read­ing about the difficult prob­lem of build­ing an in­tel­li­gent agent A that can prove a more in­tel­li­gent ver­sion of it­self, A’, will be­have ac­cord­ing to A’s val­ues. It made me start won­der­ing: what does it mean when a per­son “proves” some­thing to them­selves or oth­ers? Is it the men­tal state change that’s im­por­tant? The ex­ter­nal ma­nipu­la­tion of sym­bols?

• Proof, in this case, means that using only a restricted set of rules, you are able to rewrite a set of initial assumptions to get the desired conclusion. The rules are supposed to conserve, every time they are used, the truth status of the assertions they are applied to.
In this case, if the derivation is correct and both agents believe in the same environment logic, then the mental state change should be a consequence of the strict symbol manipulation. Note that ‘two agents’ might mean ‘the same agent in the past and in the future of the derivation’.
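As a concrete miniature (Lean 4 syntax), a proof is literally a term built only from legal applications of the inference rules, and the checker’s job is purely symbolic:

```lean
-- From assumptions hp : p and hq : q, derive p ∧ q by the
-- and-introduction rule (the anonymous constructor ⟨·, ·⟩).
theorem and_intro_example (p q : Prop) (hp : p) (hq : q) : p ∧ q :=
  ⟨hp, hq⟩
```

Any “mental state change” in a reader who accepts the rules then follows from checking that each step is one of the permitted rewrites.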

• At least two major classes of existential risk, AI and physics experiments, are areas where a lot of math can come into play. In the case of AI, this is understanding whether hard take-offs are possible or likely and whether an AI can be provably Friendly. In the case of physics experiments, the issue is the analysis establishing that the experiments are safe.

In both these cases, little attention is paid to the precise axiomatic system being used for the results. Should this be concerning? If, for example, some sort of result about Friendliness is proven rigorously, but the proof lives in ZFC set theory, then there’s the risk that ZFC may turn out to be inconsistent. Similar remarks apply to analyses that various physics experiments are unlikely to cause serious problems like a false vacuum collapse.

In this context, should more resources be spent on making sure that proofs occur in their absolute minimum axiomatic systems, such as conservative extensions of Peano Arithmetic or near-conservative extensions?

• I would think it faster to search for proofs of any kind, then sim­plify to an el­e­men­tary/​con­struc­tive/​ma­chine ver­ifi­able proof.

• What do you mean?

• If you’re at the state where the worst thing about a proof is that it relies on the axiom of choice, you’re practically at the finish line (at least compared to here). Once a proof has been discovered, mathematicians have a pretty good track record of whittling it down to rest on fewer assumptions. From my (uninformed dilettante’s) perspective, it’s not worth limiting your toolset until you’ve found some solution to your problem. Any solution, even ones which rest on unproven conjectures, will teach you a lot.

• Ah, yes, I think that makes sense. And ob­vi­ously a proof of say Friendli­ness in ZFC is a lot bet­ter than no proof at all.

• Not that it counts much, but I do believe that ZFC is inconsistent.

• Why do you be­lieve that? And do you also be­lieve that ZF is in­con­sis­tent?

• Yes. It’s not the Choice axiom which is problematic, but infinity itself. So it doesn’t matter if it’s ZF or ZFC.

Why do I believe this? It has been known for some time now that you can’t have a uniform probability distribution over the set of all naturals. That would be an express road to paradoxes.

The problem is that even if you have a probability distribution where P(0)=0.5, P(1)=0.25, P(2)=0.125, and so on … you can then invoke a super-task of swapping two random naturals (drawn using this distribution) at time 0. Then the next swap at time 0.5. Then the next at 0.75 … and so on.

The question is, what is the probability that 0 will remain in its place? It can’t be more than 0 after the super-task completes, just a second later. On the other hand, for every other number, the probability of being in the leftmost position is also zero.

We apparently can construct a uniform distribution over the naturals. Which is bad.
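One piece of this argument can be checked numerically: with P(n) = 2^(−(n+1)), the chance that 0 is never even selected across k independent swaps is (1/2 · 1/2)^k, which vanishes as k grows. This is only a lower-bound ingredient, though—0 could also be swapped away and back—so it doesn’t by itself settle the claimed paradox.

```python
def p_zero_untouched(k):
    """P(0 is never picked in k swaps), each pick drawn with P(n) = 2**-(n+1)."""
    p_pick_is_zero = 0.5                             # P(0) under this distribution
    p_swap_misses_zero = (1 - p_pick_is_zero) ** 2   # both picks in a swap miss 0
    return p_swap_misses_zero ** k

for k in (1, 10, 100):
    print(k, p_zero_untouched(k))  # 0.25, then rapidly toward 0
```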

• The limit of your dis­tri­bu­tions is not a dis­tri­bu­tion so there’s no prob­lem.

If there’s any sort of in­con­sis­tency in ZF or PA or any other ma­jor sys­tem cur­rently in use, it will be much harder to find than this. At a meta level, if there were this ba­sic a prob­lem, don’t you think it would have already been no­ticed?

• If there’s any sort of in­con­sis­tency in ZF or PA or any other ma­jor sys­tem cur­rently in use, it will be much harder to find than this.

In­deed, since you can prove ZFC con­sis­tent with the aid of an in­ac­cessible car­di­nal. And you can prove the con­sis­tency of an in­ac­cessible car­di­nal with a Mahlo car­di­nal, and so on.

• I’m not sure that’s strong ev­i­dence for the the­sis in ques­tion. If ZFC had a low-ly­ing in­con­sis­tency, ZFC+an in­ac­cessible car­di­nal would still prove ZFC con­sis­tent, but it would be it­self an in­con­sis­tent sys­tem that was effec­tively ly­ing to you. Same re­marks ap­ply to any large car­di­nal ax­iom.

• What can one expect to see after this super-task is done?

Noth­ing?

At a meta level, if there were this ba­sic a prob­lem, don’t you think it would have already been no­ticed?

It has been no­ticed, but never re­solved prop­erly. A con­sen­sus among top math­e­mat­i­ci­ans, that ev­ery­thing is/​must be okay pre­vails.

One dis­si­dent.

• What can one expect to see after this super-task is done?

This ques­tion pre­sup­poses that the task will ever be done. Since, if I un­der­stand cor­rectly, you are do­ing an in­finite num­ber of swaps, you will never be done.

You could similarly define a su­per-task (what­ever that is) of adding 1 to a num­ber. Start with 0, at time 0 add 1, add one more at time 0.5, and again at 0.75. What is the value when you are done? Clearly you are count­ing to in­finity, so even though you started with a nat­u­ral num­ber, you don’t end up with one. That is be­cause you don’t “end up” at all.

• This ques­tion pre­sup­poses that the task will ever be done

“a su­per­task is a countably in­finite se­quence of op­er­a­tions that oc­cur se­quen­tially within a finite in­ter­val of time.”

You can’t avoid su­per­tasks, when you en­dorse in­finity.

There­fore, I don’t.

• What you are do­ing in many ways amounts to the 18th and early 19th cen­tury ar­gu­ments over whether 1-1+1-1+1-1… con­verged and if so to what. First for­mal­ize what you mean, and then get an an­swer. And a rough in­tu­ition of what should for­mally work that leads to a prob­lem is not at all the same thing as an in­con­sis­tency in ei­ther PA or ZFC.

• There are no ax­ioms of ZFC that im­ply that such a task can be com­pleted.

• This question presupposes that the task will ever be done

Sure. It’s called super-tasks.

From math­e­mat­ics we know that not all se­quences con­verge. So the se­quence of dis­tri­bu­tions that you gave, or my ex­am­ple of the se­quence 0,1,2,3,4,… both don’t con­verge. Cal­ling them a su­per­task doesn’t change that fact.

What math­e­mat­i­ci­ans of­ten do in such cases is to define a new ob­ject to de­note the hy­po­thet­i­cal value at the end of se­quence. This is how you end up with real num­bers, dis­tri­bu­tions (gen­er­al­ized func­tions), etc. To be fully for­mal you would have to keep track of the se­quence it­self, which for real num­bers gives you Cauchy se­quences for in­stance. In most cases these ob­jects be­have a lot like the el­e­ments of the se­quence, so real num­bers are a lot like ra­tio­nal num­bers. But not always, and some­times there is some weird­ness.
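That construction can be sketched concretely. Here is a small illustration of a Cauchy sequence of rationals whose limit is a genuinely new object (the function name is invented; exact rational arithmetic comes from Python’s standard `fractions` module):

```python
from fractions import Fraction

def babylonian_sqrt2(n_steps):
    """Generate a Cauchy sequence of rationals converging to sqrt(2).

    Every iterate is an exact rational, but the limit, sqrt(2), is not
    rational -- so the "value at the end" is a new kind of object.
    """
    x = Fraction(1)
    seq = [x]
    for _ in range(n_steps):
        x = (x + 2 / x) / 2  # Newton/Babylonian step; stays rational
        seq.append(x)
    return seq

seq = babylonian_sqrt2(5)

# Consecutive terms get closer and closer (the Cauchy property) ...
gaps = [abs(a - b) for a, b in zip(seq, seq[1:])]
assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))

# ... yet no term of the sequence ever equals the limit:
assert all(x * x != 2 for x in seq)
```

The point of the sketch is exactly the remark above: the limit object mostly behaves like the sequence elements, but it is not one of them.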

In philos­o­phy, a su­per­task is a countably in­finite se­quence of op­er­a­tions that oc­cur se­quen­tially within a finite in­ter­val of time.

This refers to some­thing called “time”. Most of math­e­mat­ics, ZFC in­cluded, has no no­tion of time. Now, you could take a vari­able, and call it time. And you can say that a given countably in­finite se­quences “takes place” in finite “time”. But that is just you putting se­man­tics on this se­quence and this vari­able.

• So the se­quence of dis­tri­bu­tions that you gave, or my ex­am­ple of the se­quence 0,1,2,3,4,… both don’t con­verge. Cal­ling them a su­per­task doesn’t change that fact.

I don’t un­der­stand you.

• Your ques­tion of “af­ter finish­ing the su­per­task, what is the prob­a­bil­ity that 0 stays in place” doesn’t yet parse as a ques­tion in ZFC, be­cause you haven’t speci­fied what is meant by “af­ter finish­ing the su­per­task”. You need to for­mal­ize this no­tion be­fore we can say any­thing about it.

If you’re saying that there is no formalization you know of that makes sense in ZFC, then that’s fine, but that’s not necessarily a strike against ZFC unless you have a competitive alternative you’re offering. The problem could just be that it’s an ill-defined concept to begin with, or that you just haven’t found a good formalization. Just because your brain says “that sounds like it makes sense” doesn’t mean it actually makes sense.

To show that ZFC is in­con­sis­tent, you would need to dis­play a for­mal con­tra­dic­tion de­duced from the ZFC ax­ioms. “I can’t write down a for­mal­iza­tion of this nat­u­ral sound­ing con­cept” isn’t a for­mal con­tra­dic­tion; the failure is at the mod­el­ing step, not in­side the log­i­cal calcu­lus.

• Define the se­quence S by

S(0) = 0
S(n+1) = 1 + S(n)


This is a sequence of natural numbers. This sequence does not converge, which means that the limit as n goes to infinity of S(n) is not a natural number (nor a real number, for that matter).

You could try to write it as a func­tion of time, S’(t) such that S’(1-0.5^n) = S(n). That is, S’(0)=0, S’(0.5)=1, S’(0.75)=2, etc. A pos­si­ble for­mula is S’(t) = -log_2(1-t). You could then ask what is S’(1). The an­swer is that this is the same as the limit S(in­finity), or as log(0), which are both not defined. So in fact S’ is not a func­tion from num­bers be­tween 0 and 1 in­clu­sive to nat­u­ral or real num­bers, since the do­main ex­cludes 1.
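That formula can be checked numerically. A quick sketch using Python’s standard `math.log2` (the function name `S_prime` is just a stand-in for S’):

```python
import math

def S_prime(t):
    """S'(t) = -log2(1 - t); defined only for 0 <= t < 1."""
    return -math.log2(1 - t)

# S'(1 - 0.5**n) recovers S(n) = n at every finite step of the supertask ...
for n in range(20):
    assert abs(S_prime(1 - 0.5 ** n) - n) < 1e-9

# ... but t = 1, "when the task is done", lies outside the domain:
try:
    S_prime(1.0)
except ValueError:
    pass  # log2(0) is undefined, matching the point made above
```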

You can similarly define a se­quence of dis­tri­bu­tions over the nat­u­ral num­bers by

T(0) = {i → 0.5 * 0.5^i}
T(n+1) = the same as T(n) ex­cept two val­ues swapped


This is the ex­am­ple that you gave above. The se­quence T(n) doesn’t con­verge (I haven’t checked, but the dis­cus­sion above sug­gests that it doesn’t), mean­ing that the limit “lim_{n->inf} T(n)” is not defined.

• Thomas, please read and un­der­stand query’s re­sponse above. In at­tempt­ing to dis­man­tle a con­cept you don’t like, you’ve lost pre­ci­sion. For­mal­ize your ques­tions and con­cerns rigor­ously and then see if a seem­ing con­tra­dic­tion is still there.

• Phrasing it as a “super-task” relies on intuitions that are not easily formalized in either PA or ZFC. Think instead in terms of a limit: take your nth distribution and let n go to infinity. This avoids the intuitive issues. Then just ask what you mean by the limit. You are taking what amounts to a pointwise limit. At this point, what matters is that it does not follow that a pointwise limit of probability distributions is itself a probability distribution.

If you prefer a different example that doesn’t obfuscate what is going on as much, we can do it just as well with the reals. Consider the situation where the nth distribution is uniform on the interval from n to n+1, and look at the limit of that (or, if you insist, move back to having it speed up over time to make it a supertask). Visually, what is happening at each step is a little 1-by-1 square moving one unit to the right. Now note that the limit of these distributions is zero everywhere: not zero in the nice sense of vanishing at each specific point while still integrating to a finite quantity, but genuinely zero.

This is es­sen­tially the same situ­a­tion, so noth­ing in your situ­a­tion has to do with spe­cific as­pects of countable sets.
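The moving-box picture can be checked numerically. A small sketch (the grid-based Riemann sum is only a sanity check, not a proof):

```python
def f(n, x):
    """Density of the nth distribution: uniform on the interval [n, n+1)."""
    return 1.0 if n <= x < n + 1 else 0.0

# Each f(n, .) is a genuine density: it integrates to 1 (Riemann sum check).
mass = sum(f(5, k / 1000) * (1 / 1000) for k in range(10_000))
assert abs(mass - 1.0) < 1e-9

# But fix any point x: once n > x, the box has moved past it, so the
# pointwise limit lim_{n -> inf} f(n, x) is 0 -- at *every* x.
for x in (0.0, 2.5, 1000.0):
    assert all(f(n, x) == 0.0 for n in range(int(x) + 1, int(x) + 100))
```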

• Wildberger’s complaints are well known, and frankly not taken very seriously. The most positive thing one can say about them is that some of the ideas in his rational trigonometry do have some interesting math behind them, but that’s it. Pretty much no mathematician who has listened to what he has to say has taken any of it seriously.

• Sure, I know he is not taken very se­ri­ously. That is his own point, too.

In the time of Carl Sagan, in the year 1986 or so, I became an anti-Saganist. I realized that his million civilizations in our galaxy alone was utter bullshit. Most likely only one exists.

Every sin­gle as­tro-biol­o­gist or biol­o­gist would have said to a dis­si­dent like my­self—you don’t un­der­stand evolu­tion, sire, it’s manda­tory!

20 years later, on this site, Rare Earth is a dom­i­nant po­si­tion. Or at least—no aliens po­si­tion.

On the National Geographic channel and elsewhere, you can still hear “how a previously unexpected number of Earth-like planets will be detected”.

I am not afraid of math­e­mat­i­ci­ans more than of as­tro­biol­o­gists. Largely unim­pressed.

• I’m not sure what your point is here. Yes, experts sometimes have a consensus that turns out to be wrong. If one is lucky, one can even turn out to be right when the experts are wrong, if one takes sufficiently many contrarian positions (although the idea that many millions of civilizations in our galaxy was ever universal among both biologists and astro-biologists is definitely questionable), but in this case, the experts have really thought about these ideas a lot, and haven’t gotten anywhere.

If you prefer an example other than Wildberger: when Edward Nelson claimed to have a contradiction in PA, many serious mathematicians looked at what he had done. It isn’t like there’s some special mathematical mob which goes around suppressing these things. I literally had a lunch-time conversation a few days ago with another mathematician where the primary topic was essentially: if there is an inconsistency in ZFC, where would we expect to find it, and how much of math would likely be salvageable? In fact, that conversation was one of the things that led me to the initial question in this subthread.

I am not afraid of math­e­mat­i­ci­ans more than of as­tro­biol­o­gists. Largely unim­pressed.

Nei­ther of these groups are groups you should be afraid of and I’m a lit­tle con­fused as why you think fear should be rele­vant.

• Yes. It’s not the Choice axiom which is problematic, but infinity itself. So it doesn’t matter if it’s ZF or ZFC.

I doubt that any proof in FAI will use in­fini­tary meth­ods.

• I’m not sure why you think that. This may depend strongly on what you mean by an infinitary method. Is induction infinitary? Is transfinite induction infinitary?

• Physics is only good, when you ex­pel all the in­fini­ties out of it.

Even more so for a sub­set of physics, such as FAI or molec­u­lar dy­nam­ics or some­thing.

Well, some of us think that this should be ap­plied to the math­e­mat­ics it­self.

• Physics is only good, when you ex­pel all the in­fini­ties out of it.

I’m not sure what you mean by this, and in so far as I can un­der­stand it doesn’t seem to be true. Physi­cists use the real num­bers all the time which are an in­finite set. They use in­te­gra­tion and differ­en­ti­a­tion which in­volves limits. So what do you mean?

• I’m not sure what you mean by this, and in so far as I can un­der­stand it doesn’t seem to be true. Physi­cists use the real num­bers all the time which are an in­finite set.

https://​​physics.aps.org/​​ar­ti­cles/​​v2/​​70

http://​​blogs.dis­cov­er­magaz­ine.com/​​crux/​​2015/​​02/​​20/​​in­finity-ru­in­ing-physics/​​#.Vh0LnHqqpBc

Now, when there is no God, the Infinity is its substitute; most people would love it to exist. But it’s just another blunder.

• I’m not sure what you mean by this, and in so far as I can un­der­stand it doesn’t seem to be true. Physi­cists use the real num­bers all the time which are an in­finite set.

https://​​physics.aps.org/​​ar­ti­cles/​​v2/​​70

The prob­lem there is that cer­tain spe­cific mod­els of physics end up giv­ing in­finite val­ues for mea­surable quan­tities—this is a known prob­lem and has been an area of ac­tive re­search since early work with renor­mal­iza­tion in the 1930s. This is not at all an at­tempt to ban­ish in­finity in any gen­eral sense.

Now, when there is no God, the Infinity is its substitute; most people would love it to exist. But it’s just another blunder.

This is rhetoric with­out con­tent.

• This is not at all an at­tempt to ban­ish in­finity in any gen­eral sense.

Of course it is. Noth­ing in­finite has been spot­ted so far.

This is rhetoric with­out con­tent.

Is it? Is this same “rhetoric” against aliens also without content? If I say that people want aliens because they have lost angels, is this really without content?

Not only that there is no in­finite God, even in­finite sets are prob­a­bly just a mir­a­cle.

• This is not at all an at­tempt to ban­ish in­finity in any gen­eral sense.

Of course it is. Noth­ing in­finite has been spot­ted so far.

I’m not sure how your sen­tence is a re­sponse to my sen­tence.

This is rhetoric with­out con­tent.

Is it? Is this same “rhetoric” against aliens also without content? If I say that people want aliens because they have lost angels, is this really without content?

Not only that there is no in­finite God, even in­finite sets are prob­a­bly just a mir­a­cle.

Gen­er­ally, yes, the con­tent level is pretty low. It es­sen­tially amounts to Bul­verism, where one is fo­cus­ing on claimed in­tents and mo­tives rather than fo­cus­ing on the sub­stan­tive is­sue of whether there’s an in­con­sis­tency in PA or ZFC that can arise due to is­sues with su­per­tasks or other ideas re­lated to in­finity.

It may well be that specific people or groups have adopted aliens in a way that essentially replaces deities. The Raelians and other New Age groups certainly fall into that category. But it is a mistake to therefore claim that, in general, people believe in aliens as a replacement for belief in a deity. And it is an even more serious mistake to make such claims about infinite sets. If you see physicists praying to infinite sets, or claiming that infinite sets are responsible for the creation of the universe or humanity, or claiming that infinite sets will somehow save us, or claiming that infinite sets have an agency to them, or claiming that infinite sets have a special mystery and majesty to them that merits worship, or if they start wars with or excommunicate people who don’t believe in infinite sets or believe in a different type of infinite set, then there would be an argument.

• I don’t give a damn about in­finity. If it is doable, why not? But is it? That’s the only ques­tion.

Then, a supertask mixes the infinite set of naturals and we are witnessing “the irresistible force acting on an unmovable object”. What the Hell will happen? Will we have finite numbers in the first 1000 places? We should, but bigger ones, no matter which they turn out to be.

The “ir­re­sistible force” is just an empty word. And so is “un­mov­able ob­ject”. And so is “in­finity” and so is “su­per­task”.

Empty words. So ev­ery the­ory which en­com­passes them is flawed. More than likely.

And yes, su­per­task can be es­tab­lished in ZFC.

The topic is also ex­er­cised here:

• There is an ar­gu­ment there, but it cer­tainly is not one based on ZFC, since no ax­iom of set the­ory says any­thing about time or what can be ac­com­plished in time.

• So you say, ZFC has nothing to do with time? Time in physics is not covered by ZFC?

• Physics is built on top of math­e­mat­ics, and al­most all of math­e­mat­ics can be built on top of ZFC (there are other choices). But there is as much time in ZFC as there are words in a sin­gle pixel on your screen.

• I don’t give a damn about in­finity. If it is doable, why not? But is it? That’s the only ques­tion.

I’m not sure what you mean by this, es­pe­cially given your ear­lier fo­cus on whether in­finity ex­ists and whether us­ing it in physics is akin to re­li­gion. I’m also not sure what “it” is in your sen­tence, but it seems to be the su­per­task in ques­tion. I’m not sure in that con­text what you mean by “doable.”

Then, a supertask mixes the infinite set of naturals and we are witnessing “the irresistible force acting on an unmovable object”. What the Hell will happen? Will we have finite numbers in the first 1000 places? We should, but bigger ones, no matter which they turn out to be.

The “ir­re­sistible force” is just an empty word. And so is “un­mov­able ob­ject”. And so is “in­finity” and so is “su­per­task”.

I’m not at all sure what this means. Can you please stop using analogies and give a specific example of how to formalize this contradiction in ZFC?

The topic is also ex­er­cised here:

This seems to be es­sen­tially the same ar­gu­ment and it seems like the ex­act same prob­lem: an as­sump­tion that an in­tu­itive limit must ex­ist. Limits don’t always ex­ist when you want them to, and we have a lot of the­o­rems about when a point-wise limit makes sense. None of them ap­ply here.

• Just an­swer me a sim­ple ques­tion.

What do the first 1000 naturals look like after the mixing supertask described above has finished its job?

You may say that this su­per­task is im­pos­si­ble.

You may say that there is no set of all nat­u­rals.

What­ever you think about it. Every­thing else is pretty re­dun­dant.

• I don’t think this con­ver­sa­tion is be­ing very pro­duc­tive so this is likely my fi­nal re­ply.

Just an­swer me a sim­ple ques­tion.

What do the first 1000 naturals look like after the mixing supertask described above has finished its job?

You may say that this su­per­task is im­pos­si­ble.

You may say that there is no set of all nat­u­rals.

The resulting pointwise limit exists, and it gives each positive integer a probability of zero. This is fine, because the pointwise limit of a sequence of distributions on a countable set is not necessarily itself a distribution. Please take a basic real analysis course.
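The exact swap scheme from earlier in the thread isn’t reproduced here, but the same phenomenon already shows up in the simplest possible sketch: uniform distributions on growing finite sets (names invented for illustration):

```python
def U(n):
    """Uniform distribution on {0, ..., n-1}, as a dict of probabilities."""
    return {i: 1.0 / n for i in range(n)}

# Every U(n) is a genuine distribution: its masses sum to 1.
for n in (1, 10, 1000):
    assert abs(sum(U(n).values()) - 1.0) < 1e-9

# But the pointwise limit assigns each fixed integer i the mass
# lim_{n -> inf} 1/n = 0, so the "limit object" sums to 0, not 1.
i = 7
masses = [U(n).get(i, 0.0) for n in (10, 100, 1000, 10_000)]
assert masses == [0.1, 0.01, 0.001, 0.0001]
```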

• One of the open problems MIRI is working on for FAI is exactly this type of logical uncertainty. An FAI should be able to modify itself if it finds out the logic underlying its basic programming is incorrect.

• Why isn’t there a good way of do­ing sym­bolic math on a com­puter?

I want to brush up on my prob­a­bil­ity the­ory. I hate us­ing a pen and pa­per, I lose them, they get dam­aged, and my hand­writ­ing is slow and messy.

In my mind I can envisage a simple symbolic math editor with keyboard shortcuts for common symbols, that would allow you to edit nice, neat latex-style equations as easily as I can edit text. Markdown would be acceptable as long as I can see the equation in its pretty form next to it. This doesn’t seem to exist. Python-based symbolic math systems, like ‘sagemath’, are hopelessly clunky. Mathematica, although I can’t afford it, doesn’t seem to be what I want either. I want to be able to write math fast, to aid my thinking while proving theorems and doing problems from a textbook, not have the computer do the thinking for me. Latex equation editors I’ve seen are all similarly unwieldy—waiting 10 seconds for it to build the pdf document is totally disruptive to my thought process.

Why isn’t this a solved prob­lem? Is it just that no­body does this kind of thing on a com­puter? Do I have to over­come my ha­tred of dead tree me­dia and buy a pen­cil sharp­ener?

• What would be re­ally nice is tablet soft­ware that can trans­late hand­writ­ten math into la­tex, and com­pile that into pdf.

By the way, what I think you want is not “do­ing sym­bolic math on a com­puter,” but “hav­ing a good in­put method for equa­tions.”

edit: Also can some­one please write a good mod­ern pro­gram­ming lan­guage for type­set­ting? With all due re­spect to Dr. Knuth, tex is awful.

• Also can some­one please write a good mod­ern pro­gram­ming lan­guage for type­set­ting? With all due re­spect to Dr. Knuth, tex is awful.

TeX as a lan­guage is awful, but what it can do is won­der­ful. And of course ev­ery­one uses LaTeX (TeX made us­able by Lam­port), or at least I do, so I see lit­tle of the TeX lan­guage it­self. There was noth­ing like it when Knuth cre­ated it, and al­most forty years on, there is still noth­ing like it. As far as I know, the only other type­set­ting lan­guage that has gained even a niche is the hideous SGML, in com­par­i­son to which TeX is a thing of su­per­la­tive el­e­gance and beauty. TeX has a spe­cial­ised sub­lan­guage for math­e­mat­ics, both us­able for in­put (so far as lin­ear text can be) and gen­er­at­ing high-qual­ity out­put, so it be­came the stan­dard for doc­u­ment prepa­ra­tion in the math­e­mat­i­cally based sci­ences. It’s still in­fe­rior to hu­man type­set­ting, but that’s only available for fi­nal printer’s copy. What you had to do back then, well, trip down mem­ory lane omit­ted for brevity.

To do bet­ter than TeX, at this point, needs a lot more than com­ing up with a bet­ter lan­guage to think about type­set­ting with. It will have to repli­cate the TeX ecosys­tem, provide two-way con­ver­sion be­tween it and TeX, and have a vi­sual in­ter­face. Vi­sual in­ter­faces for pro­gram­ming lan­guages are re­ally hard, and they gen­er­ally don’t get de­vel­oped be­yond de­mos that wow au­di­ences and then go nowhere.

And it has to be done by one per­son, be­cause a com­mit­tee will just cre­ate a bloated, Tur­ing-com­plete mess.

Which is why it hasn’t hap­pened. It needs some­one with an ex­pert pas­sion for pro­gram­ming, tech­ni­cal type­set­ting, de­sign, and lan­guages con­sid­ered as a medium of thought. Knuth, Jony Ive, and Dijk­stra all in one. But any­one like that would have big­ger things to do with their tal­ents.

• Yes, I un­der­stand all that. It is hard to move away from shitty lan­guages once they gained mar­ket share.

But latex, while improving on many things compared to base tex, is hobbled by tex as well (for example, why do I need to recompile to resolve references? haven’t we invented multipass compilation like half a century ago?). I am happy to double down on “(La)tex is a shitty language.” It’s very useful of course, but the state of typesetting today is sort of like if everyone programmed in Cobol for some reason.

• But any­one like that would have big­ger things to do with their tal­ents.

That de­pends on what you con­sider to be big. It’s not big by the stan­dards of academia. But it might be big by the stan­dards of real world im­pact.

• I tend to use TeX­macs for this. It’s a WYSIWYG doc­u­ment ed­i­tor; you can en­ter math­e­mat­ics us­ing (La)TeX syn­tax, but there are also menus and key­board short­cuts. It’s free in both senses. No sym­bolic-ma­nipu­la­tion ca­pa­bil­ities of its own, but it has some abil­ity to con­nect to other things that do; I haven’t tried those out.

Math­e­mat­ica isn’t that far from what you want, I think, and it has the ad­van­tage of be­ing able to do a lot of the sym­bolic ma­nipu­la­tion for you. But, as you say, it’s re­ally ex­pen­sive—though if you haven’t checked out the home and (if ap­pli­ca­ble) stu­dent edi­tions, you should do so; they’re much cheaper. Any­way, the fact that to me it sounds close to what you want makes me sus­pect that I’m miss­ing or mi­s­un­der­stand­ing some of your re­quire­ments; if you could clar­ify how it doesn’t meet your needs it may help with sug­gest­ing other op­tions.

• YES. Thank you so much. Tex­macs seems to be ex­actly what I wanted.

• Ex­cel­lent! I will men­tion that I have oc­ca­sion­ally had it crash on me (this was in the past, prob­a­bly an older ver­sion of the soft­ware, so take it with a grain of salt—but you might want to be slightly more para­noid about sav­ing your work reg­u­larly than you would be with, say, a sim­ple text ed­i­tor).

• Been using it for an hour now, and yes, it’s crashed on me once, but no more than half the other programs I use. Already seeing the benefits of it: when I spent half an hour doing something and realised there was a mistake at the start, I could then just find/replace stuff instead of scrunching the paper up into a ball and cursing Pierre Laplace. Also I don’t have to deal with the aesthetic trauma of viewing my own handwriting. Outstanding.

• Would any of these be use­ful? That’s just a list I found by Googling /​MathJax ed­i­tor/​. I’m not fa­mil­iar with any of them. MathJax is a Javascript library for ren­der­ing math­e­mat­ics on web pages. The math­e­mat­ics is writ­ten in MathML.

I use pen and paper, and switch to LaTeX when I have something I need to preserve. It’s not very satisfactory, but since anything I might want to publish will have to go through LaTeX at some point, there’s no point in using any other format unless it has a LaTeX exporter. And pen and paper is far more instant than any method I can imagine of poking mathematics in through a keyboard.

• pen and pa­per is far more in­stant than any method I can imag­ine of pok­ing math­e­mat­ics in through a key­board.

Yeah… I think I just have to bite this bullet. If you do math professionally and the people you know work with pen and paper, then that’s the answer.

It’s just… I feel like I can imagine a system that would be better than pen and paper. There’s so much tedious repetition of symbols when I do algebra on paper, and inevitably while simplifying some big integral I write something wrong, and have to scratch it out, and the whole thing becomes a confusing mess. Writing my verbal thoughts down with a keyboard is just as quick and intuitive as pen and paper. There must be a better way...

• Would it make sense to write on a tablet and have the com­puter do OCR? (Hy­po­thet­i­cal sys­tem.)

• Yes, that would also be great, but I a) I can’t af­ford such a tablet, and b) I strongly sus­pect that the OCR would be in­ac­cu­rate enough that I’d end up wish­ing for a key­board any­way. Hell ac­cu­rate voice recog­ni­tion would be bet­ter, but I’m still wait­ing for that to hap­pen...

• Now that I think about it, OCR would be much harder for math than for text.

• It’s just… I feel like I can imagine a system that would be better than pen and paper.

That means there’s a pos­si­ble startup.

• Ha, in the­ory, but it looks like the guys at TeX­macs are already sel­l­ing the product for free, so no dice...

• My experience with my Kindle is that it’s better than regular paper books, while reading books on a smartphone isn’t. Currently most mathematicians use paper. If someone were to design a mathematical editor that’s better than paper, I think that could be a huge commercial success.

• I don’t know of a program that solves your problem either. But writing a transcompiler from mathematical markdown (mathdown?) to Latex should not be that difficult in F#. It should be a fun exercise, if you write out the formal grammar.
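To give a sense of scale for that exercise, here is a regex-based sketch in Python rather than F# (the “mathdown” mini-syntax and the rule set are invented purely for illustration; a real transcompiler would want a proper parser for the grammar):

```python
import re

# A toy "mathdown" -> LaTeX rewriter. Three regex rules stand in for a
# formal grammar; the input syntax (sqrt(x), a/b, sum_i=1..n) is made up.
RULES = [
    (re.compile(r"\bsqrt\(([^)]+)\)"), r"\\sqrt{\1}"),
    (re.compile(r"\b([A-Za-z0-9]+)/([A-Za-z0-9]+)\b"), r"\\frac{\1}{\2}"),
    (re.compile(r"\bsum_([A-Za-z])=([0-9]+)\.\.([A-Za-z0-9]+)\b"),
     r"\\sum_{\1=\2}^{\3}"),
]

def mathdown_to_latex(src: str) -> str:
    """Apply each rewrite rule in order; later rules see earlier output."""
    for pattern, replacement in RULES:
        src = pattern.sub(replacement, src)
    return src

latex = mathdown_to_latex("sum_i=1..n 1/i")
```

Regexes obviously won’t handle nesting; the point is only that a usable input shorthand for LaTeX is a small-project-sized idea, not a moonshot.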

• Yeah, I can imagine doing that all right—I wouldn’t actually mind writing in latex even; the problem is the lag. Building a latex document after each change takes time. If the latex were being built in a window next to it in real time (say, a 1-second lag would probably be fine), there’d be no problem. I’m not looking to publish the math, I just want a thought-aid.

• I be­lieve that there is an ed­i­tor called lyx that lets you do this.

• What ar­eas of math­e­mat­ics do I need to learn if I want to spe­cial­ize in for­mal episte­mol­ogy?

• Lin­ear alge­bra, func­tion op­ti­miza­tion, prob­a­bil­ity the­ory.

:)

• That’s it?

• I sup­pose modal log­ics of be­lief.

• Thanks! Ok, so now a more de­tailed ques­tion:

As I said, I’d like to do for­mal episte­mol­ogy. I’m an un­der­grad right now, and I need to de­cide on my ma­jor. If that’s about all the for­mal stuff I’ll need then there are a bunch of differ­ent ma­jors that in­clude that, and the ques­tion be­comes which ad­di­tional courses could help with for­mal episte­mol­ogy or re­lated dis­ci­plines.

Here’s what I’ve come up with so far:

• Choice 1: Ap­plied Statis­tics. This al­lows sev­eral elec­tives in other sub­jects, so I could do e.g. a minor in CS with only one or two ex­tra course re­quire­ments.

• Choice 2: Mathematical Statistics. Fewer electives in other subjects, more electives in math/stats. I could still probably do a CS minor along with it if I wanted.

• Choice 3: Math de­gree, pos­si­bly with a stats fo­cus.

• Choice 4: Some other de­gree (e.g., CS, eco­nomics) and just make sure to get the prob­a­bil­ity the­ory in at some point.

I’m any­way do­ing a minor in philos­o­phy, which in­cludes at least some logic.

• “Math so­phis­ti­ca­tion” is good, as is fa­mil­iar­ity with ba­sic stats and ML. In com­puter sci­ence depts., ML is of­ten taught at the grad level, though. Spe­cific ma­jor not so im­por­tant.

I found read­ing and do­ing proofs paid a lot of div­i­dends.

• In­spired by an in­ter­view an­swer given by Thiel to Fer­ris, I ask:

1. How can you become less of a competitor in order to become more successful?

2. Who are the smartest peo­ple you talk to on an on­go­ing ba­sis and do you learn from them?

• Thiel goes a bit deeper on 1. in his book.

1. How can you become less of a competitor in order to become more successful?

Not sure what ex­actly you meant here. But if you want to avoid be­ing “one of many peo­ple do­ing the same stuff”, your op­tions are, ap­prox­i­mately:

• find some­thing no one else does. Prob­lem is, other peo­ple may fol­low you, so this it­self is not enough.

• build a brand. No one else can pro­duce your brand, so now you meta-com­pete with other brands.

• es­tab­lish a monopoly. Try to put your­self in a po­si­tion where other peo­ple can’t com­pete with you be­cause they lack some crit­i­cal re­source.

• make a car­tel with your com­peti­tors, or bribe a gov­ern­ment offi­cial to make com­pe­ti­tion ille­gal. This is tech­ni­cally ille­gal, but not un­usual. Be sure you have the right friends, oth­er­wise you may risk prison.

• The closest thing my country has to a functioning libertarian political party is considering signing up with a campaign strategy firm from the USA called i320. They collect data on voters, then analyse it and spit out recommendations. But the data collection is done by volunteers from the party. I reckon it will be a bad idea, because the party won’t be able to switch data analytics firms in the future without losing access to the data. I happened to meet the party’s president the other day and he said to talk to his Vice President. I reckon they could work out a contract to give the party data ownership, but I doubt the firm will budge on that front. Any advice?

• I’ve been do­ing a ver­sion of in­ter­mit­tent fast­ing in which I eat one meal per day for around three months now, and I’ve lost a lot of weight. How­ever, I’ve been hav­ing acid re­flux (minus the heart­burn) for slightly longer than this, and de­spite hav­ing been on a strong dose of generic Pro­ton Pump In­hibitor for the last two and a half months, I’m still suffer­ing quite a bit. It also seems like eat­ing a lot at once can ex­ac­er­bate acid re­flux, so I’m con­sid­er­ing go­ing back to a reg­u­lar diet for a while to see what hap­pens. Maybe I’ll try eat­ing ex­actly twice a day, first. Since it seems like in­ter­mit­tent fast­ing is some­what com­mon here, has any­one else had similar is­sues?

• Ju­nior doc­tor here.

Differ­ent PPIs tend to work the same as each other. PPIs are pretty safe drugs, but hav­ing on­go­ing acid re­flux is it­self not that good for your health. You could try to re­duce it by stay­ing ver­ti­cal for a while af­ter eat­ing, by spac­ing your meals into at least two per day (even two within a cou­ple hours), and do­ing any other sim­ple sug­gested lifestyle mea­sures.

Ad­ding or switch­ing to a differ­ent class of an­tire­flux drug seems rash if you can just fix things with a lifestyle change.

• This seems like good ad­vice; thanks! I hadn’t looked into switch­ing drugs, but I had been cu­ri­ous as to whether switch­ing PPI’s might be helpful, so that’s good to know.

• I started an IF sched­ule where I eat from 4pm un­til 8pm a few months ago. I did have acid re­flux is­sues in the be­gin­ning, but that stopped af­ter a cou­ple of weeks. In my ex­pe­rience, the acid re­flux is worse if you eat shortly be­fore go­ing to bed. (In the be­gin­ning I ate un­til 9pm and went to bed at 10pm. Now I’m eat­ing from 5pm to about 6:30 and go to bed at 10, with no prob­lems. (I’ve had a sore throat for the last 4 years or so, but other than the acid re­flux thing when I started IF, this has pretty much re­mained un­changed, so I’m as­sum­ing the in­ter­mit­tent fast­ing isn’t mak­ing it bet­ter or worse.))

So you could try tak­ing a ≈2h break be­fore go­ing to bed (if you’re not do­ing that already), eat­ing twice a day, ex­per­i­ment­ing with differ­ent foods, talk­ing to a doc­tor, and if you still feel bad af­ter that, I would sug­gest go­ing back to a reg­u­lar diet. Three months seems like enough time for the body to ad­just as much as it’s ever go­ing to.

• Have you tried sleep­ing on your left side?

• Too many pro­ton pump in­hibitors may in­terfere with your ab­sorp­tion of min­er­als. You may want to have your blood tested for defi­cien­cies.

• I’m writing to solicit any particular questions you may have that I can keep in mind as I read, with a view to clearing up your questions after having pondered them while listening to the books.

I’ll be listening to one of these three audiobooks tomorrow (I’m running a 40k marathon, so I have plenty of time; and generally, if I find an audiobook uncompelling, that’s a stopping rule for me and I’ll shift to another book):

• Su­per­in­tel­li­gence: Paths, Dangers, Strate­gies by Nick Bostrom

• Expert Political Judgment: How Good Is It? How Can We Know? by Philip E. Tetlock

• Zero to One by Peter Thiel, Blake Masters

Fur­ther, I would like to take this op­por­tu­nity to so­licit com­mu­nity re­views of the book Mind­ware by Nis­bett. I have read com­pel­ling re­views:

The book is in many ways similar to Kah­ne­man’s book “Think­ing fast and slow”, in that it ex­plains where our rea­son­ing, de­duc­tions and in­fer­ences tend to go wrong. How­ever, Nis­bett takes the ex­tra step of try­ing to for­mu­late sim­ple laws that one can fol­low to avoid the psy­cholog­i­cal pit­falls that peo­ple of­ten fall into.

I am also interested to hear Lukeprog-style notes on each of these books, if any exist.

Some of my favourites of his:

Wired for war 1

WFW 2

WFW 3

Bet­ter an­gels of our nature

Bet­ter an­gels of our na­ture 2

BAOON 3

• Things that make us happy now may not make us happy in the fu­ture

• How much, if anything, would you be prepared to precommit to donating to MIRI in return for a public announcement on their part that they will publish their complete and uncensored technical research agenda?


• How can one as­sem­ble the next ‘Pay­pal Mafia’?

• I don’t know, but if you could get a work­ing plan by ask­ing on pub­lic boards, I’m pretty sure it wouldn’t be worth billions of dol­lars.

• Get a bunch of really smart people who are good at what they do and want to change the world, and get them to work for you.

• Please take this as a given: it is the job of adult males to impregnate as many females as possible, and it is the job of adult females to find a mate with resources, resources meaning wits, speed, strength, savvy… whatever.

To some ex­tent, think­ing log­i­cally runs counter to this.

Ergo, FWIW, use your head as a sec­ond opinion, not the first.

• Please take this as a given:

Why?

• High school is hard.