# asr

Karma: 1,199
• It’s a tempting thought. But I think it’s hard to make the math work that way.

I have a lovely laptop here that I am going to give you. Suppose you assign some utility U to it. Now instead of giving you the laptop, I give you a lottery ticket or the like. With probability P I give you the laptop, and with probability 1 - P you get nothing. (The lottery drawing will happen immediately, so there’s no time-preference aspect here.) What utility do you attach to the lottery ticket? The natural answer is P * U, and if you accept some reasonable assumptions about preferences, you are in fact forced to that answer. (This is the basic intuition behind the von Neumann-Morgenstern Expected Utility Theorem.)

Given that probabilities are real numbers, it’s hard to avoid utilities being real numbers too.
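A minimal sketch of the valuation rule described above (the function name and the numbers are illustrative, not from the comment):

```python
# Expected utility of a lottery ticket: with probability p you get a prize
# of utility u, otherwise nothing (utility 0). Given the usual vNM
# preference axioms, you are forced to value the ticket at p * u.
def lottery_utility(p: float, u: float) -> float:
    return p * u + (1 - p) * 0

# A 25% chance at a laptop you value at utility 100:
print(lottery_utility(0.25, 100))  # 25.0
```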

• This is because the current position, direction, and speed of an atom (and all other measurements that can be done physically) are only possible with one and only one specific history of everything else in the universe.

This seems almost certainly false. You can measure those things to only finite precision—there is a limit to the number of bits you can get out of such a measurement. Suppose you measure position and velocity to one part in a billion in each of three dimensions. That’s only around 200 bits—hardly enough to distinguish all possible universal histories.
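To check the arithmetic: six quantities (three position components, three velocity components), each measured to one part in a billion, carry about log2(10^9) ≈ 30 bits apiece. A quick sketch (the function name is mine):

```python
import math

# Information content of measuring n quantities, each distinguishable
# to one part in `parts` (i.e. `parts` resolvable values per quantity).
def measurement_bits(parts: float, n: int) -> float:
    return n * math.log2(parts)

bits = measurement_bits(1e9, 6)  # 3 position + 3 velocity components
print(round(bits))  # about 179 bits, i.e. "around 200"
```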

• Good point. A time limit of 3:54 does seem too arbitrary to be hard-coded.

Hrm. Maybe it’s exactly one Atlantean time unit? Unsafe to assume that the units we are used to are the same units that the Stone’s maker would find natural.

• I bet Hermione is just going to love being the center of all the attention and scrutiny this will bring on her.

She came back from the dead. Gonna be a lot of attention and scrutiny regardless.

• I have this impression—parenting hardly ever discussed on LW—that most of the community has no children.

Let me give you an alternate explanation. Being a parent is very time-consuming. It also tends to draw one’s interest to different topics than are typically discussed here. In consequence, LW readers aren’t a random sample of nerds or even of people in the general social orbit of the LW crowd. I would not draw any adverse inferences from the fact that a non-parenting-related internet forum tends to be depleted of parents.

• 21 Jan 2015 21:24 UTC
4 points

data scientists / statisticians mostly need access to computing power, which is fairly cheap these days.

This is true for each marginal data scientist. But there’s a catch, which is that those folks need data. Collecting and promulgating that data, in the application domains we care about, can sometimes be very costly. You might want to count some of those costs as part of the cost of the data science.

For example, many countries are spending a huge amount of money on electronic health records, in part to allow better data mining. The health records aren’t primarily for scientific purposes, but making them researcher-friendly is a big indirect cost. Similarly, the census is a very expensive data-collection process that enables a lot of “cheap” analytics downstream.

While each data scientist might be cheap, there was a big up-front investment, at the national level, to enable them.

• Um, yes for most definitions of “rational”. That’s why [autism] is considered a disability.

Hrm? A disability is something that limits the disabled individual from a socially-recognized set of normal actions. The term ‘disability’ alone doesn’t imply anything about reasoning or cognitive skills. It seems at best un-obvious, and more likely false, that “rationality” encompasses all cognitive functions.

Some people have dyslexia; that is certainly a cognitive disability. It would be strange (not to say offensive) to describe dyslexic individuals as per se irrational. I suspect similarly for, say, dyscalculia. Or for that matter, short-term memory problems.

Autism is a big complicated bundle of traits and behaviors. Why are those behaviors “irrational” in a way that dyslexia isn’t?

• One of the unfortunate limitations of modern complexity theory is that problems that look isomorphic sometimes have very different complexity properties. Another awkwardness is that worst-case complexity isn’t a reliable guide to practical difficulty. “This sorta feels like a coloring problem” isn’t enough to show it’s intractable on the sort of instances we care about.

Separately, it’s not actually clear to me whether complexity is good or bad news. If you think that predicting human desires and motivations is computationally infeasible, you should probably worry less about superintelligent AI, since that complexity barrier will prevent the AI from being radically effective at manipulating us.

It would seem to require an unusually malicious universe for a superhuman AI to be feasible, for that AI to be able to manipulate us efficiently, but for it to be infeasible for us to write a program to specify constraints that we would be happy with in retrospect.

• I just observe that a lot of cosmology seems to be riding on the theory that the redshift is caused by an expanding universe.

This seems wrong to me. There are at least two independent lines of evidence for the Big Bang theory besides redshifts—isotope abundances (particularly for light elements) and the cosmic background radiation.

What if light just loses energy as it travels, so that the frequency shifts lower?

We would have to abandon our belief in energy conservation. And we would then wonder why energy seems to be conserved exactly in every interaction we can see. Also we would wonder why we see spontaneous redshifts but not spontaneous blueshifts. Every known micro-scale physical process in the universe is reversible [1], and by the CPT theorem, we expect this to hold always. A lot would have to be wrong with our notions of physics for light to “just lose energy.”

That seems like a perfectly natural solution. How do we know it isn’t true?

This solution requires light from distant galaxies to behave in ways totally different from every other physical process we know about—including physical processes in distant galaxies. It seems unnatural to say “the redshift is explained by a totally new physical process, and this process violates a lot of natural laws that hold everywhere else.”

[1] I should say, reversible assuming you also flip the charges and parities. That’s irrelevant here, though, since photons are uncharged and don’t have any special polarization.

• Speaking as a former algorithms-and-complexity TA --

Proving something is in NP is usually trivial, but probably would be worth a point or two. The people taking complexity at a top-tier school have generally mastered the art of partial credit and know to write down anything plausibly relevant that occurs to them.

• What if light just loses energy as it travels, so that the frequency shifts lower? That seems like a perfectly natural solution. How do we know it isn’t true?

As gjm mentions, the general name for this sort of theory is “tired light.” These theories have been studied extensively, and they are broken.

We have a very accurate, very well-tested theory that describes the way photons behave: quantum electrodynamics. It predicts that photons in the vacuum have a constant frequency and don’t suddenly vanish. Nor do photons have any sort of internal “clock” for how long they have been propagating. As near as I can tell, any sort of tired-light model means giving up QED in fairly fundamental ways, and the evidentiary bar to overturn that theory is very high.

Worse, tired light seems to break local energy conservation. If photons just vanish or spontaneously redshift, where does the energy go?

I can conceive of there being a tired-light model that isn’t ruled out by experiment, but I would like to see that theory before I junk all of 20th-century cosmology and fundamental physics.

Most scientific theories, most of the time, have a whole bunch of quirky observations that they don’t explain well. Mostly these anomalies gradually go away as people find bugs in the experiments, or take into account various effects they hadn’t considered. The astronomical anomalies you point to don’t seem remotely problematic enough to give up on modern physics.

• 24 Dec 2014 5:16 UTC
3 points

“Falling in love” isn’t this sudden thing that just happens; it’s a process, and it’s a process that is assisted if the other person is encouraging and feels likewise. Put another way, when the object of your affection is uninterested, that’s often a turnoff, and so one then looks elsewhere.

• There is a peculiar consequence of this, pointed out by Cosma Shalizi. Suppose we have a deterministic physical system S, and we observe this system carefully over time. We are steadily gaining information about its microstates, and therefore, by this definition, its entropy should be decreasing.

You might say, “the system isn’t closed, because it is being observed.” But consider the system “S plus the observer.” Saying that entropy is nondecreasing over time seems to require that the observer is in doubt about its own microstates. What does that mean?

• Russell is an entirely respectable and mainstream researcher, at one of the top CS departments. It’s striking that he’s now basically articulating something pretty close to the MIRI view. Can somebody comment on whether Russell has personally interacted with MIRI?

If MIRI’s work played a role in convincing people like Russell, that seems like a major accomplishment and a demonstration that they have arrived as part of the academic research community. If Russell came to that conclusion on his own, MIRI should still get a fair bit of praise for getting there first and saying it before it was respectable.

In either case, my congratulations to the folks at MIRI, and I will up my credence in them going forward. (They’ve been rising steadily in my estimation for the last several years; this is just one of the more dramatic bumps.)

• Did the survey. Mischief managed.

• Did you read about Google’s partnership with NASA and UCSD to build a quantum computer of 1000 qubits?

Technologically exciting, but … imagine a world without encryption. As if all locks and keys on all houses, cars, banks, nuclear vaults, whatever, disappeared, only incomparably more consequential.

My understanding is that quantum computers are known to be able to break RSA and elliptic-curve-based public-key cryptosystems. They are not known to be able to break arbitrary symmetric-key ciphers or hash functions. You can do a lot with symmetric-key systems—Kerberos doesn’t require public-key authentication. And you can sign things with Merkle signatures.

There are also a number of candidate public-key cryptosystems that are believed secure against quantum attacks.

So I think we shouldn’t be too apocalyptic here.
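As a toy illustration of that asymmetry (the rule-of-thumb numbers below are standard but are my addition, not from the comment): Shor’s algorithm breaks RSA and elliptic-curve schemes outright, while Grover’s algorithm gives only a quadratic speedup against symmetric-key search, which merely halves the effective key strength.

```python
# Rough effective security of a scheme against a large quantum computer,
# under the standard rule of thumb: Shor's algorithm breaks RSA/ECC
# entirely; Grover's algorithm halves the effective bits of a brute-force
# symmetric-key search.
def quantum_security_bits(scheme: str, classical_bits: int) -> int:
    if scheme in ("rsa", "ecc"):
        return 0                      # Shor's algorithm: broken outright
    if scheme == "symmetric":
        return classical_bits // 2    # Grover's algorithm: quadratic speedup
    raise ValueError(f"unknown scheme: {scheme}")

print(quantum_security_bits("rsa", 2048))       # 0
print(quantum_security_bits("symmetric", 256))  # 128 (AES-256 stays strong)
```

So a symmetric cipher can stay comfortably secure just by doubling its key length, which is why the picture is less apocalyptic than “a world without encryption.”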

• Taking up on the “level above mine” comments—Scott is a very talented and successful researcher. He also has tenure and can work on what he likes. The fact that he considers this sort of philosophical investigation worth his time and attention makes me upwardly revise my impression of how worthwhile the topic is.

• Points 1 and 2 are reasonably clear. Point 3 is unhelpfully vague. If I were moderator, I would have no idea how far that pushes, and as a commenter I wouldn’t have a lot of insight as to what to avoid.

I don’t mind giving catch-all authority to a moderator, but if there are specific things you have in mind that are to be avoided, it’s probably better to enumerate them.

I would add an explicit “nothing illegal, nothing personally threatening” clause. Those haven’t been problems, but it seems better to remind people and to make clear we all agree on that as a standard.

• Interesting. Can you say more about how your work compares to existing VMs, such as the JVM, and what sorts of things you want to prove about executions?