# Rolf_Nelson2

Karma: 70
• One possibility, given my (probably wrong) interpretation of the ground rules of the fictional universe, is that the humans go to the baby-eaters and tell them that they’re being invaded. Since we cooperated with them, the baby-eaters might continue to cooperate with us, by agreeing to:

1. reduce their baby-eating activities, and/or

2. send their own baby-eater ship to blow up the star (since the fictional characters are probably barred by the author from reducing the dilemma by blowing up Huygens or sending a probe ship), so that the humans don’t have to sacrifice themselves.

• @Wei: p(n) will approach arbitrarily close to 0 as you increase n.

This doesn’t seem right. A sequence that requires knowledge of BB(k) has O(2^-k) probability according to our Solomonoff Inductor. If the inductor compares a BB(k)-based model with a BB(k+1)-based model, then BB(k+1) will on average be about half as probable as BB(k).

In other words, P(a particular model of K-complexity k is correct) goes to 0 as k goes to infinity, but the conditional probability, P(a particular model of K-complexity k is correct | a sub-model of that particular model with K-complexity k−1 is correct), does not go to 0 as k goes to infinity.
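A toy numerical sketch of this point (my own illustration, using the idealized 2^−k prior rather than a real inductor): the absolute prior weight of a model of K-complexity k vanishes as k grows, but the ratio between successive models stays fixed at one half.

```python
def prior(k):
    """Idealized Solomonoff-style prior weight for one model of K-complexity k."""
    return 2.0 ** -k

# Absolute prior shrinks toward 0 as complexity grows...
absolute = [prior(k) for k in (10, 20, 40)]

# ...but each step from k to k+1 costs the same constant factor of 1/2,
# so the conditional probability does not go to 0.
ratios = [prior(k + 1) / prior(k) for k in (10, 20, 40)]

print(absolute)
print(ratios)  # [0.5, 0.5, 0.5]
```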

• If humanity unfolded into a future civilization of infinite space and infinite time, creating descendants and hyperdescendants of unlimitedly growing size, what would be the largest Busy Beaver number ever agreed upon?

Suppose they run a BB evaluator for all of time. They would, indeed, have no way at any point of being certain that the current champion 100-bit program is the actual champion that produces BB(100). However, if they decide to anthropically reason that “for any time t, I am probably alive after time t, even though I have no direct evidence one way or the other once t becomes too large”, then they will believe (with arbitrarily high probability) that the current champion program is the actual champion program, and an arbitrarily high percentage of them will be correct in their belief.
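A toy simulation of the anthropic argument (my own construction, not from the comment; the halting times and horizon are arbitrary placeholders): finitely many candidate programs halt at random times, and the “champion” at time t is the longest-running program that has halted by t. After the last halting event the champion never changes, so an observer alive at a uniformly random moment in a long enough interval almost surely sees the true, final champion.

```python
import random

random.seed(0)

# 100 hypothetical candidate programs, each halting at some random time.
halting_times = [random.randrange(1, 10 ** 6) for _ in range(100)]

# The champion record can only change when some program halts, so after
# the last halting event the current champion is the final one.
last_update = max(halting_times)

# An observer at a uniform random time in [0, horizon] sees the final
# champion unless they happen to live before last_update.
horizon = 10 ** 9
p_correct = (horizon - last_update) / horizon
print(p_correct > 0.99)  # True: almost every observer's belief is correct
```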

1. One difference between optimization power and the folk notion of “intelligence”: Suppose the Village Idiot is told the password of an enormous abandoned online bank account. The Village Idiot now has vastly more optimization power than Einstein does; this optimization power is not based on social status nor raw might, but rather on the actions that the Village Idiot can think of taking (most of which start with logging in to account X with password Y) that don’t occur to Einstein. However, we wouldn’t label the Village Idiot as more intelligent than Einstein.

2. Is the Principle of Least Action infinitely “intelligent” by your definition? The PLA consistently picks a physical solution to the n-body problem that surprises me in the same way Kasparov’s brilliant moves surprise me: I can’t come up with the exact path the n objects will take, but after I see the path that the PLA chose, I find (for each object) that the PLA’s path has a smaller action integral than the best path I could have come up with.

3. An AI whose only goal is to make sure such-and-such coin will not, the next time it’s flipped, turn up heads, can apply only (slightly less than) 1 bit of optimization pressure by your definition, even if it vaporizes the coin and then builds a Dyson sphere to provide infrastructure and resources for its ongoing efforts to probe the Universe to ensure that it wasn’t tricked and that the coin actually was vaporized as it appeared to be.
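The bit-counting in point 3 can be sketched in one line (my arithmetic, using the definition of optimization power as the negative log-probability of an outcome at least this good under the default measure): forcing a fair coin to land not-heads rules out exactly half the outcome space.

```python
import math

# Chance of "not heads" under the default (no-optimizer) measure.
p_default = 0.5

# Optimization pressure in bits: -log2 of how likely the achieved
# outcome was by default.  The Dyson sphere doesn't change this number.
bits = -math.log2(p_default)
print(bits)  # 1.0
```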

• Count me in.

• Chip, I don’t know what you mean by “The AI Institute”, but such discussion would be more on-topic at the SL4 mailing list than in the comments section of a blog posting about optimization rates.

• The question of whether trying to consistently adopt meta-reasoning position A will raise the percentage of time you’re correct, compared with meta-reasoning position B, is often a difficult one.

When someone uses a disliked heuristic to produce a wrong result, the temptation is to pronounce the heuristic “toxic”. When someone uses a favored heuristic to produce a wrong result, the temptation is to shrug and say “there is no safe harbor for a rationalist” or “such a person is biased, stupid, and beyond help; he would have gotten to the wrong conclusion anyway, no matter what his meta-reasoning position was. The idiot reasoner, rather than my beautiful heuristic, has to be discarded.” In the absence of hard data, consensus seems difficult; the problem is exacerbated when a novel meta-reasoning argument is brought up in the middle of a debate on a separate disagreement, in which case the opposing sides have even more temptation to “dig in” to separate meta-reasoning positions.

• CERN on its LHC:

Studies into the safety of high-energy collisions inside particle accelerators have been conducted in both Europe and the United States by physicists who are not themselves involved in experiments at the LHC… CERN has mandated a group of particle physicists, also not involved in the LHC experiments, to monitor the latest speculations about LHC collisions.

Things that CERN is doing right:

1. The safety reviews were done by people who do not work at the LHC.

2. There were multiple reviews by independent teams.

3. There is a group continuing to monitor the situation.

• Wilczek was asked to serve on the committee “to pay the wages of his sin, since he’s the one that started all this with his letter.”

Moral: if you’re a practicing scientist, don’t admit the possibility of risk, or you will be punished. (No, this isn’t something I’ve drawn from this case study alone; this is also evident from other case studies, NASA being the most egregious.)

• @Vladimir: We can’t bother to investigate every crazy doomsday scenario suggested.

This is a strawman; nobody is suggesting investigating “every crazy doomsday scenario suggested”. A strangelet catastrophe is qualitatively possible according to accepted physical theories, and was proposed by a practicing physicist; it’s only after doing quantitative calculations that it can be dismissed as a threat. The point is that such important quantitative calculations need to be produced by less biased processes.

• if you manage to get yourself stuck in an advanced rut, dutifully playing Devil’s Advocate won’t get you out of it.

It’s not a binary either/or proposition, but a spectrum; you can be in a sufficiently shallow rut that a mechanical rule of “when reasoning, search for evidence against the proposition you’re currently leaning towards” might rescue you in a situation where you would otherwise fail to come to the correct conclusion. That said, yes, it would indeed be preferable to conduct the search because you actually have “true doubt” and lack overconfidence, rather than by rote, and rather than for the odd reasons that Michael Rose gives.

• Dad was an avid skeptic and Martin Gardner / James Randi fan, as well as being an Orthodox Jew. Let that be a lesson on the anti-healing power of compartmentalization.

Why do you think that, if he had not compartmentalized, he would have rejected Orthodox Judaism, rather than rejecting skepticism?

• “Oh, look, Eliezer is overconfident because he believes in many-worlds.”

I can agree that this is absolutely nonsensical reasoning. The correct reason to believe Eliezer is overconfident is that he’s a human being, and the prior that any given human is overconfident is extremely large.

One might propose heuristics to determine whether person X is more or less overconfident, but “X disagrees strongly with me personally on this controversial issue, therefore he is overconfident” (or stupid or ignorant) is the exact type of flawed reasoning that comes from self-serving biases.

• Some physicists speak of “elegance” rather than “simplicity”. This seems to me a bad idea; your judgments of elegance are going to be marred by evolved aesthetic criteria that exist only in your head, rather than in the exterior world, and should only be trusted inasmuch as they point towards smaller, rather than larger, Kolmogorov complexity.

Example:

In theory A, the ratio of tiny dimension #1 to tiny dimension #2 is finely tuned to support life.

In theory B, the ratio of the mass of the electron to the mass of the neutrino is finely tuned to support life.

An “elegance” advocate might favor A over B, whereas a “simplicity” advocate might be neutral between them.

• can you tell me why the subjective probability of finding ourselves in a side of the split world, should be exactly proportional to the square of the thickness of that side?

Po’mi runs a trillion experiments, each of which has a one-trillionth 4D-thickness of saying B but is otherwise A. In his “mainline probability”, he sees all trillion experiments coming up A. (If he ran a sextillion experiments he’d see about 1 come up B.)

Presumably an external four-dimensional observer sees it differently: he sees only one-trillionth of Po’mi coming up all-A, and the rest of Po’mi saw about 1 B and are huddled in a corner crying that the universe has no order. (Maybe the 4D observer would be unable to see Po’mi at all, because Po’mi and all other inhabitants of the lawful “mainline probability” that we’re talking about have almost infinitesimal thickness from the 4D observer’s point of view.)

If I were Po’mi, I would start looking for a fifth dimension.

• It seems worthwhile to also keep in mind other quantum mechanical degrees of freedom, such as spin.

Only if the spin’s basis turns out to be relevant in the final ToEILEL (Theory of Everything Including Laboratory Experimental Results) that gives a mechanical algorithm for what probabilities I anticipate.

In contrast, if someone had a demonstrably-correct theory that could tell you the macroscopic position of everything I see, but doesn’t tell you the spin or (directly) the spatial or angular momentum, then the QM Measurement Problem would still be marked “completely solved”. In such a position-basis theory, the answer to any question about spin would be “Mu, it only matters if it affects the position of my macroscopic readout.”

• Robin: is there a paper somewhere that elaborates this argument from mixed-state ambiguity?

Scott should add his own recommendations, but I would say here is a good starting introduction.

To my mind, the fact that two different situations of uncertainty over true states lead to the same physical predictions isn’t obviously a reason to reject that type of view regarding what is real.

The anti-MWI position here is that MWI produces different predictions depending on what basis is arbitrarily picked by the predictor, and that the various MWI efforts to “patch” this problem without postulating a new law of physics are like squaring the circle. I think the anti-MWIers’ math is correct, but I’m not expert enough to be 100% sure; what really makes me think MWI is wrong is the inability of the MWIers, after many decades, to produce an algorithm that you can “turn the crank” on to get the correct probabilities that we see in experiments; they have a tendency to try to patch this “basis problem” by producing a new framework, which itself contains an arbitrary choice that’s just as bad as the arbitrary choice of basis.

More succinctly, in vanilla MWI you have to pick the correct basis to get the correct experimental results, and you have to peek at the results to get the correct basis.

• In many of your prior posts where you bring up MWI, your interpretation doesn’t fundamentally matter to the overall point you’re trying to make in that post; that is, your overall conclusion for that post held or failed regardless of which interpretation is correct, possibly to a greater degree than you tend to realize.

For example: “We used a true randomness source—a quantum device.” The philosophers’ point could equally have been made by choosing the first 2^N digits of pi and finding they correspond by chance to someone’s GLUT.

• the colony is in the future light cone of your current self, but no future version of you is in its future light cone.

Right, and if anyone’s still confused how this is possible: Wikipedia and a longer explanation.

• That-which-we-name “consciousness” happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious.

Not yet understood? Is your position that there’s a mathematical or physical discovery waiting out there, that will cause you, me, Chalmers, and everyone else to slap our heads and say, “Of course, that’s what the answer is! We should have realized it all along!”?

Question for all: How do you apply Occam’s Razor to cases where there are two competing hypotheses:

1. A and B are independently true.

2. A is true, and implies B, but in some mysterious way we haven’t yet determined. (For example, “heat is caused by molecular motion” or “quarks are caused by gravitation”, to pick two inferences at opposite ends of the plausibility spectrum.)

I don’t know what the best answer is. Maybe the practical answer is a variant of Solomonoff induction: somehow compare “P(A) P(B)” with “P(A) P(B follows logically from A, and we were too dumb to realize that)”, where the P’s are some type of Solomonoff-ish a-priori “2^−(shortest program length)” probabilities. But the best answer certainly isn’t, “A is simpler than A + B, so we know hypothesis 2 is correct, without even having to glance at the likelihood that B follows from A.” Otherwise, you would have to conclude that, logically, quarks are caused by gravitation, in some currently-mysterious way that future mathematicians will be certain to discover.
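The comparison sketched above can be made concrete with placeholder numbers (my own illustration; the complexity values are made up, not real estimates). The point is that the verdict flips with the complexity of the bridge “B follows from A”, so you cannot skip estimating it.

```python
def p(bits):
    # Solomonoff-ish prior: 2^-(shortest program length)
    return 2.0 ** -bits

K_A, K_B = 50, 40                        # hypothetical complexities of A and B
K_bridge_easy, K_bridge_hard = 20, 60    # "B is a theorem of A, unnoticed"

independent = p(K_A) * p(K_B)            # hypothesis 1: A and B separately true
implied_easy = p(K_A) * p(K_bridge_easy) # hypothesis 2 with a short bridge
implied_hard = p(K_A) * p(K_bridge_hard) # hypothesis 2 with a long bridge

print(implied_easy > independent)  # True: a short bridge beats independence
print(implied_hard > independent)  # False: a long bridge loses to independence
```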

For the record, my belief is that many of the debaters have beliefs that are isomorphic to their opponents’ beliefs. When I hear things like, “You said this is a physical law without material consequences, but I define physical laws as things that have material consequences, so you’re wrong, QED!” then that’s a sign that we’re in “does a tree falling in the forest make a noise” territory. Does a consciousness mapping rule “actually exist”? Does the real world “actually exist”? Does pi “actually exist”? Why should I care?

In the end, I care about actions and outcomes, and the algorithms that produce those actions. I don’t care whether you label consciousness as “part of reality” (because it’s something you observe), or “part of your utility function” (because it’s not derivable by an intelligence-in-general), or “part of this complete nutritious breakfast” (because, technically, anything that’s not poisonous can be combined with separate unrelated nutritious items to form a complete nutritious breakfast).

• @spindizzy:

No, this hasn’t been “argued out”, and even if it had been in the past, the “single best answer” would differ from person to person and from year to year. I would suggest starting a thread on SL4 or on SIAI’s Singularity Discussion list.