Karma: 1,628
• Is there a way to make drafts available to specific people, for review?

• Found a nice proof for Sperner’s lemma (#9):

First some definitions. Call a d-simplex with vertices colored in (d+1) different colors chromatic. Call the parity of the number of chromatic simplices the chromatic parity.

It’s easier to prove the following generalization: take a complex of d-simplices that form a d-sphere; then any (d+1)-coloring of the vertices will have even chromatic parity.

Proof by induction on d:

Base d=-1: vacuously true.

Assume true for d-1: say you have an arbitrary complex of d-simplices forming a d-sphere, with an arbitrary (d+1)-coloring. Choose a vertex. W.L.O.G. we will call the color of the chosen vertex blue.

Take the complex of simplices that contain this vertex. Since a sphere has no boundary or branches, this complex will be a d-ball. Delete the chosen vertex, and keep only the opposite (d-1)-simplices that are left, which will form a (d-1)-sphere; call it the shell.

We need to choose a second color, say red. We’ll call a (d-1)-simplex with vertices of all d+1 colors except red an R-chromatic face, and similarly with blue.

Now, replace all the red vertices in the shell with blue vertices, so that the shell is now R-chromatic. By induction it has an even number of R-chromatic faces. Consider what happens when we reconvert a vertex on the shell back to red: since the vertex was previously blue, any R-chromatic faces will get turned into B-chromatic faces. Let r be the number of R-chromatic faces on the shell, and b be the number of B-chromatic faces. The parity of r-b will remain even as we continue this process.

Let’s go back to the vertex in the center of the shell. All currently chromatic simplices with this vertex have opposite faces which are B-chromatic, since this vertex is blue. We’ll flip the vertex to red, which destroys chromatic simplices with opposite B-chromatic faces and creates chromatic simplices with opposite R-chromatic faces. Since r-b is even, the chromatic parity is preserved!

Since we’ve shown that arbitrary recoloring of vertices preserves the chromatic parity, and the all-blue coloring has zero chromatic simplices, the chromatic parity will be even for any coloring.

Corollary: Sperner’s lemma

Start with a d-simplex which has been divided into d-simplices, and where each face of the large simplex has one color which vertices on it are forbidden from taking. Take a point of each color, and match it with a face of the simplex that that color is allowed on. Then connect that vertex to each point on that face. This will create a bunch of non-chromatic simplices. Finally, create a simplex of all of the new points. This will create one chromatic simplex.

This will form a d-sphere, and thus will have an even chromatic parity. That means the original simplex must have had odd chromatic parity.
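The d = 1 case of the generalization is small enough to check exhaustively. A quick brute-force sketch (the n-cycle encoding of a 1-sphere is my own choice): a chromatic 1-simplex is an edge whose two endpoints get the two different colors, and every 2-coloring of a cycle should yield an even count of them.

```python
from itertools import product

def chromatic_parity_is_even(n):
    # A 1-sphere is an n-cycle of 1-simplices. For every 2-coloring of the
    # n vertices, count edges whose endpoints differ (chromatic edges) and
    # check that the count is even.
    for coloring in product(range(2), repeat=n):
        chromatic = sum(coloring[i] != coloring[(i + 1) % n] for i in range(n))
        if chromatic % 2 != 0:
            return False
    return True
```

This works because walking around the cycle, the color must change an even number of times to return to its starting value.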

• Ex 8: (in Python, using a reversal function)

def f(s):
    return s[::-1]

dlmt = '"""'
code = """def f(s):
    return s[::-1]

dlmt = '{}'
code = {}{}{}
code = code.format(dlmt, dlmt, code, dlmt)
print(f(code))"""
code = code.format(dlmt, dlmt, code, dlmt)
print(f(code))

• Ex 6:

If at any point f^(k+1)(x) = f^k(x), then we’re done. So assume that we get a strict increase each time up to f^n(x). Since there are only n elements in the entire poset, and f is monotone, f^n(x) has to equal f^(n-1)(x).
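The finite argument can be run concretely. A minimal sketch (the chain and the particular monotone map below are my own choices, not part of the exercise): on a poset with n elements, iterating a monotone f from a point x with x ≤ f(x) must hit a fixed point within n steps.

```python
def iterate_to_fixed_point(f, x, n):
    # On an n-element poset, the chain x <= f(x) <= f(f(x)) <= ... cannot
    # strictly increase more than n - 1 times, so the first repeat is a
    # fixed point reached within n iterations.
    for _ in range(n):
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt
    return x

# Example: the chain {0, ..., 5} with the monotone map f(x) = min(x + 2, 4).
# Starting from 0 <= f(0), iteration reaches the fixed point 4.
```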

Ex 7:

For a limit ordinal γ, define f^γ(x) as the least upper bound of f^α(x) for all α < γ. If κ > |P|, then the set of ordinals α < κ is a set of size κ that maps into a set of size |P| by taking the value of the element f^α(x). Since there are no injections between these sets, there must be two ordinals α < β such that f^α(x) = f^β(x). Since f is monotone, that implies that f^δ(x) = f^α(x) for every ordinal δ with α ≤ δ ≤ β, and thus f^α(x) is a fixed point. Since f^(α+1)(x) = f^α(x), this proves the exercise.

Ex 8:

Starting from x with x ≤ f(x), we can create a fixed point via iteration by taking f^α(x), and iterating κ > |P| times as demonstrated in Ex 7. Call this fixed point fix(x). Suppose there was a fixed point y such that x ≤ y and fix(x) is not ≤ y. Then at some point f^α(x) ≤ y, but f^(α+1)(x) is not ≤ y, which breaks the monotonicity of f unless fix(x) ≤ y. So fix(x) generated this way is always the smallest fixed point greater than x.

Say we have fixed points y_1, …, y_k. Then let z be the least upper bound of y_1, …, y_k, and generate a fixed point fix(z) from z. So fix(z) will be greater than each element of y_1, …, y_k since f is monotone, and is the smallest such fixed point as shown in the above paragraph. So the poset of fixed points is semi-complete with upper bounds.

Now take our fixed points y_1, …, y_k again. Now let w be the greatest lower bound of y_1, …, y_k, and generate a fixed point fix(w). Since w ≤ y_i for each i and f is monotonic, f^α(w) ≤ y_i for every α, and so fix(w) is a lower bound of y_1, …, y_k. It has to be the greatest such bound because w itself is already the greatest such bound in our poset, and f is monotonic.

Thus the lattice of fixed points has all least upper bounds and all greatest lower bounds, and is thus complete!
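As a small sanity check of the completeness claim, here is a toy example (the powerset lattice and the particular monotone f are my own illustration; for this f the ambient unions and intersections of fixed points happen to be fixed points themselves, though in general the lattice operations on fixed points need not coincide with the ambient ones):

```python
from itertools import combinations

# Powerset of {0, 1, 2} ordered by inclusion: a finite complete lattice.
elems = [0, 1, 2]
lattice = [frozenset(c) for r in range(4) for c in combinations(elems, r)]

def f(S):
    # A monotone map on subsets: add 1 whenever 0 is present.
    return S | frozenset({1} if 0 in S else set())

fixed = [S for S in lattice if f(S) == S]

# Check that joins (unions) and meets (intersections) of fixed points
# are again fixed points, so the fixed points form a lattice here.
for A in fixed:
    for B in fixed:
        assert A | B in fixed and A & B in fixed
```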

• Currying doesn’t preserve surjectivity. As a simple example, you can easily find a surjective function ℕ × ℕ → {0, 1}, but there are no surjective functions ℕ → (ℕ → {0, 1}), by Cantor’s diagonal argument.
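A finite analogue of the same counting obstruction (the two-element sets are my own choice for illustration): a surjection A × B → C is easy, but B → C can have more elements than A, so nothing from A to (B → C) can be surjective.

```python
from itertools import product

A = B = C = [0, 1]

# A surjective f : A x B -> C is trivial: f(a, b) = a already hits all of C.
f = lambda a, b: a
assert {f(a, b) for a in A for b in B} == set(C)

# But B -> C has len(C) ** len(B) = 4 functions, while A has only 2
# elements, so no map g : A -> (B -> C) can be surjective.
inner = [dict(zip(B, vals)) for vals in product(C, repeat=len(B))]
# Every candidate g, viewed as a pair of choices from inner, hits at
# most 2 of the 4 functions:
assert all(len(set(g)) < len(inner)
           for g in product(range(len(inner)), repeat=len(A)))
```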

• Abstraction can be understood as a relationship between models. You can have a model of what it means to “brush your teeth”, and you can have models of what it means to “prepare toothbrush”, etc… The models of the subtasks can be composed into a model of the whole sequence, and the abstraction relationship tells you how this model is a realization of the abstract “brush your teeth” model. Similarly, we can have a model of what an “animal” is, and models for “dogs”, “humans”, “cats”. The abstraction relationship tells you how each of these models is a realization of the “animal” model. Removing properties is an easy way to create models with this relationship, but it’s not the only way. For example, you can also replace continuous time with discrete time.

You can chain this relationship to create a hierarchy, and for humans, the most concrete models we typically use are the ones that are “modeling” our raw sensory experiences. I think that adequately explains why people use it this way, but that this isn’t what abstraction is.

• Presumably, any agent which we manage to build will be computable. So to the extent our agent is using utility functions, they will be continuous.

If an agent is only capable of computable observations, but has a discontinuous utility function, then if the universe is in a state where the utility function is discontinuous, the agent will need to spend an infinite amount of time (or as long as the universe state remains at such a point) determining the utility of the current state. I think it might be possible to use this to create a more concrete exploit.
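One way to make the observation concrete (a sketch under my own toy model, where a "computable observation" only ever yields interval bounds on the true state): with a step utility, no finite refinement of the interval settles the value while the interval straddles the discontinuity.

```python
def step_utility_bounds(lo, hi):
    # Utility u(x) = 1 if x > 0 else 0, evaluated from an interval
    # observation (lo, hi) containing the true state x.
    if lo > 0:
        return (1, 1)  # utility pinned down: x > 0
    if hi <= 0:
        return (0, 0)  # utility pinned down: x <= 0
    # Interval straddles the discontinuity at 0. If the true state is
    # exactly 0, every further refinement still returns these trivial
    # bounds, so the agent never finishes computing the utility.
    return (0, 1)
```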

• This post has caused me to update my probability of this kind of scenario!

Another issue related to the information leakage: in the industrial revolution era, 30 years was plenty of time for people to understand and replicate leaked or stolen knowledge. But if the slower team managed to obtain the leading team’s source code, it seems plausible that 3 years, or especially 0.3 years, would not be enough time to learn how to use that information as skillfully as the leading team can.

• I think people make decisions based on accurate models of other people all the time. I think of Newcomb’s problem as the limiting case where Omega has extremely accurate predictions, but that the solution is still relevant even when “Omega” is only 60% likely to guess correctly. A fun illustration of a computer program capable of predicting (most) humans this accurately is the Aaronson oracle.
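The core of such a predictor is tiny. A minimal sketch in the spirit of the Aaronson oracle (my own simplification: guess the next ‘f’/‘d’ keypress from counts of what followed each recent 2-gram of keys):

```python
import random
from collections import defaultdict

class Oracle:
    def __init__(self, k=2):
        self.k = k
        self.history = ""
        self.counts = defaultdict(lambda: {"f": 0, "d": 0})

    def predict(self):
        # Guess whichever key most often followed the current context.
        c = self.counts[self.history[-self.k:]]
        if c["f"] == c["d"]:
            return random.choice("fd")
        return "f" if c["f"] > c["d"] else "d"

    def observe(self, key):
        self.counts[self.history[-self.k:]][key] += 1
        self.history += key

# Against a patterned player (e.g. alternating "fdfdfd..."), accuracy
# quickly exceeds 50%, even though a truly random player would hold it at 50%.
```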

• This rings really true with my own experiences; glad to see it written up so clearly!

I think that lots of meditation stuff (in particular The Mind Illuminated) is pointing at something like this. One of the goals is to train all of your subminds to pay attention to the same thing, which leads to increasing your ability to have an intention shared across subminds (which feels related to Romeo’s post). Anyway, I think it’s really great to have multiple different frames for approaching this kind of goal!

# Optimization Provenance

23 Aug 2019 20:08 UTC
41 points