# ZankerH

Karma: 487
• Square error has been used instead of absolute error in many diverse optimization problems in part because its derivative is proportional to the magnitude of the error, whereas the derivative of the absolute error is constant. When you’re trying to solve a smooth optimization problem with gradient methods, you generally benefit from loss functions with a smooth gradient that tends towards zero along with the error.
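A quick numerical illustration of the point above (the residual values are made up for the sketch):

```python
import numpy as np

# Residuals of decreasing magnitude, as if we're converging on a solution.
errors = np.array([4.0, 2.0, 0.5, 0.01])

# d/de (e^2) = 2e: the gradient shrinks along with the error.
grad_squared = 2 * errors

# d/de |e| = sign(e): the gradient magnitude stays at 1 no matter how small e gets.
grad_absolute = np.sign(errors)

print(grad_squared)   # shrinks towards zero: 8, 4, 1, 0.02
print(grad_absolute)  # constant: 1, 1, 1, 1
```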

• Sounds like you need to work on that time preference. Have you considered setting up an accountability system or self-blackmailing to make sure you’re not having too much fun?

• This is why anti-semitism exists.

• Define “optimal”. Optimizing for the utility function of min(my effort), I could misuse more company resources to run random search on.

• In which case, the best I can do is 10 lines:

```
MakeIntVar A
Inc A
Inc A
A=A+A
A=A*A
Inc A
A=A+A
A=A*A
Inc A
A=A+A
```
• Well, that does complicate things quite a bit. I threw those lines out of my algorithm generator and the frequency of valid programs generated dropped by ~4 orders of magnitude.

• Preliminary solution based on random search:

```
MakeIntVar A
Inc A
Shl A, 5
Inc A
Inc A
A=A*A
Inc A
Shl A, 1
```

I’ve hit on a bunch of similar solutions, but `2 * (1 + 34^2)` seems to be the common thread.
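Both listings above evaluate to 2314, i.e. `2 * (1 + 34^2)`. A minimal sketch of an interpreter for the toy instruction set, to check this (the instruction names come from the listings; the single zero-initialized register and everything else are my assumptions):

```python
def run(program):
    """Evaluate a toy program over a single integer register A (starts at 0)."""
    a = 0
    for line in program.strip().splitlines():
        line = line.strip()
        if line == "MakeIntVar A":
            a = 0            # declare A, initialized to zero
        elif line == "Inc A":
            a += 1
        elif line == "A=A+A":
            a += a
        elif line == "A=A*A":
            a *= a
        elif line.startswith("Shl A,"):
            a <<= int(line.split(",")[1])   # shift left by the given count
        else:
            raise ValueError("unknown instruction: " + line)
    return a

ten_line = """MakeIntVar A
Inc A
Inc A
A=A+A
A=A*A
Inc A
A=A+A
A=A*A
Inc A
A=A+A"""

print(run(ten_line))  # 2314
```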

• My mental model of what could possibly drive someone to EA is too poor to answer this with any degree of accuracy. Speaking for myself, I see no reason why such information should have any influence on future human actions.

• I’d argue that this is not the case, since the vast majority of people who don’t expect to be “clerks” still end up in similar positions.

• > Is there any reason to think that % in prison “should” be more equal?

Since we’re talking about optimizing for “equality” between two fundamentally unequal things, why not?

Are you saying having the same number of men and women in prison would be detrimental to the enforcement of gender equality? How does that follow?

• Having actually lived under a regime that purported to “change human behaviour to be more in line with reality”, my prior for such an attempt being made in good faith to begin with is accordingly low.

Attempts to change society invariably result in selection pressures for effectiveness outmatching those for honesty and benevolence. In a couple of generations, the only people left in charge are the kind of people you definitely wouldn’t want in charge, unless you’re the kind of person nobody wants in charge in the first place.

> I’m thinking about locating specific centers of our brains and reducing certain activities which undoubtedly make us less aligned with reality and increasing the activations of others.

This is the kind of thinking that, given a few years of unchecked power and primate group competition, leads to mass programmes of rearranging people’s brain centres with 15th-century technology.

Why don’t you spend some time instead thinking about how your forced rationality programme is going to avoid the pitfall all others so far have fallen into: megalomania and genocide? And why are you so sure your beliefs are the final and correct ones to force on everyone through brain manipulation? If we had had the technology to enforce beliefs a few centuries ago, would you consider it a moral good to have frozen the progress of human thought at that point? Because that’s essentially what you’re proposing, from the point of view of all potential futures where you fail.

• > Even if it kills all humans, it will be one human which will survive.

Unless it self-modifies to the point where you’re stretching any meaningful definition of “human”.

> Even if his values will evolve, it will be natural evolution of human values.

Again, for sufficiently broad definitions of “natural evolution”.

> As most human beings don’t like to be alone, he would create new friends that are human simulations. So even worst cases are not as bad as a paper clip maximiser.

If we’re to believe Hanson, the first (and possibly only) wave of human em templates will be the most introverted workaholics we can find.

• acausal

• arational

• agnonstic

• Gnon

• gnonstic

• Moloch

• outreact

• postrational

• postrationalist

• underreact

• Two things:

• all other points have a negative x coordinate, and the x range passed to the tessellation algorithm is [-124, -71]. You probably forgot the minus sign for that point’s x coordinate.

• as mentioned above, the algorithm fails to converge because the weights are poorly scaled. For a better graphical representation, you will want to scale them to the range between one and one half of the nearest point distance, but to make it run, just increase the division constant.
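One way to read the rescaling suggested above, as a sketch (the function name is hypothetical, and mapping weights linearly into [d_min/2, d_min] is my interpretation of “between one and one half of the nearest point distance”; assumes `points` is an (n, 2) NumPy array and `weights` a length-n array):

```python
import numpy as np

def rescale_weights(points, weights, lo_frac=0.5, hi_frac=1.0):
    """Linearly rescale weights into [lo_frac, hi_frac] * d_min,
    where d_min is the smallest pairwise distance between points."""
    # All pairwise distances; ignore the zero diagonal when taking the minimum.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    d_min = dist[dist > 0].min()

    w = np.asarray(weights, dtype=float)
    span = w.max() - w.min()
    # Map weights to [0, 1]; if all weights are equal, put them at the low end.
    unit = (w - w.min()) / span if span > 0 else np.zeros_like(w)
    return (lo_frac + unit * (hi_frac - lo_frac)) * d_min
```

The smallest weight then maps to half the nearest-point distance and the largest to the full distance, keeping every weight in the same order of magnitude as the point spacing.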

• The range is specified by the box argument to the `compute_2d_voronoi` function, in the form `[[min_x, max_x], [min_y, max_y]]`. Points and weights can be specified as 2d and 1d arrays, e.g., as `np.array([[x1,y1], [x2, y2], [x3, y3], …, [xn, yn]])` and `np.array([w1, w2, w3, …, wn])`. Here’s an example that takes specified points, and also allows you to plot point radii for debugging purposes: http://pastebin.com/h2fDLXRD