# Psy-Kosh

Karma: 2,658
• Ah, never mind then. I was thinking something like: let b(x,k) = 1/sqrt(2k) when |x| < k, and 0 otherwise,

then define integral B(x)f(x) dx as the limit as k->0+ of integral b(x,k)f(x) dx.

I was thinking that then integral (B(x))^2 f(x) dx would be like integral delta(x)f(x) dx.

Now that I think about it more carefully, especially in light of your comment, perhaps that was naive and it wouldn't actually work. (Yeah, I can see now my reasoning wasn't actually valid there. Whoops.)
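A quick numerical sketch (mine, not from the original exchange; plain NumPy with an arbitrary Gaussian test function) of where the reasoning breaks: integral b(x,k) f(x) dx itself goes to 0 as k->0+, so the limiting distribution B is just zero, even though integral b(x,k)^2 f(x) dx -> f(0). Squaring inside the limit is not the same as squaring the limit.

```python
import numpy as np

# b(x,k) = 1/sqrt(2k) on |x| < k, zero otherwise (as defined above)
def b(x, k):
    return np.where(np.abs(x) < k, 1.0 / np.sqrt(2.0 * k), 0.0)

# Arbitrary smooth test function with f(0) = 1
def f(x):
    return np.exp(-x**2)

xs = np.linspace(-1.0, 1.0, 2_000_001)
dx = xs[1] - xs[0]

for k in (0.1, 0.01, 0.001):
    I1 = np.sum(b(xs, k) * f(xs)) * dx       # ~ sqrt(2k) * f(0), vanishes as k -> 0+
    I2 = np.sum(b(xs, k) ** 2 * f(xs)) * dx  # -> f(0) = 1, i.e. b^2 acts like a delta
    print(f"k={k}: int b*f = {I1:.4f}, int b^2*f = {I2:.4f}")
```

So b(x,k)^2 is a perfectly good nascent delta, but B (the limit of b) is the zero distribution, and "B^2" can't be recovered from it.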

Ah well. Thank you for correcting me, then. :)

• I’m not sure the commission/omission distinction is really the key here. This becomes clearer by inverting the situation a bit:

Some third party is about to forcibly wirehead all of humanity. How should your moral agent reason about whether to intervene and prevent this?

• Aaaaarggghh! (Sorry, that was just because I realized I was being stupid… specifically, that I’d been thinking of the deltas as orthonormal because the integral of a delta = 1.)

Though… it occurs to me that one could construct something that acted like a “square root of a delta”, which would then make an orthonormal basis (though still not part of the Hilbert space).

(EDIT: hrm… maybe not)

Anyways, thank you.

• Meant to reply to this a bit back; this is probably a stupid question, but...

> The uncountable set that you would intuitively think is a basis for Hilbert space, namely the set of functions which are zero except at a single value where they are one, is in fact not even a sequence of distinct elements of Hilbert space, since all these functions are zero almost everywhere, and are therefore considered to be equivalent to the zero function.

What about the semi-intuitive notion of having the Dirac delta distributions as a basis? I.e., a basis delta(x − R) parameterized by the vector R? How does that fit into all this?
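For reference (my summary, not part of the original comment): the standard picture is that the deltas act as a continuum “basis” only formally, via the sifting property, and their pairwise “inner products” are themselves deltas rather than finite numbers:

```latex
f(x) = \int_{-\infty}^{\infty} \delta(x - R)\, f(R)\, dR ,
\qquad
\langle \delta_R, \delta_{R'} \rangle
  = \int \delta(x - R)\,\delta(x - R')\, dx
  = \delta(R - R') .
```

Since delta(R − R′) is not a number, the δ_R are at best “delta-orthonormal”; they are elements of a larger space of distributions (the rigged-Hilbert-space construction), not of the Hilbert space itself.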

• Ah, alright.

Actually, come to think of it, even specifying the desired behavior would be tricky. Say the agent assigned a probability of 1/2 to the proposition that tomorrow it would transition from v to w, or held some other form of mixed hypothesis about possible future transitions; what rules should an ideal moral-learning reasoner follow today?

I’m not even sure what it should be doing. Mix over normalized versions of v and w? What if at least one is unbounded? Yeah, on reflection, I’m not sure what the Right Way for a “conserves expected moral evidence” agent is. There are some special cases that seem to be well specified, but I’m not sure how I’d want it to behave in the general case.

• Really interesting, but I’m a bit confused about something. Unless I misunderstand, you’re claiming this has the property of conservation of moral evidence… but as near as I can tell, it doesn’t.

Conservation of moral evidence would imply that if the agent expected that tomorrow it would transition from v to w, then right now it would act on w rather than v (except for being indifferent as to whether or not it actually transitions to w). But what you have here, if I understood you correctly, will act on v until the moment it transitions to w, even though it knew in advance it was going to transition.
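A minimal sketch of the distinction being drawn (all names are mine and purely illustrative, not from the post; it also sidesteps the normalization worries raised elsewhere by assuming v and w are already on a comparable bounded scale): an agent that conserves expected moral evidence ranks outcomes by its *expected future* utility function, so if it is certain it will transition to w tomorrow, it acts on w today.

```python
# Hypothetical "conserves expected moral evidence" rule: mix the current
# utility v with the anticipated utility w, weighted by the probability
# of transitioning to w tomorrow.
def expected_future_utility(outcome, p_transition, v, w):
    return (1.0 - p_transition) * v(outcome) + p_transition * w(outcome)

v = lambda o: {"A": 1.0, "B": 0.0}[o]  # current values: prefers A
w = lambda o: {"A": 0.0, "B": 1.0}[o]  # anticipated future values: prefers B

# Certain transition (p = 1): the agent already acts on w, before the
# transition actually happens.
best = max(("A", "B"), key=lambda o: expected_future_utility(o, 1.0, v, w))
print(best)
```

By contrast, the behavior described in the parent comment amounts to ranking by v alone (p_transition treated as 0) right up until the transition occurs.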

• Yeah, found that out during the final interview. Sadly, I found out several days ago that they rejected me, so it’s sort of moot now.

• Alternately, you might have alternative hypotheses that explain the absence equally well, but with a much higher complexity cost.

• Hey there, I’m mid-application process. (They’re having me do the prep work as part of the application.) Anyways...

> B) If you don’t mind too much: stay at App Academy. It isn’t comfortable, but you’ll greatly benefit from being around other people learning web development all the time, and it will keep you from slacking off.

I’m confused about that. App Academy has housing/dorms? I didn’t see anything about that. Or did I misunderstand what you meant?

• Cool! (Though it does seem that a license would be useful for longer trips, so you’d at least have the option of renting a vehicle if needed.)

And interesting point re the social environment.

• I’m just going to say I particularly liked the idea of the house cable transport system.

• Yeah, that was my very first thought re the tunnels. Excavation is expensive. (And maintenance costs would be rather higher as well.)

OTOH, we don’t even need a full solution (including a legal solution) to self-driving cars to improve things. The obvious answer to “but I might need to go on a 200-mile trip” is “rent a long-distance car as needed, and otherwise own a commuter car.”

That poses far fewer coordination problems, because it’s something one can pretty much do right now: next time one goes to purchase/lease/whatever a vehicle, get one appropriate/efficient/etc. for short distances, and just rent a long-haul vehicle as needed.

(Or, if living in a place with decent public transport, potentially no need to own a vehicle at all, of course.)

• Well, I could bring a few extra chairs if wanted. (Although are we even still on for tomorrow given how the roads are? (Admittedly, Sunday will probably be worse...))

• As of now, I’m planning on coming.

Anything I should be bringing? (I.e., extra chairs, whatever?)

• Hrm… The whole exist vs. non-exist thing is odd and confusing in and of itself. But so far it seems to me that an algorithm can meaningfully note “there exists an algorithm doing/perceiving X”, where X represents whatever it itself is doing/perceiving/thinking/etc. But there doesn’t seem to be any difference between 1 and N of them as far as that goes.

• That seems to be seriously GAZP-violating. I’m trying to figure out how to put my thoughts on this into words, but… there doesn’t seem to be anywhere the data is stored that could “notice” the difference. The actual program that is being the person doesn’t contain a “realness counter”. There’s nowhere in the data that could “notice” that there’s, well, more of the person. (Whatever it even means for there to be “more of a person”.)

Personally, I’m inclined in the opposite direction: that even N separate copies of the same person are the same as 1 copy of the same person until they diverge, and how much difference there is between them is, well, how separate they are.

(Though, of course, those funky Born stats confuse me even further. But I’m fairly inclined toward “extra copies of the exact same mind don’t add more person-ness, but as they diverge from each other, there may be more person-ness.” (Though perhaps it may be meaningful to talk about additional fractions of person-ness, rather than just one and then suddenly two whole persons. I’m less sure on that.))