# Psy-Kosh

Karma: 2,858
• Ah, never mind then. I was thinking something like: let b(x, k) = 1/sqrt(2k) when |x| < k, and 0 otherwise,

then define integral B(x) f(x) dx as the limit, as k -> 0+, of integral b(x, k) f(x) dx.

I was thinking that integral (B(x))^2 f(x) dx would then behave like integral delta(x) f(x) dx.

Now that I think about it more carefully, especially in light of your comment, perhaps that was naive and that wouldn’t actually work. (Yeah, I can see now my reasoning wasn’t actually valid there. Whoops.)
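For what it's worth, the failure mode can be seen numerically: with this b, the integral against a smooth f vanishes in the limit (it scales like sqrt(2k)), while the integral of b² against f does converge to f(0), so b² is a perfectly good nascent delta even though B itself is the zero distribution. A quick sketch, assuming nothing beyond the definitions above:

```python
import numpy as np

def b(x, k):
    """The proposed kernel: 1/sqrt(2k) on |x| < k, 0 elsewhere."""
    return np.where(np.abs(x) < k, 1.0 / np.sqrt(2 * k), 0.0)

def integrate(g, lo=-1.0, hi=1.0, n=2_000_001):
    """Simple Riemann-sum integration of g on [lo, hi]."""
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    return float(np.sum(g(x)) * dx)

f = np.cos  # a smooth test function with f(0) = 1

for k in (0.1, 0.01, 0.001):
    i1 = integrate(lambda x: b(x, k) * f(x))       # shrinks like sqrt(2k)
    i2 = integrate(lambda x: b(x, k) ** 2 * f(x))  # approaches f(0) = 1
    print(f"k={k}: int b*f = {i1:.4f}, int b^2*f = {i2:.4f}")
```

So the limit defining B kills every test integral, and no amount of squaring afterwards recovers the delta.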

Ah well. Thank you for correcting me, then. :)

• I’m not sure the commission/omission distinction is really the key here. This becomes clearer by inverting the situation a bit:

Some third party is about to forcibly wirehead all of humanity. How should your moral agent reason about whether to intervene and prevent this?

• Aaaaarggghh! (sorry, that was just because I realized I was being stupid… specifically that I’d been thinking of the deltas as orthonormal because the integral of a delta = 1.)

Though… it occurs to me that one could construct something that acted like a “square root of a delta”, which would then make an orthonormal basis (though still not part of the Hilbert space).

(EDIT: hrm… maybe not)
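The "square root of a delta" intuition is checkable numerically: the b(x, k) = 1/sqrt(2k) bump from the other thread has unit L² norm for every k, and shifted copies become exactly orthogonal once k is smaller than the separation, yet the family has no L² limit (the peak height blows up). A small sketch under those assumptions:

```python
import numpy as np

def b(x, k):
    # candidate "square root of a delta": unit L^2 norm for every k
    return np.where(np.abs(x) < k, 1.0 / np.sqrt(2 * k), 0.0)

x = np.linspace(-1, 1, 4_000_001)
dx = x[1] - x[0]

for k in (0.1, 0.01):
    norm = float(np.sum(b(x, k) ** 2) * dx)               # stays ~1
    overlap = float(np.sum(b(x, k) * b(x - 0.5, k)) * dx) # 0 once supports are disjoint
    peak = float(b(0.0, k))                               # grows like 1/sqrt(2k)
    print(f"k={k}: norm^2={norm:.4f}, overlap={overlap}, peak={peak:.2f}")
```

So each finite-k family looks orthonormal, but the k -> 0 "limit vectors" escape the Hilbert space, which is consistent with the EDIT above.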

Anyways, thank you.

• Meant to reply to this a while back. This is probably a stupid question, but...

The uncountable set that you would intuitively think is a basis for Hilbert space, namely the set of functions which are zero except at a single value where they are one, is in fact not even a sequence of distinct elements of Hilbert space, since all these functions are nonzero only on a set of measure zero, and are therefore considered to be equivalent to the zero function.

What about the semi-intuitive notion of having the Dirac delta distributions as a basis? I.e., a basis delta(x − R), parameterized by the vector R? How does that fit into all this?
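The quoted point can be illustrated directly: a function that is 1 at a single grid point and 0 elsewhere has Riemann-sum L² norm that shrinks to zero as the grid refines, so it collapses to the zero vector of L². (The deltas fare differently: they aren't elements of L² at all, but distributions, and serve as a "continuous basis" only in the rigged-Hilbert-space sense.) A minimal sketch of the first claim:

```python
import numpy as np

# e_0(x) = 1 at x = 0, else 0.  Its Riemann-sum L^2 norm goes to 0
# as the grid refines: one sample of height 1 contributes dx -> 0.
for n in (10, 1_000, 100_000):
    x = np.linspace(-1, 1, n + 1)     # even n, so x = 0 is a grid point
    dx = x[1] - x[0]
    e0 = (x == 0.0).astype(float)
    print(n, float(np.sum(e0 ** 2) * dx))  # shrinks toward 0
```

So in the continuum limit these "basis vectors" all have norm zero, which is exactly why they are identified with the zero function.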

• Ah, alright.

Actually, come to think of it, even specifying the desired behavior would be tricky. Say the agent assigned a probability of 1/2 to the proposition that tomorrow it would transition from v to w, or held some other mixed hypothesis about possible future transitions: what rules should an ideal moral-learning reasoner follow today?

I’m not even sure what it should be doing. Mix over normalized versions of v and w? What if at least one is unbounded? Yeah, on reflection, I’m not sure what the Right Way for a “conserves expected moral evidence” agent is. There are some special cases that seem to be well specified, but I’m not sure how I’d want it to behave in the general case.

• Really interesting, but I’m a bit confused about something. Unless I misunderstand, you’re claiming this has the property of conservation of moral evidence… But near as I can tell, it doesn’t.

Conservation of moral evidence would imply that if the agent expected to transition from v to w tomorrow, then it would be acting on w right now (except for being indifferent as to whether or not the transition actually happens). But what you have here would, if I understood you correctly, act on v until the moment it transitions to w, even though it knew in advance that it was going to transition.
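The distinction can be made concrete with a toy sketch (all names, values, and the probability are hypothetical, not anything from the post): an agent that is 90% sure it will transition from value function v to w tomorrow either acts on its current v, or acts on the probability-weighted mixture of v and w, and the two policies can recommend different actions.

```python
p = 0.9  # hypothetical credence that the agent transitions from v to w tomorrow
v = {"act_a": 1.0, "act_b": 0.0}   # current values
w = {"act_a": 0.0, "act_b": 1.0}   # anticipated post-transition values

def best(u):
    """Pick the action with the highest value."""
    return max(u, key=u.get)

# "Conserve expected moral evidence": act today on the p-mixture.
mixture = {a: (1 - p) * v[a] + p * w[a] for a in v}

print(best(v))        # acting on current values: act_a
print(best(mixture))  # acting on expected future values: act_b
```

The behavior described in the post corresponds to `best(v)` until the transition fires; the conservation property as described above corresponds to `best(mixture)` today.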

• Hey there, I’m mid application process. (They’re having me do the prep work as part of the application.) Anyways...

B) If you don’t mind too much: stay at App Academy. It isn’t comfortable but you’ll greatly benefit from being around other people learning web development all the time and it will keep you from slacking off.

I’m confused about that. Does App Academy have housing/dorms? I didn’t see anything about that. Or did I misunderstand what you meant?

• Cool! (Though does seem that a license would be useful for longer trips, so you’d at least have the option of renting a vehicle if needed.)

And interesting point re social environment.

• I’m just going to say I particularly liked the idea of the house cable transport system.

• Yeah, that was my very first thought re the tunnels. Excavation is expensive. (and maintenance costs would be rather higher as well.)

OTOH, we don’t even need a full solution (including a legal solution) to self-driving cars to improve things. The obvious answer to “but I might need to go on a 200-mile trip” is: rent a long-distance car as needed, and otherwise own a commuter car.

That involves far fewer coordination problems, because it’s something one can pretty much do right now: next time you purchase/lease/whatever a vehicle, get one appropriate and efficient for short distances, and just rent a long-haul vehicle as needed.

(Or, if living in a place with decent public transport, potentially no need to own a vehicle at all, of course.)

• As of now, I’m planning on coming.

Anything I should be bringing? (I.e., extra chairs, whatever?)

• Hrm… The whole exist-vs-non-exist thing is odd and confusing in and of itself. But so far it seems to me that an algorithm can meaningfully note “there exists an algorithm doing/perceiving X”, where X represents whatever it itself is doing/perceiving/thinking/etc. But there doesn’t seem to be any difference between 1 and N of them as far as that goes.

• That seems to be seriously GAZP violating. Trying to figure out how to put my thoughts on this into words but… There doesn’t seem to be anywhere that the data is stored that could “notice” the difference. The actual program that is being the person doesn’t contain a “realness counter”. There’s nowhere in the data that could “notice” the fact that there’s, well, more of the person. (Whatever it even means for there to be “more of a person”)

Personally, I’m inclined in the opposite direction: even N separate copies of the same person are the same as 1 copy of that person until they diverge, and how much difference there is between them is, well, a matter of how separate they are.

(Though, of course, those funky Born stats confuse me even further. But I’m fairly inclined toward “extra copies of the exact same mind don’t add more person-ness, but as they diverge from each other, there may be more person-ness.” (Though perhaps it would be meaningful to talk about additional fractions of person-ness, rather than just one and then suddenly two whole persons. I’m less sure on that.))