Are you favouring wireheading then? (See hyporational’s comment.) That is, finding it oppressively tedious that you can only get that feeling by actually going out and helping people, and wishing you could get it by a direct hit?
I think he wants to do things for which his brain whispers “this is altruistic” right now. It is true that wireheading would lead his brain to whisper that about everything. But from his current position, wireheading is not a benefit, because he values future events according to his current brain state, not his future brain state.
No. Just as I eat sweets for the pleasure of sweetness rather than to get sugar into my body, I still wouldn’t wirehead into constantly feeling sweetness in my mouth.
I find this a confusing position. Please expand
Funny thing. I started out expanding this, trying to explain it as thoroughly as possible, and all of a sudden it became confusing to me. I guess it was not a well-thought-out or consistent position to begin with. Thank you for a random rationality lesson, but you are not getting this idea expanded, alas.
Assuming his case is similar to mine: the altruism-sense favours wireheading—it just wants to be satisfied—while other moral intuitions say wireheading is wrong. When I imagine wireheading (like timujin imagines having a constant taste of sweetness in his mouth), I imagine still having that part of the brain which screams “THIS IS FAKE, YOU GOTTA WAKE UP, NEO”. And that part wouldn’t shut up unless I actually believed I was out (or it’s shut off, naturally).
When I model myself as sub-agents, the anti-wireheading and pro-altruism parts appear, in my case at least, to be independent agents by default: “I want to help people/be a good person” and “I want it to actually be real” are separate urges. What the OP seems to be appealing to is a system which says “I want to actually help people” in one go—sympathy, perhaps, as opposed to satisfying your altruism self-image.