Proof: your preference relation is transitive, or whatever?
(Maybe weird things happen if others can cause you to kill people with your bare hands, but that’s no different from threatening a utilitarian with disutility. Actually, I assume you’re assuming you can simply decide not to kill people with your bare hands, because otherwise you might end up fanatically minimizing P(bare-hands kill) or whatever.)
Weird things CAN happen if others can cause you to kill people with your bare hands (see the Lexi-Pessimist Pump here). But assuming you can choose never to be in a world where you kill someone with your bare hands, I also don’t think there are problems? Those world-states may as well just not exist.
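(For concreteness, here’s a toy sketch of that failure mode — my own construction with made-up numbers, not necessarily the exact pump behind the link. An agent that lexically minimizes P(bare-hands kill) will pay any finite fee for any strict drop in that probability, so an adversary who controls the probability can drain it:)

```python
# Toy sketch (my construction, not necessarily the linked Lexi-Pessimist Pump):
# a lexical minimizer of P(bare-hands kill) accepts *any* fee for *any* strict
# reduction in that probability, so a probability-controlling adversary wins.

p_kill = 0.5   # adversary-controlled chance the agent kills with bare hands
money = 100.0  # agent's bankroll (only the *second* lexical priority)

def accepts(delta_p: float, fee: float) -> bool:
    """Lexical agent: any strict reduction in P(kill) beats any finite fee."""
    return delta_p > 0

rounds = 0
while money > 0 and p_kill > 0:
    offer_dp, offer_fee = p_kill / 2, 10.0  # halve the risk, charge a flat fee
    if not accepts(offer_dp, offer_fee):
        break
    p_kill -= offer_dp
    money -= offer_fee
    rounds += 1

print(rounds, p_kill, money)  # -> 10 0.00048828125 0.0: broke, risk still > 0
```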
(Also, not a money pump, but consider: say I have 10^100 perfectly realistic mannequin robots and one real human captive. I give the constrained utilitarian the choice between choking one of the bodies with their bare hands or letting me wipe out humanity. Does the agent really choose not to risk killing someone themselves?)
No?
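(A back-of-the-envelope version of the mannequin case, with stand-in utility numbers and helper names of my own choosing: the chance that the choked body is the real captive is 1/(10^100 + 1), so a plain utilitarian takes the deal happily, while the lexically constrained agent refuses and lets humanity die:)

```python
from dataclasses import dataclass

N_MANNEQUINS = 10**100  # perfectly realistic robot bodies
HUMANITY = 10**10       # stand-in disutility of humanity being wiped out

@dataclass
class Outcome:
    p_bare_hands_kill: float  # probability *I* kill someone with my bare hands
    expected_utility: float   # ordinary expected-utility value

# Choice A: choke one body; P(it's the real captive) = 1 / (10^100 + 1).
choke = Outcome(
    p_bare_hands_kill=1 / (N_MANNEQUINS + 1),
    expected_utility=-1 / (N_MANNEQUINS + 1),  # expected deaths ~ 10^-100
)

# Choice B: refuse, and humanity is wiped out.
refuse = Outcome(p_bare_hands_kill=0.0, expected_utility=-HUMANITY)

def constrained_pick(a: Outcome, b: Outcome) -> Outcome:
    """Lexical: first minimize P(bare-hands kill), break ties on utility."""
    return min(a, b, key=lambda o: (o.p_bare_hands_kill, -o.expected_utility))

def utilitarian_pick(a: Outcome, b: Outcome) -> Outcome:
    """Plain expected-utility maximizer."""
    return max(a, b, key=lambda o: o.expected_utility)

print(constrained_pick(choke, refuse) is refuse)  # True: lets humanity die
print(utilitarian_pick(choke, refuse) is choke)   # True: takes the ~10^-100 risk
```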