I guess it seems bizarre because you’re changing your behavior in response to a piece of information that tells you nothing about moral philosophy and nothing about the consequences of the behavior. Or is the idea that there are good consequences from timeless cooperation between conflicting selves, or something? But I’m not seeing any gains from trade here, and cooperation isn’t Bostrom and Ord’s original justification, as far as I know. The original scenario is about an agent whole-heartedly committed to doing the right thing as defined by some procedure he doesn’t know the outcome of. And what if you found out the earlier donation had been a pure behavioral tic of a sort that doesn’t respond to cooperation? Would you still treat it as though it had been made by you, or would you treat it as though it had been made by something else? If the Parliamentary Model tells you to put 30% of your effort into saving puppies, is it good enough if 30% of your Everett copies put all their effort into it and 70% put none of their effort into it? If so, how much effort should you expend on research into what your parallel selves are currently up to? I’m very confused here, and I’m sure it’s partly because I don’t understand the parliamentary model, but I’m not convinced it’s wholly because of that.
I guess you’re right, the Parliamentary Model seems a better model for moral conflict than moral uncertainty. It doesn’t affect my original point too much (that it’s not necessarily irrational to diversify charitable giving), since we do have moral conflict as well as moral uncertainty, but we should probably keep thinking about how to deal with moral uncertainty.
I think if you apply this reasoning to moral conflict between different kinds of altruism, it becomes a restatement of “purchase fuzzies and utilons separately”, except with more idealized assumptions about partial selves as rational strategists. It seems to me that if I’m the self that wants utilons, then “purchase fuzzies and utilons separately” is a more realistic strategy for me to use in that it gives up only what is needed to placate the other selves, rather than what the other selves could bargain for if they too were rational agents. With parliament-like approaches to moral conflict it sometimes feels to me as though I’m stuck in a room with a rabid gorilla and I’m advised to turn into half a gorilla to make the room’s output more agenty, when what is really needed is some relatively small amount of gorilla food, or maybe a tranquilizer gun.
You may not be a typical person. Consider instead someone who’s conflicted between egoism, utilitarianism, and deontology, where these moralities get more or less influence from moment to moment in a chaotic manner but maintain a sort of long-term power balance. The Parliamentary Model could be a way for such a person to coordinate actions so that he doesn’t work against himself.