Suppose I have two buttons, one red and one green. I know that one of those buttons (call it “G”) creates high positive utility and the other (“B”) creates high negative utility. I don’t know whether G is red and B green, or the other way around.
On your account, if I understand you correctly, to say “pressing G is the right thing to do” is meaningless, because I can’t know which button is G. Pressing G, pressing B, and pressing neither are equally good acts on your account, even though one of them creates high positive utility and the other creates high negative utility. Is that right?
On my account, I would say that the choice between red and green is a question of decision theory, and the choice between G and B is a question of morality. Pressing G is the right thing to do, but I don’t know how to do it.
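The decision-theoretic half of this can be sketched numerically. The utility values below (±100) and the 50/50 credence are illustrative assumptions, not anything stated in the dialogue; the point is only that with zero relevant information, every available press has the same expected utility, even though one of them is in fact G:

```python
# Illustrative sketch: expected utility of each act when the agent
# assigns 50/50 credence to "G is red" vs. "G is green".
# The utility magnitudes are assumptions made up for this example.
U_G, U_B = 100, -100       # pressing G is very good, pressing B very bad
p_G_is_red = 0.5           # zero relevant information -> uniform credence

def expected_utility(act):
    if act == "press_red":
        # With probability p_G_is_red, red is G; otherwise red is B.
        return p_G_is_red * U_G + (1 - p_G_is_red) * U_B
    if act == "press_green":
        return p_G_is_red * U_B + (1 - p_G_is_red) * U_G
    return 0               # press neither button

for act in ("press_red", "press_green", "press_neither"):
    print(act, expected_utility(act))
# Both presses come out to expected utility 0.0, the same as pressing
# neither -- which is why, on the expected-value account, the acts are
# "equally good" despite their very different actual outcomes.
```

This is just the familiar split the comment describes: decision theory evaluates acts by expected utility given the agent's credences (red vs. green), while the moral claim ("pressing G is the right thing to do") is indexed to the actual outcome.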
‘Pressing a button’ is one act, and ‘pressing both buttons’ and ‘pressing neither button’ are two others. If you press a button randomly, it isn’t morally relevant which random choice you made.
What does it mean to choose between G and B, when you have zero relevant information?
(shrug) It means that I do something that either causes G to be pressed, or causes B to be pressed. It means that the future I experience goes one way or another as a consequence of my act.
I have trouble believing that this is unclear; I feel at this point that you’re asking rhetorical questions by way of expressing your incredulity rather than genuinely trying to extract new knowledge. Either way, I think we’ve gotten as far as we’re going to get here; we’re just going in circles.
I prefer a moral system in which the moral value of an act relative to a set of values is consistent over time, and I accept that this means it’s possible for there to be a right thing to do even when I don’t happen to have any way of knowing what the right thing to do is… that it’s possible to do something wrong out of ignorance. I understand you reject such a system, and that’s fine; I’m not trying to convince you to adopt it.
I’m not sure there’s anything more for us to say on the subject.