So then it is nonsense to claim that someone did the right thing, but had a bad outcome?
Well, it’s not nonsense, but it’s imprecise.
One thing it can mean is that the action had a net positive result globally, but negative results in various local frames. I assume that’s not what you mean here, though; you mean it had a bad outcome overall.
Another thing it can mean is that someone decided correctly, in the sense that they chose the option with the highest expected value, but nevertheless did the wrong thing, because their beliefs about the world were incorrect and led them to miscalculate that expected value. I assume that’s what you mean here.
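To make that concrete, here is a toy sketch in Python; all of the probabilities and payoffs are invented for illustration:

```python
# Toy sketch (all numbers invented): an agent computes expected value
# from its *believed* probability of winning, which may differ from the
# true probability.

def expected_value(p_win, payout, stake):
    """EV of a simple bet: win `payout` with probability p_win, else lose `stake`."""
    return p_win * payout - (1 - p_win) * stake

believed_ev = expected_value(p_win=0.6, payout=100, stake=100)  # ~ +20
true_ev = expected_value(p_win=0.4, payout=100, stake=100)      # ~ -20

# The agent decided correctly relative to its beliefs (believed EV > 0),
# but did the wrong thing relative to the world (true EV < 0).
print(believed_ev, true_ev)
```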
If you see someone drowning and are in a position where you can safely do nothing or risk becoming another victim by assisting, you should assist iff your assistance will be successful, right?
Again, the language is ambiguous:
Moral “should”: yes, I should assist iff my assistance will be successful (assuming that saving the person’s life is a good thing).
Decision-theory “should”: I should assist iff the expected value of assisting exceeds the expected value of staying out of it; see the toy comparison below.
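A toy comparison for the rescue case, with made-up probabilities and lives lost as the utility:

```python
# Toy comparison (probabilities invented); "utility" is lives lost, 0 is best.

p_success = 0.7     # rescue works, nobody dies
p_both_drown = 0.1  # rescuer becomes a second victim
p_fail_safe = 0.2   # rescue fails, rescuer survives

ev_assist = p_success * 0 + p_both_drown * -2 + p_fail_safe * -1  # ~ -0.4
ev_do_nothing = -1  # the victim drowns for certain

# ev_assist (~ -0.4) beats ev_do_nothing (-1), so the decision-theory
# "should" says assist, even though assisting can still end in two deaths.
print(ev_assist, ev_do_nothing)
```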
Is it moral to bet irresponsibly if you win? Is it immoral to refuse an irresponsible bet that would have paid off?
Assuming that winning the bet is moral, betting irresponsibly was the morally right thing to do. I could not have known that in advance, though, so it was an incorrect decision to make with the data I had.
Is it immoral to refuse an irresponsible bet that would have paid off?
Same reasoning.
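To put rough numbers on it (invented for illustration):

```python
# The irresponsible bet in miniature: negative expected value, but a win
# is still possible.

p_win, payout, stake = 0.05, 1000, 100

ev = p_win * payout - (1 - p_win) * stake
print(ev)  # ~ -45: taking the bet is the wrong *decision* with the data at hand

# Yet 5% of the time the bet pays off, and by the outcome standard above,
# taking it then turns out to have been the morally right *act*.
```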
I can’t see the practical use of a system where the morality of a choice is very often unknowable.
Another thing it can mean is that someone decided correctly, in the sense that they chose the option with the highest expected value, but nevertheless did the wrong thing, because their beliefs about the world were incorrect and led them to miscalculate that expected value. I assume that’s what you mean here.
Again, not quite. It’s possible for someone to accurately determine the expected results of a decision and still have the actual results vary significantly from the expected ones. Take a typical parimutuel gambling-for-cash scenario: the expected outcome is that the house gets a little richer and every gambler gets a little poorer. By the rules of the game, that outcome literally never happens; in any actual run, the winners split the prize pool and everyone else loses their whole stake.
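A minimal simulation sketch, with made-up numbers, of why that is:

```python
# Minimal parimutuel sketch (numbers invented): 10 bettors stake $10 each,
# the house takes 15%, and the winners split what is left.

import random

random.seed(0)
n_bettors, stake, house_take = 10, 10.0, 0.15

pool = n_bettors * stake              # 100.0
prize_pool = pool * (1 - house_take)  # 85.0, split among the winners

# Expected net per bettor (symmetric bets): everyone loses $1.50 on average.
expected_net = prize_pool / n_bettors - stake  # -1.5

# Actual net: winners split the prize pool; everyone else loses their stake.
winners = set(random.sample(range(n_bettors), k=2))
for i in range(n_bettors):
    net = prize_pool / len(winners) - stake if i in winners else -stake
    print(f"bettor {i}: net {net:+.2f}")  # +32.50 or -10.00; never -1.50

print(f"expected net per bettor: {expected_net:+.2f}")
```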
All right.
I agree, but this seems entirely tangential to the points either of us was making.