Yes. Presumably a consequentialist should consider the probabilities of various outcomes. This is potentially problematic, since probability is in the eye of the beholder, and it’s not clear who the right beholder is. Is it B? Is it an ideal rational agent with the information available to B? An ideal but computationally limited agent?
My sense is that in the real world, it’s hard to second-guess any particular decision. There’s no good way to account for the differences among what the actor knew, what they could or should have known, and what the evaluator knows.