Thanks for your kind words and your feedback/commentary!
(I’ll split my reply into multiple comments to make following the threads easier.)
In this sense, this brand of non-consequentialist theories seems to be an amalgamation of ‘moral theories’.
I’m not sure I see what you mean by that. Skippable guesses to follow:
Do you mean something like “this brand of non-consequentialist theories seem to basically just be a collection of common sense intuitions”? If so, I think that’s part of the intention for any moral theory.
Or do you mean something like that, plus that that brand of non-consequentialist theory hasn’t abstracted away from those intuitions much (such that they’re liable to something like overfitting, whereas something like classical utilitarianism errs more towards underfitting by stripping everything down to one single strong intuition[1]), and wouldn’t provide preference orderings that satisfy axioms of rationality/expected utility[2]? If so, I agree with that too, and that’s why I personally find something like classical utilitarianism far more compelling. But it’s also an “issue” a lot of smart people are aware of and yet they still endorse the non-consequentialist theories, so I think it’s still important for our moral uncertainty framework to be able to handle such theories.
Or do you mean something like “this brand of non-consequentialist theory is basically what you’d get if you averaged (or took a credence-weighted average) across all moral theories”? If so, I’m pretty sure I disagree, and one indication that this is probably incorrect is that accounting for moral uncertainty seems likely to lead to fairly different results than just going with an ordinal Kantian theory.
Or is the intention behind your words not captured by this multiple choice test, in which case please provide your short-answer and/or essay response :p
[1] My thinking here is influenced by pages 26-28 of Nick Beckstead’s thesis, though it was a while ago that I read them.
[2] Disclaimer: I don’t yet understand those axioms in detail myself; I think I get the gist, but often when I talk about them it’s more like I’m extrapolating what conclusions smart people would draw based on others I’ve seen them draw, rather than knowing what’s going on under the hood.
In this case, one relevant smart person is MacAskill, who says in his thesis: “Many theories do provide cardinally measurable choice-worthiness: in general, if a theory orders empirically uncertain prospects in terms of their choice-worthiness, such that the choice-worthiness relation satisfies the axioms of expected utility theory [footnote mentioning von Neumann et al.], then the theory provides cardinally measurable choice-worthiness.” This seems to me to imply (as a matter of how people speak, not by actual logic) that theories that aren’t cardinal, like the hypothesised Kantian theory, don’t meet the axioms of expected utility theory.
In this sense, this brand of non-consequentialist theories seems to be an amalgamation of ‘moral theories’.
I’m not sure I see what you mean by that.
The section on the Borda Rule is about how to combine theories under consideration that only rank outcomes ordinally. The lack of information about how these non-consequentialist theories rank outcomes could stem from their being underspecified—or from a combination approach like the one your post describes, though probably one of a different form than described here.
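For concreteness, here’s a minimal sketch of the basic idea behind combining ordinal theories via a credence-weighted Borda Rule. The options, rankings, and credences are made up for illustration, and this is a simplification of MacAskill’s actual rule (which includes further refinements, e.g. for handling ties):

```python
# Sketch of a credence-weighted Borda Rule for moral uncertainty.
# Each theory only ranks options ordinally; we convert ranks to Borda
# scores, weight each theory's scores by our credence in it, and sum.

def borda_score(ranking, option):
    # Score = number of options the theory ranks strictly below `option`.
    return sum(1 for other in ranking if ranking[option] < ranking[other])

def credence_weighted_borda(theories, options):
    # theories: list of (credence, ranking) pairs, where `ranking` maps
    # each option to its rank (1 = best).
    totals = {opt: 0.0 for opt in options}
    for credence, ranking in theories:
        for opt in options:
            totals[opt] += credence * borda_score(ranking, opt)
    return totals

# Toy example with hypothetical options, rankings, and credences:
options = ["lie", "tell_truth", "stay_silent"]
kantian = {"lie": 3, "tell_truth": 1, "stay_silent": 2}      # ordinal only
utilitarian = {"lie": 1, "tell_truth": 2, "stay_silent": 3}  # ordinal here too
scores = credence_weighted_borda([(0.6, kantian), (0.4, utilitarian)], options)
best = max(scores, key=scores.get)  # the maximal-score option
```

In this toy case the Kantian theory’s higher credence makes truth-telling win overall even though the utilitarian theory ranks lying first—illustrating how the aggregate can diverge from any single theory’s verdict.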
is basically what you’d get if you averaged (or took a credence-weighted average) across all moral theories”?
I wouldn’t say “all”—though it might be an average across moral theories that could be considered separately. They’re complicated theories, but maybe the pieces make more sense on their own, or the whole will make more sense once disassembled and reassembled.
like the hypothesised Kantian theory, don’t meet the axioms of expected utility theory.
This may be true of other non-consequentialist theories. What I am familiar with of Kant’s reasoning was a bit consequentialist, and if “this leads to a bad consequence under some circumstance → never do it, even under circumstances when not doing it leads to bad consequences” (which means the analysis could come to a different conclusion if it was done in a different order or reversed the action/inaction-related bias) is dropped in favor of “here are the reference classes; use the policy with the highest expected utility given this fixed relationship between reference classes and policies”, then it can be made into a theory that might meet the axioms.
What I am familiar with of Kant’s reasoning was a bit consequentialist, and if “this leads to a bad consequence under some circumstance → never do it, even under circumstances when not doing it leads to bad consequences” (which means the analysis could come to a different conclusion if it was done in a different order or reversed the action/inaction-related bias) is dropped in favor of “here are the reference classes; use the policy with the highest expected utility given this fixed relationship between reference classes and policies”, then it can be made into a theory that might meet the axioms.
I think that’s one way one could try to adapt Kantian theories, or extrapolate certain key principles from them. But I don’t think it’s what the theories themselves say. I think what you’re describing lines up very well with rule utilitarianism.
(Side note: Personally, “my favourite theory” would probably be something like two-level utilitarianism, which blends both rule and act utilitarianism, and then based on moral uncertainty I’d add some side constraints/concessions to deontological and virtue ethical theories—plus just a preference for not doing anything too drastic/irreversible in case the “correct” theory is one I haven’t heard of yet/no one’s thought of yet.)