This post is great.

Commentary:

My impression is that popular consequentialist theories are typically cardinal, while popular non-consequentialist theories are typically ordinal. For example, a Kantian theory may simply tell you that lying is worse than not lying, but not by how much, so you cannot directly weigh that “bad” against the goodness/badness of other actions/outcomes (whereas such comparisons are relatively easy under most forms of utilitarianism).
In this sense, this brand of non-consequentialist theories seems to be an amalgamation of ‘moral theories’.
8. Please let me know if you think it’d be worth me investing more time, and/or adding more words, to try to make this section—or the prior one—clearer. (Also, just let me know about any other feedback you might have!)
It would be useful to have an example of how Variance Voting works. Also, the examples for the other methods are fantastic!
The post after that will discuss how we can combine the approaches in the first two posts with sensitivity analysis and value of information analysis.
What is sensitivity analysis? And is there any literature or information on how value of information analysis can account for things like unknown unknowns?
or even whether I should spend much time making that decision, because that might trade off to some extent with time and money I could put towards longtermist efforts (which seem more choice-worthy according to other moral theories I have some credence in).
What longtermist efforts might there be according to the theory that (if you were certain of it) you’d choose to be vegan?
General:
This post is concerned with MacAskill’s (pure) approaches to moral uncertainty. Here are links to some alternatives:

Things that aren’t in this post:

1.

5. The matter of how to actually assign “units” or “magnitudes” of choice-worthiness to different options, and what these things would even mean, is complex, and I won’t really get into it in this sequence.

2.

(Hybrid procedures will not be discussed in this post; interested readers can refer to MacAskill’s thesis.)

(Link from earlier in the post: Will MacAskill’s 2014 thesis. Alas, its table of contents doesn’t entirely use the same terminology, so it’s not clear where Hybrid procedures are discussed. However, if they are discussed after Variance Voting, as in the post, then this may be somewhere after page 89.)

3. (see the code sketch after this list)
7. MacAskill later notes that a simpler method (which doesn’t subtract the number of options that are more choice-worthy) can be used when there are no ties. His calculations for the example I quote and work through in this post use that simpler method. But in this post, I’ll stick to the method MacAskill describes in this quote (which is guaranteed to give the same final answer in this example anyway).
4.
9. In a (readable and interesting) 2019 paper, MacAskill writes “so far, the implications for practical ethics have been drawn too simplistically [by some philosophers.]
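Since footnote 7 (item 3 above) contrasts two Borda scoring methods, here is a minimal Python sketch of both. This is an illustration only: the options, rankings, and credences are invented rather than taken from MacAskill’s example.

```python
# A minimal sketch of the Borda Rule under moral uncertainty. The options,
# rankings, and credences are invented, not MacAskill's example.

def borda_scores(ranking, subtract_higher=True):
    """Borda scores for one theory's ordinal ranking (assumes no ties).

    ranking: options listed from most to least choice-worthy.
    subtract_higher=True  -> score = (# options ranked below) minus
                             (# options ranked above), as in the quote.
    subtract_higher=False -> score = (# options ranked below) only: the
                             simpler method, which gives the same ordering
                             when there are no ties.
    """
    n = len(ranking)
    scores = {}
    for position, option in enumerate(ranking):  # position 0 = best
        below = n - position - 1
        above = position
        scores[option] = below - above if subtract_higher else below
    return scores

# Two hypothetical theories ranking three options, with credences:
theories = {"T1": (0.6, ["A", "B", "C"]),
            "T2": (0.4, ["C", "B", "A"])}

for subtract_higher in (True, False):
    totals = {opt: 0.0 for opt in "ABC"}
    for credence, ranking in theories.values():
        for opt, score in borda_scores(ranking, subtract_higher).items():
            totals[opt] += credence * score
    print(subtract_higher, {k: round(v, 2) for k, v in totals.items()})
# Both methods give the same ordering (A > B > C):
# True  {'A': 0.4, 'B': 0.0, 'C': -0.4}
# False {'A': 1.2, 'B': 1.0, 'C': 0.8}
```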
(Link from earlier in the post: Will MacAskill’s 2014 thesis. Alas, its table of contents doesn’t entirely use the same terminology, so it’s not clear where Hybrid procedures are discussed. However, if they are discussed after Variance Voting, as in the post, then this may be somewhere after page 89.)
Oh, good point, should’ve provided page numbers! Hybrid procedures (or Hybrid Views) are primarily discussed on pages 117-122. I’ve now edited my post to include those page numbers. (You can also use command+F/control+F to find the key terms you’re after in the thesis.)
“or even whether I should spend much time making that decision, because that might trade off to some extent with time and money I could put towards longtermist efforts (which seem more choice-worthy according to other moral theories I have some credence in).”
What longtermist efforts might there be according to the theory that (if you were certain of it) you’d choose to be vegan?
Not sure I understand this question. I’ll basically expand on/explain what I said, and hope that answers your question somewhere. (Disclaimer: This is fairly unpolished, and I’m trying more to provide you an accurate model of my current personal thinking than provide you something that appears wise and defensible.)
I currently have fairly high credence in longtermism, which I’d roughly phrase as the view that, even in expectation and given difficulties with distant predictions, most of the morally relevant consequences of our actions lie in the far future (meaning something like “anywhere from 100 years from now till the heat death of the universe”). In addition to that fairly high credence, it seems to me intuitively that the “stakes are much higher” on longtermism than on non-longtermist theories (e.g., a person-affecting view that only cares about people alive today). (I’m not sure if I can really formally make sense of that intuition, because maybe those theories should be seen as incomparable and I should use variance voting to give them equal say, but at least for now that’s my tentative impression.)
I also have probably somewhere between 10% and 90% credence that at least many nonhuman animals are conscious in a morally relevant sense, with non-negligible moral weights. And this theory again seems to suggest much higher stakes than a human-only view would (there are a bunch of reasons one might object to this, and they do lower my confidence, but it still seems like it makes more sense to say that animal-inclusive views say there’s all the potential value/disvalue the human-only view said there was, plus a whole bunch more). I haven’t bothered pinning down my credence much here, because multiplying 0.1 by the amount of suffering caused by an individual’s contributions to factory farming, if that view is correct, already seems enough to justify vegetarianism, making my precise credence less decision-relevant. (I may use a similar example in my post on value of information, now I think about it.)
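To make that multiplication concrete, here is a minimal sketch with invented placeholder numbers (none of these figures are real estimates from the post or this thread):

```python
# Back-of-the-envelope expected-value check (all numbers invented for
# illustration; none are real estimates).

credence_animals_matter = 0.1    # low end of the 10-90% range
suffering_if_view_true = 1000    # arbitrary "suffering units" per year an
                                 # individual's meat-eating causes, *if* the
                                 # animal-inclusive view is correct
cost_of_vegetarianism = 20       # personal cost per year, same arbitrary scale

expected_suffering_averted = credence_animals_matter * suffering_if_view_true
print(expected_suffering_averted, ">", cost_of_vegetarianism, "?",
      expected_suffering_averted > cost_of_vegetarianism)  # 100.0 > 20 ? True
# Even at the low end of the credence range, the expected benefit swamps the
# cost, which is why pinning the credence down further isn't very
# decision-relevant for this particular choice.
```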
As MacAskill notes in that paper, while vegetarianism is typically cheaper than meat-eating, strict vegetarianism will almost certainly be at least slightly more expensive (or inconvenient/time-consuming) than “vegetarian except when it’s expensive/difficult”. A similar thing would likely apply more strongly to veganism. So the more I move towards strict veganism, the more time and money it costs me. It’s not much, and a very fair argument could be made that I probably spend more on other things, like concert tickets or writing these comments. But it still does trade off somewhat against my longtermist efforts (currently centring on donating to the EA Long Term Future Fund and gobbling up knowledge and skills so I can do useful direct work, but I’ll also be starting a relevant job soon).
To me, the stakes under “longtermism plus animals matter” seem higher than under just “animals matter”. Additionally, I have fairly high credence in longtermism, and no reason to believe that conditioning on animals mattering makes longtermism less likely (so even if I accept “animals matter”, I’d have basically exactly the same fairly high credence in longtermism as before).
It therefore seems that a heuristic MEC (maximise expected choice-worthiness) type of thinking should make me lean towards what longtermism says I should do, though with “side constraints” or “low-hanging fruit being plucked” from an “animals matter” perspective. This seems extra robust because, even if “animals matter”, I still expect longtermism is fairly likely, and then a lot of the policies that seem wise from a longtermist angle (getting us to existential safety, expanding our moral circle, raising our wisdom and ability to prevent suffering and increase joy) seem fairly wise from an “animals matter” angle too (because they’d help us help nonhumans later). (But I haven’t really tried to spell that last assumption out to check it makes sense.)
This is a somewhat hard-to-read (at least for me) Wikipedia article on sensitivity analysis. It’s a common tool; my extension of it to moral uncertainty would basically boil down to “Do what people usually advise, but for moral uncertainty too.” I’ll link this comment (and that part of this post) to my post on that once I’ve written it.
Also, sensitivity analysis is extremely easy in Guesstimate (though I’m not yet sure precisely how to interpret the results). Here’s the Guesstimate model that’ll be the central example in my upcoming post. To do a sensitivity analysis, just go to the variable of interest (in this case, the key outcome is “Should Devon purchase a fish meal (0) or a plant-based meal (1)?”), click the cog/speech bubble, then click “Sensitivity”. On each of the variables feeding into this one, you’ll now see a number in green showing how sensitive the output is to that input.
In this case, it appears that the variables the outcome is most sensitive to (and thus that are likely most worth gathering info on) are the empirical question of how many fish hedons the fish meal would cause, followed at some distance by the moral question of the choice-worthiness of each fish hedon according to T1 (how much does that theory care about fish?) and the empirical question of how many human hedons the fish meal would cause (how much would Devon enjoy the meal?).
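For readers who’d rather see the idea outside Guesstimate, here is a rough Python analogue of this kind of Monte Carlo sensitivity check. This is my own sketch, not Guesstimate’s actual method, and all the distributions below are invented:

```python
import numpy as np

# Rough Monte Carlo analogue of a sensitivity analysis (invented
# distributions; not the actual Guesstimate model).
rng = np.random.default_rng(0)
n = 10_000

fish_hedons = rng.normal(-50, 30, n)     # hedons the meal causes the fish
human_hedons = rng.normal(5, 2, n)       # hedons Devon gets from the meal
t1_fish_weight = rng.uniform(0, 1, n)    # T1's weight per fish hedon

# Toy stand-in for the model's output (net choice-worthiness of the fish meal):
output = human_hedons + t1_fish_weight * fish_hedons

# One simple sensitivity measure: each input's correlation with the output.
for name, values in [("fish_hedons", fish_hedons),
                     ("human_hedons", human_hedons),
                     ("t1_fish_weight", t1_fish_weight)]:
    r = np.corrcoef(values, output)[0, 1]
    print(f"{name}: r = {r:+.2f}")
# Inputs with larger |r| move the output more, so (other things equal)
# they're the most promising places to gather further information.
```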
And is there any literature or information on how value of information analysis can account for things like unknown unknowns?
Very good question. I’m not sure, but I’ll try to think about and look into that.

It’s good to have a word for that sort of thing.
It would be useful to have an example of how Variance Voting works. Also, the examples for the other methods are fantastic!
Thanks! I try to use concrete examples wherever possible, and especially when dealing with very abstract topics. Basically, if it was a real struggle for me to get to an understanding from existing sources, that signals to me that examples would be especially useful here, to make things easier on others.
In the case of Variance Voting, I think I stopped short of fully getting to an internalised, detailed understanding, partly because MacAskill doesn’t actually provide any numerical examples (only graphical illustrations and an abstract explanation). I’ll try to read the paper he links to and then update this with an example.
I’ve now substantially updated/overhauled this article, partly in response to your feedback. One big thing was reading more about variance voting/normalisation and related ideas, and, based on that, substantially changing how I explain that idea and adding a (somewhat low confidence) worked example. Hope that helps make that section clearer.
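In the spirit of that worked example, here is a minimal, low-confidence Python sketch of variance normalisation as I understand it: each theory’s choice-worthiness scores are standardised to mean 0 and variance 1 before taking the credence-weighted sum. The theories, scores, and credences are invented, not the post’s actual example.

```python
import numpy as np

# Low-confidence sketch of variance normalisation; all numbers invented.
options = ["A", "B", "C"]
theories = {
    "T1": (0.7, np.array([10.0, 4.0, 1.0])),   # credence, raw choice-worthiness
    "T2": (0.3, np.array([0.0, 90.0, 30.0])),  # same options, much bigger raw scale
}

total = np.zeros(len(options))
for credence, cw in theories.values():
    normalised = (cw - cw.mean()) / cw.std()  # mean 0, variance 1 per theory
    total += credence * normalised

print(dict(zip(options, total.round(2).tolist())))
# {'A': 0.61, 'B': 0.21, 'C': -0.83}
print("Best option:", options[int(np.argmax(total))])  # A
# Without normalisation, T2's big raw numbers would let it dominate despite
# its lower credence (raw credence-weighted scores: A=7.0, B=29.8, C=9.7,
# so T2's favourite option would win). After normalisation, each theory
# gets "equal say" per unit of credence.
```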
If there are things that still seem unclear, and especially if anyone thinks I’ve made mistakes in the Variance Voting part, please let me know.
Thanks for your kind words and your feedback/commentary!
(I’ll split my reply into multiple comments to make following the threads easier.)
In this sense, this brand of non-consequentialist theories seems to be an amalgamation of ‘moral theories’.
I’m not sure I see what you mean by that. Skippable guesses to follow:
Do you mean something like “this brand of non-consequentialist theories seems to basically just be a collection of common-sense intuitions”? If so, I think that’s part of the intention for any moral theory.
Or do you mean something like that, plus that that brand of non-consequentialist theory hasn’t abstracted away from those intuitions much (such that they’re liable to something like overfitting, whereas something like classical utilitarianism errs more towards underfitting by stripping everything down to one single strong intuition[1]), and wouldn’t provide preference orderings that satisfy axioms of rationality/expected utility[2]? If so, I agree with that too, and that’s why I personally find something like classical utilitarianism far more compelling. But it’s also an “issue” a lot of smart people are aware of and yet they still endorse non-consequentialist theories, so I think it’s still important for our moral uncertainty framework to be able to handle such theories.
Or do you mean something like “this brand of non-consequentialist theory is basically what you’d get if you averaged (or took a credence-weighted average) across all moral theories”? If so, I’m pretty sure I disagree, and one indication that this is probably incorrect is that accounting for moral uncertainty seems likely to lead to fairly different results than just going with an ordinal Kantian theory.
Or is the intention behind your words not captured by this multiple-choice test? In that case, please provide your short-answer and/or essay response :p
[1] My thinking here is influenced by pages 26-28 of Nick Beckstead’s thesis, though it was a while ago that I read them.
[2] Disclaimer: I don’t yet understand those axioms in detail myself; I think I get the gist, but often when I talk about them it’s more like I’m extrapolating what conclusions smart people would draw based on others I’ve seen them draw, rather than knowing what’s going on under the hood.
In this case, one relevant smart person is MacAskill, who says in his thesis: “Many theories do provide cardinally measurable choice-worthiness: in general, if a theory orders empirically uncertain prospects in terms of their choice-worthiness, such that the choice-worthiness relation satisfies the axioms of expected utility theory [footnote mentioning von Neumann et al.], then the theory provides cardinally measurable choice-worthiness.” This seems to me to imply (as a matter of how people speak, not by actual logic) that theories that aren’t cardinal, like the hypothesised Kantian theory, don’t meet the axioms of expected utility theory.
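As a toy illustration of why satisfying those axioms yields cardinal values (my own example, not from the thesis): if a theory ranks A above B above C and is indifferent between B for sure and a lottery over A and C, that indifference probability pins down B’s position on a cardinal scale.

```python
# Toy vNM-style calibration (my own illustration, not from the thesis).
# Suppose a theory ranks A > B > C. Fix a scale by setting CW(A)=1, CW(C)=0.
# If the theory is indifferent between B for sure and the lottery
# "A with probability p, else C", the axioms force:
#   CW(B) = p * CW(A) + (1 - p) * CW(C) = p
p = 0.6  # hypothetical indifference probability
cw = {"A": 1.0, "C": 0.0}
cw["B"] = p * cw["A"] + (1 - p) * cw["C"]
print(cw)  # {'A': 1.0, 'C': 0.0, 'B': 0.6}
# A merely ordinal theory (like the hypothesised Kantian one) offers no such
# indifference judgments over lotteries, so nothing pins down "how far" B
# sits between A and C.
```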
In this sense, this brand of non-consequentialist theories seems to be an amalgamation of ‘moral theories’.
I’m not sure I see what you mean by that.
The section on the Borda Rule is about how to combine theories under consideration that only rank outcomes ordinally. The lack of information about how these non-consequentialist theories rank outcomes could stem from them being underspecified—or from a combination approach like the one your post describes, though probably of a different form than described here.
is basically what you’d get if you averaged (or took a credence-weighted average) across all moral theories”?
I wouldn’t say “all”—though it might be an average across moral theories that could be considered separately. They’re complicated theories, but maybe the pieces make more sense, or it’ll make more sense if disassembled and reassembled.
like the hypothesised Kantian theory, don’t meet the axioms of expected utility theory.
This may be true of other non-consequentialist theories. What I am familiar with of Kant’s reasoning was a bit consequentialist, and if “this leads to a bad consequence under some circumstance → never do it, even under circumstances where not doing it leads to bad consequences” (which means the analysis could come to a different conclusion if it was done in a different order or reversed the action/inaction-related bias) is dropped in favor of “here are the reference classes; use the policy with the highest expected utility given this fixed relationship between reference classes and policies”, then it can be made into a theory that might meet the axioms.
What I am familiar with of Kant’s reasoning was a bit consequentialist, and if “this leads to a bad consequence under some circumstance → never do it, even under circumstances where not doing it leads to bad consequences” (which means the analysis could come to a different conclusion if it was done in a different order or reversed the action/inaction-related bias) is dropped in favor of “here are the reference classes; use the policy with the highest expected utility given this fixed relationship between reference classes and policies”, then it can be made into a theory that might meet the axioms.
I think that’s one way one could try to adapt Kantian theories, or extrapolate certain key principles from them. But I don’t think it’s what the theories themselves say. I think what you’re describing lines up very well with rule utilitarianism.
(Side note: Personally, “my favourite theory” would probably be something like two-level utilitarianism, which blends both rule and act utilitarianism, and then based on moral uncertainty I’d add some side constraints/concessions to deontological and virtue ethical theories—plus just a preference for not doing anything too drastic/irreversible in case the “correct” theory is one I haven’t heard of yet/no one’s thought of yet.)