I like the idea of a non-technical explanation of consequentialism, but I worry that many important distinctions will be conflated or lost in the process of generating something that reads well and doesn’t require the reader to spend a lot of time thinking about the subject by themselves before it makes sense.
The issue that stands out the most to me is what you write about axiology. The point you seem to want to get across, which is what I would consider to be the core of consequentialism in one sentence, is that “[...]our idea of ‘the good’ should be equivalent or directly linked to our idea of ‘the right’.” But that woefully underspecifies a moral theory; all it does is pick out a group of related theories, which we call “consequentialist”.
It’s important to realize how much possible variation there is among consequentialist theories, and the most straightforward way to do it that I see is to give more serious consideration to the role that axiology plays in a total moral theory. For example, a basic way to taxonomize theories while simplifying away a lot of technical issues that are not important for the kind of overview you’re providing is:
1) Does the theory fundamentally concern itself with outcomes or something other than outcomes?
A theory that fundamentally concerns itself with outcomes (however “outcomes” are defined) is consequentialist. (Other theories have other concerns and other names.)
2) What kinds of outcomes does the theory concern itself with?
Outcomes concerning the satisfaction of people’s preferences.
Outcomes concerning people’s happiness.
Outcomes concerning non-human happiness.
Outcomes concerning ecological sustainability.
Outcomes concerning paperclips.
etc.
All of these describe consequentialist axiologies which lead to different consequentialist theories.
3) For consequentialist theories, the rightness of an action (i.e. the indicator of whether it should be done or not) depends on its consequences or expected consequences. What actions are right for an agent?
Any action that leads to a sufficiently good outcome, where “sufficiently good” is somehow defined...
-...in relation to the current state of the world. E.g. an action is right if it leads to an outcome that is better (according to the theory’s axiology) than the current state of things. “Leave things better than you found them.”
-...in relation to the actions that an agent can actually do. E.g. an action is right if it leads to an outcome that is better than 90% of the other outcomes (according to the theory’s axiology) that an agent can bring into effect through other actions. “Do enough good; sainthood not required.”
Any action that leads to an outcome for which there is no better outcome according to the theory, among all the other outcomes the agent can bring about.
-E.g. You have the task of distributing two bars of chocolate to Ann and Bob. Any bar of chocolate you don’t distribute immediately disappears. Your theory’s complete axiology is “all else being equal, it’s better when one person has more chocolate bars than they otherwise would have had.” Your actions can distribute chocolate bars like this:
Action A --> Outcome A: Ann 0 Bob 0
Action B --> Outcome B: Ann 1 Bob 0
Action C --> Outcome C: Ann 0 Bob 1
Action D --> Outcome D: Ann 2 Bob 0
Action E --> Outcome E: Ann 0 Bob 2
Action F --> Outcome F: Ann 1 Bob 1
According to your theory, every outcome is better than A. Outcomes D and F are better than B. Outcomes E and F are better than C. Outcomes D, E, and F are neither better than nor worse than nor equal to each other. So Actions D, E, and F would be right, and the rest would be wrong. “Act to maximize value; if that’s undefined, don’t leave any extra value on the table.”
etc.
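The chocolate example above can be checked mechanically. Here’s a hypothetical sketch (the names and the dominance test are just my reading of the stated axiology): an outcome counts as better than another iff it gives at least as much chocolate to everyone and strictly more to someone, and the right actions are those whose outcomes nothing else beats.

```python
# Hypothetical sketch of the chocolate example. Under the toy axiology,
# outcome x is better than y iff everyone has at least as many bars in x
# and someone has strictly more. Right actions lead to unbeaten outcomes.
outcomes = {
    "A": (0, 0),  # (Ann's bars, Bob's bars)
    "B": (1, 0),
    "C": (0, 1),
    "D": (2, 0),
    "E": (0, 2),
    "F": (1, 1),
}

def better(x, y):
    """True iff outcome x is better than outcome y."""
    return all(a >= b for a, b in zip(x, y)) and x != y

right = {
    name for name, out in outcomes.items()
    if not any(better(other, out) for other in outcomes.values())
}
print(sorted(right))  # ['D', 'E', 'F']
```

Note that `better` defines only a partial order on outcomes, which is exactly why several mutually incomparable actions can all come out right.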
I’ve left out a lot of issues concerning expected consequences vs. actual consequences, agent knowledge, value measurement and aggregation, satisficing, etc., which I think are not important given the goals of your FAQ. But I’d say it’s important to get across to the non-specialist that the range of consequentialist theories is pretty large, and there are a lot of issues that a consequentialist theory will have to deal with. (In other words, there’s no monolithic theory called “consequentialism” that you can subscribe to which will pass judgements on your actions. If you say you believe in “consequentialism”, you have to say more in order to pin down what actions you believe are right and wrong.) If you don’t make this clear, people may fill in the blanks in idiosyncratic ways and then react to the ways they’ve filled them in, which is likely not to persuade anyone, or more importantly, inform anyone. One easy way to resolve this is to define consequentialist theories as those concerned with outcomes, define consequentialist axiologies as theories of what kinds of outcomes are valuable, describe some common methods for determining right actions, and say that a consequentialist moral theory is a consequentialist axiology + a way of determining right actions based on that axiology.
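The decomposition suggested in the last sentence can be sketched in code. Everything here is made up for illustration (the axiology, the decision rules, the action names); the point is only that one axiology combined with different decision rules yields different verdicts.

```python
# A minimal sketch: a consequentialist theory = an axiology (a ranking of
# outcomes) + a decision rule turning that ranking into verdicts on actions.
def make_theory(axiology, decision_rule):
    """Build a theory: maps {action: outcome} to the set of right actions."""
    return lambda options: decision_rule(axiology, options)

# One toy axiology: outcomes are tuples of happiness levels; more total is better.
total_happiness = lambda outcome: sum(outcome)

# Two decision rules over the same axiology.
def maximize(axiology, options):
    """Only the value-maximizing actions are right."""
    best = max(axiology(o) for o in options.values())
    return {a for a, o in options.items() if axiology(o) == best}

def beat_status_quo(axiology, options, status_quo=(0, 0)):
    """'Leave things better than you found them.'"""
    return {a for a, o in options.items() if axiology(o) > axiology(status_quo)}

options = {"give": (2, 2), "keep": (3, 0), "burn": (0, 0)}
print(sorted(make_theory(total_happiness, maximize)(options)))         # ['give']
print(sorted(make_theory(total_happiness, beat_status_quo)(options)))  # ['give', 'keep']
```

Same axiology, different rules, different sets of right actions: that gap is exactly what saying “I believe in consequentialism” leaves open.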
EDIT FOR CLARITY: My point is not that you don’t ever bring up these issues, but that these issues are fundamental (theoretically and pedagogically) and I’d make sure that the structure of the FAQ emphasizes that.