Hey, many of those look like multiple-choice questions! Let’s have a poll. I filled out the lists of possibilities as thoroughly as I could, including both answers that I think are right, and answers that I think are wrong but which other people might vote for, but I can’t have gotten everything; so vote “Other” and reply with your answer, if you have another possibility.
The Practical List
What career choice do you most strongly endorse, when you can’t be person- or skill-specific? (Wording changed: Was “What’s the optimal career choice?”) [pollid:161]
What’s the optimal donation area? [pollid:181]
What are the highest leverage political policies? [pollid:163]
What are the highest value areas of research? [pollid:164]
The Theoretical List
What’s the correct population ethics? Compared to present people, we should value future people [pollid:165] Do people have diminishing marginal value? [pollid:166]
Should we maximise expected value when it comes to small probabilities of huge amounts of value? If not, what should we do instead? [pollid:167]
How should we respond to the possibility of creating infinite value (or disvalue)? Should that consideration swamp all others? (If not, why not?) [pollid:168]
How should we respond to the possibility that the universe actually has infinite value? Does it mean that we have no reason to do any action (because we don’t increase the sum total of value in the world)? Or does this possibility refute aggregative consequentialism? [pollid:169]
How should we accommodate moral uncertainty? Should we apply expected utility theory? If so, how do we make intertheoretic value comparisons? Does this mean that some high-stakes theories should dominate our moral thinking, even if we assign them low credence? [pollid:170]
How should intuitions weigh against theoretical virtues in normative ethics? [pollid:171] Is common-sense ethics roughly correct? [pollid:172] Or should we prefer simpler moral theories? A good moral theory is [pollid:173]
Should we prioritise the prevention of human wrongs over the alleviation of naturally caused suffering? If so, by how much? [pollid:174]
What sorts of entities have moral value? Humans, presumably. But what about non-human animals? [pollid:175] Which ones? [pollid:176] Insects? [pollid:177] The natural environment? [pollid:178] Artificial intelligences? [pollid:179] Which kinds? [pollid:180]
What additional items should be on these lists?
One of these poll items is not like the others. The answer to the career question varies depending on the individual whose career is under consideration. ETA: even given some kind of non-relativistic moral realism.
By my values, that criticism applies to most everything in the list.
Optimal, by whose values? Why should anyone assume that their values dictate what “we should do”? These questions are unsolved because they haven’t been formulated in a way that makes sense. Provide the context of an actual Valuer to these questions of value, and you might make some progress toward answers.
To answer one question according to my values, the biggest problem we face is death.
You’re right. I paid lots of attention to filling in options and not enough to the wording of the questions I was copying. There are multiple interpretations of this question, so I changed it (with 9 votes entered so far, 2-0-1-0-3-3) to “What career choice do you most strongly endorse, when you can’t be person- or skill-specific?” Also interesting would have been “What is the optimal career choice for you”, but this seemed like changing the spirit of the question too much.
You can discount my “Other” answer then. My “Other” answer was “Huh? Optimal for what? Getting laid? Saving the planet? Maximising life satisfaction?” (This supplements the “For whom?” that Carl mentions.)
None. Any endorsement of career choice makes sense for certain people/skills but not for others.
Why the hell is “prevention and treatment of diseases” (e.g. the AMF, the SCI, etc.) not on the list?
It is; it’s labelled “tropical medicine”. Apparently that description was quite unclear, though, so the lack of votes for it isn’t necessarily meaningful.
I meant in Question 2.
Ack, you’re right, that should be in there. The closest match is “Development charities”, which isn’t really the same thing.
It would be interesting to extend the range of answers as follows:
Common-sense ethics is reliably, or mostly, good
Common-sense ethics is a mix of good and bad
Common-sense ethics is reliably, or mostly, bad
Common-sense ethics is useless or fails to achieve either good or bad
In other words, it is possible for ethical rules or systems to be not only incorrect, but anti-correct.