The genre here is philosophy, and a common type of argument is the thought experiment: “If you had to choose between A and B, what would you choose?” (For example: “is it better to prevent one untimely death, or to allow 10 people to live who would otherwise never have been born?”)
It’s common to react to questions like this with comments like “I don’t really think that kind of choice comes up in real life; actually you can usually get both A and B if you do things right” or “actually A isn’t possible; the underlying assumptions about how the world really works are off here.” My general advice when considering philosophy is to avoid reactions like this and think about what you would do if you really had to make the choice that is being pointed at, even if you think the author’s underlying assumptions about why the choice exists are wrong. Similarly, if you find one part of an argument unconvincing, I suggest pretending you accept it for the rest of the piece anyway, to see whether the rest of the arguments would be compelling under that assumption.
I often give an example of how one could face a choice between A and B in real life, to make it easier to imagine—but it’s not feasible to give this example in enough detail and with enough defense to make it seem realistic to all readers, without a big distraction from the topic at hand.
Philosophy requires some amount of suspending disbelief, because the goal is to ask questions about (for example) what you value, while isolating them from questions about what you believe. (For more on how it can be useful to separate values and beliefs, see Bayesian Mindset.)
I agree with these notes, and liked this post, but I think there’s an extra caution worth adding to them.
I think that to do this kind of deliberation right, what one needs to do is something like:
1. Take a question that’s associated with a complex tangle of real-world considerations, e.g. “is creating extra lives as good as preventing deaths”.
2. For the purposes of getting clarity on a specific subquestion, isolate just one axis of importance to look at and set the other considerations aside (as your notes advise doing, and as you did when setting aside the considerations that the question of reproductive decisions would bring in).
3. Within that decontextualized frame, closely analyze the specific subquestion (as your post does).
4. Then, once you’ve done the decontextualized analysis, recontextualize the subquestion by bringing in all the other considerations again and reflect on how much weight the decontextualized analysis should be given.
I have a feeling that a lot of EA/rationalist thinking on this specific question does steps 1-3, but then doesn’t really do the last step or leaves it implicit. This makes sense, since it’s much easier to do the first three steps. For this question about population ethics, you can bring in relatively formal frameworks that let you do things like prove impossibility theorems, but then there’s no similar formal tool that you could apply to answer the question of “so does this actually matter”.
But I’m concerned that the end result is that people see a bunch of this rigorous reasoning and conclude something like “the formal framework implies that creating extra lives is as good as saving lives, so EAs should prioritize creating lives just as much as saving lives”, rather than concluding “the formal framework implies that creating extra lives is as good as saving lives, but we don’t know how much weight we should give to this reasoning over other considerations, so we still mostly remain confused”. (Your post does say this at the end, but I’m worried that a lot of people will still draw the former conclusion.)
As an intentionally silly analogy, suppose that someone asked me “is it better for there to be green objects rather than red objects in the universe”. And suppose that I agreed to consider this question in the abstract, philosophy-style. Further suppose that there was psychological research saying that green tends to make people slightly calmer and relaxed, whereas red was slightly distressing. And this research was really, really convincing and rigorous, so that I could actually prove without a doubt that if you have to pick between green and red objects, it’s better for people’s well-being for there to be green objects.
Now it might be a valid argument that, other things being equal and if you have to choose between them, green objects are better. But even if that were true, it would obviously be silly to take this conclusion to imply anything about EA policy. EAs shouldn’t make it a cause area to paint everything green rather than red. First, the difference in wellbeing just isn’t big enough to care about, and second, this would conflict with a number of other considerations, such as it being more aesthetic (and thus more conducive to well-being) if you can use a variety of colors.
In the case of green vs. red, this seems relatively obvious, since we can apply something like a quantitative framework to the broader question of “should green over red therefore be EA policy”. We can say that yes, green is maybe slightly better, but the net impact is very small, and also it would have these other effects that would produce an overall reduction in wellbeing. Even if we can’t actually literally do the math, the overall magnitude of the effects seems obvious enough.
Whereas there’s a tendency in some EA circles to do the kind of analysis that your post does on the “creating vs. saving lives” question, and then conclude that this should be a major consideration for EA policy. (To be clear, I recognize that your post doesn’t do that, and is suitably cautious about how much weight to put on these thoughts.) I think this might be committing a similar kind of error as analyzing green vs. red and then drawing substantial policy implications from it. The problem is just much harder to see, since we don’t have a broader quantitative framework where we could formally ask questions like “what is the overall impact of these particular thought experiments on our general ethical considerations” in the same way as we can ask “what is the overall impact of favoring green over red on well-being in general”. And since we don’t have anything resembling such a framework, it’s easy to not even notice that it’s missing.
I broadly agree.