Hi Holden, thanks for writing this!
I think the currently most promising way to formally capture your person-affecting but not antinatalist intuitions in a utilitarian view would be something like the one in Teruji Thomas's 2019 GPI paper, “The Asymmetry, Uncertainty, and the Long Term”, and I would strongly recommend looking into it. To summarize:
In pairwise comparisons of options, including options with uncertainty, additional contingent happy people can only offset, but not outweigh, contingent bad lives, and, under the “soft” asymmetric views (but not the “hard” asymmetric views), they can also only offset, but not outweigh, losses to necessary people; this holds both in tradeoffs between numbers of people and probabilistically. This means that once you have enough contingent good lives to offset the losses, additional contingent good lives don't push any further for that same option, so you don't get astronomical losses from extinction.
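To make the offsetting idea concrete, here's a toy numerical sketch in Python. To be clear, this is my own simplification for illustration (deterministic case only), not Thomas's actual formalism; the option fields and the `complaint_against` function are just made up for the example.

```python
def complaint_against(x, y, hard=True):
    """Net 'complaint' against choosing x over y in a pairwise comparison.

    x, y: dicts with
      'necessary'       - total welfare of people who exist under both options
      'contingent_good' - total welfare of extra happy lives unique to that option
      'contingent_bad'  - total magnitude of badness of extra bad lives unique to it

    Losses to necessary people (relative to the other option) and contingent bad
    lives count against x. x's contingent good lives can offset its contingent bad
    lives, and on the 'soft' view also the losses to necessary people, but any
    surplus beyond that counts for nothing (offsetting without outweighing).
    """
    necessary_loss = max(0.0, y["necessary"] - x["necessary"])
    if hard:
        # contingent good lives can only cancel contingent bad lives
        return necessary_loss + max(0.0, x["contingent_bad"] - x["contingent_good"])
    # soft: contingent good lives can also cancel losses to necessary people
    return max(0.0, necessary_loss + x["contingent_bad"] - x["contingent_good"])


def at_least_as_good(x, y, hard=True):
    return complaint_against(x, y, hard) <= complaint_against(y, x, hard)


# A: humanity survives (astronomically many extra happy lives, a few bad ones).
# B: extinction (no contingent people). Necessary people fare the same either way.
A = {"necessary": 100.0, "contingent_good": 1e12, "contingent_bad": 5.0}
B = {"necessary": 100.0, "contingent_good": 0.0, "contingent_bad": 0.0}

# Totalism would favour A by ~1e12; here A's surplus good only offsets its own bad
# lives, so neither option beats the other and extinction isn't an astronomical loss.
print(at_least_as_good(A, B), at_least_as_good(B, A))  # True True
```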
There are also “wide” and “narrow” versions; under the wide versions, it's better for a better-off person to be born than a different, worse-off person, even if whichever one is born would have a good life.
You use a reasonably well-behaved voting method (with some modification) to go from pairwise comparisons between options to choosing an option from larger sets of options.
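If I remember right, the paper uses something like beatpath (Schulze) voting for this step. Here's a generic sketch of the beatpath idea in Python, not Thomas's exact construction; `margin(x, y)` is just an assumed interface giving the strength with which x beats y pairwise.

```python
from itertools import product


def beatpath_winners(options, margin):
    """Select options not beaten along any 'beatpath' of pairwise defeats.

    options: list of option labels.
    margin(x, y): strength with which x beats y pairwise (<= 0 if it doesn't).
    """
    n = len(options)
    # direct defeat strengths
    strength = [[max(0.0, margin(options[i], options[j])) if i != j else 0.0
                 for j in range(n)] for i in range(n)]
    # widest-path (Floyd-Warshall style): strongest chain of pairwise defeats
    for k, i, j in product(range(n), repeat=3):
        if i != j and i != k and j != k:
            strength[i][j] = max(strength[i][j],
                                 min(strength[i][k], strength[k][j]))
    # winners: options whose strongest path to each rival is at least as strong
    # as the rival's strongest path back
    return [options[i] for i in range(n)
            if all(strength[i][j] >= strength[j][i] for j in range(n) if j != i)]


# Toy example with a pairwise cycle A > B > C > A (margins made up):
margins = {("A", "B"): 3, ("B", "C"): 5, ("C", "A"): 1}
def margin(x, y):
    return margins.get((x, y), -margins.get((y, x), 0))

print(beatpath_winners(["A", "B", "C"], margin))  # ['A']
```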
On the “hard” asymmetric versions, long-term future quality improvements, e.g. s-risk reduction, would dominate both extinction risk reduction (ignoring aliens, acausal trade, etc.) and interventions that primarily help people in the short term. On the “soft” asymmetric versions, quality improvements and extinction risk reduction are both permissible, but interventions that primarily help people in the short term are dominated. I think the “soft” versions also permit the (very) repugnant conclusion, replacement, and similar tradeoffs at small scales (in contained hypotheticals, all else equal), but the “hard” versions don't.
I've also thought of something similar here, but it's far less developed. I think such views have been underexplored. I suspect one of the reasons is that these approaches tend to be more mathematically complex to develop in full (the independence of irrelevant alternatives is a strong simplifying assumption), which limits who can contribute and requires more work from each person to make progress. Teruji Thomas has both a PhD in mathematics and a PhD in philosophy.
If you're still sympathetic to the view that making more happy people is inherently good overall, and not just able to offset losses, that might be best captured through moral uncertainty rather than by fitting everything into one view. You could assign some credence to a symmetric or weakly asymmetric utilitarian view (possibly with moral uncertainty over how weak the asymmetry should be), and some credence to something like Teruji Thomas's view above.
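The crudest way to operationalize that would be credence-weighted expected choiceworthiness, sketched below. This assumes the views' scores are intertheoretically comparable (itself a contested assumption), and the two value functions here are stand-in placeholders I made up, not the actual views.

```python
def total_value(o):
    # stand-in for a symmetric total utilitarian score
    return o["necessary"] + o["contingent_good"] - o["contingent_bad"]


def asymmetric_value(o):
    # crude stand-in for a person-affecting score (not Thomas's pairwise view)
    return o["necessary"] - max(0.0, o["contingent_bad"] - o["contingent_good"])


def expected_choiceworthiness(o, views):
    """views: list of (credence, value_function) pairs with credences summing to 1."""
    return sum(credence * value(o) for credence, value in views)


views = [(0.7, asymmetric_value), (0.3, total_value)]  # made-up credences
options = [
    {"necessary": 100.0, "contingent_good": 1e12, "contingent_bad": 5.0},
    {"necessary": 100.0, "contingent_good": 0.0, "contingent_bad": 0.0},
]
best = max(options, key=lambda o: expected_choiceworthiness(o, views))
```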