Nice write-up. I’m one of those thoughtful creepy nerds who figured out the scale thing years ago, and now just picks a fixed percentage of total income and donates it to fixed, utility-calculated causes once a year… and then ends up giving away bits of spending money for other things anyway, but that’s warm-fuzzies.
So yeah. Roughly 10% of income (I actually divide it between a few causes, trying to hit both Far Away problems, where I can contribute a lot of utility but have little influence, and Nearby problems, where I have more influence on specific outcomes), around the end of the year or at tax time, every year, in “JUST F-ING DO IT” mode.
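For concreteness, here’s a minimal sketch of the kind of year-end split I mean; the income figure, cause names, and weights are invented for illustration, and only the roughly-10% figure reflects what I actually do.

```python
# A minimal sketch of the year-end split described above. The income
# figure, cause names, and weights are hypothetical; only the ~10%
# donation fraction comes from the habit described in the comment.
annual_income = 60_000       # hypothetical gross income for the year
donation_fraction = 0.10     # the fixed percentage, decided in advance

# Hypothetical allocation: "Far Away" causes with high marginal utility
# but little personal influence, plus a "Nearby" cause with more
# influence on specific outcomes.
allocation = {
    "far-away cause A": 0.5,
    "far-away cause B": 0.3,
    "nearby cause": 0.2,
}

budget = annual_income * donation_fraction
for cause, weight in allocation.items():
    print(f"{cause}: ${budget * weight:,.2f}")
```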
At the worst, there are quadrillions (or more) potential humans, transhumans, or posthumans whose existence depends upon what we do here and now. All the intricate civilizations that the future could hold, the experience and art and beauty that is possible in the future, depends upon the present.
This is the only thing I actually object to here. Any choice we make that influences the future at all could be said to reallocate probability between one set of future people and another. There will only be one real future, though. While I vastly prefer it to be a good one, I don’t consider abortion to be murder, and so I feel no moral compulsion to maximize the number of future people, or even to steer the future population towards any particular number. To my view, that would imply that I’m already deciding the destinies of next year’s people, let alone next aeon’s, and that’s already deeply immoral.
We can safely reason that the typical human, even in the future, will choose existence over non-existence. We can also infer which environments they would like better, and so we can maximise our efforts to leave behind an earth (solar system, universe) that’s worth living in: not an arid desert, nor a universe tiled in smiley faces.
While I agree that, since future people will never be concrete entities to us, only shadowy figures, we don’t get to decide on their literary or musical tastes, I think we should still try to make them exist in an environment worth living in and, if possible, to get them to exist at all. In the worst case, they can still decide to exit this world, and that’s easier these days than it has ever been!
Additionally, I personally value a universe filled with humans more highly than a universe filled with ■.
My own moral intuitions say that there is an optimal number, X, of human beings to live amongst (perhaps around Dunbar’s number, though maybe not if society or anonymity are important), and that we should try to balance utilizing as much of the universe’s energy as possible before heat death against maximizing the number of these ideal groups of size X. I think a universe totally filled with humans would not be very good; it seems somewhat redundant to me, since many of those humans would be extremely similar to each other while using up precious energy. I also think that individuals might feel meaningless in such a large crowd, unable to make an impact or strive for eudaimonia when surrounded by so many others. We might avoid that outcome by modifying our values about originality or human purpose, but those are values of mine that I strongly don’t want to have changed.
Bioengineering might lead to humans who are much less similar to each other.
Yeah. The problem I see with that is that if humans grow too far apart, we will thwart each other’s values or simply not value each other at all. It’s a difficult balance to maintain, though that doesn’t necessarily mean it should be rejected as an option.
Bioengineering makes CEV a lot harder.
And any number of bioengineering advances, societal/cultural shifts, and transportation and wealth improvements could help increase our effective Dunbar’s number.
That’s something I’ve wondered about, and also what you could accomplish by having an organization of people with unusually high Dunbar’s numbers.
Or a breeding population selecting for higher Dunbar’s numbers.
Or does that qualify as bioengineering?
I suppose it should count as bioengineering for purposes of this discussion.