As an egoist myself, the prospect of a very, very long life would push me to care less about long-term existential risk and more about increasing the odds of that long life for me and mine in particular.
Having no prospect of an increased lifespan would make me more likely to care about existential risk. If there's not much I can do to increase my lifespan, the question becomes how to spend the time I have. Spending it saving the world has some appeal, particularly if I can get paid for it.
I think the original post mistakenly conflates consequentialism and utilitarianism. Consequentialism only says you care about consequences—it doesn't say whose consequences you care about. It certainly doesn't make you a utilitarian, much less a utilitarian who counts future beings.
Oh, I wouldn't advise you to do something about existential risks first. But once you're signed up for cryonics and doing your best to live a healthy, safe, and happy life, the only lever left is a safer society. That means caring about a range of catastrophic and existential risks.
I agree, however, that at that point you hit diminishing returns.
Even if I've done all I can directly for my own health, until we reach longevity escape velocity, pushing longevity technology and supporting direct (computers, genetic engineering) or indirect (cognitive enhancement, productivity enhancement) technologies would seem to give more egoistic and utilitarian bang for the buck, at least if you're focused on the utility of actually existing people.
Hmm, that's a tough call. I note, however, that at that point where your marginal dollar goes is more a matter of cost-benefit calculation than a real difference in preferences (I also mostly care about currently existing people).
The question is, which will maximize life expectancy? If you estimate that existential risks are sufficiently near and high, you would reduce them. If they are sufficiently far and low, you'd go for life extension first. (The toy calculation below illustrates the tradeoff.)
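To make that tradeoff concrete, here is a minimal expected-lifespan sketch. Every number in it is a made-up assumption chosen purely for illustration, not an estimate anyone in this thread has defended:

```python
# Toy expected-lifespan comparison. All parameters are hypothetical
# placeholders for illustration, not real estimates.

baseline_years = 40      # assumed remaining life expectancy with no intervention
p_xrisk = 0.10           # assumed chance an existential catastrophe cuts life short
xrisk_reduction = 0.02   # assumed absolute risk reduction your marginal effort buys
extension_years = 15     # assumed extra years if that effort funds life extension instead

# Option A: spend the marginal effort reducing existential risk
ev_xrisk = baseline_years * (1 - (p_xrisk - xrisk_reduction))

# Option B: spend it on life extension research
ev_extension = (baseline_years + extension_years) * (1 - p_xrisk)

print(f"Reduce x-risk:  {ev_xrisk:.1f} expected years")
print(f"Life extension: {ev_extension:.1f} expected years")
```

Under these particular numbers life extension wins, but raising `p_xrisk` or shrinking `extension_years` flips the answer, which is exactly the point: the choice turns on your estimates of how near and how high the risks are, not on a difference in underlying preferences.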
I reckon it depends on a range of personal factors, not least of which is your own age. You may very well estimate that if you were not an egoist you'd go for existential risks, but that maximizing your own life expectancy calls for life extension. Even then, that shouldn't be a big problem for altruists, because at that point you're doing good for everyone anyway.