Might it be worth spending some more time investigating arguments for existential risk reduction that don’t presuppose consequentialism?
Most non-consequentialists are not indifferent to consequences. For example, they might believe in punishing drunk drivers irrespective of whether they run into someone—but if they drive drunk and then actually kill someone, that is still highly relevant information.
Egoists are more of a problem from the perspective of this cause, I believe.
That’s an inferential step further, but they could be swayed by the prospect of a very, very long life. It’s a really long shot, but existential risks are a barrier to personal immortality.
Sure—egoists assign some value to avoiding the end of the world.
For them, it isn’t billions of times worse than all their friends and relatives dying, though.
Smaller utilities mean that the “tiny chance times huge utility” sums don’t have the same results as for utilitarians.
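To make that concrete, here is a toy version of the sum, with purely illustrative numbers of my own: a one-in-a-million chance of averting extinction, with ten billion lives counted by the utilitarian versus the egoist’s one.

$$
\mathbb{E}[U_{\text{utilitarian}}] \approx 10^{-6} \times 10^{10} = 10^{4} \text{ lives},
\qquad
\mathbb{E}[U_{\text{egoist}}] \approx 10^{-6} \times 1 = 10^{-6} \text{ lives}.
$$

The utilitarian’s expected value dwarfs everyday alternatives; the egoist’s is negligible next to ordinary ways of reducing personal risk.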
This results in disagreements over policy issues. For instance, an egoist might regard a utilitarian organisation like the Singularity Institute gaining power as a bad thing, since it plainly has such a different set of values: it would be willing to gamble on small chances of a huge utility, while the egoist might regard that huge utility as illusory.
This is a problem because (I claim) the actions of most people more closely approximate those of egoists than utilitarians—since they were built by natural selection to value their own inclusive fitness.
The Singularity Institute is a kind of utilitarian club—where utilitarians club together in an attempt to steal the future, against practically everyone else’s wishes.
Beware Pascal’s wager. It’s also worth noting that Eliezer himself doesn’t gamble on a small probability. But perhaps you meant the difference the egoist could make? In that case I agree it amounts to a much smaller probability.
On the other hand, I think the prospect of living a few aeons represents by itself a huge utility, even for an egoist. It might still be worth a long shot.
If an example of where there is a difference would help, consider these two possibilities:
1. 1% of the population takes over the universe;
2. everyone is obliterated (99% chance), or “everyone” takes over the universe (1% chance).
To an egoist those two possibilities look about equally bad.
To those whose main concern is existential risk, the second option looks a lot worse.
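Spelling out the arithmetic behind those two reactions (reading the first option, as the comparison seems to intend, as only the 1% surviving):

$$
P_{\text{egoist}}(\text{I survive}) \approx 0.01 \ \text{either way},
\qquad
P(\text{extinction}) = 0 \ \text{(first)} \ \text{vs.} \ 0.99 \ \text{(second)}.
$$

So an egoist outside the guaranteed 1% is roughly indifferent, while someone minimizing existential risk sees the second option as catastrophically worse.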
I would call myself more of an egoist, and I would say the first possibility looks really good and the second possibility looks pretty bad. I of course assume that I am part of the 1%.
As an egoist myself, I find the prospect of a very, very long life pushes me to care less about long-term existential risk and more about increasing the odds of that long life for me and mine in particular.
Having no prospect of an increased life span would make me more likely to care about existential risk. If there’s not much I can do to increase my lifespan, the question becomes how to spend that time. Spending it saving the world has some appeal, particularly if I can get paid for it.
I think the original post mistakenly conflates consequentialism and utilitarianism. Consequentialism only says you care about consequences; it doesn’t say whose consequences you care about. It certainly doesn’t make you a utilitarian, let alone one who counts future beings.
Oh, I wouldn’t advise you to do something about existential risks first. But once you’re signed up for cryonics and doing your best to live a healthy, safe, and happy life, the only lever left is a safer society. That means addressing a range of catastrophic and existential risks.
I agree, however, that at that point you hit diminishing returns.
Even if I’ve done all I can directly for my own health, until we reach longevity escape velocity, pushing longevity technology, and supporting direct (computers, genetic engineering) or indirect (cognitive enhancement, productivity enhancements) contributing technologies, would seem to give more egoistic and utilitarian bang for the buck, at least if you’re focused on the utility of actually existing people.
Hmm, that’s a tough call. I note, however, that at that point, where your marginal dollar goes is more a matter of cost-benefit calculation than of a real difference in preferences (I also mostly care about currently existing people).
The question is, which will maximize life expectancy? If you estimate that existential risks are sufficiently near and high, you would reduce them. If they are sufficiently far and low, you’d go for life extension first (a toy version of this calculation is sketched below).
I reckon it depends on a range of personal factors, not least of which is your own age. You may very well estimate that if you were not an egoist you’d go for existential risks, but that maximizing your own life expectancy calls for life extension. Even then, that shouldn’t be a big problem for altruists, because at that point you’re doing good for everyone anyway.
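For what it’s worth, here is a minimal sketch of the kind of life-expectancy comparison being described. The function, both options, and every number in it are hypothetical placeholders of mine, not estimates from this thread; only the shape of the calculation is meant to carry over.

```python
# Toy model of the trade-off discussed above: does a marginal donation buy more
# personal life expectancy via existential-risk reduction or via life-extension
# research? Every number below is a made-up placeholder, not an estimate from
# the discussion; only the shape of the comparison matters.

def expected_years(annual_xrisk, bonus_years_if_survive, horizon=60):
    """Expected remaining life-years over `horizon` years, assuming a constant
    annual probability of existential catastrophe, plus `bonus_years_if_survive`
    extra years (e.g. from longevity tech) if no catastrophe occurs."""
    # Each year counts in proportion to the chance of reaching it.
    base_years = sum((1 - annual_xrisk) ** t for t in range(horizon))
    survive_horizon = (1 - annual_xrisk) ** horizon
    return base_years + survive_horizon * bonus_years_if_survive

baseline = expected_years(annual_xrisk=0.002, bonus_years_if_survive=0)

# Option A: the donation shaves a sliver off the annual existential risk.
option_a = expected_years(annual_xrisk=0.0019, bonus_years_if_survive=0)

# Option B: the donation slightly raises the expected longevity payoff instead.
option_b = expected_years(annual_xrisk=0.002, bonus_years_if_survive=2)

print(f"baseline:          {baseline:.2f} expected years")
print(f"x-risk reduction:  {option_a - baseline:+.2f} years vs. baseline")
print(f"life extension:    {option_b - baseline:+.2f} years vs. baseline")
```

Under these made-up numbers the life-extension option comes out ahead, but nudging the assumed annual risk or the longevity payoff easily flips the ordering, which is exactly the “range of personal factors” point above.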