I don’t agree with this particular argument, but I’ll mention it anyway for the sake of playing devil’s advocate:
The number of lives lost to an extinction event is arguably capped at ~10 billion, or whatever Earth’s carrying capacity is. If you think the AI risk is enough generations out, it may well be possible to do more good by, say, eliminating poverty faster. A simple mathematical model suggests that if the singularity is 10 generations away, and Earth holds a constant population of ~10 billion, then 100 billion lives will pass between now and the singularity. A 10% increase in humanity’s average quality of life over that period would then be morally equivalent to stopping an extinction-level singularity, since 10% of 100 billion lives equals the ~10 billion lives the extinction itself would cost.
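To make that break-even arithmetic explicit, here’s a minimal sketch (the population and generation count are the argument’s own assumptions, not established figures):

```python
# Back-of-envelope version of the devil's-advocate model above.
# All inputs are the argument's assumptions, not researched figures.
population = 10e9             # assumed carrying capacity: lives per generation
generations = 10              # assumed generations until the singularity
lives_lost_to_extinction = population

total_lives = population * generations                   # 100 billion
breakeven_gain = lives_lost_to_extinction / total_lives  # 0.10

print(f"Lives between now and the singularity: {total_lives:.0e}")
print(f"Break-even quality-of-life gain: {breakeven_gain:.0%}")  # 10%
```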
Now, there are a host of problems with the above argument:
First, it is trying to minimize death rather than maximize life. If you set out to maximize the number of Quality-Adjusted Life Years (QALYs) that intelligent life accumulates before its extinction, then you should also count all the potential future lives which an extinction event would extinguish, rather than just the lives taken by the event itself.
Second, the Future of Humanity Institute conducted an informal survey of existential risk researchers, asking for estimates of the probability of human extinction in the next 100 years. The median result was ~19% (median rather than mean, to limit the impact of outliers). If that’s a ~20% chance each century, then the half-life of a technological civilization is about three centuries, since 0.8³ ≈ 0.5. Even 300 years is only 4 or 5 generations, so perhaps 50 billion lives could be affected by eliminating poverty now. Using the same simplistic model as before, a 20% increase in humanity’s average quality of life would be required to be morally equivalent to ~10 billion deaths. That’s a harder target to hit, and it may be harder still when you consider that poverty is likely to be nearly eliminated within ~100 years anyway. Poverty has been declining steadily for the last century or so, and in another century we can expect the situation to be much improved.
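The half-life and the revised break-even follow from the same kind of arithmetic; here’s a quick sketch (only the ~19% median comes from the survey, the rest are the rough assumptions above):

```python
import math

# Half-life of a civilization facing a constant ~20% extinction risk
# per century: solve 0.8**n = 0.5 for n.
p_per_century = 0.2
half_life = math.log(0.5) / math.log(1 - p_per_century)
print(f"Half-life: {half_life:.1f} centuries")  # ~3.1

# Revised break-even: ~50 billion lives (4-5 generations over ~300 years)
lives_remaining = 50e9
lives_lost_to_extinction = 10e9
print(f"Break-even QoL gain: "
      f"{lives_lost_to_extinction / lives_remaining:.0%}")  # 20%
```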
Note that both of these points rest on somewhat subjective judgements. Personally, I think Friendly AI research is near the point of diminishing returns. More money would still be useful, of course, but I think it would be worth putting some focus on preemptively addressing other forms of existential risk which may emerge over the next century.

Additionally, I think it’s important to look at the other factors that go into total QALYs. Quality of life is already being addressed, and the duration of our civilization is starting to be addressed via x-risk reduction. The remaining factor is the number of lives, which is currently capped at Earth’s carrying capacity of ~10 billion. I’d like to see trillions of lives. Brain uploads are one route as technology improves; another is space colonization. The cheapest option I see is Directed Panspermia: the intentional seeding of new solar systems with dormant single-celled life. All other forms of x-risk reduction address the possibility that the Great Filter is ahead of us; this would hedge that bet.

I haven’t done any calculations yet, but donating to organizations like the Mars Society may even turn out to be competitive in terms of QALYs per dollar, if they can tip the political scales between humanity staying in and around Earth, and humanity spreading outward to colonize other planets and eventually other stars over the next couple of millennia. It’s hard to put a figure on the expected QALY return, but if quadrillions of lives hang in the balance, that may well make the tens of billions of dollars needed to initiate Mars colonization an extremely good investment, as the toy calculation below suggests.
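Purely for illustration (every input below is a made-up placeholder, not a researched estimate, since I haven’t done the real numbers):

```python
# Toy expected-value sketch for the Mars Society example above.
# Every number is a hypothetical placeholder, NOT a researched estimate.
potential_lives = 1e15    # "quadrillions of lives" if humanity spreads outward
qalys_per_life = 50       # assumed average QALYs per future life
p_tip_the_scales = 1e-6   # assumed chance the donations change the outcome
cost_dollars = 50e9       # "tens of billions" to initiate colonization

expected_qalys = potential_lives * qalys_per_life * p_tip_the_scales
print(f"Expected QALYs: {expected_qalys:.1e}")                            # 5.0e+10
print(f"Expected QALYs per dollar: {expected_qalys / cost_dollars:.1f}")  # 1.0
```

The point of the sketch is just that when quadrillions of lives are at stake, even a tiny probability of influencing the outcome can dominate the expected value.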