Sure, assuming the development of your cure doesn’t have substantial negative externalities, which is precisely what the AI debate is about. I understand that your stance is “the risks are not that high”, but it’s worth pointing out that this is the core assumption on which the rest of your position rests.
I’ll freely admit that my case for acceleration depends in large part on the risk being low. But I want to separate two distinct arguments here. Many people have told me that acceleration would be unjustified even if the risk is low. Their reasoning is that the sheer number of potential future people creates an overwhelming moral obligation to prioritize bringing them into existence, and that this obligation outweighs the welfare interests of everyone alive today.
I think this longtermist moral argument fails on its own terms, independently of my views about risk. Giving each potential future person significant moral weight inevitably reduces the moral weight of every currently living person to something negligible, since more than 10^23 potential future people will always swamp anything on the other side of the equation. Billions of real, existing people effectively become a rounding error in the calculation. To me, any moral framework that treats the people alive right now as though they barely matter at all is not one worth taking seriously. It is a ghastly moral stance, and I would reject it even if I thought the risks of acceleration were higher than I actually believe them to be.
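To make the swamping concrete with back-of-the-envelope numbers (my own, purely illustrative, not figures anyone in this exchange has committed to): even if each potential future person were discounted to a 10^-10 share of a living person’s moral weight, the future side of the ledger still dominates by orders of magnitude:

$$
\underbrace{10^{23}}_{\text{potential future people}} \times \underbrace{10^{-10}}_{\text{discounted weight each}} \;=\; 10^{13} \;\gg\; \underbrace{8\times 10^{9}}_{\text{people alive today}}
$$

Pick any discount short of zero and the left-hand side still wins, which is exactly the structure of the reasoning I’m objecting to.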