If I get cancer and someone intervenes to prevent the cure from being developed before the cancer kills me, I think they harmed me. Even if they choose to call it a “foregone benefit”, I still died. That’s what matters to me, not how they choose to describe it.
Sure, assuming the development of your cure doesn’t have substantial negative externalities, which is the whole point of the AI debate. I understand that your stance is “the risks are not that high”, but it’s worth pointing out that this is the core assumption on which the rest of your position rests.
I’ll freely admit that my case for acceleration depends in large part on the risk being low. But I want to separate two distinct arguments here. Many people have told me that acceleration would be unjustified even if the risk is low. Their reasoning is that the sheer number of potential future people creates an overwhelming moral obligation to prioritize bringing them into existence, and that this obligation outweighs the welfare interests of everyone alive today.
I think this longtermist moral argument fails on its own terms, independently of my views about risk. Giving each potential future person significant moral weight inevitably reduces the moral weight of every currently living person to something negligible, since >10^23 potential future people will always swamp anything on the other side of the equation. Billions of real, existing people effectively become a rounding error in the calculation. To me, any moral framework that treats the people alive right now as though they barely matter at all is not one worth taking seriously. It is a ghastly moral stance, and I would reject it even if I thought the risks of acceleration were higher than I actually believe them to be.
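To put rough numbers on that swamping claim, here is a back-of-the-envelope sketch; the current-population figure and the relative weight α below are illustrative assumptions of mine, not quantities the longtermist argument itself specifies.

```latex
% Back-of-the-envelope version of the "rounding error" claim.
% Illustrative assumptions (not taken from the argument itself):
%   ~10^23 potential future people,
%   ~8 x 10^9 people alive today, each given weight 1,
%   alpha = the moral weight given to one potential future person.
\[
  \underbrace{10^{23}\,\alpha}_{\text{potential future people}}
  \;\gg\;
  \underbrace{8\times 10^{9}\cdot 1}_{\text{people alive today}}
  \qquad\text{whenever}\qquad
  \alpha \gg 8\times 10^{-14}.
\]
% Unless each potential future person is weighted at less than roughly
% one ten-trillionth of a living person, everyone alive today drops out
% of the calculation.
```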
Of course, the outcome matters.
But you don’t sue the scientist who retired one year before they would have discovered the cure. You don’t sue the government agency that denied the grant. You’re right to be angry at them. But it’s not a harm.
You do sue the factory that emitted the carcinogen that gave you cancer. Because that is a harm.
If someone physically held a gun to a surgeon’s head and stopped them from saving your life, would you consider that a harm? I would. In the same way, if the government uses AI pause regulations to forcibly prevent AI companies from accelerating medical breakthroughs, I consider that a harm too. This is fundamentally different from a situation where someone simply chooses not to advance medicine on their own. In one case, progress is actively blocked by force; in the other, someone merely declines to contribute. The distinction between coercively preventing progress and passively not pursuing it matters a lot for assigning blame and naming harm.
Yes, if a surgeon is forced not to operate, the patient is harmed.
But if the surgeon decides to stop operating and lets the patient die on the operating table, that is also a harm. The patient (or their estate) can sue the surgeon!
If a scientist is forced to retire, that harms the scientist. But I don’t think that harms the potential beneficiaries of technologies the scientist would have discovered. They can’t sue.
In these cases, it’s not force that makes a harm.
How does it matter to my argument that, in your analogy, someone dies but we can’t sue the person responsible? I don’t see the relevance. My point is about whether the death constitutes a harm that we should try to mitigate, not about whether anyone can be held legally liable for it.
I concede that if policymakers pass regulations that delay medical progress and cause billions of deaths as a result, I won’t be able to sue them. I still intend to fight against those regulations.
I was sloppy last night: If someone dies, they do suffer a harm. I should be arguing that these acts are “not harmful”; i.e., the actor isn’t responsible, the disease is. I think you gathered my meaning, but sorry for not being clear.
What I’m objecting to is the language “would likely cause grave harm”, which implies that heavy regulation would do harm — would harm people — and implies that the regulators would be morally responsible for the harm. This improperly tries to put an extra burden on supporters of regulation, because there’s a higher bar for harmful policies than for policies that fail to avert harm.
There’s a real ethical distinction between harming someone and merely failing to help them (or failing to mitigate a harm). One piece of evidence for whether an act is considered harmful is whether the victim can sue for damages. There are other kinds of evidence:
In the absence of a functioning legal system, most people would agree that it would not be ethical for the would-be beneficiaries to take revenge on someone who forced a scientist to retire early (assuming justice had already been done with respect to the harm to the scientist).
People don’t have legal or moral rights to the fruit of scientists’ labor (except for labor that’s already been paid for with public funds).
If you ask who or what killed a sick person, most people would blame the disease, not anyone who recently slowed down scientific progress.
If you want to argue that heavy regulation would fail to mitigate a harm that we ought to mitigate, you could simply say that your preferred policy would mitigate a grave harm. Or you could say that heavy regulation carries a cost in human lives — which still involves the rhetorical move of making your preferred policy the default against which opportunity costs are assessed, but doesn’t make an extra moral claim about harm. (Bostrom, it seems, is careful to use the word “cost” rather than “harm” here, for what it’s worth.)