Of course, the outcome matters.
But you don’t sue the scientist who retired one year before they would have discovered the cure. You don’t sue the government agency that denied the grant. You’re right to be angry at them. But it’s not a harm.
You do sue the factory that emitted the carcinogen that gave you cancer. Because that is a harm.
If someone physically held a gun to a surgeon’s head and stopped them from saving your life, would you consider that a harm? I would. In the same way, if the government forcibly prevents AI companies from accelerating medical breakthroughs through AI pause regulations, I consider that a harm too. That is fundamentally different from someone simply choosing not to advance medicine: in one case, progress is actively blocked by force; in the other, someone merely declines to contribute. The distinction between coercively preventing progress and passively not pursuing it matters a lot for assigning blame and naming harm.
Yes, if a surgeon is forced not to operate, the patient is harmed.
But if the surgeon decides to stop operating and lets the patient die on the operating table, that is also a harm. The patient can sue the surgeon!
If a scientist is forced to retire, that harms the scientist. But I don’t think that harms the potential beneficiaries of technologies the scientist would have discovered. They can’t sue.
In these cases, it’s not force that makes a harm.
How does it matter to my argument that, in your analogy, someone dies but we can’t sue the person responsible? I don’t see the relevance. My point is about whether the death constitutes a harm that we should try to mitigate, not about whether anyone can be held legally liable for it.
I concede that if policymakers pass regulations that delay medical progress and cause billions of deaths as a result, I won’t be able to sue them. I still intend to fight against those regulations.
I was sloppy last night: If someone dies, they do suffer a harm. I should be arguing that these acts are “not harmful”; i.e., the actor isn’t responsible, the disease is. I think you gathered my meaning, but sorry for not being clear.
What I’m objecting to is the language “would likely cause grave harm”, which implies that heavy regulation would do harm — would harm people — and implies that the regulators would be morally responsible for the harm. This improperly tries to put an extra burden on supporters of regulation, because there’s a higher bar for harmful policies than for policies that fail to avert harm.
There’s a real ethical distinction between harming someone and merely failing to help them (or failing to mitigate a harm). One piece of evidence for whether an act is considered harmful is whether the victim can sue for damages. There are other kinds of evidence:
In the absence of a functioning legal system, most people would agree it would not be ethical to take revenge on someone who forced a scientist to retire early (assuming justice had already been done with respect to the harm to the scientist).
People don’t have legal or moral rights to the fruit of scientists’ labor (except for labor that’s already been paid for with public funds).
If you ask who or what killed a sick person, most people would blame the disease, not anyone who recently slowed scientific progress.
If you just want to argue that heavy regulation would fail to mitigate a harm that we should mitigate, you could just say that your preferred policy would mitigate a grave harm. Or you could say that heavy regulation carries a cost of human lives — which still involves a rhetorical move of making your preferred policy the default against which opportunity costs are assessed, but doesn’t make an extra moral claim about harm. (Bostrom is careful to use the word “cost” instead of “harm” here, it seems, for what it’s worth.)