I meant to link to that or something similar. In both situations I am killing someone. By not donating to a GiveWell charity, some innocent person in Africa dies (while I save more innocent lives elsewhere). So I am already in mistake territory, even before I start thinking about terrorism.
I don’t like being in mistake territory, so my brain is liable to want to shut off from thinking about it, or to inure my heart to the decision.
There is a distinction between (a) taking an action that results in someone dying who counterfactually would not have died had you acted otherwise, and (b) someone dying who counterfactually would not have died had you not existed. While this distinction doesn’t matter to pure consequentialist reasoning, it does bear on when a human attempting consequentialist reasoning should be wary of the fact that they are running on hostile hardware.
You can slightly change the scenarios so that people counterfactually wouldn’t have died if you didn’t exist, and the variants don’t seem much morally different. For example: X is going to donate to GiveWell and save Z’s life. Should you (Y) convince X to donate instead to an anti-tobacco campaign that will save more lives? Is doing so morally the same as (risk-free, escalation-less) terrorism, or the same as being X?
Anyway, I have the feeling people are getting bored of me on this subject, myself included. Simply chalk this up to someone not compartmentalizing correctly. Although I think that if I need to keep consequentialist reasoning compartmentalized, I am likely to find all consequentialist reasoning more suspect.