Sorry for replying to a dead thread, but:
Murder implies an intent to kill someone.
Suppose I hire a hitman to kill you. But suppose there already are 3 hitmen trying to kill you, and I’m hoping my hitman would reach you first, and I know that my hitman has really bad aim. Once the first hitman reaches you and starts shooting, the other hitmen will freak out and run away, so I’m hoping you’re more likely to survive.
I have no other options for saving you, since the only contact I have is a hitman, and he’s very bad at English and doesn’t understand any instructions except trying to kill someone.
In this case, you can argue to the court that my plan to save you was foolish. But you cannot coherently say that my plan was a good idea consequentially yet deontologically unethical, since I never intended anyone's death.
Deontology only kicks in when your plan involves making someone die, or greatly increasing the chance someone dies.
I feel like it’s actually a great analogy! The only difference is that if your hitman starts shooting and doesn’t kill anyone, you get infinite gold.
You know that in real life you go to the police instead of hiring a hitman, right?
And I claim that it’s really not okay to hire a hitman who might lower the chance of the person ending up dead, especially when your brain is aware of the infinite gold part.
The good strategy for anyone in that situation to follow is to go to the police or go public and not hire any additional hitmen.
Yeah, it’s less deontologically bad than murder but I admit it’s still not completely okay.
PS: Part of the reason I used the unflattering hitman analogy is that I’m no longer as optimistic about Anthropic’s influence.
They routinely describe other problems (e.g. winning the race against China to defend democracy) with the same urgency as AI Notkilleveryoneism.
The only way to believe that AI Notkilleveryoneism is still Anthropic’s main purpose is to hope that:
They describe a ton of other problems with the same urgency as AI Notkilleveryoneism, but that is only due to political necessity.
At the same time, their apparent concern for AI Notkilleveryoneism is not just a political maneuver, but significantly more genuine.
This “hope” is plausible, since the people in charge of Anthropic prefer to live and have consistently claimed a high P(doom).
But it’s not certain, and there is circumstantial evidence suggesting this isn’t the case (e.g. their lobbying direction, and how they’re choosing people for their board of directors).
Maybe there’s a ~50% chance this hope is just cope :(
I don’t agree that deontology is about intent. Deontology is about action. Deontology is about not hiring hitmen to kill someone even if you have a really good reason, and even if your intent is good. Deontology is substantially about Schelling lines of action, where everything gets hard to predict and goes bad after you cross them.
I imagine that your incompetent hitman has only something like a 50% chance of succeeding, whereas the others have ~100%; hiring him still seems deontologically wrong to me.
It seems plausible that what you mean by the hypothetical is that he has a 0% chance.
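The consequentialist arithmetic being disputed here can be sketched in a few lines. The 50% and 0% kill chances come from the comments above; the 80% chance of your hitman arriving first, and the baseline ~0% survival against the three competent hitmen, are illustrative assumptions, not claims from the thread.

```python
def survival_prob(p_first: float, p_kill: float) -> float:
    """Victim's survival probability if you hire the bad-aim hitman.

    p_first: chance your hitman reaches the victim before the others.
    p_kill:  chance your hitman's shots actually kill (his aim is bad).

    Baseline assumption: with only the three competent hitmen, the
    victim dies with ~100% probability, so baseline survival is ~0.
    If your hitman arrives first and misses, the others flee and the
    victim survives.
    """
    return p_first * (1.0 - p_kill)

# 50%-lethal hitman who reaches the victim first 80% of the time:
print(survival_prob(0.8, 0.5))  # → 0.4, better than the ~0 baseline
# The 0%-lethal hitman from the follow-up comment:
print(survival_prob(0.8, 0.0))  # → 0.8
```

The point of the sketch is that the plan can look good consequentially (survival goes from ~0 to well above it) while the deontological objection to hiring any hitman stands unchanged by these numbers.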
I admit this is more confusing and I’m not fully resolved on this.
I notice I am confused about how you can get that epistemic state in real life.
I observe that society will still prosecute you for attempted murder if you buy a hitman off the dark web, even one with a clearly incompetent reputation of 0/10 kills or whatever.
I think society’s ability to police this line is not as fine-grained as you’re imagining, so you should not buy incompetent hitmen as a way of protecting your friend, unless you’re willing to face the consequences.
To be honest I couldn’t resist writing the comment because I just wanted to share the silly thought :/
Now that I think about it, it’s much more complicated. Mikhail Samin is right that the personal incentive of reaching AGI first really complicates the good intentions. And while a lot of deontology is about intent, it’s hyperbole to say that deontology is just intent.
I think if your main intent is to save someone (and not personal gain), and your plan doesn’t require or seek anyone’s death, then it is deontologically much less bad than evil things like murder. But it may still be too bad for you to do, if you strongly lean towards deontology rather than consequentialism. Even if the court doesn’t find you guilty of first-degree murder, it may still find you guilty of… some… things.
One might argue that the enormous scale (risking everyone’s death instead of only one person’s) makes it deontologically worse. But I think the balance does not shift toward deontology and against consequentialism as we increase the scale (it might even shift a little toward consequentialism?).