In a way, yes. It was just the context under which I thought of the problem.
Why not bomb cigarette factories? If you’re willing to tell people to stop smoking, you should be willing to kill a tobacco company executive if it will reduce lung cancer by the same amount, right?
Not quite. If you’re willing to donate $1000 to an anti-smoking ad campaign because you think the campaign will save more than one life, then yes, it might be equivalent, provided killing that executive would save a comparable number of lives to the ad campaign.
Edit: To make things clearer, I mean that by not donating $1000 to a GiveWell charity you are already causing someone to die.
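(To spell out the arithmetic behind that, treating one-life-per-$1000 as an assumed figure rather than a real GiveWell estimate: if a charity saves one life per $1000 donated, then withholding $1000 forgoes $\$1000 \div (\$1000/\text{life}) = 1$ statistical life. The same bookkeeping is what would put the ad campaign and the killing on a common scale.)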
This decision algorithm (“kill anyone whom I think needs killing”) leads to general anarchy.
But we are willing to let people die whom we could have saved but don’t consider important. That is equivalent to killing them, no? Or do you approach the trolley problem in some way that references the wider society?
Like I said, this line of thought made me want to reject utilitarianism.
“A guy bombed a chip factory, guess we’ll never pursue advanced computer technology again until we have the wisdom to use it.”
That wasn’t the reasoning at all! It was, “Guess the price of computer chips has gone up due to the uncertainty of building chip factories, so we can only afford 6 spiffy new brain simulators this year rather than 10.” Each one has an X percent chance of becoming an AGI, fooming, and destroying us all. It is purely a tactic for stalling for time. Feel free to ignore the AI argument if you want.
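(The implied arithmetic, treating $X$ as a free parameter: if each of $N$ simulators independently has probability $X$ of fooming, then $P(\text{at least one foom}) = 1 - (1 - X)^N$, so cutting $N$ from 10 to 6 strictly lowers the risk. At $X = 0.05$, for instance, it drops from about $0.40$ to $0.26$.)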
I suppose the difference is whether you’re doing the Intel attack now, or in a hypothetical future in which Intel is making brain simulators that seem likely to become AGI. As someone else mentioned, if we’re talking about literally THEY ARE BUILDING SKYNET RIGHT NOW, then violence seems like the right idea.
I guess it is kind of suspicious that I know without doing the calculations that we’re not at the point where violence is justified yet.
But we are willing to let people die whom we could have saved but don’t consider important. That is equivalent to killing them, no? Or do you approach the trolley problem in some way that references the wider society?
Even though on this individual problem leaving things alone would be worse than committing an act of violence, in the general case having everyone commit acts of violence is worse than having everyone leave things alone.
This example cherry-picks a case where violence is the correct answer. But when we generalize the rule, most of the cases it covers won’t be cherry-picked, and in those cases violence will do more harm than good. We have to pretend we’re setting a moral system both for ourselves and for the fundamentalist who wants to kill gay people.
So in this case, you’re letting die (killing) the people your (smart) unpopular violent action would have saved, in order to save the lives of all the people whom other people’s (stupid) unpopular violent actions would have killed.
It could be justified: if you’re going to save the world from Skynet, that’s worth instituting a moral system that gives religious fundamentalists a little more latitude for violent bigotry. But I imagine most cases wouldn’t be.