All this AI stuff is an unnecessary distraction. Why not bomb cigarette factories? If you’re willing to tell people to stop smoking, you should be willing to kill a tobacco company executive if it will reduce lung cancer by the same amount, right?
This decision algorithm (“kill anyone whom I think needs killing”) leads to general anarchy. There are a lot of people around who believe, for one reason or another, that killing various people would make things better, and most of them are wrong: religious fundamentalists who think killing gay people will improve society, for example.
There are three possible equilibria—the one in which everyone kills everyone else, the one in which no one kills anyone else, and the one where everyone comes together to agree on a decision procedure for deciding whom to kill, i.e., establishes an institution with a monopoly on the use of force. This third one is generally better than the other two, which is why we have government and why most of us are usually willing to follow its laws.
I can conceive of extreme cases where it might be worth defecting from the equilibrium because the alternative is even worse—but bombing Intel? Come on. “A guy bombed a chip factory, guess we’ll never pursue advanced computer technology again until we have the wisdom to use it.”
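To make the equilibrium argument concrete, here is a toy expected-value sketch. All numbers are invented for illustration: p stands for the chance that a self-appointed killer’s judgment is actually correct, and a correct killing is assumed to save a few lives.

```python
# Toy model of the "kill anyone whom I think needs killing" equilibrium.
# All numbers are invented for illustration: p is the probability that a
# self-appointed killer's judgment is correct; a correct killing saves
# `saved` lives; the target dies either way.

def net_lives_per_killing(p: float, saved: float) -> float:
    """Expected net lives from one vigilante killing."""
    return p * saved - 1

# If most would-be killers are wrong (p well below 0.5), the free-for-all
# rule is net-negative even when a correct call saves several lives:
for p in (0.1, 0.2, 0.3):
    print(f"p={p}: {net_lives_per_killing(p, saved=3):+.2f} lives per killing")
```

On this reading, the third equilibrium acts as a filter: an institution with a monopoly on force only authorizes the killings where p is high, which is why it beats the free-for-all.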
In a way, yes. It was just the context in which I thought of the problem.
Why not bomb cigarette factories? If you’re willing to tell people to stop smoking, you should be willing to kill a tobacco company executive if it will reduce lung cancer by the same amount, right?
Not quite. If you are willing to donate $1000 to an anti-smoking ad campaign because you think the campaign will save more than one life, then yes, it might be equivalent, provided that killing the executive would have a comparable effect in saving lives as the ad campaign.
Edit: To make things clearer: by not donating $1000 to a GiveWell charity, you are already causing someone to die.
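A back-of-the-envelope version of the equivalence being claimed here, as a sketch; every figure below is a made-up placeholder, not a real cost-effectiveness estimate.

```python
# Hedged back-of-the-envelope for the claimed equivalence. Every number is
# a made-up placeholder, not a real cost-effectiveness estimate.

donation = 1000             # dollars donated to the anti-smoking ad campaign
lives_per_dollar = 1 / 800  # assumed campaign effectiveness (invented)
campaign_lives = donation * lives_per_dollar  # 1.25 expected lives saved

executive_lives = 1.25      # assumed lives saved by killing the executive (invented)

# The commenter's condition: the two acts are comparable only if their
# expected effects on lives saved are comparable.
print(f"campaign saves ~{campaign_lives:.2f} lives; "
      f"killing the executive saves ~{executive_lives:.2f} lives")
```

The point of the sketch is only that both sides of the comparison are expected-value estimates; the argument doesn’t get off the ground unless both numbers can actually be estimated.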
This decision algorithm (“kill anyone whom I think needs killing”) leads to general anarchy.
But we are willing to let people die whom we could have saved but don’t consider important. Isn’t this equivalent to killing them? Or do you approach the trolley problem in some way that references the wider society?
Like I said, this line of thought made me want to reject utilitarianism.
“A guy bombed a chip factory, guess we’ll never pursue advanced computer technology again until we have the wisdom to use it.”
That wasn’t the reasoning at all! It was, “Guess the price of computer chips has gone up due to the uncertainty of building chip factories, so we can only afford 6 spiffy new brain simulators this year rather than 10.” Each one has an X percent chance of becoming an AGI, fooming, and destroying us all. It is purely a stalling-for-time tactic. Feel free to ignore the AI argument if you want.
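Spelling out the stalling arithmetic: if each simulator independently has chance x of fooming, cumulative risk grows with the number of simulators, so cutting from 10 to 6 buys time. A minimal sketch, with x as an arbitrary stand-in for the comment’s “X percent”:

```python
# Cumulative foom risk under the stalling argument. x is an arbitrary
# stand-in for the "X percent" in the comment, not an actual estimate.

def p_any_foom(x: float, n: int) -> float:
    """Probability that at least one of n independent simulators goes FOOM."""
    return 1 - (1 - x) ** n

x = 0.01
print(f"10 simulators: {p_any_foom(x, 10):.3f}")  # ~0.096
print(f" 6 simulators: {p_any_foom(x, 6):.3f}")   # ~0.059
```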
I suppose the difference is whether you’re doing the Intel attack now, or in a hypothetical future in which Intel is making brain simulators that seem likely to become AGI. As someone else mentioned, if we’re talking about literally THEY ARE BUILDING SKYNET RIGHT NOW, then violence seems like the right idea.
I guess it is kind of suspicious that I know without doing the calculations that we’re not at the point where violence is justified yet.
But we are willing to let people die whom we could have saved but don’t consider important. Isn’t this equivalent to killing them? Or do you approach the trolley problem in some way that references the wider society?
Even though in this individual case leaving things alone would be worse than committing an act of violence, in the general case having everyone commit acts of violence is worse than having everyone leave things alone.
This example cherry-picks a case where violence is the correct answer. But when we generalize the rule, most of the cases it covers won’t be cherry-picked, and in those cases violence will do more harm than good. We have to pretend we’re setting a moral system both for ourselves and for the fundamentalist who wants to kill gay people.
So in this case, you’re letting die (killing) the people your (smart) unpopular violent action would have saved, in order to save the lives of all the people whom other people’s (stupid) unpopular violent actions would have killed.
It could be justified—if you’re going to save the world from Skynet, that’s worth instituting a moral system that gives religious fundamentalists a little more latitude for violent bigotry—but I imagine most cases wouldn’t be.
I’m sure that people as smart as those who visit LW and work for the SIAI would be able to carry out targeted killings of rogue researchers. Given that uFAI is portrayed as the biggest risk humanity faces, I don’t think it is a distraction to ask directly why we are not talking about preemptive action against uFAI rather than trying to come up with a conceptual framework for FAI. Even if the SIAI succeeds in creating a mathematically strict model that guarantees FAI, who says that the people working on AGI would implement it? Given the stakes, the option of taking active measures against anyone carelessly working on AGI should be seriously considered.
1.) We have no guarantee that FAI is possible.
2.) We don’t know if the SIAI will succeed before the first AGI is built.
3.) We don’t know that others will listen even if the SIAI succeeds.
Under the above circumstances, we should consider preemptive means in case someone succeeds in creating AGI.
Someone asked me to delete this comment. I do not think deletion is necessary in cases like this. It’s enough for the readers to downvote it into oblivion. Note that after it hits −3 most readers will stop seeing it, so don’t panic if it only gets downvoted to −3 instead of −40.
You simply aren’t living up to your own rationality here. I can understand why you have to deny it in public, though; it is, of course, illegal.
I think it is just ridiculous that people think about taking out terrorists and nuclear facilities, but not about AI researchers who, on your own view that AI can go FOOM, could destroy the universe.
Why, though, don’t we talk about contacting those people to tell them how dangerous their work is, or maybe even trying to make sure they don’t get any funding?
If someone is thinking about it, do you really believe they would do so in public, where we would ever hear about it? If someone is doing it, I hope they have the good sense to act covertly instead of discussing all the violent and illegal things they’re planning on an online forum.
I deleted all my other comments on this topic. I just wanted to figure out whether you’re preaching the imminent rise of sea levels while at the same time purchasing ocean-front property. Your comment convinced me.
I guess it was obvious, but too interesting to ignore. Others will come up with this idea sooner or later, and as the idea of AI going FOOM becomes mainstream, people are going to act on it.
Thank you for deleting the comments; I realize that it’s an interesting idea to play with, but it’s just not something you can talk about in a public forum. Nothing good will come of it.
As usual, my lack of self-control and my failure to think things through made me act like an idiot. I guess someone like me is an even bigger risk :-(
I’ve even got a written list of rules I should follow but sometimes fail to heed: Think before talking to people or writing stuff in public; be careful of what you say and write; rather write and say less, or nothing at all, if you’re not sure it isn’t stupid to do so; be humble; you think you don’t know much, but you actually don’t know nearly as much as you think; other people won’t perceive what you say the way you intended it; other people may take things really seriously; you often fail to perceive that matters actually are serious, so be careful…
A little bit of knowledge is a dangerous thing. It can convince you that an argument this idiotic and this sloppy is actually profound. It can convince you to publicly make a raging jackass out of yourself, by rambling on and on, based on a stupid misunderstanding of a simplified, informal, intuitive description of something complex. — The Danger When You Don’t Know What You Don’t Know
Okay, I’m convinced. Let’s add paramilitary.lesswrong.com to the subreddit proposal.