I agree with JoshuaZ’s post in that the probability of UFAI creation will increase with the number of people trying to create AGI without concern for Friendliness, and that this count is a much better measure of the risk than the number of locations at which such research takes place.
The world would probably stand a better chance without certain AGI projects, but I don’t think that effort put into dismantling them is nearly as efficient as effort put towards FAI (given that a FOOM-ing FAI will probably be able to stop future UFAI), especially considering current laws and so on. By the way, I don’t see why you’re talking about eliminating countries and such. People who are not working on AGI have a very low likelihood of creating UFAI, so I think you would just want to target the projects.
You seem to be using zero utility the way I would use ‘infinite negative utility.’ To me, zero utility means that I don’t care in the slightest whether something happens or not. With that said, I don’t assign infinite negative utility to anything (primarily because it causes my brain to bug out), so the probability of something happening still has a significant effect on the expected utility.
By the way, I don’t see why you’re talking about eliminating countries and such. People who are not working on AGI have a very low likelihood of creating UFAI, so I think you would just want to target the projects.
Would you say China has a less than 10^-20 probability of developing UFAI? Or would you assign the utility of the entire future of the roughly 10^23 stars in the universe for the next 10^10 years to be less than 10^20 times the utility of life in China today? You must pick one (modulo time discounting), if you’re working within the generic LW existential-risk long-future big-universe scenario.
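A minimal sketch of the arithmetic behind this dichotomy, using the figures named above; the variable names and utility units are purely illustrative assumptions, not anything from the original discussion:

```python
# Illustrative expected-utility comparison using the thresholds named above.
# Units are arbitrary; the utility of life in China today is normalized to 1.

p_ufai = 1e-20               # assumed probability that China develops UFAI
u_china = 1.0                # utility of life in China today (normalized)
u_future = 1e20 * u_china    # utility of ~10^23 stars over the next 10^10 years

expected_loss = p_ufai * u_future  # expected utility lost by accepting the risk

# The dichotomy: to avoid concluding that the expected loss outweighs the value
# of China today, one must either push p_ufai below 1e-20 or value the long
# future at less than 1e20 times u_china.
print(f"expected loss ~ {expected_loss:g}, value of China today = {u_china:g}")
```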
My point was that there would be no need to kill, say, the guy working in a textile factory. I know that probabilities of zero and one are not allowed, but I feel that I can safely round the chance that he will be directly involved in creating a UFAI to zero. I assume you agree that (the negative utility produced by killing all people not working on FAI) > (the negative utility produced by killing only those people pursuing AGI who are not paying attention to Friendliness), so I think that you would want to take the latter option.
I did not claim that if I had the ability to eliminate all non-Friendly AGI projects I would not do so. (Without the double negative: I believe that I would do so, subject to a large amount of further deliberation.)
I feel that I can safely round the chance that he will be directly involved in creating a UFAI to zero.
I would explain why I disagree with this, but my ultimate goal is not to motivate people to nuke China. My goal is more nearly opposite—to get people to realize that the usual LW approach has cast the problem in terms that logically justify killing most people. Once people realize that, they’ll be more open to alternative ways of looking at the problem.
I don’t know whether what I am saying concurs with the ‘usual LW approach,’ but I would very quickly move past the option of killing most people.
If we find ourselves presented with only two options (letting dangerous UFAI projects progress, or killing lots of people), then we should not grimace and take whichever choice we deem slightly more palatable; we should instead seek a third alternative.
In my eyes, this is what I have done: shutting down AGI projects would not necessitate killing large numbers of people, and perhaps an alternative could be found to killing even one person. To maintain that the premise “rapidly self-improving UFAI will almost certainly kill us all, if created” leads to killing most people, you must explain why killing most people would reduce the existential risk posed by UFAI significantly more than completely shutting down UFAI projects would.
Upvoted for making me think.
Edit: For clarification purposes, I do not believe that shutting down UFAI projects is the best use of my time. The above discussion refers to a situation in which people are much closer to creating UFAI than to FAI and, given the expected rate of progress, will continue to be.