My point was that there would be no need to kill, say, the guy working in a textile factory. I know that probabilities of zero and one are not allowed, but I feel that I can safely round the chance that he will be directly involved in creating a UFAI to zero. I assume you agree that (negative utility produced by killing all people not working on FAI) > (negative utility produced by killing all people pursuing AGI who are not paying attention to Friendliness), so I think that you would want to take the latter option.
I did not claim that, if I had the ability to eliminate all non-Friendly AGI projects, I would refrain from doing so. (Stated positively: I believe that I would do so, subject to a great deal of further deliberation.)
I feel that I can safely round the chance that he will be directly involved in creating a UFAI to zero.
I would explain why I disagree with this, but my ultimate goal is not to motivate people to nuke China. My goal is more nearly opposite—to get people to realize that the usual LW approach has cast the problem in terms that logically justify killing most people. Once people realize that, they’ll be more open to alternative ways of looking at the problem.
I don’t know whether what I am saying concurs with the ‘usual LW approach,’ but I would very quickly move past the option of killing most people.
If we find ourselves facing only two options (letting dangerous UFAI projects progress, or killing lots of people), then we should not grimace and take whichever choice we deem slightly more palatable; we should instead seek a third alternative.
In my eyes, this is what I have done: shutting down AGI projects would not necessitate killing large numbers of people, and perhaps a third alternative could be found to killing even one. To maintain that the premise "rapidly self-improving UFAI will almost certainly kill us all, if created" leads to killing most people, you must explain why killing most people would reduce the existential risk posed by UFAI significantly more than completely shutting down UFAI projects would.
Edit: For clarification, I do not believe that shutting down UFAI projects is the best use of my time. The above discussion refers to a situation in which people are much closer to creating UFAI than FAI and will continue to be, given the expected rate of progress.