Please forgive the self-promotion, but this is from Chapter 5 of my book Singularity Rising:
“Successfully creating an obedient ultra-intelligence would give a country control of everything, making ultra-AI far more militarily useful than mere atomic weapons. The first nation to create an obedient ultra-AI would also instantly acquire the capacity to terminate its rivals’ AI development projects. Knowing the stakes, rival nations might go full throttle to win an ultra-AI race, even if they understood that haste could cause them to create a world-destroying ultra-intelligence. These rivals might realize the danger and desperately wish to come to an agreement to reduce the peril, but they might find that the logic of the widely used game theory paradox of the Prisoners’ Dilemma thwarts all cooperation efforts.”
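To make the Prisoners’ Dilemma structure concrete, here is a minimal sketch; the payoff numbers are purely illustrative assumptions of mine, not figures from the book:

```python
# A minimal sketch of the AI-race Prisoners' Dilemma described above.
# The payoff numbers are illustrative assumptions (higher is better for
# that nation), not figures from the book.
PAYOFFS = {
    # (our_choice, rival_choice): (our_payoff, rival_payoff)
    ("cooperate", "cooperate"): (3, 3),  # both slow down: safest joint outcome
    ("cooperate", "race"):      (0, 4),  # we hold back, the rival wins the ultra-AI race
    ("race",      "cooperate"): (4, 0),  # we win the race
    ("race",      "race"):      (1, 1),  # both rush: high risk of a world-destroying AI
}

def best_response(rival_choice):
    """Return the choice that maximizes our payoff given the rival's choice."""
    return max(("cooperate", "race"),
               key=lambda ours: PAYOFFS[(ours, rival_choice)][0])

# Racing is the best response to either rival choice (a dominant strategy),
# so both sides race even though mutual cooperation pays each of them more.
assert best_response("cooperate") == "race"
assert best_response("race") == "race"
```

Because racing dominates for each nation individually, both end up racing even though mutual restraint would leave both better off, which is why the scenarios below keep sliding toward escalation.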
“Scenario 2:
Generals, I [The United States President] have ordered the CIA to try to penetrate the Chinese seed AI development program, but I’m not hopeful, since the entire program consists of only twenty software engineers. Similarly, although Chinese intelligence must be using all their resources to break into our development program, the small size of our program means they will likely fail. I’ve thought about suggesting to the Chinese that we each monitor the other’s development program, but then I realized that each of us would cheat by creating a fake program that the other could watch while the real program continued to operate in secret. Since we can’t monitor the Chinese and they can’t monitor us, I’m ordering you to proceed quickly.”
“Scenario 4:
Generals, I order you to immediately bomb the Chinese AI research facilities because they are on track to finish a few months before we do. Fortunately, their development effort is on a large enough scale that our spies were able to locate it. I know you worry that the Chinese will retaliate against us, but as soon as we attack, I will let the Chinese know that our seed AI development team operates out of submarines undetectable by their military. I will tell the Chinese that if they don’t strike back at us, then after the Singularity we will treat them extremely well.”
“Scenario 5:
Generals, I order you to strike China with a thousand hydrogen bombs. Our spies have determined that the Chinese are on the verge of activating their seed AI. Based on information given to them by the CIA, our AI development team believes that if the Chinese create an AI, it has a 20% chance of extinguishing mankind.
I personally called the Chinese Premier, told him everything we know about his program, and urged him to slow down, lest he destroy us all. But the Premier denied even having an AI program and probably believes that I’m lying to give our program time to finish ahead of his. And (to be honest), even if a Chinese ultra-AI would be just as safe as ours, I would be willing to deceive the Chinese if it would give our program a greater chance of beating theirs.
Tragically, our spies haven’t been able to pinpoint the location of the Chinese program, and the only action I can take that has a high chance of stopping the program is to kill almost everyone in China. The Chinese have a robust second-strike capacity, and I’m certain that they’ll respond to our attack by hitting us with biological weapons and hundreds of hydrogen bombs.
If unchecked, radioactive fallout and weaponized pathogens will eventually wipe out the human race. But our ultra-AI, which I’m 90% confident we will be able to develop within 15 years, could undoubtedly clean up the radiation and pathogens, modify humans so they won’t be affected by either, or even use nanotechnology to terraform Mars and transport our species there. Our AI program operates out of submarines and secure underground bases that can withstand any Chinese attack. Based on my intelligence, I’m almost certain that the Chinese haven’t similarly protected their program. I can’t use the threat of thermonuclear war to make the Chinese halt their program because they would then place their development team outside of our grasp.
Within a year, we will probably have the technical ability to activate a seed AI, but once the Chinese threat has been annihilated our team will have no reason to hurry and could take a decade to fine-tune their seed AI. If we delay, any intelligence explosion we create will have an extremely high probability of yielding a friendly AI. Some people on our team think that, given another decade, they will be able to mathematically prove that the seed AI will turn into a friendly ultra-AI.
A friendly AI would allow trillions and trillions of people to eventually live their lives, and mankind and our descendants could survive to the end of the universe in utopia. In contrast, an unfriendly AI would destroy us. I have decided to make the survival of mankind my overwhelming priority. Consequently, since a thermonuclear war would non-trivially increase the chance of mankind’s survival, I believe that it’s my moral duty to initiate war, even though my war will kill over a billion human beings. Physicists haven’t ruled out the possibility of time travel, so perhaps our ultra-AI will be able to save all of the people I’m about to kill.”
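For what it’s worth, the president’s argument in Scenario 5 can be put as a back-of-the-envelope calculation. Only the 20% extinction risk from a rushed Chinese AI and his 90% confidence in finishing an ultra-AI are stated in the scenario; every other probability below is an assumption I’ve made up purely for illustration:

```python
# A back-of-the-envelope version of the president's argument in Scenario 5.
# Only the 20% extinction risk from a rushed Chinese AI and the president's
# 90% confidence in his own program are stated in the scenario; the other
# probabilities are assumptions made up purely for illustration.

p_chinese_ai_destroys_us = 0.20      # stated: risk if the Chinese seed AI is activated first
p_our_ai_friendly_if_rushed = 0.80   # assumed: our own risk if we also have to rush
p_our_ai_friendly_if_delayed = 0.99  # assumed: "extremely high probability" after a decade of care
p_we_build_ultra_ai = 0.90           # stated: president's confidence in finishing an ultra-AI

# No strike (assumed: the Chinese activate first, forcing us to rush our own AI):
p_survive_no_strike = (1 - p_chinese_ai_destroys_us) * p_our_ai_friendly_if_rushed

# Strike (assumed: the war kills over a billion people, but the protected program
# survives, takes its decade, and the resulting friendly AI repairs the damage):
p_survive_strike = p_we_build_ultra_ai * p_our_ai_friendly_if_delayed

print(f"P(mankind survives) without the strike ~ {p_survive_no_strike:.2f}")  # ~0.64
print(f"P(mankind survives) with the strike    ~ {p_survive_strike:.2f}")     # ~0.89
```

Under those made-up numbers the strike comes out ahead on bare survival probability, which is exactly the disturbing logic the scenario is meant to dramatize.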
Thanks, James! Yes, things could get ugly. :(