Negative Utilitarianism is quite dangerous to my values in a high-power-imbalance scenario (not omnipotence or future-knowledge; even a billionaire under our current system, or an unusually intelligent scientist with a well-equipped cross-discipline laboratory, would suffice). Why? Because I positively value life and sapient experience far more than I disvalue suffering. I prefer on-average moderately suffering humans to no humans at all. A negative utilitarian, however, can prevent the suffering of future humans and animals by ending those humans' and animals' ability to reproduce, or by removing them from existence entirely. By that calculus, even a horrible death is worth it if it prevents a great deal of suffering among the target's descendants. I don't want a brilliant negative utilitarian unleashing a plague on the world designed to render as many humans as possible sterile, indifferent to whether it also kills them. I'm only a mid-tier genetic engineer with a few years of practice making custom viruses for modifying the brains of mammals, and I could come up with a handful of straightforward candidate plagues that I could assemble in a few months' time in a well-equipped lab without tripping any of the current paltry international bio-defense alerts. We are really vulnerable to technological leaps.
Even a narrow AI built from current technology, with the same facility at designing custom viruses that DeepFold has at predicting protein folding, would be very dangerous in the wrong hands. Such a tool would make my simplistic ideas for forcibly sterilizing humanity and all animals much more effective and reliable, and faster and easier to execute.
One mad scientist with a grad-student-level background in genetic engineering, a few weeks' access to a couple million dollars of lab equipment (the kind most grad students studying anything involving genetic engineering would be expected to have), and a narrow AI assistant… that's nowhere near the power of a large state or corporate actor, much less an omnipotent being. I don't want veto power over the future of humanity to fall into any single person's hands.