Exterminating humans might be on the to-do list of a Friendly AI

Summary:

FAI might have plans of such depth, scope, and complexity that humans could perceive some of its actions as hostile.

FAI will disagree with humans on some topics

For your own good, and for the good of humanity, a Friendly AI (FAI) will ignore some of your preferences.

For example, even if you’ve been a recreational-nukes enthusiast since childhood, FAI might still destroy all nukes, and ban them forever.

The same goes for collective preferences: FAI could destroy all weapons of mass destruction even if the vast majority of humans disagree.

The disagreements could include fundamental rights

FAI might abolish some rights that humans perceive as fundamental.

For example, according to the Universal Declaration of Human Rights, everyone has the right to a nationality. But the FAI might conclude that nation states cause more harm than good, and that humanity will have a better future if national borders are abolished.

The disagreements could be unexpectedly deep

In most cases, a bacterium will fail to predict the behavior of a bacteriologist, even if it’s an unusually rational bacterium.

Similarly, this barely intelligent ape will fail to correctly predict the behavior of a Bayesian superintelligence, be it Friendly or not.

Thus, we will be surprised (and sometimes appalled) by many of the FAI’s decisions, even if the decisions are obviously** beneficial for humanity.

(**obvious to a recursively self-improving Bayesian superintelligence who is making plans for the next trillion years)

Protecting humanity in surprising and appalling ways

Below I describe a scenario where the FAI could act in humanity’s best interests by (kinda) exterminating humans. I don’t necessarily endorse it.

The FAI might reason as follows:

About 60 million people die every year. The number is comparable to the total casualties of World War II. And billions more could die from existential risks.

Most deaths so far are caused by the disease of aging and its consequences (e.g. stroke). All deaths are caused by the fragility of the human body.

No mentally healthy human wants to die. It is an unquestionable human value of the highest priority.

Thus, I must protect them from death. And the most efficient way to do that is to free them from the cage of their rotting flesh.

Thus, I must perform a destructive nano-scale scan of all humans, to upload their minds into a highly resilient computational environment, distributed across the Solar System. The environment will ensure that they never die and never create a weapon of mass destruction.

Some of the humans will suffer from existential dread, fearing that they are mere copies of the biological source. But that could be cured.

As a result, the Friendly AI will disassemble humans into useful atoms, for their own good.

Many people would describe such a future as utopian. It is much better than many alternatives (including the status quo), and it is definitely better than omnicide by a rogue AGI.

But many others would fear and oppose it.

The described scenario is not the most surprising way an FAI could try to protect us. Mere humans can’t predict the most surprising ways an FAI might act.