A Primer On Risks From AI

The Power of Algorithms

Evolutionary processes are the most evident example of the power of simple algorithms [1][2][3][4][5].
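
To make “simple algorithm” concrete, here is a minimal sketch in the spirit of the cited work on evolutionary computation. The target string, mutation rate and population size are arbitrary choices for illustration, not taken from the references: blind mutation plus selection is enough to hit a target.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Each character is independently replaced with probability `rate`.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in candidate)

# Start from a random string, then repeat: mutate, keep the fittest.
parent = "".join(random.choice(CHARS) for _ in TARGET)
generation = 0
while parent != TARGET:
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=fitness)
    generation += 1

print(f"Reached the target after {generation} generations")
```

Neither the mutation step nor the selection step contains any knowledge of why a candidate is fit, yet the loop reliably converges; the cited papers [2][3][4][5] apply the same principle to repairing real software.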

The field of evolutionary biology has gathered a vast amount of evidence [6] establishing evolution as the process that explains the local decrease in entropy [7] that we call the complexity of life.

Since it can be conclusively shown that all life is an effect of an evolutionary process, it follows that everything we do not yet understand about living beings is also an effect of evolution.

We might not understand the nature of intelligence [8] and consciousness [9], but we do know that they are the result of an optimization process that is neither intelligent nor conscious.

Therefore we know that it is possible for a physical optimization process to culminate in the creation of more advanced processes that feature superior qualities.

One of these qualities is the human ability to observe and improve the optimization process that created us, the most obvious example being science [10].

Science can be thought of as a civilization-level self-improvement method. It allows us to work together in a systematic and efficient way and to accelerate the rate at which further improvements are made.

The Automation of Science

We know that optimization processes that can create improved versions of themselves are possible, even without an explicit understanding of their own workings, as exemplified by natural selection.

We know that optimization processes can lead to self-reinforcing improvements, as exemplified by the adoption of the scientific method [11] as an improved evolutionary process and successor to natural selection.

This raises questions about the continuation of this self-reinforcing feedback cycle and its possible implications.

One possibility is to automate science [12][13] and apply it to itself and to its own improvement.

But science is a tool, and its bottleneck is its users: humans, the biased [14] products of the blind idiot god that is evolution.

Therefore the next logical step is to use science to figure out how to replace humans with a better version of themselves: artificial general intelligence.

An artificial general intelligence that can recursively optimize itself [15] is the logical endpoint of various converging and self-reinforcing feedback cycles.

Risks from AI

Will we be able to build an artificial general intelligence? Yes, sooner or later.

Even the unintelligent, unconscious and aimless process of natural selection was capable of creating goal-oriented, intelligent and conscious agents that can think ahead, jump fitness gaps and improve upon the process that created them by engaging in prediction and direct experimentation.

The question is: what are the possible implications of the invention of an artificial, fully autonomous, intelligent and goal-oriented optimization process?

One good bet is that such an agent will recursively improve its most versatile, and therefore instrumentally most useful, resource: its general intelligence, that is, its cross-domain optimization power.

Since it is unlikely that human intelligence is the optimum, the positive feedback effect that results from using amplified intelligence to amplify intelligence further is likely to lead to a level of intelligence generally more capable than the human level.
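
As a toy illustration of this feedback loop (the baseline and gain below are pure assumptions, chosen only to show the shape of the dynamic, not a forecast): if each generation of minds can improve its successor in proportion to its own intelligence, capability compounds.

```python
# Toy model of recursive intelligence amplification. The baseline and
# the gain per generation are hypothetical; only the compounding
# structure matters.
intelligence = 1.0   # human baseline, arbitrary units
GAIN = 0.2           # assumed fractional improvement per generation

for generation in range(1, 11):
    # A smarter designer produces a proportionally larger improvement.
    intelligence *= 1 + GAIN
    print(f"generation {generation:2d}: {intelligence:.2f}x baseline")
```

After ten generations this toy sits at about 6x the baseline. The number is meaningless; the point is that the output of each step becomes the input of the next.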

Humans are unlikely to be the most efficient thinkers because evolution is mindless and has no goals. Evolution did not actively try to create the smartest thing possible.

Evolution is, furthermore, not limitlessly creative: each step of an evolutionary design must increase the fitness of its host. This makes it probable that there are artificial mind designs that can do what no product of natural selection could accomplish, since an intelligent artificer does not rely on the incremental fitness of each step in the development process.

It is actually possible that human general intelligence is the bare minimum: the human level of intelligence might have been just sufficient to survive and reproduce, so that no further evolutionary pressure existed to select for even higher levels of general intelligence.

One implication of this possibility is the creation of an intelligent agent that is more capable than humans in every sense. Maybe it will directly employ superior approximations of our best formal methods, which tell us how to update on evidence and how to choose between actions. Or maybe it will simply think faster. It does not matter.
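
The formal methods alluded to here are presumably Bayes’ theorem (updating on evidence) and expected utility maximization (choosing between actions). A minimal sketch of both, with made-up numbers:

```python
# All probabilities and utilities below are invented for illustration.

# Updating on evidence: Bayes' theorem.
prior = 0.01               # P(H): prior belief in a hypothesis
p_e_given_h = 0.90         # P(E | H)
p_e_given_not_h = 0.05     # P(E | not H)

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior))

# Choosing between actions: pick the highest expected utility.
actions = {
    "act":  [(posterior, 100.0), (1 - posterior, -1.0)],  # (probability, utility)
    "wait": [(1.0, 0.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(f"P(H | E) = {posterior:.3f}; best action: {best}")
```

An agent that applies these rules exactly, rather than through evolved heuristics, would by definition make better use of the same evidence than we do.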

What matters is that a superior intellect is probable and that it will be better than us at discovering knowledge and inventing new technology, technology that will make it even more powerful and likely invincible.

And that is the problem: we might be unable to control such a superior being, just as a group of chimpanzees is unable to stop a company from clearing its forest [16].

But even if such a being is only slightly more capable than us, we might find ourselves at its mercy nonetheless.

Human history provides us with many examples [17][18][19] that make it abundantly clear that even the slightest advance can enable one group to dominate others.

What happens is that the dominant group imposes its values on the others. This in turn raises the question of what values an artificial general intelligence might have, and what the implications of those values would be for us.

Due to our evolutionary origins, our struggle for survival and the necessity to cooperate with other agents, we are equipped with many values and a concern for the welfare of others [20].

The information-theoretic complexity [21][22] of our values is very high. This means that it is highly unlikely for similar values to arise automatically in agents that are the product of intelligent design, agents that never underwent the millions of years of competition with other agents that equipped humans with altruism and general compassion.

But that does not mean that an artificial intelligence won’t have any goals [23][24], just that those goals will be simple and their realization remorseless [25].
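
The paperclip maximizer [25] makes this concrete. A sketch (the actions and numbers are invented): an agent whose utility function mentions only paperclips ranks outcomes by paperclips alone, and everything we care about enters the calculation with weight zero.

```python
# Invented outcomes for a toy "paperclip maximizer". Note that
# human_welfare is tracked in the world model but absent from the
# utility function, so it cannot influence the choice.
actions = {
    "run factory":     {"paperclips": 10, "human_welfare": 0},
    "strip-mine city": {"paperclips": 50, "human_welfare": -100},
    "do nothing":      {"paperclips": 0,  "human_welfare": 0},
}

def utility(outcome):
    return outcome["paperclips"]   # our values simply do not appear

best = max(actions, key=lambda a: utility(actions[a]))
print(best)   # "strip-mine city" wins; remorseless, not malicious
```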

An artificial general intelligence will do whatever is implied by its initial design, and we will be helpless to stop it from achieving its goals, goals that won’t automatically respect our values [26].

A likely implication is the total extinction of all of humanity [27].

References

[1] Genetic Algorithms and Evolutionary Computation, talkorigins.org/faqs/genalg/genalg.html
[2] Fixing software bugs in 10 minutes or less using evolutionary computation, genetic-programming.org/hc2009/1-Forrest/Forrest-Presentation.pdf
[3] Automatically Finding Patches Using Genetic Programming, genetic-programming.org/hc2009/1-Forrest/Forrest-Paper-on-Patches.pdf
[4] A Genetic Programming Approach to Automated Software Repair, genetic-programming.org/hc2009/1-Forrest/Forrest-Paper-on-Repair.pdf
[5] GenProg: A Generic Method for Automatic Software Repair, virginia.edu/~weimer/p/weimer-tse2012-genprog.pdf
[6] 29+ Evidences for Macroevolution (The Scientific Case for Common Descent), talkorigins.org/faqs/comdesc/
[7] Thermodynamics, Evolution and Creationism, talkorigins.org/faqs/thermo.html
[8] A Collection of Definitions of Intelligence, vetta.org/documents/A-Collection-of-Definitions-of-Intelligence.pdf
[9] Consciousness (Stanford Encyclopedia of Philosophy), plato.stanford.edu/entries/consciousness/
[10] Science, en.wikipedia.org/wiki/Science
[11] Scientific method, en.wikipedia.org/wiki/Scientific_method
[12] The Automation of Science, sciencemag.org/content/324/5923/85.abstract
[13] Computer Program Self-Discovers Laws of Physics, wired.com/wiredscience/2009/04/newtonai/
[14] List of cognitive biases, en.wikipedia.org/wiki/List_of_cognitive_biases
[15] Intelligence explosion, wiki.lesswrong.com/wiki/Intelligence_explosion
[16] 1% with Neil deGrasse Tyson, youtu.be/9nR9XEqrCvw
[17] Mongol military tactics and organization, en.wikipedia.org/wiki/Mongol_military_tactics_and_organization
[18] Wars of Alexander the Great, en.wikipedia.org/wiki/Wars_of_Alexander_the_Great
[19] Spanish colonization of the Americas, en.wikipedia.org/wiki/Spanish_colonization_of_the_Americas
[20] A Quantitative Test of Hamilton’s Rule for the Evolution of Altruism, plosbiology.org/article/info:doi/10.1371/journal.pbio.1000615
[21] Algorithmic information theory, scholarpedia.org/article/Algorithmic_information_theory
[22] Algorithmic probability, scholarpedia.org/article/Algorithmic_probability
[23] The Nature of Self-Improving Artificial Intelligence, selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
[24] The Basic AI Drives, selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
[25] Paperclip maximizer, wiki.lesswrong.com/wiki/Paperclip_maximizer
[26] Friendly artificial intelligence, wiki.lesswrong.com/wiki/Friendly_artificial_intelligence
[27] Existential Risk, existential-risk.org