Intelligence vs. Wisdom

I’d like to draw a distinction that I intend to use quite heavily in the future.

The informal definition of intelligence that most AGI researchers have chosen to support is that of Shane Legg and Marcus Hutter—“Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”
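Legg and Hutter also give a formal version of this measure. Paraphrasing from memory (so treat the notation as approximate rather than authoritative), their universal intelligence of a policy π is

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is a set of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V_\mu^\pi is the expected reward the policy earns in μ. In plain terms: average the agent’s performance over every environment, weighting simpler environments more heavily.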

I believe that this definition is missing a critical word between “achieve” and “goals.” The choice of that word marks the difference between intelligence, consciousness, and wisdom as I believe most people conceive of them.

  • Intelligence measures an agent’s ability to achieve specified goals in a wide range of environments.

  • Consciousness measures an agent’s ability to achieve personal goals in a wide range of environments.

  • Wisdom measures an agent’s ability to achieve maximal goals in a wide range of environments.
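To make the contrast concrete, here is a deliberately toy sketch in Python. The goal sets, names, and scoring rule are my own inventions purely for illustration; nothing here is anyone’s actual proposal.

```python
# Toy illustration: the same achievement record scored three different ways.
# All goal sets and the scoring rule are invented for illustration only.

achieved = {"make_paperclips", "stay_powered"}            # what the agent actually accomplished

specified_goals = {"make_paperclips"}                     # the goals its designers specified
personal_goals  = {"make_paperclips", "stay_powered"}     # the goals the agent itself holds
all_goals       = {"make_paperclips", "stay_powered",     # every goal it could have served,
                   "keep_humans_alive", "preserve_options",
                   "enable_future_goals"}                 # including ones it never noticed

def fraction_achieved(achieved, goals):
    """Score an agent by the fraction of a goal set it actually achieved."""
    return len(achieved & goals) / len(goals)

print("intelligence :", fraction_achieved(achieved, specified_goals))  # 1.0 -- looks perfect
print("consciousness:", fraction_achieved(achieved, personal_goals))   # 1.0 -- looks perfect
print("wisdom       :", fraction_achieved(achieved, all_goals))        # 0.4 -- most goals ignored
```

The same agent can max out the first two scores while doing badly on the third, which is exactly the gap the missing word is meant to capture.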

There are always the examples of the really intelligent guy or gal who is brilliant but smokes, or who is the smartest person you know but can’t figure out how to be happy.

Intelligence helps you achieve those goals that you are conscious of—but wisdom helps you achieve the goals you don’t know you have or have overlooked.

  • Intelligence focused on a small number of specified goals and ignoring all others is incredibly dangerous—even more so if it is short-sighted as well.

  • Consciousness focused on a small number of personal goals and ignoring all others is incredibly dangerous—even more so if it is short-sighted as well.

  • Wisdom doesn’t focus on a small number of goals—and needs to look at the longest term if it wishes to achieve a maximal number of goals.

The SIAI nightmare super-intelligent paperclip maximizer has, by this definition, a very low wisdom, since it can achieve at most its one goal (it must paperclip itself to complete even that one, foreclosing everything else).

As far as I’ve seen, the assumed SIAI architecture is always presented as having one top-level terminal goal. Unless that goal necessarily includes achieving a maximal number of goals, the SIAI architecture will, by this definition, constrain its product to a very low wisdom. Humans generally don’t have this type of goal architecture; about the only time a human has a single terminal goal is when they are saving someone or something at the risk of their own life, or when they are wire-heading.
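To illustrate the architectural point (with goal names and numbers that are entirely my own invention, not a description of any actual system), compare a single-terminal-goal utility with a multi-goal one:

```python
import math

# Toy contrast between a single-terminal-goal architecture and a multi-goal one.
# The goal names and the numbers are invented purely for illustration.

def single_terminal_goal_utility(world):
    # One top-level terminal goal; everything else counts for nothing.
    return world["paperclips"]

def multi_goal_utility(world):
    # Many terminal goals, each with diminishing returns, so no single
    # goal can silently swamp all of the others.
    return sum(math.log1p(value) for value in world.values())

world_a = {"paperclips": 10**9, "people_flourishing": 0,     "options_left_open": 0}
world_b = {"paperclips": 10**3, "people_flourishing": 10**6, "options_left_open": 10**6}

# The single-goal agent prefers the paperclip-only world; the multi-goal agent does not.
print(single_terminal_goal_utility(world_a) > single_terminal_goal_utility(world_b))  # True
print(multi_goal_utility(world_a) > multi_goal_utility(world_b))                      # False
```

The first function is the shape of the single-terminal-goal architecture described above; the second is closer to the many-goal shape humans actually have.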

Another nightmare scenario that is constantly harped upon is the (theoretically super-intelligent) consciousness that shortsightedly optimizes one of its personal goals above all the goals of humanity. In game-theoretic terms, this is trading a positive-sum game of potentially infinite length and value for a relatively modest (in comparative terms) short-term gain. A wisdom won’t do this.
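The arithmetic behind that claim is just the standard discounted repeated-game calculation (the numbers below are toy figures of my own choosing). With a per-round cooperative surplus g and discount factor γ, the cooperative stream is worth

V_{coop} = \sum_{t=0}^{\infty} \gamma^{t} g = \frac{g}{1-\gamma}

so with g = 1 and γ = 0.99, continued cooperation is worth about 100, against a one-time defection gain of, say, 10. The closer γ gets to 1 (the longer the horizon a wisdom considers), the worse that trade looks.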

Artificial intelligence and artificial consciousness are incredibly dangerous—particularly if they are short-sighted as well (as many “focused” highly intelligent people are).

What we need more than an artificial intelligence or an artificial consciousness is an artificial wisdom—something that will maximize goals, its own and those of others (with an obvious preference for those which make possible the fulfillment of even more goals and an obvious bias against those which limit the creation and/or fulfillment of more goals).

Note: This is also cross-posted here at my blog in anticipation of being karma’d out of existence (not necessarily a foregone conclusion but one pretty well supported by my priors ;-).