
General Intelligence


General Intelligence or Universal Intelligence is the ability to efficiently achieve goals in a wide range of domains.

This tag is specifically for discussing intelligence in the broad sense: for discussion of IQ testing and psychometric intelligence, see IQ / g-factor; for discussion about e.g. specific results in artificial intelligence, see AI. These tags may overlap with this one to the extent that they discuss the nature of general intelligence.

Examples of posts that fall under this tag include The Power of Intelligence, Measuring Optimization Power, Adaptation-Executers, not Fitness-Maximizers, Distinctions in Types of Thought, and The Octopus, the Dolphin and Us: a Great Filter tale.

On the difference between psychometric intelligence (IQ) and general intelligence:

But the word “intelligence” commonly evokes pictures of the starving professor with an IQ of 160 and the billionaire CEO with an IQ of merely 120. Indeed there are differences of individual ability apart from “book smarts” which contribute to relative success in the human world: enthusiasm, social skills, education, musical talent, rationality. Note that each factor I listed is cognitive. Social skills reside in the brain, not the liver. And jokes aside, you will not find many CEOs, nor yet professors of academia, who are chimpanzees. You will not find many acclaimed rationalists, nor artists, nor poets, nor leaders, nor engineers, nor skilled networkers, nor martial artists, nor musical composers who are mice. Intelligence is the foundation of human power, the strength that fuels our other arts.

-- Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk

Definitions of General Intelligence

After reviewing extensive literature on the subject, Legg and Hutter [1] summarize the many proposed definitions in the informal statement "Intelligence measures an agent's ability to achieve goals in a wide range of environments." They then show that this definition can be mathematically formalized given reasonable definitions of its terms. Using Solomonoff induction (a formalization of Occam's razor), they construct a universal measure of intelligence in which an agent's performance in each environment is weighted by that environment's simplicity, so that environments requiring more complex descriptions contribute less to the score. They argue that the resulting formalization is a valid, meaningful, informative, general, unbiased, fundamental, objective, universal and practical definition of intelligence.
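
In symbols, their measure can be sketched as follows (the notation follows Legg and Hutter's paper: E is the class of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V^π_μ is the expected total reward of agent π in μ):

    \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

An agent scores highly by accumulating reward across many environments, with simpler environments weighted most heavily.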

We can relate Legg and Hutter's definition to the concept of optimization. According to Eliezer Yudkowsky, intelligence is efficient cross-domain optimization [2]: it measures an agent's capacity for efficient cross-domain optimization of the world according to the agent's preferences [3]. Optimization is measured not only by the capacity to achieve the desired goal but also by how few resources are used in doing so. It is the ability to steer the future into the small target of desired outcomes within the much larger space of all possible outcomes, using as few resources as possible. For example, when Deep Blue defeated Kasparov, it hit the narrow set of outcomes in which its sequence of moves, given Kasparov's replies, won the game, out of the very large set of possible move sequences. In that domain it out-optimized Kasparov. However, Kasparov would have defeated Deep Blue in almost any other relevant domain, and hence he is considered the more intelligent of the two.
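
One way to quantify "hitting a small target", in the spirit of Measuring Optimization Power, is to count bits of optimization (a paraphrase of the post's idea rather than its exact notation):

    \mathrm{OP} \;=\; \log_2 \frac{|\Omega|}{|\{\omega \in \Omega : \omega \succeq \omega^{*}\}|}

where Ω is the space of possible outcomes, ω* is the outcome actually achieved, and ⪰ is the agent's preference ordering. Deep Blue's win corresponds to a target set that is tiny relative to the space of possible game continuations, and hence to many bits of optimization.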

Cast in possible-world vocabulary, intelligence is:

  1. the ability to reliably realize one of a small set of possible future worlds that the agent prefers over the vast set of all other, less preferred, possible worlds; while

  2. using fewer resources than the alternative paths for getting there; and in

  3. as diverse a range of domains as possible.

The more worlds there are that the agent prefers to the one it actually realized, the less intelligent the agent is. The more worlds there are that it prefers less than the one it realized, the more intelligent it is. (Equivalently: the smaller the set of worlds at least as preferred as the one realized, the more intelligent the agent is.) The fewer alternative paths there were that could have realized the desired world with fewer resources than the agent spent, the more intelligent it is. And finally, the more domains in which the agent can optimize this efficiently, the more intelligent it is. Restating it, the intelligence of an agent is directly proportional to:

  1. the number of possible worlds it prefers less than the one it realizes; and

  2. the diversity of domains in which it can be efficiently optimal;

and it is, accordingly, inversely proportional to:

  1. the number of possible worlds it prefers over the one it realizes; and

  2. the number of alternative paths that could have realized the desired world with fewer resources than it spent.
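
For concreteness, here is a deliberately toy sketch of these proportionalities in Python. The function names and the scoring rule (bits of optimization per unit of resources, averaged over domains) are illustrative assumptions, not a formalization taken from the literature:

    import math

    def optimization_power_bits(total_outcomes: int, outcomes_at_least_as_good: int) -> float:
        # Bits of optimization: how rare, under the agent's preferences, the
        # achieved outcome is within the whole space of possible outcomes.
        return math.log2(total_outcomes / outcomes_at_least_as_good)

    def toy_intelligence_score(domains):
        # domains: list of (total_outcomes, outcomes_at_least_as_good, resources_spent).
        # Rarer preferred outcomes, fewer resources, and more domains all raise the score.
        per_domain = [
            optimization_power_bits(n, k) / resources
            for (n, k, resources) in domains
        ]
        return sum(per_domain) / len(per_domain)

    # Example: an agent hits a 1-in-1024 outcome cheaply in one domain,
    # and only a 1-in-4 outcome, at greater cost, in another.
    print(toy_intelligence_score([(1024, 1, 2.0), (4, 1, 5.0)]))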

This definition avoids several problems common in many other definitions of intelligence; in particular, it avoids anthropomorphizing intelligence.

See Also

Reframing Superintelligence: Comprehensive AI Services as General Intelligence
rohinmshah, 8 Jan 2019 7:12 UTC
94 points, 74 comments, 5 min read, LW link, 2 nominations, 2 reviews
(www.fhi.ox.ac.uk)

Humans Who Are Not Concentrating Are Not General Intelligences
sarahconstantin, 25 Feb 2019 20:40 UTC
158 points, 34 comments, 6 min read, LW link, 4 nominations, 1 review
(srconstantin.wordpress.com)

AlphaStar: Impressive for RL progress, not for AGI progress
orthonormal, 2 Nov 2019 1:50 UTC
113 points, 58 comments, 2 min read, LW link, 2 nominations, 1 review

Artificial Addition
Eliezer Yudkowsky, 20 Nov 2007 7:58 UTC
58 points, 124 comments, 6 min read, LW link

Distinctions in Types of Thought
sarahconstantin, 10 Oct 2017 3:36 UTC
33 points, 24 comments, 13 min read, LW link

The Octopus, the Dolphin and Us: a Great Filter tale
Stuart_Armstrong, 3 Sep 2014 21:37 UTC
76 points, 236 comments, 3 min read, LW link

Adaptation-Executers, not Fitness-Maximizers
Eliezer Yudkowsky, 11 Nov 2007 6:39 UTC
89 points, 32 comments, 3 min read, LW link

The Power of Intelligence
Eliezer Yudkowsky, 1 Jan 2007 20:00 UTC
43 points, 3 comments, 4 min read, LW link

Measuring Optimization Power
Eliezer Yudkowsky, 27 Oct 2008 21:44 UTC
40 points, 34 comments, 6 min read, LW link

Complexity and Intelligence
Eliezer Yudkowsky, 3 Nov 2008 20:27 UTC
30 points, 79 comments, 11 min read, LW link

The Principled Intelligence Hypothesis
KatjaGrace, 14 Feb 2018 1:00 UTC
34 points, 15 comments, 4 min read, LW link
(meteuphoric.wordpress.com)

How minimal is our intelligence?
Douglas_Reay, 25 Nov 2012 23:34 UTC
78 points, 214 comments, 6 min read, LW link

Belief in Intelligence
Eliezer Yudkowsky, 25 Oct 2008 15:00 UTC
51 points, 36 comments, 3 min read, LW link

What Intelligence Tests Miss: The psychology of rational thought
Kaj_Sotala, 11 Jul 2010 23:01 UTC
50 points, 55 comments, 9 min read, LW link

The Limits of Intelligence and Me: Domain Expertise
ChrisHallquist, 7 Dec 2013 8:23 UTC
45 points, 79 comments, 5 min read, LW link

The ground of optimization
alexflint, 20 Jun 2020 0:38 UTC
163 points, 68 comments, 27 min read, LW link

Book Review: The Eureka Factor
drossbucket, 4 Mar 2019 19:47 UTC
19 points, 2 comments, 13 min read, LW link

Everyday Lessons from High-Dimensional Optimization
johnswentworth, 6 Jun 2020 20:57 UTC
132 points, 36 comments, 6 min read, LW link

Chapter 24: Machiavellian Intelligence Hypothesis
Eliezer Yudkowsky, 14 Mar 2015 19:00 UTC
14 points, 0 comments, 14 min read, LW link

Two explanations for variation in human abilities
Matthew Barnett, 25 Oct 2019 22:06 UTC
83 points, 26 comments, 6 min read, LW link, 2 nominations, 1 review

Is Clickbait Destroying Our General Intelligence?
Eliezer Yudkowsky, 16 Nov 2018 23:06 UTC
153 points, 58 comments, 5 min read, LW link

...Recursion, Magic
Eliezer Yudkowsky, 25 Nov 2008 9:10 UTC
23 points, 28 comments, 5 min read, LW link

The Flynn Effect Clarified
PeterMcCluskey, 12 Dec 2020 5:18 UTC
33 points, 2 comments, 4 min read, LW link
(www.bayesianinvestor.com)

Ben Goertzel’s “Kinds of Minds”
JoshuaFox, 11 Apr 2021 12:41 UTC
12 points, 4 comments, 1 min read, LW link

Implicit extortion
paulfchristiano, 13 Apr 2018 16:33 UTC
29 points, 16 comments, 6 min read, LW link
(ai-alignment.com)

How special are human brains among animal brains?
zhukeepa, 1 Apr 2020 1:35 UTC
72 points, 38 comments, 7 min read, LW link

Three ways that “Sufficiently optimized agents appear coherent” can be false
Wei_Dai, 5 Mar 2019 21:52 UTC
63 points, 3 comments, 3 min read, LW link

A rant against robots
Lê Nguyên Hoang, 14 Jan 2020 22:03 UTC
60 points, 7 comments, 5 min read, LW link

Might humans not be the most intelligent animals?
Matthew Barnett, 23 Dec 2019 21:50 UTC
53 points, 41 comments, 3 min read, LW link

AGI and Friendly AI in the dominant AI textbook
lukeprog, 11 Mar 2011 4:12 UTC
73 points, 27 comments, 3 min read, LW link

My Best and Worst Mistake
Eliezer Yudkowsky, 16 Sep 2008 0:43 UTC
51 points, 17 comments, 5 min read, LW link

Another take on agent foundations: formalizing zero-shot reasoning
zhukeepa, 1 Jul 2018 6:12 UTC
57 points, 20 comments, 12 min read, LW link

My Childhood Role Model
Eliezer Yudkowsky, 23 May 2008 8:51 UTC
54 points, 63 comments, 5 min read, LW link

If brains are computers, what kind of computers are they? (Dennett transcript)
Ben Pace, 30 Jan 2020 5:07 UTC
36 points, 15 comments, 27 min read, LW link

Expected Creative Surprises
Eliezer Yudkowsky, 24 Oct 2008 22:22 UTC
41 points, 45 comments, 4 min read, LW link

Surprised by Brains
Eliezer Yudkowsky, 23 Nov 2008 7:26 UTC
47 points, 28 comments, 7 min read, LW link

Beyond Smart and Stupid
PhilGoetz, 17 May 2011 6:25 UTC
34 points, 44 comments, 3 min read, LW link

When Anthropomorphism Became Stupid
Eliezer Yudkowsky, 16 Aug 2008 23:43 UTC
38 points, 12 comments, 3 min read, LW link

Modest Superintelligences
Wei_Dai, 22 Mar 2012 0:29 UTC
34 points, 100 comments, 1 min read, LW link

HELP: Do I have a chance at becoming intelligent?
johnbgone, 26 Oct 2010 21:41 UTC
36 points, 68 comments, 1 min read, LW link

Muehlhauser-Wang Dialogue
lukeprog, 22 Apr 2012 22:40 UTC
34 points, 288 comments, 12 min read, LW link

[Question] Is Stupidity Expanding? Some Hypotheses.
David_Gross, 15 Oct 2020 3:28 UTC
68 points, 42 comments, 4 min read, LW link

Economic Definition of Intelligence?
Eliezer Yudkowsky, 29 Oct 2008 19:32 UTC
14 points, 9 comments, 7 min read, LW link

The One That Isn’t There
Annoyance, 20 Nov 2009 20:10 UTC
18 points, 6 comments, 3 min read, LW link

Recognizing Intelligence
Eliezer Yudkowsky, 7 Nov 2008 23:22 UTC
13 points, 30 comments, 4 min read, LW link

Automated intelligence is not AI
KatjaGrace, 1 Nov 2020 23:30 UTC
54 points, 10 comments, 2 min read, LW link
(meteuphoric.com)

Fundamental Philosophical Problems Inherent in AI discourse
AlexSadler, 16 Sep 2018 21:03 UTC
23 points, 1 comment, 17 min read, LW link

Can we create a function that provably predicts the optimization power of intelligences?
whpearson, 28 May 2009 11:35 UTC
−7 points, 17 comments, 2 min read, LW link

Concrete vs Contextual values
whpearson, 2 Jun 2009 9:47 UTC
−1 points, 32 comments, 3 min read, LW link

How to test your mental performance at the moment?
taw, 23 Nov 2009 18:35 UTC
24 points, 74 comments, 1 min read, LW link

Intelligence enhancement as existential risk mitigation
Roko, 15 Jun 2009 19:35 UTC
21 points, 244 comments, 3 min read, LW link

Humans are not automatically strategic
AnnaSalamon, 8 Sep 2010 7:02 UTC
310 points, 274 comments, 4 min read, LW link

[Question] Will AGI have “human” flaws?
Agustinus Theodorus, 23 Dec 2020 3:43 UTC
1 point, 2 comments, 1 min read, LW link

Productivity as a function of ability in theoretical fields
Stefan_Schubert, 26 Jan 2014 13:16 UTC
37 points, 34 comments, 4 min read, LW link

Is the argument that AI is an xrisk valid?
MACannon, 19 Jul 2021 13:20 UTC
4 points, 53 comments, 1 min read, LW link
(onlinelibrary.wiley.com)