
General Intelligence


General Intelligence or Universal Intelligence is the ability to efficiently achieve goals in a wide range of domains.

This tag is specifically for discussing intelligence in the broad sense: for discussion of IQ testing and psychometric intelligence, see IQ / g-factor; for discussion about e.g. specific results in artificial intelligence, see AI. These tags may overlap with this one to the extent that they discuss the nature of general intelligence.

Examples of posts that fall under this tag include The Power of Intelligence, Measuring Optimization Power, Adaptation-Executers, not Fitness-Maximizers, Distinctions in Types of Thought, and The Octopus, the Dolphin and Us: a Great Filter tale.

On the difference between psychometric intelligence (IQ) and general intelligence:

But the word “intelligence” commonly evokes pictures of the starving professor with an IQ of 160 and the billionaire CEO with an IQ of merely 120. Indeed there are differences of individual ability apart from “book smarts” which contribute to relative success in the human world: enthusiasm, social skills, education, musical talent, rationality. Note that each factor I listed is cognitive. Social skills reside in the brain, not the liver. And jokes aside, you will not find many CEOs, nor yet professors of academia, who are chimpanzees. You will not find many acclaimed rationalists, nor artists, nor poets, nor leaders, nor engineers, nor skilled networkers, nor martial artists, nor musical composers who are mice. Intelligence is the foundation of human power, the strength that fuels our other arts.

-- Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk

Definitions of General Intelligence

After reviewing extensive literature on the subject, Legg and Hutter [1] summarize the many definitions they survey in the informal statement “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” They then show this definition can be mathematically formalized given reasonable mathematical definitions of its terms. They use Solomonoff induction—a formalization of Occam’s razor—to construct a universal artificial intelligence with an embedded utility function which assigns less utility to actions based on theories with higher complexity. They argue this final formalization is a valid, meaningful, informative, general, unbiased, fundamental, objective, universal and practical definition of intelligence.
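Their formalization can be sketched with the universal intelligence measure from Legg and Hutter's paper (the notation below is a paraphrase, not a quotation): an agent's intelligence is its expected performance across all computable environments, with each environment weighted by its simplicity.

```latex
% Legg and Hutter's universal intelligence measure (sketch).
% E             : the set of computable, reward-bearing environments
% K(\mu)        : Kolmogorov complexity of environment \mu (the Occam's-razor weighting)
% V^{\pi}_{\mu} : expected cumulative reward agent \pi obtains in environment \mu
\[
  \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
\]
```

Because simpler environments receive exponentially more weight, an agent cannot score well by excelling only in a handful of contrived, high-complexity environments.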

We can relate Legg and Hutter’s definition to the concept of optimization. According to Eliezer Yudkowsky, intelligence is efficient cross-domain optimization [2]. It measures an agent’s capacity for efficient cross-domain optimization of the world according to the agent’s preferences [3]. Optimization measures not only the capacity to achieve the desired goal but is also inversely proportional to the amount of resources used. It is the ability to steer the future so it hits the small target of desired outcomes in the large space of all possible outcomes, using as few resources as possible. For example, when Deep Blue defeated Kasparov, it was able to hit the small set of outcomes in which it made the right sequence of moves, given Kasparov’s moves, out of the very large set of all possible move sequences. In that domain, it was a better optimizer than Kasparov. However, Kasparov would have defeated Deep Blue in almost any other relevant domain, and hence he is considered more intelligent.
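One way to make “steering the future into a small target” quantitative, along the lines of the Measuring Optimization Power post listed below, is to count the narrowing in bits (a sketch that assumes a finite outcome space with all outcomes weighted equally):

```latex
% Optimization power in bits (sketch; finite, uniformly weighted outcome space assumed).
% S : the set of all possible outcomes
% G : the subset of outcomes ranked at least as highly as the outcome actually achieved
\[
  \mathrm{OP} = \log_2 \frac{|S|}{|G|}
\]
```

On this accounting, Deep Blue earns many bits within chess but almost none elsewhere, while Kasparov earns a moderate number of bits across a far wider range of domains.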

One could cast this definition in possible-world vocabulary: intelligence is

  1. the ability to precisely realize one of the members of a small set of possible future worlds that are preferred over the vast set of all other, less-preferred possible worlds; while

  2. using fewer resources than the alternative paths for getting there; and in

  3. the most diverse range of domains possible.

The more worlds there are with a higher preference than the one the agent realizes, the less intelligent the agent is; the more worlds there are with a lower preference than the one realized, the more intelligent it is. (Or: the smaller the set of worlds at least as preferable as the one realized, the more intelligent the agent is.) The fewer paths there are that would reach the desired world using fewer resources than the agent spent, the more intelligent the agent is. And finally, the more domains in which the agent can be efficiently optimal, the more intelligent it is. Restating it, the intelligence of an agent is directly proportional to the number of less-preferred worlds it passes over and the diversity of domains in which it can optimize, and it is, accordingly, inversely proportional to the number of more-preferred worlds left unrealized and the resources it spends.
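As a toy illustration of the counting above (a hypothetical sketch, not code from any of the posts referenced on this page), one can score an agent's realized world against a preference ordering over a small finite set of possible worlds:

```python
import math

def optimization_power_bits(preferences, realized_world):
    """Toy measure of how much an agent narrowed down the space of worlds.

    preferences: dict mapping each possible world to a numeric preference score
    realized_world: the world the agent actually brought about

    The smaller the set of worlds at least as preferable as the realized one,
    the more bits of optimization the agent demonstrated.
    """
    total = len(preferences)
    at_least_as_good = sum(
        1 for score in preferences.values()
        if score >= preferences[realized_world]
    )
    return math.log2(total / at_least_as_good)

# Five possible worlds, ranked from worst to best.
worlds = {"ruin": 0, "mediocre": 1, "okay": 2, "good": 3, "ideal": 4}
print(optimization_power_bits(worlds, "good"))   # ~1.32 bits: 2 of 5 worlds are at least this good
print(optimization_power_bits(worlds, "ideal"))  # ~2.32 bits: only 1 of 5 worlds is this good
```

A fuller version would also discount by the resources the agent spent and average the score over many domains, matching items 2 and 3 in the list above.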

This definition avoids several problems common in many other definitions; in particular, it avoids anthropomorphizing intelligence.

See Also

Humans Who Are Not Concentrating Are Not General Intelligences
sarahconstantin · 25 Feb 2019 20:40 UTC · 138 points · 29 comments · 6 min read · LW link (srconstantin.wordpress.com)

AlphaStar: Impressive for RL progress, not for AGI progress
orthonormal · 2 Nov 2019 1:50 UTC · 119 points · 54 comments · 2 min read · LW link

Artificial Addition
Eliezer Yudkowsky · 20 Nov 2007 7:58 UTC · 53 points · 122 comments · 6 min read · LW link

Distinctions in Types of Thought
sarahconstantin · 10 Oct 2017 3:36 UTC · 62 points · 24 comments · 13 min read · LW link

The Octopus, the Dolphin and Us: a Great Filter tale
Stuart_Armstrong · 3 Sep 2014 21:37 UTC · 60 points · 236 comments · 3 min read · LW link

Adaptation-Executers, not Fitness-Maximizers
Eliezer Yudkowsky · 11 Nov 2007 6:39 UTC · 62 points · 32 comments · 3 min read · LW link

The Power of Intelligence
Eliezer Yudkowsky · 1 Jan 2007 20:00 UTC · 33 points · 3 comments · 4 min read · LW link

Measuring Optimization Power
Eliezer Yudkowsky · 27 Oct 2008 21:44 UTC · 29 points · 34 comments · 6 min read · LW link

Complexity and Intelligence
Eliezer Yudkowsky · 3 Nov 2008 20:27 UTC · 26 points · 79 comments · 11 min read · LW link

Reframing Superintelligence: Comprehensive AI Services as General Intelligence
rohinmshah · 8 Jan 2019 7:12 UTC · 98 points · 70 comments · 5 min read · LW link (www.fhi.ox.ac.uk)

The Principled Intelligence Hypothesis
KatjaGrace · 14 Feb 2018 1:00 UTC · 66 points · 15 comments · 4 min read · LW link (meteuphoric.wordpress.com)

How minimal is our intelligence?
Douglas_Reay · 25 Nov 2012 23:34 UTC · 56 points · 214 comments · 6 min read · LW link

Belief in Intelligence
Eliezer Yudkowsky · 25 Oct 2008 15:00 UTC · 45 points · 36 comments · 3 min read · LW link

What Intelligence Tests Miss: The psychology of rational thought
Kaj_Sotala · 11 Jul 2010 23:01 UTC · 39 points · 54 comments · 9 min read · LW link

The Limits of Intelligence and Me: Domain Expertise
ChrisHallquist · 7 Dec 2013 8:23 UTC · 30 points · 79 comments · 5 min read · LW link

The ground of optimization
alexflint · 20 Jun 2020 0:38 UTC · 134 points · 62 comments · 27 min read · LW link

Book Review: The Eureka Factor
drossbucket · 4 Mar 2019 19:47 UTC · 21 points · 2 comments · 13 min read · LW link

Everyday Lessons from High-Dimensional Optimization
johnswentworth · 6 Jun 2020 20:57 UTC · 125 points · 32 comments · 6 min read · LW link

Chapter 24: Machiavellian Intelligence Hypothesis
Eliezer Yudkowsky · 14 Mar 2015 19:00 UTC · 14 points · 0 comments · 14 min read · LW link

Two explanations for variation in human abilities
Matthew Barnett · 25 Oct 2019 22:06 UTC · 76 points · 18 comments · 6 min read · LW link

Is Clickbait Destroying Our General Intelligence?
Eliezer Yudkowsky · 16 Nov 2018 23:06 UTC · 145 points · 57 comments · 5 min read · LW link · 2 nominations · 2 reviews

...Recursion, Magic
Eliezer Yudkowsky · 25 Nov 2008 9:10 UTC · 16 points · 28 comments · 5 min read · LW link

Implicit extortion
paulfchristiano · 13 Apr 2018 16:33 UTC · 74 points · 16 comments · 6 min read · LW link (ai-alignment.com)

How special are human brains among animal brains?
zhukeepa · 1 Apr 2020 1:35 UTC · 73 points · 38 comments · 7 min read · LW link

Three ways that “Sufficiently optimized agents appear coherent” can be false
Wei_Dai · 5 Mar 2019 21:52 UTC · 71 points · 3 comments · 3 min read · LW link

A rant against robots
Lê Nguyên Hoang · 14 Jan 2020 22:03 UTC · 62 points · 7 comments · 5 min read · LW link

Might humans not be the most intelligent animals?
Matthew Barnett · 23 Dec 2019 21:50 UTC · 55 points · 41 comments · 3 min read · LW link

AGI and Friendly AI in the dominant AI textbook
lukeprog · 11 Mar 2011 4:12 UTC · 54 points · 27 comments · 3 min read · LW link

My Best and Worst Mistake
Eliezer Yudkowsky · 16 Sep 2008 0:43 UTC · 50 points · 17 comments · 5 min read · LW link

Another take on agent foundations: formalizing zero-shot reasoning
zhukeepa · 1 Jul 2018 6:12 UTC · 65 points · 20 comments · 12 min read · LW link

My Childhood Role Model
Eliezer Yudkowsky · 23 May 2008 8:51 UTC · 49 points · 63 comments · 5 min read · LW link

If brains are computers, what kind of computers are they? (Dennett transcript)
Ben Pace · 30 Jan 2020 5:07 UTC · 40 points · 15 comments · 27 min read · LW link

Expected Creative Surprises
Eliezer Yudkowsky · 24 Oct 2008 22:22 UTC · 35 points · 43 comments · 4 min read · LW link

Surprised by Brains
Eliezer Yudkowsky · 23 Nov 2008 7:26 UTC · 40 points · 28 comments · 7 min read · LW link

Beyond Smart and Stupid
PhilGoetz · 17 May 2011 6:25 UTC · 29 points · 44 comments · 3 min read · LW link

When Anthropomorphism Became Stupid
Eliezer Yudkowsky · 16 Aug 2008 23:43 UTC · 26 points · 12 comments · 3 min read · LW link

Modest Superintelligences
Wei_Dai · 22 Mar 2012 0:29 UTC · 29 points · 100 comments · 1 min read · LW link

HELP: Do I have a chance at becoming intelligent?
johnbgone · 26 Oct 2010 21:41 UTC · 28 points · 68 comments · 1 min read · LW link

Muehlhauser-Wang Dialogue
lukeprog · 22 Apr 2012 22:40 UTC · 24 points · 288 comments · 12 min read · LW link

[Question] Is Stupidity Expanding? Some Hypotheses.
David_Gross · 15 Oct 2020 3:28 UTC · 67 points · 42 comments · 4 min read · LW link

Economic Definition of Intelligence?
Eliezer Yudkowsky · 29 Oct 2008 19:32 UTC · 10 points · 9 comments · 7 min read · LW link

The One That Isn’t There
Annoyance · 20 Nov 2009 20:10 UTC · 18 points · 6 comments · 3 min read · LW link

Recognizing Intelligence
Eliezer Yudkowsky · 7 Nov 2008 23:22 UTC · 10 points · 30 comments · 4 min read · LW link

Automated intelligence is not AI
KatjaGrace · 1 Nov 2020 23:30 UTC · 53 points · 10 comments · 2 min read · LW link (meteuphoric.com)

Fundamental Philosophical Problems Inherent in AI discourse
AlexSadler · 16 Sep 2018 21:03 UTC · 25 points · 1 comment · 17 min read · LW link

Can we create a function that provably predicts the optimization power of intelligences?
whpearson · 28 May 2009 11:35 UTC · −7 points · 17 comments · 2 min read · LW link

Concrete vs Contextual values
whpearson · 2 Jun 2009 9:47 UTC · −2 points · 32 comments · 3 min read · LW link

How to test your mental performance at the moment?
taw · 23 Nov 2009 18:35 UTC · 22 points · 74 comments · 1 min read · LW link

Intelligence enhancement as existential risk mitigation
Roko · 15 Jun 2009 19:35 UTC · 19 points · 244 comments · 3 min read · LW link

Humans are not automatically strategic
AnnaSalamon · 8 Sep 2010 7:02 UTC · 257 points · 273 comments · 4 min read · LW link