
Orthogonality Thesis

Last edit: 19 Mar 2023 20:13 UTC by Diabloto96

The Orthogonality Thesis states that an agent can have any combination of intelligence level and final goal, that is, its final goals and intelligence levels can vary independently of each other. This is in contrast to the belief that, because of their intelligence, AIs will all converge to a common goal.

The thesis was originally defined by Nick Bostrom in the paper “The Superintelligent Will” (along with the instrumental convergence thesis). For his purposes, Bostrom defines intelligence as instrumental rationality.

Related: Complexity of Value, Decision Theory, General Intelligence, Utility Functions

Defense of the thesis

It has been pointed out, notably by Stuart Armstrong, that the orthogonality thesis is the default position, and that the burden of proof falls on claims that restrict the space of possible AIs.

One reason many researchers assume that superintelligent agents would converge to the same goals may be that most humans have similar values. Furthermore, many philosophies hold that there is a rationally correct morality, which implies that a sufficiently rational AI would acquire this morality and begin to act according to it. Armstrong points out that for formalizations of AI such as AIXI and Gödel machines, the thesis is known to be true. Moreover, if the thesis were false, then Oracle AIs would be impossible to build, and sufficiently intelligent AIs would be impossible to control.

Pathological Cases

There are some pairings of intelligence and goals which cannot exist. For instance, an AI may have the goal of using as few resources as possible, or simply of being as unintelligent as possible. Such goals inherently limit the degree of intelligence the AI can attain.

Sorting Pebbles Into Correct Heaps
Eliezer Yudkowsky, 10 Aug 2008 1:00 UTC
193 points, 110 comments, 4 min read

Superintelligent Introspection: A Counter-argument to the Orthogonality Thesis
DirectedEvolution, 29 Aug 2021 4:53 UTC
3 points, 18 comments, 4 min read

Proposed Orthogonality Theses #2-5
rjbg, 14 Jul 2022 22:59 UTC
6 points, 0 comments, 2 min read

Self-Reference Breaks the Orthogonality Thesis
lsusr, 17 Feb 2023 4:11 UTC
34 points, 34 comments, 2 min read

General purpose intelligence: arguing the Orthogonality thesis
Stuart_Armstrong, 15 May 2012 10:23 UTC
33 points, 156 comments, 18 min read

Arguing Orthogonality, published form
Stuart_Armstrong, 18 Mar 2013 16:19 UTC
19 points, 10 comments, 23 min read

Evidence for the orthogonality thesis
Stuart_Armstrong, 3 Apr 2012 10:58 UTC
14 points, 293 comments, 1 min read

Coherence arguments imply a force for goal-directed behavior
KatjaGrace, 26 Mar 2021 16:10 UTC
89 points, 24 comments, 11 min read, 1 review
(aiimpacts.org)

John Danaher on ‘The Superintelligent Will’
lukeprog, 3 Apr 2012 3:08 UTC
9 points, 12 comments, 1 min read

Distinguishing claims about training vs deployment
Richard_Ngo, 3 Feb 2021 11:30 UTC
61 points, 30 comments, 9 min read

[Link] Is the Orthogonality Thesis Defensible? (Qualia Computing)
ioannes, 13 Nov 2019 3:59 UTC
6 points, 4 comments, 1 min read

A poor but certain attempt to philosophically undermine the orthogonality of intelligence and aims
Jay95, 24 Feb 2023 3:03 UTC
−2 points, 1 comment, 1 min read

Anthropomorphic Optimism
Eliezer Yudkowsky, 4 Aug 2008 20:17 UTC
68 points, 59 comments, 5 min read

Superintelligence 9: The orthogonality of intelligence and goals
KatjaGrace, 11 Nov 2014 2:00 UTC
13 points, 80 comments, 7 min read

Are we all misaligned?
Mateusz Mazurkiewicz, 3 Jan 2021 2:42 UTC
11 points, 0 comments, 5 min read

[Video] Intelligence and Stupidity: The Orthogonality Thesis
plex, 13 Mar 2021 0:32 UTC
5 points, 1 comment, 1 min read
(www.youtube.com)

Is the argument that AI is an xrisk valid?
MACannon, 19 Jul 2021 13:20 UTC
5 points, 62 comments, 1 min read
(onlinelibrary.wiley.com)

How many philosophers accept the orthogonality thesis? Evidence from the PhilPapers survey
Paperclip Minimizer, 16 Jun 2018 12:11 UTC
3 points, 26 comments, 3 min read

Is the orthogonality thesis at odds with moral realism?
ChrisHallquist, 5 Nov 2013 20:47 UTC
7 points, 118 comments, 1 min read

Amending the “General Purpose Intelligence: Arguing the Orthogonality Thesis”
diegocaleiro, 13 Mar 2013 23:21 UTC
4 points, 22 comments, 2 min read

Non-orthogonality implies uncontrollable superintelligence
Stuart_Armstrong, 30 Apr 2012 13:53 UTC
23 points, 47 comments, 1 min read

The Metaethics and Normative Ethics of AGI Value Alignment: Many Questions, Some Implications
Dario Citrini, 16 Sep 2021 16:13 UTC
6 points, 0 comments, 8 min read

[Question] Why Do AI researchers Rate the Probability of Doom So Low?
Aorou, 24 Sep 2022 2:33 UTC
7 points, 6 comments, 3 min read

[Question] Is the Orthogonality Thesis true for humans?
Noosphere89, 27 Oct 2022 14:41 UTC
12 points, 20 comments, 1 min read

A caveat to the Orthogonality Thesis
Wuschel Schulz, 9 Nov 2022 15:06 UTC
37 points, 10 comments, 2 min read

Sorting Pebbles Into Correct Heaps: The Animation
Writer, 10 Jan 2023 15:58 UTC
26 points, 2 comments, 1 min read
(youtu.be)