
Orthogonality Thesis


The Orthogonality Thesis states that an artificial intelligence can combine any level of intelligence with any final goal: the two can vary independently of each other. This contrasts with the belief that, because of their intelligence, all AIs will converge on a common goal. The thesis was originally defined by Nick Bostrom in the paper "The Superintelligent Will" (along with the instrumental convergence thesis). For his purposes, Bostrom defines intelligence as instrumental rationality.
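
To make the independence claim concrete, here is a minimal toy sketch in Python (hypothetical code, not from Bostrom's paper): a brute-force planner whose capability is set by a search-depth parameter and whose goal is whatever utility function is passed in. Any goal can be paired with any capability level.

# Toy sketch (hypothetical, not from Bostrom's paper): capability
# (search depth) and goal (utility function) are independent knobs.
from itertools import product

def plan(state, actions, transition, utility, depth):
    # Exhaustively search all action sequences of length `depth` and
    # return one that maximizes `utility` of the resulting state.
    best_seq, best_u = None, float("-inf")
    for seq in product(actions, repeat=depth):
        s = state
        for a in seq:
            s = transition(s, a)
        if utility(s) > best_u:
            best_seq, best_u = seq, utility(s)
    return best_seq

def maximize(s):      # goal A: make the number as large as possible
    return s

def hit_three(s):     # goal B: land exactly on 3, an arbitrary target
    return -abs(s - 3)

actions = [-1, 0, 1]                  # a trivial integer environment
transition = lambda s, a: s + a

for utility in (maximize, hit_three):
    for depth in (1, 4):              # weak vs. stronger optimizer
        print(utility.__name__, depth,
              plan(0, actions, transition, utility, depth))

The same search machinery serves either goal: raising depth makes the agent better at whatever utility rewards, without pulling the two goals toward one another.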

Related: Complexity of Value, Decision Theory, General Intelligence, Utility Functions

Defense of the thesis

It has been pointed out that the orthogonality thesis is the default position, and that the burden of proof lies on claims that restrict the space of possible AIs; Stuart Armstrong in particular has argued for it at length.

One reason many researchers assume that superintelligences will converge on the same goals may be that most humans hold similar values. Furthermore, many philosophies hold that there is a rationally correct morality, which would imply that a sufficiently rational AI will acquire this morality and begin to act according to it. Armstrong points out that for formalizations of AI such as AIXI and Gödel machines, the thesis is known to be true. Moreover, if the thesis were false, then Oracle AIs would be impossible to build, and all sufficiently intelligent AIs would be impossible to control.
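
To see why the thesis holds by construction for AIXI, recall one standard statement of Hutter's defining equation (notation follows Hutter: m is the horizon, U a universal Turing machine, q an environment program, \ell(q) its length):

\[
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[\, r_k + \cdots + r_m \,\bigr] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]

The optimization machinery, namely the expectimax search and the Solomonoff-style prior 2^{-\ell(q)} over environment programs, makes no reference to what the rewards r_i stand for; the goal enters only through the reward stream the environment emits. Any reward stream can be paired with the same machinery, which is exactly the orthogonality claim for this formalism.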

Pathological Cases

There are some pairings of intelligence and goals that cannot exist. For instance, an AI might have the goal of using as few resources as possible, or simply of being as unintelligent as possible. Such goals inherently limit the degree of intelligence the AI will reach or deploy.
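
A toy way to see the self-limiting effect (hypothetical sketch, in the spirit of the planner above): give the agent a utility function that penalizes the very capability it could deploy.

# Pathological pairing (hypothetical sketch): a goal that penalizes
# capability itself, here stood in for by search depth.
def frugal_utility(end_state, depth_used):
    return -depth_used            # "use as little computation as possible"

# An agent free to choose its own search depth picks the minimum,
# since every extra unit of capability strictly lowers its utility.
best_depth = max(range(1, 5), key=lambda d: frugal_utility(None, d))
print(best_depth)                 # -> 1

Whatever raw optimization power such an agent has available, its goal ensures it never deploys more than the minimum, which is the sense in which high intelligence cannot coexist with this goal.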


Sorting Pebbles Into Correct Heaps

Eliezer Yudkowsky, 10 Aug 2008 1:00 UTC
151 points
108 comments, 4 min read, LW link

Arguing Orthogonality, published form

Stuart_Armstrong, 18 Mar 2013 16:19 UTC
19 points
10 comments, 23 min read, LW link

Evidence for the orthogonality thesis

Stuart_Armstrong, 3 Apr 2012 10:58 UTC
14 points
293 comments, 1 min read, LW link

General purpose intelligence: arguing the Orthogonality thesis

Stuart_Armstrong, 15 May 2012 10:23 UTC
32 points
156 comments, 18 min read, LW link

Coherence arguments imply a force for goal-directed behavior

KatjaGrace, 26 Mar 2021 16:10 UTC
84 points
20 comments, 14 min read, LW link
(aiimpacts.org)

Anthropomorphic Optimism

Eliezer Yudkowsky, 4 Aug 2008 20:17 UTC
56 points
58 comments, 5 min read, LW link

Superintelligence 9: The orthogonality of intelligence and goals

KatjaGrace, 11 Nov 2014 2:00 UTC
13 points
80 comments, 7 min read, LW link

Are we all misaligned?

Mateusz Mazurkiewicz, 3 Jan 2021 2:42 UTC
10 points
0 comments, 5 min read, LW link

[Video] Intelligence and Stupidity: The Orthogonality Thesis

plex, 13 Mar 2021 0:32 UTC
5 points
1 comment, 1 min read, LW link
(www.youtube.com)

Is the argument that AI is an xrisk valid?

MACannon, 19 Jul 2021 13:20 UTC
4 points
54 comments, 1 min read, LW link
(onlinelibrary.wiley.com)