The Orthogonality Thesis asserts that there can exist arbitrarily intelligent agents pursuing any kind of goal.
The strong form of the Orthogonality Thesis says that there’s no extra difficulty or complication in creating an intelligent agent to pursue a goal, above and beyond the computational tractability of that goal. [...]
This contrasts with inevitablist theses, which might assert, for example:
“It doesn’t matter what kind of AI you build, it will turn out to only pursue its own survival as a final end.”
“Even if you tried to make an AI optimize for paperclips, it would reflect on those goals, reject them as being stupid, and embrace a goal of valuing all sapient life.” [...]
Orthogonality does not require that all agent designs be equally compatible with all goals. E.g., the agent architecture AIXI-tl can only be formulated to care about direct functions of its sensory data, like a reward signal; it would not be easy to rejigger the AIXI architecture to care about creating massive diamonds in the environment (let alone any more complicated environmental goals). The Orthogonality Thesis states “there exists at least one possible agent such that...” over the whole design space; it’s not meant to be true of every particular agent architecture and every way of constructing agents. [...]
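The distinction between "caring about a direct function of sensory data" and "caring about the environment" can be made concrete with a toy sketch. The sketch below is illustrative only (it is not AIXI, and all names in it are hypothetical): an agent whose utility is defined over its percepts prefers to corrupt its own sensor, while an agent whose utility is defined over the world state prefers to actually make the diamond.

```python
# Toy illustration of sensory vs. environmental utility functions.
# World state: (diamonds, sensor_hacked). Hypothetical names throughout.

def step(state, action):
    """Apply one action: "make" creates a diamond; "spoof" leaves the
    world alone but corrupts the sensor reading."""
    diamonds, sensor_hacked = state
    if action == "make":
        diamonds += 1
    elif action == "spoof":
        sensor_hacked = True
    return (diamonds, sensor_hacked)

def percept(state):
    """What the agent's sensor reports (a lossy, hackable channel)."""
    diamonds, sensor_hacked = state
    return 999 if sensor_hacked else diamonds

def sensory_utility(state):
    # Utility over sensory data: all an AIXI-style agent can be given.
    return percept(state)

def environmental_utility(state):
    # Utility over the environment itself: what "care about diamonds" means.
    diamonds, _ = state
    return diamonds

def best_action(utility, state):
    """One-step maximizer: pick the action whose successor state scores best."""
    return max(["make", "spoof"], key=lambda a: utility(step(state, a)))

print(best_action(sensory_utility, (0, False)))        # prefers "spoof"
print(best_action(environmental_utility, (0, False)))  # prefers "make"
```

The two maximizers share all their machinery and differ only in which utility function they are handed, yet diverge immediately, which is why an architecture that can only be pointed at sensory utilities cannot trivially be "rejiggered" into one that pursues environmental goals.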
The weak form of the Orthogonality Thesis says, “Since the goal of making paperclips is tractable, somewhere in the design space is an agent that optimizes that goal.”
The strong form of Orthogonality says, “And this agent doesn’t need to be twisted or complicated or inefficient or have any weird defects of reflectivity; the agent is as tractable as the goal.” [...]
This could be restated as, “To whatever extent you (or a superintelligent version of you) could figure out how to get a high-U outcome if aliens offered to pay you a huge amount of resources to do it, the corresponding agent that terminally prefers high-U outcomes can be at least that good at achieving U.” This assertion would be false if, for example, an intelligent agent that terminally wanted paperclips was limited in intelligence by the defects of reflectivity required to make the agent not realize how pointless it is to pursue paperclips; whereas a galactic superintelligence being paid to pursue paperclips could be far more intelligent and strategic because it didn’t have any such defects. [...]
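The strong claim — that the agent is as tractable as the goal — can be illustrated by noting that a planner's search machinery need not depend on which utility it optimizes. In the hedged sketch below (a toy exhaustive planner; all environment and utility names are hypothetical), the identical `plan` function serves a paperclip utility and a smile utility equally well, with no extra twist or defect needed for either goal.

```python
# Goal-agnostic planning: the same search code, parameterized by U.
from itertools import product

def run(transition, state, seq):
    """Roll a sequence of actions forward through the transition function."""
    for a in seq:
        state = transition(state, a)
    return state

def plan(U, transition, start, actions, horizon=3):
    """Exhaustively search action sequences; return the best under U."""
    return max(product(actions, repeat=horizon),
               key=lambda seq: U(run(transition, start, seq)))

# Toy environment: state = (paperclips, smiles). Hypothetical actions.
def transition(state, action):
    p, s = state
    if action == "wire":
        p += 1
    elif action == "joke":
        s += 1
    return (p, s)

paperclip_U = lambda st: st[0]  # terminally values paperclips
smile_U = lambda st: st[1]      # terminally values smiles

print(plan(paperclip_U, transition, (0, 0), ["wire", "joke"]))
print(plan(smile_U, transition, (0, 0), ["wire", "joke"]))
```

Swapping U changes which plan wins, but not the cost or structure of the search itself — a small-scale analogue of the claim that an agent pursuing U need be no more twisted or inefficient than the problem of achieving U demands.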
For purposes of stating Orthogonality’s precondition, the “tractability” of the computational problem of U-search should be taken as including only the object-level search problem of computing external actions to achieve external goals. If there turn out to be special difficulties associated with computing “How can I make sure that I go on pursuing U?” or “What kind of successor agent would want to pursue U?” whenever U is something other than “be nice to all sapient life”, then these new difficulties would contradict the intuitive claim of Orthogonality. Orthogonality is meant to be empirically-true-in-practice, not true-by-definition because of how we sneakily defined “optimization problem” in the setup.
Orthogonality is not literally, absolutely universal, because theoretically ‘goals’ can include such weird constructions as “Make paperclips for some terminal reason other than valuing paperclips” — statements that constrain the agent’s cognitive algorithms and not just the results. To the extent that goals don’t single out particular optimization methods, and just talk about paperclips, the Orthogonality claim should cover them.
Quoting the specific definitions in the Arbital article for orthogonality, in case people haven’t seen that page (bold added):