Ontological Crisis

Last edit: 12 Sep 2022 10:23 UTC by NunoSempere

Ontological crisis is a term coined to describe the crisis an agent, human or not, goes through when its model of reality—its ontology—changes.

In the human context, a clear example of an ontological crisis is a believer’s loss of faith in God. Their motivations and goals, which came from a very specific view of life, suddenly become obsolete and perhaps even nonsensical in the face of this new configuration. The person then experiences a deep crisis and goes through the psychological task of reconstructing their preferences according to the new worldview.

When dealing with artificial agents, we, as their creators, are directly interested in their goals. That is, as Peter de Blanc puts it, when we create something we want it to be useful. As such, we have to define the artificial agent’s ontology—but since a fixed ontology severely limits an agent’s usefulness, we also have to think about adaptability. In his 2011 paper, de Blanc proposes a method for mapping old ontologies onto new ones, thus adapting the agent’s utility function and avoiding a crisis.
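De Blanc’s construction is probabilistic and more general, but the core idea—pulling an old utility function back through a map between ontologies—can be sketched as follows. This is an illustrative toy, not his exact method; all state names and the mapping here are invented for the example:

```python
# Illustrative sketch (not de Blanc's exact construction): an agent's utility
# function defined over an old ontology is extended to a new, finer-grained
# ontology by mapping each new state back to the old state it refines.

# Utility function over the old ontology's states.
old_utility = {"solid": 1.0, "liquid": 0.5, "gas": 0.0}

# Hypothetical refinement map: each new-ontology state is treated as a
# special case of some old-ontology state.
refines = {
    "ice": "solid",
    "water": "liquid",
    "steam": "gas",
    "plasma": "gas",  # a genuinely new state, mapped to its nearest old analogue
}

def induced_utility(new_state):
    """Utility over the new ontology, induced by pulling back through the map."""
    return old_utility[refines[new_state]]

print(induced_utility("ice"))     # 1.0
print(induced_utility("plasma"))  # 0.0
```

The hard part, which this sketch glosses over, is constructing the mapping itself: a new state such as "plasma" may have no privileged old-ontology analogue, and a bad mapping is exactly what produces the crisis.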

In the context of an AGI, this crisis could in the worst case pose an existential risk if old preferences and goals continue to be pursued under the new ontology. Another possibility is that the AGI loses all ability to comprehend the world and poses no threat at all. If an AGI re-evaluates its preferences after its ontological crisis, for example in the way mentioned above, very unfriendly behaviors could arise. Depending on the extent of the re-evaluation, the AGI’s changes may be detected and safely fixed; on the other hand, ontology changes could go undetected until something goes wrong. This is why it is in our interest to deeply explore ontological adaptation methods when designing AI.

Notable Posts

Ontological Crisis in Humans

Wei Dai, 18 Dec 2012 17:32 UTC
77 points, 68 comments, 4 min read, LW link

Ontological Crises in Artificial Agents’ Value Systems by Peter de Blanc

jimrandomh, 21 May 2011 1:05 UTC
26 points, 2 comments, 1 min read, LW link

AI ontology crises: an informal typology

Stuart_Armstrong, 13 Oct 2011 10:23 UTC
6 points, 13 comments, 2 min read, LW link

Eutopia is Scary

Eliezer Yudkowsky, 12 Jan 2009 5:28 UTC
65 points, 126 comments, 5 min read, LW link

The Blue-Minimizing Robot

Scott Alexander, 4 Jul 2011 22:26 UTC
318 points, 161 comments, 4 min read, LW link

Mesatranslation and Metatranslation

jdp, 9 Nov 2022 18:46 UTC
25 points, 4 comments, 11 min read, LW link

Adding Up To Normality

orthonormal, 24 Mar 2020 21:53 UTC
84 points, 22 comments, 3 min read, LW link

[Stub] Ontological crisis = out of environment behaviour?

Stuart_Armstrong, 13 Jan 2016 15:10 UTC
15 points, 4 comments, 1 min read, LW link

Thoughts on “Ontological Crises”

thomascolthurst, 31 Oct 2018 2:39 UTC
20 points, 1 comment, 4 min read, LW link

Extending the stated objectives

Stuart_Armstrong, 19 Jan 2016 15:46 UTC
0 points, 1 comment, 7 min read, LW link

Counterfactuals are Confusing because of an Ontological Shift

Chris_Leong, 5 Aug 2022 19:03 UTC
17 points, 35 comments, 2 min read, LW link

Yet more UFO Betting: Put Up or Shut Up

MoreRatsWrongReUAP, 8 Aug 2023 17:50 UTC
10 points, 18 comments, 1 min read, LW link

GPT-2 XL’s capacity for coherence and ontology clustering

MiguelDev, 30 Oct 2023 9:24 UTC
6 points, 2 comments, 41 min read, LW link

UFO Betting: Put Up or Shut Up

RatsWrongAboutUAP, 13 Jun 2023 4:05 UTC
230 points, 207 comments, 2 min read, LW link

The UAP Disclosure Act of 2023 and its implications

andeslodes, 21 Jul 2023 17:21 UTC
36 points, 47 comments, 20 min read, LW link

The Binding of Isaac & Transparent Newcomb’s Problem

suvjectibity, 22 Feb 2024 18:56 UTC
−11 points, 0 comments, 10 min read, LW link

Ontological Reasons to be Optimistic About AI

goktu, 5 Sep 2023 8:10 UTC
−11 points, 1 comment, 3 min read, LW link