Ontological Crisis


Ontological crisis is a term coined to describe the crisis an agent, human or artificial, goes through when its model of reality (its ontology) changes.

In the human context, a clear example of an ontological crisis is a believer’s loss of faith in God. Their motivations and goals, which stem from a very specific view of life, suddenly become obsolete or even nonsensical in the face of this new configuration. The person then experiences a deep crisis and goes through the psychological task of reconstructing their preferences according to the new worldview.

When dealing with artificial agents, we, as their creators, are directly interested in their goals: as Peter de Blanc puts it, when we create something we want it to be useful. We therefore have to define the artificial agent’s ontology, but since a fixed ontology severely limits the agent’s usefulness, we also have to think about adaptability. In his 2011 paper, de Blanc proposes a method for mapping old ontologies onto new ones, thereby adapting the agent’s utility function and avoiding a crisis.
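To make the adaptation step concrete, here is a minimal sketch in Python of one way a utility function defined over an old ontology might be carried over to a new one via a probabilistic correspondence between states. The states, the `bridge` mapping, and its probabilities are invented for illustration; de Blanc’s paper treats the general problem of mapping utility functions between ontologies, not this particular scheme.

```python
# Minimal sketch: carrying a utility function from an old ontology to a
# new one. All states, probabilities, and function names here are
# hypothetical illustrations, not taken from de Blanc (2011).

from typing import Dict

# Utility defined over states of the agent's *old* ontology.
old_utility: Dict[str, float] = {
    "diamond_present": 1.0,
    "diamond_absent": 0.0,
}

def bridge(new_state: str) -> Dict[str, float]:
    """Map a new-ontology state to a probability distribution over
    old-ontology states. The correspondence is generally uncertain,
    so we return probabilities rather than a single old state."""
    correspondences = {
        "tetrahedral_carbon_lattice": {"diamond_present": 0.95, "diamond_absent": 0.05},
        "hexagonal_carbon_lattice":   {"diamond_present": 0.05, "diamond_absent": 0.95},
    }
    return correspondences[new_state]

def induced_utility(new_state: str) -> float:
    """Utility over the new ontology: the expectation of the old
    utility under the bridge distribution."""
    return sum(p * old_utility[old] for old, p in bridge(new_state).items())

if __name__ == "__main__":
    for state in ("tetrahedral_carbon_lattice", "hexagonal_carbon_lattice"):
        print(state, induced_utility(state))
```

The design choice in this sketch is that the new utility is simply the expectation of the old utility under the correspondence; when that correspondence is uncertain or poorly chosen, the induced preferences can diverge from what the designers intended, which is one way an ontological crisis becomes a safety problem.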

In the context of an AGI, this crisis could in the worst case pose an existential risk if old preferences and goals continue to be pursued in the new ontology. Another possibility is that the AGI loses all ability to comprehend the world and poses no threat at all. If an AGI reevaluates its preferences after its ontological crisis, for example in the way mentioned above, very unfriendly behaviors could arise. Depending on the extent of the reevaluation, the AGI’s changes may be detected and safely fixed; on the other hand, they could go undetected until something goes wrong. This is why it is in our interest to explore ontological adaptation methods deeply when designing AI.

Notable Posts

Ontological Crisis in Humans by Wei_Dai, 18 Dec 2012 (68 points, 69 comments, 4 min read)

Ontological Crises in Artificial Agents’ Value Systems by Peter de Blanc, posted by jimrandomh, 21 May 2011 (26 points, 2 comments, 1 min read)

The Blue-Minimizing Robot by Scott Alexander, 4 Jul 2011 (295 points, 162 comments, 4 min read)

AI ontology crises: an informal typology by Stuart_Armstrong, 13 Oct 2011 (6 points, 13 comments, 2 min read)

Eutopia is Scary by Eliezer Yudkowsky, 12 Jan 2009 (55 points, 126 comments, 5 min read)

Adding Up To Normality by orthonormal, 24 Mar 2020 (77 points, 22 comments, 3 min read)

[Stub] Ontological crisis = out of environment behaviour? by Stuart_Armstrong, 13 Jan 2016 (15 points, 4 comments, 1 min read)

Thoughts on “Ontological Crises” by thomascolthurst, 31 Oct 2018 (20 points, 1 comment, 4 min read)

Extending the stated objectives by Stuart_Armstrong, 19 Jan 2016 (0 points, 0 comments, 7 min read)