[Stub] Ontological crisis = out-of-environment behaviour?

One problem with AI is the possibility of ontological crises: an AI discovering that its fundamental model of reality is flawed, and being unable to cope safely with that change. Another problem is out-of-environment behaviour: an AI that has been trained to behave very well in a specific training environment messes up when introduced to a more general environment.

It suddenly occurred to me that these might in fact be the same problem in disguise. In both cases, the AI has developed certain ways of behaving in reaction to certain regular features of its environment. And suddenly it is placed in a situation where these regular features are absent: either because it realised that these features are actually very different from what it thought (ontological crisis), or because the environment is different and no longer supports the same regularities (out-of-environment behaviour).
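As a toy illustration of the shared failure mode (a hypothetical sketch of my own, not from any real system; the scenario, names, and "training rule" are all made up): an agent learns a rule in an environment where two features always co-occur, then misfires in any setting where that regularity fails. Whether the regularity was broken by a new environment or by a corrected world-model, the downstream error looks the same.

```python
# Toy sketch: an "agent" learns a shortcut from an environment where
# colour and shape are perfectly correlated, then misfires when that
# regularity is absent.

def train(observations):
    # observations: list of ((colour, shape), reward) pairs.
    # The agent memorises which colours predicted reward; it never needs
    # to look at shape, because in training colour determines shape.
    rewarded_colours = {colour for (colour, _shape), r in observations if r > 0}
    return lambda colour, shape: colour in rewarded_colours

# Training environment: every red object is a berry, every grey one a rock.
train_env = [(("red", "berry"), 1), (("grey", "rock"), 0)] * 10
policy = train(train_env)

# Within the training regularity, the shortcut is flawless.
assert policy("red", "berry") is True
assert policy("grey", "rock") is False

# Deployment (or a revised world-model) breaks the regularity:
# red rocks exist. The policy still says "eat it".
print(policy("red", "rock"))  # True, though a red rock is not food
```

The same misfire occurs whether we describe the red rock as a new environment (out-of-environment behaviour) or as the agent discovering that "red things" was never a natural category to begin with (ontological crisis): in both framings, the learned rule depended on a regularity that no longer holds.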

In a sense, both these errors may be seen as imperfect extrapolation from partial training data.