tskoro
Karma: 10
How about existential alignment/existential safety, or x-alignment/x-safety?
I think AI x-safety is probably distinguishable enough from XAI that there wouldn't be much confusion. It also does not seem very susceptible to safetywashing, is easy to say, and has the counterpart of AI x-risk, which is already in common use.
What about complexity as a necessary condition for intelligence? The teacup does not possess this, but arguably the yogi does (at the very least he must feed himself, interact socially to some extent, etc.). Intelligent creatures have fairly large brains (millions or billions of neurons) with a high degree of complexity in their internal dynamics, and this corresponds to complexity in their actions.