I’m getting the impression that “consciousness” is inherently not well defined; that is, there is no singular thing we can point to that will meaningfully determine whether or not something is “conscious”.
In this sense, consciousness might be a red herring. A similar but more concrete question worth asking: what behaviours would an AI agent have to exhibit for you to want it to be granted fundamental rights/autonomy? Or otherwise for it to be intrinsically unethical to create and run an instance of it?
I suspect that “enlightenment” is probably a bundle of different things rather than one discrete thing, and maybe what it means depends on the culture and even on how an individual relates to the world. This is based on the heuristic that when you dig into the nature of mental states, they tend not to fall into neat categories that are the same from person to person.
However, there are people alive today who claim to be “awakened”, who were certainly already self-aware, and who still describe a dramatic change in their perception of the world. The descriptions tend to fall along similar lines, and include:
A dissolving of the boundary between “self” and “other”.
A sense of fundamental peace/ok-ness that is independent of current thoughts and emotions.
The ability to rest in some space that is “beyond thought” (or something along those lines; it sounds like a sazen).
A natural and automatic removal of anxiety, fear and other negative emotions.
Unlearning something about thought and the self-model that most people implicitly take to be true without realising it.
This sounds like there is something more going on than gaining consciousness, and in some ways it points in the opposite direction: it is often described as more of an “unlearning” than a learning.