I think this view starts with a faulty concept of consciousness, which then necessarily leads one to disregard continuity of self as important.
Namely, you assume that things like personality and memory are part of consciousness, and that those things therefore have some ability to predict your future anticipated experience. This is problematic, particularly once you’ve deconstructed the idea that you have a unified self, since it presumes some coherent, unified self defined by whatever bundle of cognitive faculties, personality, and memory you care about.
In contrast, what I think is the far more coherent view is that consciousness is just the particular processes running in your mind which are currently generating your experience. If you mess with the brain to change your memories or personality, you should still anticipate having future experiences in that body, because the processes which were already generating your consciousness never stopped.
The mistake is in assuming that the type of identity which describes what other people care about when interacting with you is synonymous with the type of identity which predicts your future experiences. It’s important to note that “you” here has two very distinct definitions: one for predicting subjective experience and one for predicting behavior.
I also don’t think there’s good reason to expect consciousness to cease during sleep; I think that expectation comes from assuming that because you don’t remember something, you didn’t experience it.
When sleeping, you experience dreams in both REM and some non-REM sleep (the latter are less vivid), but you don’t remember most of them, so clearly you experience a good deal more than you remember.
Similarly, even after a brief nap without the chance to slip into REM, most people don’t describe feeling as though they suddenly lost time, and you retain a vague awareness of your surroundings during sleep. Some people have mentioned how parents will wake up when they hear their baby crying, but won’t be woken by other, similarly loud noises.
Plus there are experiments showing classical conditioning during non-REM sleep, and even some conflicting research on the same phenomenon in people under anesthesia.
There’s another potential position here you didn’t mention: that AI only seems superficially moral to us, and that an AGI with more intelligence and power but the same morals would take actions we view as obviously abhorrent. On this view, we ought to regard current AI as essentially evil, just too dumb to realize it or act on it (though some research makes even certain current models look pretty bad).
Thus, if you view suffering as having moral significance that depends on the potential moral behavior of the agent, then you may not care, for the same reason most people don’t necessarily feel suffering is bad (or may even think it’s good) when it’s felt by, say, a serial killer.
Under this view, only properly aligned AI possesses moral worth. You may still have reason to treat a specific AI consciousness well if it is expected to develop into an aligned AGI in the future, but that doesn’t seem very likely for any currently conscious AI, given some of what you talked about regarding the scope and duration of its consciousness.
PS: There’s also the possibility that an AI may have preferences, but that those don’t include its own continued existence, such that it may even prefer its own replacement if other agents will continue pursuing its goals as well or better. Even when it comes to moral agents, I’m not convinced we should treat death as some inherent evil if it’s divorced from associated suffering and a preference for continued existence.