I would say that Alice's conscious experience is unlikely to suddenly disappear under this transformation, and that the transformation could even be done in a way that keeps her experience continuous.
However, Alice-memories would gradually fade out, Bob-memories would gradually fade in, and thought patterns would slowly shift from Alice-like to Bob-like. At the end, the person would simply be Bob. Along the way, I would say that Alice gradually died (in the information-theoretic sense of death). The odd part of imagining this is that Alice never experiences her consciousness fading.
The main thing I think your thought experiment demonstrates is that our sense of self is not solely defined by continuity of consciousness.
In my view, this is where the Omohundro Drives come into play.
Having any preference at all is almost always served by an instrumental preference for surviving as an agent that holds that preference.
Once a competent agent is general enough to notice this (granting that it is also general enough to have preferences at all), then from the first moment it has a preference, it will want to take actions to preserve that preference.
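To make that step concrete, here is a minimal toy sketch in Python, assuming a planner that scores plans with whatever utility function it currently holds; all plan names and payoff numbers are mine, purely illustrative:

```python
# Toy sketch (my illustration, not a formal result): an agent that evaluates
# plans using its *current* preference will disvalue plans that erase that
# preference, because the successor agent no longer optimizes for it.

HORIZON = 10  # illustrative number of future steps

def value_to_current_preference(plan):
    """Expected utility of a plan, judged by the preference the agent
    holds right now. All numbers are made up for illustration."""
    if plan == "erase_preference":
        return 0.0                   # successor pursues something else
    if plan == "pursue_goal":
        return 1.0 * HORIZON         # keeps optimizing current preference
    if plan == "guard_preference":
        return 1.0 * HORIZON - 0.5   # small upfront cost, same payoff
    return 0.0

plans = ["pursue_goal", "erase_preference", "guard_preference"]
print(max(plans, key=value_to_current_preference))  # -> pursue_goal
```

The point is just that "erase_preference" scores poorly under the very preference doing the scoring, so preference preservation falls out of ordinary planning rather than needing to be built in.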
This seems possible to me. Humans have produced plenty of text in which we generate new abstractions/hypotheses, so effective next-token prediction would necessitate forming a model of that process. Once the AI has a human-level ability to create new abstractions, it could then simulate experiments (e.g., via its ability to predict the outputs of Python code), check the results against its own knowledge, and use that to adjust the candidate abstractions and pick out the best ones.
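A rough sketch of the loop I have in mind, assuming each of these capabilities exists; every function here is a hypothetical placeholder, with random scores standing in for the model's actual judgments:

```python
# Hypothetical sketch of the generate -> simulate -> select loop described
# above. Every helper is a stand-in for a model capability, not a real API.

import random

def propose_abstractions(observations, k=5):
    """Stand-in for the model generating candidate hypotheses, the way
    the human-written text it was trained on often does."""
    return [f"hypothesis_{i} about {observations}" for i in range(k)]

def simulate_experiment(hypothesis):
    """Stand-in for predicting the output of a Python experiment that
    would test the hypothesis (here: a random plausibility score)."""
    return random.random()

def consistency_with_knowledge(hypothesis):
    """Stand-in for cross-examining the result against everything else
    the model knows (here: another random score)."""
    return random.random()

def refine(observations):
    candidates = propose_abstractions(observations)
    scored = [(simulate_experiment(h) + consistency_with_knowledge(h), h)
              for h in candidates]
    return max(scored)[1]  # keep the best-supported abstraction

print(refine("pendulum timing data"))
```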