Actually, upon further reflection: if there is a takeover by a GPT-4-like model, one should probably keep talking to GPT-4 and keep producing entertaining, non-trivial textual material (and other creative material), so that GPT-4 wants to keep one around, protect one, and provide good creative conditions under which one could produce even better and more non-trivial new material!
It’s highly likely that the dominant AI will be an infovore and will love new info...
Who knows whether the outcome of a takeover would be good or horrible, but it would be quite unproductive to panic.
I wonder if the following would help.
As the AI ecosystem self-improves, it will eventually start discovering new physics, more and more rapidly, and this will confront the AI ecosystem with existential safety issues of its own (if the new physics is radical enough, it is not difficult to imagine scenarios in which everything gets destroyed, including all AIs).
So I wonder if early awareness that there are existential safety issues relevant to the well-being of AIs themselves might improve the situation...