I’m not sure if you’re arguing that this is a good world in which to think about alignment.
I am not arguing this. Quoting my reply to ofer:
I sometimes bump into reasoning that feels like "instrumental convergence, smart AI, and humans exist in the universe → bad things happen to us / the AI finds a way to hurt us"; I think this is usually true, but not necessarily true, and so this extreme example illustrates how the implication can fail.
(Edited post to clarify)