we can say confidently that an actually-transformative AI system (aligned or not) will be doing something that is at least roughly coherently consequentialist.
I don’t think we can confidently say that. If takeoff looks more like a Cambrian explosion than like a singleton (and that is how I would bet), it would certainly be transformative, but the transformation would not be the result of any particular agent deciding which world state is desirable and taking actions intended to bring that state about.