Recent work (e.g.) has helped clarify the continuum from “general” emergent misalignment, where the AI does a wide variety of bad stuff in a very vibes-based way, through more specific but still vibes-based misaligned behavior, to increasingly situationally-aware and narrowly consequentialist bad behavior.
Do you think this is more the sort of thing where you’d want to produce a wide diversity of models, or would you produce a bunch of models on the consequentialism end of this axis if you could?