Most current AI alignment work is pretty abstract and theoretical, for two reasons.
FWIW, this is not obvious to me (or at least depends a lot on what you mean by ‘AI alignment’). Work at places like OpenAI, CHAI, and DeepMind tends to be relatively concrete.
Also, if you count work done by people not publicly identified as motivated by existential risk, I think the concrete-to-abstract ratio increases further.