Importantly, we think we often have to make progress on AI safety and capabilities together. It’s a false dichotomy to talk about them separately; they are correlated in many ways. Our best safety work has come from working with our most capable models.
This sounds very sensible. Does anyone know what this ‘best safety work’ is that he’s referring to?