One worry I have about my current AI safety research (empirical mechanistic anomaly detection and interpretability) is that now is the wrong time to work on it. A lot of this work seems pretty well-suited to (partial) automation by future AI. And it also seems quite plausible to me that we won’t strictly need this type of work to safely use the early AGI systems that could automate a lot of it. If both of these are true, that seems like a good argument for postponing this type of work until AI can speed it up a lot more.
Under this view, arguably the better things to do right now (within technical AI safety) are:
1. working on less speculative techniques that can help us safely use those early AGI systems
2. working on things that seem less likely to profit from early AI automation but will be important for aligning later AI systems
An example of 1. would be control evals as described by Redwood. Within 2., the ideal case would be doing work now that would be hard to safely automate, but that (once done) will enable additional safety work that can be automated. For example, maybe it’s hard to use AI to come up with the right notions of “good explanations” in interpretability, but once you have things like causal scrubbing/causal abstraction, you can safely use AI to find good interpretations under those definitions. I would be excited to have more agendas that are both ambitious and likely to profit a lot from early AI automation.
(Of course it’s also possible to do work in 2. on the assumption that it’s never going to be safely automatable without having done that work first.)
Two important counter-considerations to this whole story:
1. It’s hard to do this kind of agenda-development or conceptual research in a vacuum. So doing some amount of concrete empirical work right now might be good even if we could automate it later (because we might need it now to support the more foundational work).
   However, the type and amount of empirical work to do presumably looks quite different depending on whether it’s the main product or in support of some other work.
2. I don’t place that much trust in my forecasts about which types of research will and won’t be automatable early on. So perhaps we should have a portfolio right now that doesn’t look extremely different from the portfolio of research we’d want to do ignoring the possibility of future AI automation.
   But we can probably still say something about what’s more or less likely to be automated early on, so that seems like it should shift the portfolio to some extent.
Doing stuff manually might provide helpful intuitions/experience for automating it?
Yeah, agreed. Though I think the point that “the type and amount of empirical work to do presumably looks quite different depending on whether it’s the main product or in support of some other work” applies to that as well.