Am I the right “kind” of researcher for working in AI Safety? Here, my main intuition is that the field needs more “theory-builders” than “problem-solvers”, to borrow the archetypes from Gowers’s “The Two Cultures of Mathematics”. By that I mean that AI Safety has not yet crystallized into a field where the main approaches and questions are well understood and agreed upon. Almost every researcher has a different perspective on what is fundamental to the field. Therefore, the most useful work will be that which clarifies, deconfuses, and characterizes the field’s fundamental questions and problems.
To add on to this, it also means it’s going to be somewhat hard to know whether you’re the right kind of researcher, because the feedback cycle is long: you may be doing good work, but work that will take months or years to come together in a way that can be easily evaluated by others.
That said, not all of the field looks maximally like this. It is less of an issue for, say, safety research focused on machine learning than for safety research focused on theoretical AI systems we don’t yet know how to build, or on turning ideas about what safety looks like into something mathematically precise enough to build.
Thus a corollary of this answer might be something like “you might be the right kind of researcher only if you’re okay with long (multi-year) feedback cycles”.
I agree, but I’m not sure it’s really linked to the division between problem-solvers and theory-builders, because you can have very long feedback loops in problem-solving too—think of Wiles and Fermat’s Last Theorem. That being said, I think the advantage problem-solvers have is that they tend to attack problems already recognized as important, so the only uncertainty is whether they can actually solve them. Whereas deconfusion or theory-building is only “recognized” at the end, when the theory is done, it works, and it captures something interesting.