I often see the sentiment, “I’m going to learn linear algebra, probability theory, computational complexity, machine learning, and deep RL, and then I’ll have the prerequisites to do AI safety.” (Possible reasons for this: the 80K AI safety syllabus, CHAI’s bibliography, a general sense that you have to be an expert before you can do research.) This sentiment seems wrong to me.
See also my shortform post about this.
+1. I agree with the “be lazy in the CS sense” prescription; that’s basically what I’m recommending here.