Regarding the methodological difference: my perspective is that head-on attempts to solve AI safety are not very promising, since we lack the tools to answer basic questions in the general area of the problem, such as "what is intelligence?", "what is the computational resource cost of constructing an agent of given intelligence?" or "what is the growth curve of self-improving agents?" Therefore, what we should be doing is constructing a general theory capable of answering questions of this type (I would call it abstract intelligence theory). Thinking about problems such as naturalized induction and Vingean reflection seems to me a useful way to approach this, not because they are subproblems of the AI safety problem but because they are handles for getting a mathematical grip on the entire area.