It seems to me that 100 years ago (or more) you would have had to consider pretty much any philosophy and mathematics relevant to AI risk reduction, as well as to the reduction of other potential risks, and attempts to select the work particularly conducive to AI risk reduction could not have succeeded. Effort planning is the key to success.
On a somewhat unrelated note: reading the publications and this thread, there is a point of definitions that I do not understand: what exactly does S.I. mean when it speaks of a “utility function” in the context of an AI? Is it a computable mathematical function over a model, such that the ‘intelligence’ component computes the action that maximizes that function over the world state resulting from the action?
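To make the interpretation being asked about concrete, here is a minimal toy sketch of that reading: a computable utility over world states, a world model predicting the state an action produces, and an agent taking the argmax. All names and the toy model here are illustrative assumptions, not S.I.'s actual formalism.

```python
def transition(state, action):
    # Hypothetical world model: predicts the state resulting from an action.
    return state + action

def utility(state):
    # Hypothetical computable utility over world states; peaks at state 10.
    return -(state - 10) ** 2

def choose_action(state, actions):
    # The 'intelligence' component under this reading: pick the action
    # whose predicted resulting world state maximizes utility.
    return max(actions, key=lambda a: utility(transition(state, a)))

print(choose_action(3, [-1, 0, 5, 7, 9]))  # picks 7, since 3 + 7 = 10
```

Whether S.I.'s usage matches this picture (and what stands in for the world model) is exactly the definitional question.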
But then SI would have to have a dramatically better idea of what research should be funded to protect mankind than every other group of people capable of either performing such research or employing people to perform it.
Muehlhauser has stated that SI should be compared to alternatives in the form of other organizations working on AI risk mitigation, but that seems like an overly narrow choice, reliant on the presumption that not working on AI risk mitigation right now is not itself an alternative.
For example, 100 years ago it would seem to have been too early to fund work on AI risk mitigation; that may still be the case. As time goes on, one would naturally expect opinions to form a distribution, with the first organizations offering AI risk mitigation popping up earlier than the time at which such work becomes effective. When we look into the past through the goggles of notoriety, we don’t see all the failed early starts.