1b. As AI becomes more powerful and AI safety concerns go more mainstream, other wealthy donors may become activated
I’m worried about a motte-and-bailey situation where people sometimes use “AI safety” to mean “make AI go well” and other times use “AI safety” to mean “reduce catastrophic risk.” I take the authors to mean the latter, in which case 1b is valid.
I agree that government intervention and non-EA philanthropists will have a meaningful impact on funding opportunities for reducing catastrophic risk.
However, I think the world is likely to remain wrong about other key issues (digital sentience, longtermism, and scope sensitivity broadly), such that 1b is not in fact valid for the former definition of “AI safety,” which I claim is what people should really care about.