I think large portions of the AI safety community act this way, including most people working on scalable alignment, interpretability, and deception.
Are you sure? For example, I work on technical AI safety because it’s my comparative advantage, but agree at a high level with your view of the AI safety problem, and almost all of my donations are directed at making AI governance go well. My (not very confident) impression is that most of the people working on technical AI safety (at least in Berkeley/SF) are in a similar place.