I don’t regret it, and part of the reason is that it’s hard for me to find people or opportunities to direct resources to that I can be confident won’t end up doing more harm than good. Reasons:
Meta: Well-meaning people often end up making things worse. See Anthropic (and many other examples), and this post.
Object-level: It’s really hard to find people who share enough of my views that I can trust their strategy and decision making. For example, when MIRI was trying to build FAI, I thought they should be pushing for AI pause/stop, and now that they’re pushing for AI pause/stop, I worry they’re focusing too much on AI misalignment (to the exclusion of other similarly concerning AI-related risks) and being too confident about misalignment risk. I think this could cause a backlash in the unlikely (but not vanishingly so) worlds where AI alignment turns out to be relatively easy but we still need to solve other AI x-risks.
(When I did try to direct resources to others in the past, I often regretted it later. I think the overall effect is unclear or even net negative. Seems like it would have to be at least “clearly net positive” to justify investing in Anthropic as “earning-to-give”.)