Do you regret not investing in Anthropic? I don’t know how much the investment was for, but it seems like you could do a lot of good with 350x that amount. Is there a return level at which you would have been willing to invest (assuming that return was inevitable; your investing would not be what caused the company’s value to increase by 1000x)?
I don’t regret it, partly because it’s hard for me to find people or opportunities to direct resources to that I can be confident won’t end up doing more harm than good. Reasons:
Meta: Well-meaning people often end up making things worse. See Anthropic (and many other examples), and this post.
Object-level: It’s really hard to find people who share enough of my views that I can trust their strategy and decision-making. For example, when MIRI was trying to build FAI, I thought they should be pushing for an AI pause/stop; now that they are pushing for an AI pause/stop, I worry they’re focusing too much on AI misalignment (to the exclusion of other, similarly concerning AI-related risks) and are too confident that misalignment is the risk that matters most. I think this could cause a backlash in the unlikely (but not vanishingly so) worlds where AI alignment turns out to be relatively easy but we still need to solve other AI x-risks.
(When I did try to direct resources to others in the past, I often regretted it later; I think the overall effect is unclear or even net negative. It seems like that effect would have to be at least “clearly net positive” to justify investing in Anthropic as “earning-to-give”.)