Enjoyed the post, am disappointed with the list of 501(c)(3) opportunities.
METR researches and runs evaluations of frontier AI systems, with a major focus on AI agents. If you work at a frontier lab, you’re probably aware of them, as they’ve partnered with OpenAI and Anthropic to pilot pre-deployment evaluation procedures. METR is particularly known for its work on measuring AI systems’ ability to complete increasingly long tasks.
While the long-horizon task graph is somewhat helpful for policy, it’s unclear what the marginal impact of METR’s existence is (AI labs are running evals anyway, and there are other orgs and government agencies in this space), and to what extent its work is dual-use (can AI labs compare various bits they add to training by success on evals?).
I donated to Horizon in the past, but I’m no longer convinced they’re significantly impactful, as it seems that most staffers don’t have time to focus on any specific issue that we care about, and generally the folks there don’t seem to be particularly x-risk-pilled.
Forethought conducts academic-style research on how best to navigate the transition to a world with superintelligent AI.
How is this more than near-zero dignity points?
Donations to MIRI are probably more impactful than donations to any of these three orgs, even though MIRI is not fundraising and is not funding-constrained.
@Buck I think the world would’ve been a better place if OpenPhil dumped all of their money on MIRI with a condition that they have to spend it in the next five years.
As someone who’s worked at MIRI, I disagree regardless of when you are imagining them doing this.
Conditional on agreeing with them about AI x-risk and about high-level strategy, I think giving them money now seems better than in the past.
(I mean post-pivot-to-comms MIRI!)
Yeah I am also a bit disappointed with that list.
I would recommend ControlAI.