Coming from a somewhat similar space myself, I’ve also had the same thoughts. My current thinking is there is no straightforward answer on how to convert dollars to impact.
I think the EA community did a really good job at this back in the day, with spreadsheet-based, relatively straightforward ways to measure impact per dollar or per life saved in the near-term future.
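For concreteness, this is the kind of arithmetic I mean. A toy sketch with made-up numbers, not real GiveWell estimates:

```python
# Toy cost-effectiveness sketch (GiveWell-style reasoning).
# All numbers below are hypothetical, chosen only to illustrate the shape of the calculation.

cost_per_net = 5.0             # hypothetical: dollars per bednet delivered
nets_per_death_averted = 600   # hypothetical: nets needed to avert one death

cost_per_life_saved = cost_per_net * nets_per_death_averted
print(f"Cost per life saved: ${cost_per_life_saved:,.0f}")   # -> $3,000

donation = 30_000.0
print(f"Expected lives saved from ${donation:,.0f}: {donation / cost_per_life_saved:.1f}")
```

The point is that in global health you can actually write this down and argue about the inputs; for AI x-risk there's no comparable spreadsheet.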
With AI safety / existential risk, the space seems a lot more confused, and everyone has different models of the world, of what will work, and of what counts as a good idea. There are some people working directly on this question, like QURI, but IMO there's nothing close to a consensus on "where can I put my marginal dollar for AI safety". The really obvious / good ideas, and the people working on them, don't seem funding-constrained.
In general, from my observation, the options are:
- Direct interpretability work on LLMs
- Governance work (trying to convince regulators / governments to put a stop to this)
- Explaining AI risk to the general public
- Direct alignment work on current-gen LLMs (superalignment-type work at major labs)
- More theoretical work (like MIRI's), though I don't know if anyone is doing this now.
- Weirder things like whole-brain emulation or gene editing / making superbabies.
My guess is that your best bet is either spending your money / time on the last one, which would be helpful on the margin, or just talking to people who are struggling for funding and otherwise seem to have decent ideas you can fund.
There's probably something not on the above list that will actually work for reducing existential risk from AI, but no one knows what it is.