Researching donation opportunities. Previously: ailabwatch.org.
Zach Stein-Perlman
For people following my daily posting: this is the last of my daily posts. I failed to write some posts that I really want to exist:
- Donations are super important
- The US government is super important
- Don’t diversify your impact[1]
- Buying galaxies is not cost-effective
I hope to cause most of these posts to exist in April.
[1] This is too strong — some limited diversification is defensible, but many people are underprioritizing scope-sensitive stuff for confused reasons.
Sure, divide by 1000 or something
Of course!
I agree things aren’t as simple as “[N] is a big number, therefore optimize the cosmic endowment.” Maybe we should act based on trade/ECL reasons. But the stakes are still high in expectation, as you discuss in Beyond Astronomical Waste.
Yes. FTL would be surprising given that we find ourselves in a 14-billion-year-old universe — you’d expect there to be aliens by now. But:
- We will likely have better things to do than simulate humans
- Mature technology may enable more computation than the conservative guess in this piece
- Acausal trade and ECL may present opportunities to have effects outside the lightcone
- There are probably more such considerations!
I agree? This applies more to donating to orgs than to donating to politicians. And regardless, when my team recommends donating to politicians, usually we have a fundraising target after which we no longer recommend it (and sometimes we ask for pledges and then ask a subset of pledgers to donate, in order to hit the target without using small donors unnecessarily). Other times we’re like “this is one of the best opportunities; the optimal amount of money we could raise is more than we will actually be able to raise, because there aren’t enough small donors; everyone should donate (after donating to everything better and keeping a certain budget saved for future small-donor opportunities).” Most of our recommendations are the former kind, but a donor might tend to hear about the latter kind, because for the former we only need to tell a small set of donors, while for the latter we tell everyone who might be interested.
How are you supposed to “account[] for that uncertainty”? I think you notice it, you try really hard to make sure that your parameters are right, and then at the end of the day you take the expectation.
I read this in 2019; it helped me understand that the long-term future is astronomically more important than whatever happens on Earth this millennium. See also Astronomical Waste.
Edit: but as various commenters observe, the actual amount you should care about the long-term future and space stuff isn’t super related to the [N] figure (or whatever the true number is) because of acausal trade and ECL.
I failed to illustrate what goes wrong when people naively make up distributions rather than thinking carefully about EV. Here’s a quick squiggle:
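(The original snippet didn’t survive here; below is a reconstruction. The `100 to 3600` endpoints are my assumption, chosen so that each parameter’s mean comes out to roughly 1086, matching the figures discussed below.)

```
// Reconstruction with assumed endpoints: three independent lognormal
// parameters, each with 5th percentile 100 and 95th percentile 3600,
// so each has a mean of roughly 1086.
a = 100 to 3600
b = 100 to 3600
c = 100 to 3600
product = a * b * c
mean(product) // analytically 1086^3 ≈ 1.28B; sampling underestimates this
```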
The EV of a (and b and c) is 1086; the EV of product is 1086^3 ≈ 1.28B. (Squiggle says 976M here; its error comes from using too few samples.) If you ultimately care about the EV of product, I think you should think about the EV of a, b, and c. You should know that the EV is 1086, such that if that sounds wrong you’ll notice. The EV is sensitive to the right end of the distribution, so you should think carefully about what that part of the distribution looks like. For each parameter, if considering the distribution and considering the EV directly give you different EV figures, you want to reach reflective equilibrium or something.

Squiggle makes it easy to use distributions — but if you don’t put serious effort into it, your squiggle model will have predictably flawed parameters.
I agree reasoning about uncertainty is crucial. If your EV isn’t sensitive to the probability and magnitude of tail outcomes, you’re doing EV estimation wrong.
I think EV should be in units of utils or something, such that you’re risk neutral in EV.
When I said “Using distributions is dangerous” I think I meant to claim: often people would do better to think discretely, considering a few different scenarios, rather than trying to draw a distribution. I think sometimes people are able to reason well about uncertainty with a few discrete buckets but instead they draw a distribution (perhaps because Squiggle leads them to?) and reasoning about distributions is tough. Including me! I think in many contexts I’d produce a better EV estimate by putting probabilities and values on several discrete buckets and summing the products than by trying to draw a continuous distribution and then taking its expectation. If I had to draw a continuous distribution, I’d often start with the discrete buckets and then draw a distribution to approximate them!
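As a toy illustration (the numbers here are mine, not from any real BOTEC): suppose a project fails, modestly succeeds, or hugely succeeds. The EV is just the probability-weighted sum, and if you later want a distribution anyway, a mixture of point masses in Squiggle reproduces the buckets exactly:

```
// Hypothetical buckets: 50% the project fails (value 0), 45% it modestly
// succeeds (value 10), 5% it hugely succeeds (value 1000).
buckets = mx(0, 10, 1000, [0.5, 0.45, 0.05])
mean(buckets) // 0.5*0 + 0.45*10 + 0.05*1000 = 54.5
```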
Also sometimes the EV you assign a parameter is upstream of your distribution for that parameter, and in such cases there’s no need to draw a distribution if your goal is just an EV estimate.
Some variables—election outcomes, AI timelines—are very amenable to distributions. But trying to draw a distribution for value produced by a project is rough.
> if you’re voting in an election where one candidate has a 6% edge, your vote has roughly a 1 in 12 chance of changing the outcome!
This is mostly false! You have to think about σ. If two candidates are tied ex ante, that doesn’t mean your vote is infinitely powerful. The crucial question is the probability that your vote will flip the election. And on your particular example, maybe you’d have a ~1/12 chance of flipping the election if your candidate’s vote margin were a random number between −0 and −12, but “your candidate is expected to lose by 12 votes” implies less tractability than that: there’s a 50% chance your candidate will lose by more than 12 votes, there’s some chance that they’ll win without you, and the crucial −0 scenario is less likely than the −12 scenario.
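A minimal sketch of the kind of calculation I mean (the margin numbers are made up): the chance your vote matters is roughly the probability mass within half a vote of a tie, which depends on the spread of the margin distribution, not just its expectation.

```
// Made-up numbers: candidate expected to lose by 12 votes, with a
// standard deviation of 5000 votes on the final margin.
margin = normal(-12, 5000)
// Your vote flips the outcome roughly when the margin without you would
// have been a tie: the probability mass within half a vote of zero.
pFlip = cdf(margin, 0.5) - cdf(margin, -0.5)
```

With these numbers pFlip comes out around 8 in 100,000, nowhere near 1 in 12.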
> the per-person impact is about the same.
No, people are affected more by the federal government than by their local government. The federal government matters more than all local governments combined. But federal vs. local government is not relevant to this post, so I don’t want to get into it.
I don’t really have thoughts on this.
Most distributions are independent. E.g. the parameters in the BOTECs here.
Sometimes parameters are obviously correlated. E.g. “Congress wants to do good AI safety stuff” and “the president wants to do good AI safety stuff,” or “vote margin in the 2026 CO-08 election” and “seat margin in the House after the 2026 elections.”
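To see why correlation matters in a BOTEC, here’s a toy sketch (my own example, assuming Squiggle’s SampleSet semantics, where arithmetic on the same variable is samplewise and therefore perfectly correlated):

```
// Toy example: one uncertain parameter.
x = SampleSet.fromDist(1 to 100)
// If a second parameter were independent with the same distribution, the
// product's EV would be mean(x)^2. If it's perfectly correlated (here,
// literally the same variable), the product's EV is mean(x^2), much larger.
independentEV = mean(x) * mean(x)
correlatedEV = mean(x * x)
```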
Good point; I agree small opportunities can be great.
> how do you manage these, both in terms of filtering and finding them, and managing the relatively very high overhead costs for them?
This post is more “I have a priori observations” than “I know what processes work well in practice”; I don’t claim the latter. But since you asked:
I don’t do a good job of finding small opportunities. When small opportunities come to my attention, my process is something like:
1. (If it’s out of scope of my expertise, drop it, unless an advisor is strongly vouching for it or it seems truly amazing or something.)
2. Do I have a great sense of how good it is, in particular because it’s just a small version of something I’ve investigated in the past? If so, use that.
3. Otherwise, is there someone else who should decide? If so, get them to decide.
4. Otherwise, are there positive second-order effects to actually investigating (e.g. maybe noticing or being able to evaluate many more opportunities like this)? If so, do that.
5. Otherwise, try to make a decision quickly. If downside risk is low and upside is maybe-great, then make the grant.
Er, that’s conflating “small grant” with “low-stakes.” Sometimes the amount of money is small but the opportunity is high-stakes — sometimes the upside is high; sometimes there are costs or downside risks much greater than the cost of the money. It’s the low-stakes opportunities that you want to decide quickly on.
An abbreviated heuristic is: if it’s in-scope and it seems great and it’s hard to imagine regretting it substantially more than if you lit the money on fire, just fund all such small opportunities. Funding lots of small opportunities is better than funding few.
Note that being exploitable has downsides beyond wasting money. (Internet people reading this, please don’t ask me for money because you read this; I’m very unlikely to give you money even for good things because my expertise is limited to a small fraction of good things.)
Probably in my domain relative to yours, (1) there are way fewer small one-off opportunities and (2) a greater fraction of them have substantial downside risk.
For people following my daily posting: I have two more posts in me that I really want to write decently well: donations are super important and the US government is super important. The latter is particularly tricky — for one thing, most people say they already agree; there are confusing inconsistencies in people’s current attitudes that I want to untangle. Anyway, today’s post and several future posts will be shorter and less important, to give me more time to write those two.
That would be fine too.
“Points” suggests absolute; “%” suggests relative; my current unit is relative-ish and using points might be confusing.
I don’t know why but I usually think in terms of cost per unit good, not good per unit cost. I said “1% future-improvement per $5B” but I really think like “$5B per 1% future-improvement.”
Stay tuned for the rest of the sequence!
I agree in part. But redistributing money/power from funders to workers is good for the world only insofar as the workers are better at turning money/power into goodness-for-the-world than the funders. Is that true for the AI safety ecosystem? There are certainly some cases where it’s true, e.g. your principle would have produced good results in the case of Lightcone, but I think it’s mostly false; I think the funders are better at turning money into goodness than almost all of the workers. (Plus, insofar as the workers aren’t altruists, they’ll waste money on consumption.)
I wrote this in a conflict-theory way but the mistake-theory version might be equally important. Ruthlessness/malice aren’t necessary. Sometimes an agent doesn’t appreciate the costs here; improper credit assignment often naively makes sense. Sometimes a bureaucracy misallocates credit by default despite not even being agent-y enough to be ruthlessly goal-directed. And sometimes people just incorrectly opt not to take credit.
To be clear, the epistemic status of all this is: a priori musings, inspired by some real events, which might be helpful but which you shouldn’t defer to. It is not all observed fact.
Something I was wrong about: credit assignment.
I used to think: I’m an altruist; it doesn’t matter whether I get credit for my contributions. Now I think getting credit is often important. In some contexts, when you do a good thing, a lot of the value comes indirectly via you getting empowered. And if others systematically steal your credit or block you from taking credit, you should be scared of the prospect that (1) you’re giving them power which they will use poorly or (2) you become dependent on them — if you’d gotten credit, then you’d be empowered and people would listen to you, but since they took credit, you’re stuck only able to exert influence by advising them, and they can stop listening to you.
There are often reasonable-sounding arguments that it’s better if someone else takes credit, but you should (1) be suspicious and (2) just pay some costs to preserve proper credit assignment.
Great post. It’s too bad it didn’t get frontpage visibility.
Proposition: some kinds of agents don’t need to make object-level deals in advance. Especially for evidential/acausal reasons, or possibly because they made a meta-deal in the past. In the future, they do a bunch of thinking and then an insurance company pays everyone whose houses burned down (and everyone else pays the insurance company a little).
Proposition (perhaps based on controversial intuitions that various attitudes are objective/convergent): the system above is preferable to locking in low-level deals on e.g. power-sharing. But it’s fine to make such deals as long as we clarify that if the galaxy-brained stuff works out, we’ll follow that instead. (And if you feel confident that the galaxy-brained stuff will work out without needing to coordinate in advance, you should still endorse deals to e.g. reduce conflict between agents who don’t share your view.)
> some deals require uncertainty
Skill issue? (But no guarantee that all agents will be high-skill by the first crucial time.)
Surprised you didn’t bring this up. Curious what you think.
When I post on LW, I think the impact is something like:
1. 20% informing people who browse LW
2. 30% informing people who are linked to a post (by me or others), or for whom it’s overdetermined that they’ll read the post
3. 20% people who read the posts helping me: they leave helpful comments or talk to me or [share docs with me / ask to collaborate with me / think of me when working on something related in the future]
4. 30% giving me status or something, and causing people to [think of / respect] me in certain domains
(3) and (4) overlap. In the last week there were two times a person/org asked me to help them with something I was very excited to help with, and at least one of those was downstream of my recent posting.