Thanks for writing this up. I have various thoughts, but here’s the counterargument that I think people are most likely to miss, so I’ll make it here:
I think that one year from now, we will be a decent amount wiser than we are now about what the best donation opportunities are. This means that one year from now, we may regret donation decisions made today.
An example: last year I put a decent fraction of my wealth in a DAF. At the time, I hadn’t heard any warnings not to do that. Today, I think that it would have been better if I had not put that money in the DAF, because I think the best donation opportunities are not 501(c)(3)s.
Similarly, I find it plausible that if today I donate to the cause I currently consider best, a year from now I’ll wish I had that money back, because it will turn out I was wrong about what the best cause is.
I don’t think that this effect trumps the various effects that you guys point out. I just think it’s a substantial consideration in the opposite direction.
I think that the consideration you raise is important, but here’s something that came to mind while reading your comment:
There’s an interpretation of your DAF experience that supports the opposite conclusion from the one you draw. Specifically, you didn’t try to make a donation; you tried to punt the choice down the road. If you had been more focused on maximizing impact with your donations last year, that might have forced you to learn more about the situation, and you might have noticed that political donations were a good opportunity.
I think this is a good point. At the same time, I suspect the main reason we’re likely to be wiser a year from now is that we’ll have done stuff over the coming year that we’ll learn from. And the more we spend over the next year, the more we’ll be able to do, leading to more learning. In some ways this feels like “yes, maybe at an individual level it’ll feel better to wait and learn more, but your spending now not only lets you learn better but also lets others learn better.” I think the factor I’m pointing to is actually substantial, in particular if you’re funding highly promising areas that are relatively new and that others are skeptical of or feel insufficiently knowledgeable about.
I don’t really buy this as a significant concern. (I agree it’s nonzero, just, pretty swamped by other things). It also feels like it’s abstracting over stuff that doesn’t make sense to abstract over.
Just looking at the arguments in the OP, this feels pretty dominated by “in the future there will be way more money around.” The bottleneck in the future will not be money, it’ll be attention on projects that are important but hard to reason about. Anything you can make a pretty clear case for being important, you’ll probably be able to get funding for.
This argument made sense as a consideration to me in the past, but, man, we just look like we’re in the endgame[1] now. We will learn more, but not until the window for new projects to spin up is much, much shorter. Now is the moment all the previous “wait till we have more information” might possibly have been for.
...
I think my main reason for sort of (awkwardly, backwardsly) agreeing with this argument is “well, I think the people with a lot of frontier lab equity are probably systematically wrong about stuff: undervaluing ‘technical philosophy’, being too bullish on AI projects that seem (to me) likely to be net negative, or sort of neutrally following a tide.” So, in that case, maybe I do hope they wait.
But mostly, if you are uncertain or feel like you don’t know enough to start confidently making donations by now, you should specifically be looking for ways to invest in stuff that improves your understanding.
This argument also feels pretty swamped by “compounding growth of the various altruistic AI enterprises”. We want to be finding compounding resources that actually can help with the problems.
(“Money” isn’t actually a good proxy resource for this, because it’s not the main bottleneck. Two compounding resources that feel more relevant are “Good (meta)cognitive processes entangled with the territory” and “Coordination capital pointed at the right goals.” See Compounding Resource X for more thoughts there.)
If there is a project that could be getting off the ground now, or hiring more people to spin up more subprojects, or spearheading more communication initiatives that change the landscape of what future billionaires/politicians/researchers are thinking about… those projects could be growing and having second-order effects. They could be accumulating reputation that lets them direct the attention of new billionaires to more subtly important but undervalued things in tomorrow’s landscape.
Instead of thinking generically “I might learn more,” I think you should be making lists of the things you aren’t sure about, or that, if you changed your mind about them, would radically change your strategy, and figuring out how to find and invest in projects that reduce those uncertainties.
Even if you think LLMs are a dead end, there’s a pretty high chance of a ton of investment producing new trailheads, and compute is getting more plentiful and cheaper. If you wait a couple of years, it seems pretty likely that you’ll know more, but you’ll have lost most of your potential leverage, and there won’t be enough time left for whatever projects you’re then knowledgeable enough about to pay off.
Perhaps. I expected there to be massively more donor interest after the CAIS letter, but it didn’t really seem to eventuate.
I think this stuff just takes a while, and things happened to coincide with the collapse of FTX, which masked much of the already existing growth (and the collapse of FTX also indirectly led some other funders to withdraw funds).
I will gladly take bets with people that there will be a lot more money interested in the space in 2 years than there is now.
I’m not sure about funding size, but one thing to note is that there are government agencies involved now, and I think more government funding.
I think the deal is that we’re bottlenecked on vetting/legitimacy/legibility (and still will be in a couple of years, by default). If you’re a billionaire and aren’t really sure what would meaningfully help, right now it may feel like a more obvious move to found a company than to make donations.
But I think “donate substantially to a thing you think is good, and write up your reasons for thinking that thing is good” is pretty useful. (If you do a good job with the writeup, I bet you get a noticeable multiplier on the donation target, somewhat via redirecting others’ donations and somewhat via getting more people to donate at all.)
This does require being a more active philanthropist who’s treating it a bit more like a job. I think if you have the sort of money the OP is talking about, it’s probably worth prioritizing that. But even if you don’t, I think we’re just bottlenecked on time so much more than money.
Example with fake numbers: my favorite intervention is X. My favorite intervention in a year will probably be (stuff very similar to) X. I value $1 for X now equally to $1.7 for X in a year. I value $1.7 for X in a year equally to $1.4 unrestricted in a year, since it’s possible that I’ll believe something else is substantially better than X. So I should wait to donate if my expected rate of return is >40%; without this consideration I’d only wait if my expected rate of return is >70%.
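To make the arithmetic above easy to check, here’s a minimal sketch in Python using exactly the fake numbers from the example (the 1.7 and 1.4 exchange rates are the illustrative values above, not estimates of anything real):

```python
# Minimal sketch of the fake-numbers example above. All values are the
# illustrative numbers from the comment, not real estimates.

dollar_for_x_now = 1.0        # value of $1 given to X today
x_in_a_year = 1.7             # $1.70 for X in a year ~ $1 for X now
unrestricted_in_a_year = 1.4  # $1.40 unrestricted in a year ~ $1.70 for X in a year

# Hurdle rate without the "I might change my mind" consideration:
# future money is assumed to stay restricted to X.
hurdle_restricted = x_in_a_year / dollar_for_x_now - 1               # 0.70

# Hurdle rate with that consideration: unrestricted future money is
# worth more per dollar (option value), so the bar for waiting drops.
hurdle_unrestricted = unrestricted_in_a_year / dollar_for_x_now - 1  # 0.40

print(f"without the consideration, wait only if expected return > {hurdle_restricted:.0%}")
print(f"with it, wait if expected return > {hurdle_unrestricted:.0%}")
```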
I mean, this argument holds generally for any kind of investment in future events. Supposing that some kind of TAI gets produced in year y, investments made in year y-10 are probably less likely to be accurate than investments made in year y-9, and so on for y-8, all the way down to y-0, when we know for sure which group of actors will make TAI (which, of course, only happens when they succeed). Unfortunately, the difficulty of using funding to make an impact also increases commensurately as we approach y-0.
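As a toy model of that trade-off, purely for illustration (the functional forms and constants below are my assumptions, not anything claimed in the thread): if accuracy rises as TAI approaches while leverage falls, their product peaks a few years out and collapses to zero at y-0, exactly when accuracy is best.

```python
import math

# Toy model of the accuracy-vs-leverage trade-off. The functional forms
# and constants are illustrative assumptions, not claims from the thread.

def accuracy(years_before_tai: float) -> float:
    """How well-targeted an investment is; rises toward 1.0 as TAI nears."""
    return math.exp(-0.3 * years_before_tai)

def leverage(years_before_tai: float) -> float:
    """How much an investment can still change outcomes; falls to 0 at TAI."""
    return years_before_tai / 10.0

# Walk from y-10 down to y-0: expected impact peaks a few years out,
# then hits zero at y-0, when it is definitely too late to act.
for t in range(10, -1, -1):
    impact = accuracy(t) * leverage(t)
    print(f"y-{t}: accuracy={accuracy(t):.2f}, leverage={leverage(t):.2f}, impact={impact:.2f}")
```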
So I agree with you that such considerations can’t carry too much weight, because on their own they justify indefinite inaction until it is definitely too late.
On the object level, you could probably arrange some kind of donation swap with someone who wants to donate to 501(c)(3)s, right?
They donate $X to the non-501(c)(3)s you want and you donate $X from your DAF to the 501(c)(3)s they want.
For some donation opportunities, e.g. political donations, that would be a crime.
oh, ty