I don’t really buy this as a significant concern. (I agree it’s nonzero, just, pretty swamped by other things). It also feels like it’s abstracting over stuff that doesn’t make sense to abstract over.
Just looking at the arguments in the OP, this feels pretty dominated by “in the future there will be way more money around.” The bottleneck in the future will not be money, it’ll be attention on projects that are important but hard to reason about. Anything you can make a pretty clear case for being important, you’ll probably be able to get funding for.
This argument made sense as a consideration to me in the past, but, man, we just look like we’re in the endgame[1] now. We will learn more, but not until the window for new projects to spin up is much, much shorter. Now is the moment all the previous “wait till we have more information” might possibly have been for.
...
I think my main reason for sort of (awkwardly, backwardsly) agreeing with this argument is “well, I think the people with a lot of frontier lab equity are probably systematically wrong about stuff: undervaluing ‘technical philosophy’, being too bullish on AI projects that seem to me likely to be net negative, or sort of neutrally following a tide.” So, in that case, maybe I do hope they wait.
But mostly, if you are uncertain or feel like you don’t know enough to start confidently making donations by now, you should specifically be looking for ways to invest in stuff that improves your understanding.
This argument also feels pretty swamped by “compounding growth of the various altruistic AI enterprises”. We want to be finding compounding resources that actually can help with the problems.
(“Money” isn’t actually a good proxy resource for this because it’s not the main bottleneck. Two compounding resources that feel more relevant are “Good (meta)cognitive processes entangled with the territory” and “Coordination capital pointed at the right goals.” See Compounding Resource X for more thoughts there.)
If there is a project that could be getting off the ground now, or hiring more people to spin up more subprojects, or spearheading more communication initiatives that change the landscape of what future billionaires/politicians/researchers are thinking about… those projects could be growing and having second-order effects. They could be accumulating reputation that lets them help direct the attention of new billionaires to more subtly important but undervalued things in tomorrow’s landscape.
Instead of thinking generically “I might learn more”, I think you should be making lists of the things you aren’t sure about (or that, if you changed your mind about them, would radically change your strategy), and figuring out how to find and invest in projects that reduce those uncertainties.
Even if you think LLMs are a dead end, there’s a pretty high chance that the ton of investment produces new trailheads, and compute is getting more plentiful and cheaper. If you wait a couple years, it seems pretty likely that you’ll know more, but you’ll have lost most of your potential leverage, and there won’t be enough time left for whatever projects you’re now knowledgeable enough about to pay off.
Perhaps. I expect there to be massively more donor interest after the CAIS letter, but it didn’t really seem to eventuate.
I think this stuff just takes a while, and things happened to coincide with the collapse of FTX, which masked much of the already-existing growth (the collapse of FTX also indirectly led to some other funders pulling back funds).
I will gladly take bets with people that there will be a lot more money interested in the space in 2 years than there is now.
I’m not sure about funding size, but one thing to note is that there are government agencies involved now, and I think more government funding.
I think the deal is we’re bottlenecked on vetting/legitimacy/legibility (and still will be in a couple years, by default). If you’re a billionaire and aren’t really sure what would meaningfully help, right now it may feel like a more obvious move to found a company than to make donations.
But I think “donate substantially to a thing you think is good, and write up your reasons for thinking that thing is good” is pretty useful. (If you do a good job with the writeup, I bet the donation target gets a noticeable multiplier on your gift, somewhat via redirection and somewhat via getting more people to donate at all.)
This does require being a more active philanthropist who’s treating it a bit more like a job, but I think if you have the sort of money the OP is talking about, it’s probably worth prioritizing that. And even if you don’t, I think we’re just bottlenecked on time so much more than money.