The methodological diversity necessary to get any consilience in highly abstract areas makes it very hard for donors to evaluate such projects. Many of the ideas that form the basis of the AI memeplex, for instance, originally came from druggy-artist-scientists. So what happens in practice is that this stuff revolves around smoking-gun-type, highly legible philosophical arguments, even though we know this is more hedgehog than fox, and that it guarantees we'll only, on average, prepare for dangers that large numbers of people can comprehend.
Concretely: the more money you have, the higher the variance on the weird projects you should be funding. If the entire funding portfolio of the Gates Foundation consists of things almost everyone agrees sound like good ideas, that's a failure. It's understandable for small donors: you don't want to 'waste' all your money and have nothing you fund work. But if you have $10 billion, and thus need to spend $500 million to $1 billion a year just to keep your fund from growing, you should be spending a million here and there on things most people think are crazy (how quickly we forget concrete instances like the initial responses to the idea of shrinking objects to the nanoscale). This is a fairly straightforward port of reasoning from startup land.
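A minimal Monte Carlo sketch of that startup-land reasoning, with entirely invented payoff numbers: treat consensus grants as reliable low-multiple bets and "crazy" grants as rare power-law hits, then compare portfolios at the $500M/yr scale mentioned above. Nothing below comes from the discussion itself; it only illustrates the shape of the argument.

```python
import random

random.seed(0)

N_TRIALS = 10_000   # Monte Carlo runs
N_GRANTS = 500      # grants per year, $1M each (the $500M/yr budget above)

def consensus_grant():
    # Safe bet: reliably returns 1-3 units of impact per $1M.
    return random.uniform(1.0, 3.0)

def crazy_grant():
    # Long shot: usually worthless, but ~2% of the time a startup-style
    # outlier returning 500 units. (Both numbers are assumptions.)
    return 500.0 if random.random() < 0.02 else 0.0

def portfolio_impact(n_crazy):
    safe = sum(consensus_grant() for _ in range(N_GRANTS - n_crazy))
    wild = sum(crazy_grant() for _ in range(n_crazy))
    return safe + wild

for n_crazy in (0, 10, 50):
    runs = [portfolio_impact(n_crazy) for _ in range(N_TRIALS)]
    mean = sum(runs) / N_TRIALS
    worst = min(runs)
    print(f"{n_crazy:3d} crazy grants: mean {mean:7.1f}, worst run {worst:7.1f}")
```

Under these assumptions, shifting a small slice of the budget into long shots raises the portfolio mean substantially while barely moving the worst-case floor, which is the whole case for the big funder taking the weird bets that small donors can't afford to.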
the more money you have, the higher the variance on weird projects you should be funding.
Only if you’re sure the mean is positive—and there’s no reason to think that. In fact, it’s arguable that in a complex system, a priori, we should consider significant changes destabilizing and significantly net negative unless we have reason to think otherwise.
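The same toy model makes this objection concrete: give the long shots a destabilizing downside so that their mean is negative, and no amount of variance or scale rescues them (again, all numbers are invented for illustration).

```python
import random

random.seed(1)

def risky_grant():
    # As before, a 2% chance of a 500-unit hit, but now also a 20%
    # chance of destabilizing harm: mean = 0.02*500 + 0.2*(-100) = -10.
    r = random.random()
    if r < 0.02:
        return 500.0
    if r < 0.22:
        return -100.0
    return 0.0

for n in (10, 50, 200):
    runs = [sum(risky_grant() for _ in range(n)) for _ in range(10_000)]
    print(f"{n:3d} grants: mean impact {sum(runs)/len(runs):8.1f}")  # ~ -10 * n
```

Scaling up a negative-mean bet just scales the expected loss, so everything turns on the sign of the mean, which is exactly what this exchange is disputing.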
I think this is too general a counterargument, and extended to its logical conclusion it becomes self-defeating, since consequentialist cluelessness applies equally to action and inaction.
No, it just means you need an actual system model that is at least somewhat predictive in order to make decisions, and therefore a better grasp on the expected value of your investments than “let’s try something, who knows, let’s just take risks.”
I agree; I think I was mostly responding to the “and there’s no reason to think that,” since it is a case-by-case thing.