If you were the President or as rich as Jeff Bezos, you could use your power or money to just throw a lot more darts at the dartboard. There are plenty of research labs using old equipment, promising projects that don’t get funding, post-docs who move into industry because they’re discouraged about landing that tenure-track position, schools that can’t attract competent STEM teachers partly because there’s just so little money in it.
And of course, you can build institutions like OpenPhil to help reduce uncertainty about how to spend that money.
Using money or power to fix those problems is doable. You don’t have to know everything. You can be a dart, or, if you’re lucky and hard-working, you can be a dart-thrower.
From the OP:

> When it comes to funding science, there’s quite a bit of scrutiny that goes into determining which projects to fund. Labs coordinate to solve problems. Researchers do set their sights on goals. Governments organize field-wide roadmaps to chart the next ten years of research.
I would love to see John, or anyone with an interest in the subject, do an explainer on all the ways science organizes and coordinates to solve problems.
In line with John’s argument here, we should develop a robust gears-level understanding of scientific funding and organization before assuming that more power or more money can’t help.
> When it comes to funding science, there’s quite a bit of scrutiny that goes into determining which projects to fund. Labs coordinate to solve problems. Researchers do set their sights on goals. Governments organize field-wide roadmaps to chart the next ten years of research.
Tho if you take analyses like Braden’s seriously, quite possibly these filtering efforts have negative value, in that they are more likely to favor projects supported by insiders and senior people, who have historically been bad at predicting where the next good things will come from. “Science advances one funeral at a time,” in a way that seems detectable from analyzing the literature.
This isn’t to say that planning is worthless, or that no one can see the future. It’s to say that you can’t buy the ability to buy the right things; you have to develop that sort of judgment on your own, and all the hard evidence comes too late to be useful.
I’m starting to read Braden. The thing is, if Braden’s analysis is true, then either:
We can filter for the right people, we’re just doing it wrong. We need to empower a few senior scientists who no longer have a dog in the fight to select who they think should be endowed with money for unconstrained research. Money can buy knowledge if you do it right.
We truly can’t filter for the right ideas. Either rich people need to do research, researchers need to get rich, or we need to just randomly dump money on researchers and hope that a few of them turn out to be the next Einstein.
I think there’s a fairly rigorous, step-by-step, logical way to ground this whole argument we’re having, but I think it’s suffering from a lack of precision somehow...
The people who fund science seem to lack knowledge about how to structure funding effectively.

There are some experts who think they have an alternative proposal that leads to a much better return on investment. Those experts have some arguments for their position, but it’s not straightforward to know which expert is right, and that judgment can’t be bought.
I suspect being good at finding better scientists is very close to having a complete theory of scientific advancement and being able to automate the research itself.
The extreme form of that idea is: “If we could evaluate the quality of scientists, then we could fully computerize research. Since we cannot fully computerize research, we therefore have no ability to evaluate the quality of scientists.”
The most valuable thing to do would be to observe what’s going on right now, and the possibilities we haven’t tried (or have abandoned). Insofar as we have credence in the “we know nothing” hypothesis, we should blindly dump money on random scientists. Our credence should never be zero, so this implies that some nonzero amount of random money-dumping is optimal.
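The credence-weighted conclusion above implies a simple budget split. A minimal sketch, with entirely made-up numbers:

```python
def allocate_budget(total, credence_we_know_nothing):
    """Split a grant budget between expert-vetted proposals and a
    uniform lottery, in proportion to our credence that expert
    filtering carries no information."""
    lottery_pool = total * credence_we_know_nothing
    vetted_pool = total - lottery_pool
    return vetted_pool, lottery_pool

# With a (hypothetical) 25% credence that filtering is uninformative,
# a $100M budget would send $25M to a grant lottery:
vetted, lottery = allocate_budget(100_000_000, 0.25)
```

Since the credence is never exactly zero, the lottery pool is never exactly zero either, which is the point of the argument.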
I think this is true if you’re looking for near-perfect scientists but if you’re assessing current science to decide who to invest in there are lots of things you can do to get better at performing such assessments (e.g. here).
>In line with John’s argument here, we should develop a robust gears-level understanding of scientific funding and organization before assuming that more power or more money can’t help.
How about a metaculus/prediction market for scientific advances given an investment in X person or project? (where people put stake into the success of a person or project?) is this susceptible to bad incentives?
I think the greater concern is that it’s hard to measure. And yes, you could imagine that owning shares against, say, the efficacy of a vaccine being above a certain level could be read as an incentive to sabotage the effort to develop it.
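For concreteness, the standard mechanism behind markets like the one proposed above is Hanson’s logarithmic market scoring rule (LMSR). A minimal sketch; the claim being traded is invented for illustration:

```python
import math

def lmsr_cost(quantities, b=100.0):
    # Hanson's logarithmic market scoring rule (LMSR) cost function.
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, i, b=100.0):
    # Current market-implied probability of outcome i.
    exps = [math.exp(q / b) for q in quantities]
    return exps[i] / sum(exps)

def buy(quantities, i, shares, b=100.0):
    # Amount a trader pays to buy `shares` of outcome i (mutates quantities).
    before = lmsr_cost(quantities, b)
    quantities[i] += shares
    return lmsr_cost(quantities, b) - before

# A two-outcome market on a hypothetical claim, e.g.
# "funding person X yields a replicated result within 5 years":
q = [0.0, 0.0]              # yes-shares, no-shares outstanding
p_start = lmsr_price(q, 0)  # starts at 0.5 with symmetric holdings
cost = buy(q, 0, 50.0)      # a believer buys 50 yes-shares
p_after = lmsr_price(q, 0)  # the yes-price rises above 0.5
```

The liquidity parameter `b` bounds the subsidizer’s loss, which matters here: a funder who subsidizes such a market is effectively paying for information, and the sabotage worry above is one reason to keep individual stakes (and hence `b`) small.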
> There are plenty of research labs using old equipment
The people in those research labs probably believe that newer equipment is likely to yield the knowledge that we are seeking. Our labs now have much better equipment and many more people than before the Great Stagnation started.
Expensive equipment has the problem that it forces the researchers to focus on questions that can actually be answered with the expensive equipment and those questions might not be the best to focus on.
What does the NIH have to show for Bush doubling their budget?
Would philanthropy be better off if people just threw darts, or if they stuck to tried-and-true ways of giving? Is refusing even to gamble on a possibly great outcome for the overall good really a form of genuine altruism?
Well, if you’re a subscriber to mainstream EA, the idea is that neither traditionalism nor dart-throwing is best. We need a rigorous cost-benefit analysis.
If one believes that, yet also believes that cost-benefit analysis is less needed (or less tractable) in science, that needs an explanation.
Again, I think that this post is getting at something important, but the definitions here aren’t precise enough to make it easy to apply to real issues. Like, can a billionaire use his money to buy a cost/benefit analysis of an investment of interest? Definitely.
But how can he evaluate it? Does he have to do it himself? Does he focus on creating an incentive structure for the people producing it? If so, what about Goodhart’s Law—how will he evaluate the incentive structure?
It’s “who will watch the watchmen” all the way down, but that’s a pretty defeatist perspective. My guess is that institutions do best when they adopt a variety of metrics and evaluative methods to make decisions, possibly including some randomization just to keep things spicy.
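The “variety of metrics plus some randomization” idea above can be sketched as a toy selection rule. Everything here (metric names, scores, slot counts) is invented for illustration:

```python
import random

def select_grants(proposals, scores_by_metric, k, lottery_slots=1, seed=None):
    """Fund the top (k - lottery_slots) proposals by average metric score,
    then fill the remaining slots by lottery among the rest."""
    rng = random.Random(seed)

    def avg_score(p):
        return sum(scores[p] for scores in scores_by_metric) / len(scores_by_metric)

    ranked = sorted(proposals, key=avg_score, reverse=True)
    funded = ranked[:k - lottery_slots]
    pool = ranked[k - lottery_slots:]
    funded += rng.sample(pool, min(lottery_slots, len(pool)))
    return funded

# Four proposals scored on two hypothetical metrics
# (say, peer review and track record):
scores = [{'a': 3, 'b': 2, 'c': 1, 'd': 0},
          {'a': 2, 'b': 1, 'c': 2, 'd': 0}]
funded = select_grants(['a', 'b', 'c', 'd'], scores, k=2, lottery_slots=1, seed=0)
```

The lottery slot is the “keep things spicy” part: it guarantees that proposals the metrics undervalue still get some nonzero chance of funding.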
I imagine most good deeds, or true altruism, take place on non-measurable scales. It’s the thought that counts, right? A smile goes a long way; how can you measure a smile, or positive energy? Whether you throw a dart or follow some non-dart method, maybe the positive energy you put out means something, especially now.
Look at all the good Bill Gates does, which I think is effective altruism, and he still gets vilified. It’s a weird thing. I remember watching a Patriot Act episode: https://www.youtube.com/watch?v=mS9CFBlLOcg
Welcome to LW, by the way :)

You’re doing something (a good thing) that we call Babble. Freely coming up with ideas that all circle around a central question, without worrying too much about whether they’re silly, important, obvious, or any of the other reasons we hold stuff back.
I’d suggest going further. Feel free to use this comment thread (or make a shortform) to throw out ideas about “why philanthropy might benefit from more (or less) cost/benefit analysis”.
We often suggest trying to come up with 50 ideas all in one go. Have at it!