I really like both of your comments in this thread, Luke.
Also note that MIRI has in fact spent most of its history on strategic research and movement-building, and now that those things are also being done pretty well by FHI, CEA, and CFAR, it makes sense for MIRI to do (what we think is) the most useful object-level thing (FAI research), especially since we have a comparative advantage there (Eliezer).
I’m glad you mentioned this. I should clarify that most of my uncertainty about continuing to donate to MIRI in the future is uncertainty about donating to MIRI vs. one of these other organizations. To the extent that it’s really important to have people at Google, the DoD, etc. be safety-conscious, I think it’s possible that movement building might offer better returns than technical research right now… but I’m not sure about that, and I do think the technical research is valuable.
Right; I think it’s hard to tell whether donations do more good at MIRI, FHI, CEA, or CFAR — but if someone is giving to AMF then I assume they must care only about beings who happen to be living today (a Person-Affecting View), or else they have a very different model of the world than I do, one where the value of the far future is somehow not determined by the intelligence explosion.
Edit: To clarify, this isn’t an exhaustive list. E.g. I think GiveWell’s work is also exciting, though less in need of smaller donors right now because of Good Ventures.
There is also the possibility that they believe that MIRI/FHI/CEA/CFAR will have no impact on the intelligence explosion or the far future.
He’s talking specifically about people donating to AMF. There are more things people can do than donate to AMF and donate to one of MIRI, FHI, CEA, and CFAR.
Correct.
Or simply because the quality of research is positively correlated with the ability to secure funding, and thus research that would not be done without your donations generally has the lowest expected value of all research. In the case of malaria we need quantity; in the case of AI research, we need quality.
Given the mention of Christiano above, I want to shout out one of his more important blog posts.
Increasing the quality of the far future. In principle there may be some way to have a lasting impact by making society better off for the indefinite future. I tend to think this is not very likely; it would be surprising if a social change (other than a values change or extinction) had an impact lasting for a significant fraction of civilization’s lifespan, and indeed I haven’t seen any plausible examples of such a change.
...
I think the most promising interventions at the moment are:
Increase the profile of effective strategies for decision-making, particularly with respect to policy-making and philanthropy.
They could also reasonably believe that marginal donations to the organizations listed would not reliably influence an intelligence explosion in a way that would have significant positive impact on the value of the far future. They might also believe that AMF donations would have a greater impact on potential intelligence explosions (for example, because an intelligence explosion is so far into the future that the best way to help is to ensure human prosperity up to the point where GAI research actually becomes useful).
They might also believe that AMF donations would have a greater impact on potential intelligence explosions
It is neither probable nor plausible that AMF, a credible maximum of short-term, reliable, known impact on lives saved, valuing all current human lives equally, should happen to also possess a maximum of expected impact on future intelligence explosions. It is as likely as donating to your local kitten shelter being the maximum of immediate lives saved. This kind of miraculous excuse just doesn’t happen in real life.
OK. Granted. Even a belief that AMF is better at affecting intelligence explosions is unlikely to justify the claim that it is the best at doing so, and thus would not justify the behavior described.
Amazing how even after reading all of Eliezer’s posts (many more than once), I can still get surprise, insight, and irony at a rate sufficient to produce laughter for 1+ minute.
I’m curious as to why you include CEA—my impression was that GWWC and 80k both focus on charities like AMF anyway. Is that wrong, or does CEA do more than its component organizations?
Perhaps because GWWC’s founder Toby Ord is part of FHI, and because CEA now shares offices with FHI, CEA is finding/producing new far-future-focused EAs at a faster clip than, say, GiveWell (as far as I can tell).
I’m currently donating to FHI for the UK tax advantages, so that’s good to hear.
Bill Gates presents his rationale for attacking Malaria and Polio here.
I can’t make much sense of it personally—but at least he isn’t working on stopping global warming.