I have just spent a month in England interacting extensively with the EA movement here (maybe your impressions from the California EA summit differ; I'd be curious to hear). Donors interested in the far future are also considering donations to the following (all of these come from talks with actual people making concrete short-term choices; in addition to donations, people are also considering post-college career choices):
80,000 Hours, CEA, and other movement-building and capacity-increasing organizations (including CFAR), which also open up non-charity options (e.g. 80k helping people go into scientific funding agencies and political careers where they will be in a position to affect research and policy reactions to technologies relevant to x-risk and other trajectory changes)
AMF and other GiveWell charities, to keep GiveWell and the EA movement growing while actors like GiveWell, Paul Christiano, and Nick Beckstead and others at FHI investigate intervention options and cause prioritization, followed by organization-by-organization analysis of the GiveWell variety, laying the groundwork for massive support for the top far-future charities and organizations identified by those processes
Finding ways to fund such evaluation work where there is room for more funding (RFMF), e.g. by paying for FHI or CEA hires to work on it
The FHI’s other work
A donor-advised fund investing the returns until such evaluations or more promising opportunities present themselves or are elicited by the fund (possibilities like Drexler's nanotech panel, extensions of the DAGGRE methods, a Bayesian aggregation algorithm that greatly improves extraction of scientific expert opinion, science courts that could mobilize much more talent and resources for neglected problems with good cases, or some key steps in biotech enhancement)
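As an aside on the last item: one simple, standard form such a Bayesian aggregation algorithm could take is a weighted logarithmic opinion pool, which averages expert probability estimates in log-odds space and can weight experts by calibration track record. This is only an illustrative sketch of the general idea, not a description of DAGGRE's actual method; the function names and weights are invented for the example.

```python
import math

def logit(p):
    """Log-odds of a probability p in (0, 1)."""
    return math.log(p / (1 - p))

def pool_experts(probs, weights=None):
    """Combine expert probability estimates for a binary question by
    weighted averaging in log-odds space (a 'logarithmic opinion pool').
    Weights might come from each expert's calibration record."""
    if weights is None:
        weights = [1.0] * len(probs)
    total = sum(weights)
    pooled_logit = sum(w * logit(p) for p, w in zip(probs, weights)) / total
    # Map the pooled log-odds back to a probability.
    return 1 / (1 + math.exp(-pooled_logit))

# Three hypothetical experts estimate the probability of some milestone;
# the second has the best calibration record, so she gets double weight.
estimate = pool_experts([0.2, 0.6, 0.4], weights=[1.0, 2.0, 1.0])
```

The pooled estimate lands between the individual forecasts, pulled toward the better-calibrated expert; with uniform weights and identical inputs it simply returns that common value.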
That’s why Peter Hurford posted the OP: he’s an EA considering all these options, and he wants to compare them to MIRI.
That is the sort of discussion my brain puts in a completely different category. Peter and Carl, please always give me a concrete alternative policy option that (allegedly) depends on a debate, if one is available; my brain is then far less likely to label the conversation “annoying useless meta objections that I want to get over with as fast as possible”.
Cool, if MIRI keeps going, they might be able to show FAI as the top focus, with adequate evidence, by the time all of this comes together.
Well, in collaboration with FHI. As soon as Bostrom’s Superintelligence is released, we’ll probably be building on and around that to make whatever cases we think are reasonable to make.
Can we have a new top-level comment on this?
I edited my top-level comment to include the list and explanation.