Donating early also gives the donor the ability to shape the ecosystem. I think one underappreciated factor is that there are currently various organizations/people/perspectives that are essentially competing for resources and influence.
These organizations/people/perspectives often differ in meaningful ways. In the AI policy space, here are some examples of dimensions on which organizations vary:
Focus on superintelligence vs. broad discussion about how AI can be a big deal with benefits/costs.
Focus on misalignment vs. competition with China vs. broad discussion of various threat models.
Solutions that are advocated for (e.g., need for major regulation/reform vs. focusing on incremental improvements).
Biases toward action vs. inaction.
Tendencies toward being loud (lots of comms/outreach) vs. quiet.
Working with people/organizations with different worldviews vs. staying relatively insular.
Extent to which the organization is trying to steer the world toward a particular vision/path vs. is highly uncertain and tries to focus on adding relatively uncontroversial, low-risk information.
In my view, one of the most significant things about donating early is that you get to choose which organizations/institutions/leaders have their voices amplified. Among the various groups that currently fall under some sort of broad “AI safety” umbrella, there are often rather large differences in their views about the world, about leadership, about politics, about how to communicate effectively, and about what types of people or reasoning styles should be promoted.
I have my own opinions on where various organizations are on each of these dimensions. Happy to share with potential donors if that is ever useful.