I’d say it depends on who you are and what your credentials are.
I expect there’ll be orders of magnitude more funding going to AI safety and alignment research, and to anything to do with AI, basically. But I expect that funding to mostly go to the people it typically goes to, i.e. people with prestigious credentials who are good at writing grant applications, etc. Funding from traditional AI safety donors will continue to be more open-minded and discerning, but it’ll probably continue at roughly today’s levels rather than growing by orders of magnitude.
(Unless… actually, I could see that being false: perhaps OpenPhil will pivot more toward AI and also just generally start spending a higher fraction of the money it has, enough to make a big difference?)
One thing I’m trying to figure out is whether, in that future where government and traditional academic funding become dominant, important subproblems remain significantly neglected because of how those funding systems tend to operate. I could see an OpenPhil pivot covering some of this, but it’d sure be nice to nail down at least a few more things when choosing between going all in on risky, high-EV earning-to-give (ETG) versus direct safety work.
There are some historical examples that might be informative, but it’s difficult for me to judge.
Thanks for the response. I’ve also been wondering whether OpenAI might use some of the $10B Microsoft investment to fund external alignment researchers, tbh.