Even just increasing the “minimum wage” of AI safety work could be great imo. If all additional donations did was double the incomes of people working on existing projects, that seems positive. These donations go to real people in your network.
As someone who cares more than nil about finances, it was very difficult to justify working on AI safety when not at a frontier lab… so I stopped. (It’s also emotionally a bit hard to believe AI safety is so freakin important when it often doesn’t pay.) So I suspect greater donations help bring in more talent.
I can imagine some people are going to read this comment and think “But the really dedicated people will work on AI safety at minimum wage!” Eh, I have expensive health issues and I intend to raise kids in San Francisco. Lots of the non-profit AI safety work pays <$120k. Seems like my partner and I will need to make $350k+/yr.
I don’t think that doubling the incomes of people working on existing projects would be a good use of resources.
It would make it more competitive with AI capabilities work, diminishing the massive incentive to do the one that kills us rather than the one that prevents the killing.
(Not that I endorse all safety projects as really being safety projects; nor all capabilities projects as being on-the-path-to-extinction capabilities projects.)
Surely this effect is tiny, right? Like, what fraction of capabilities researchers will plausibly change what they do?
I reckon it’s a small relative effect on the bigger capabilities pool, but a big relative effect on the smaller safety pool, in terms of raising the level of talent it can compete for.
Oops, I read your comment as just saying “diminishing the massive incentive to do the one that kills us” and missed “rather than the one that prevents the killing”. Agree with small effect on capabilities pool, potentially big effect on safety pool.
Given that there are many competent people who’d be enthusiastic to work on AI Safety for <$50k/y, I think the field as a whole is better off spreading out and accepting that people who want very high salaries will do other work. There are dynamics invisible to people near the center of the funding graph: from there, what you hear is that AI safety has lots of money and lots of people are making $100k+/y, while people at the edges can’t get even tiny amounts of money to survive frugally. I’ve had grantees who lived off $2k/y, one for whom a few hundred dollars was massively helpful, and one who produced what was at the time the third highest upvoted research on the Alignment Forum on a 6-month upskilling grant of £10k. Even in more professional orgs like aisafety.info and aisafety.com, we’re running entire orgs on less than one Bay Area programmer salary.
All else equal, yes, boosting the incomes of high earners in AI Safety would be good. But please track the trade-offs of asking for a salary that could support dozens of people who are careful with money, and model what’s going on for the person on the other side of this: someone who sees extinction coming, has things they want to help with, and is vastly more frugal, but still can’t meet basic needs.
It’s totally OK to say “I need x amount due to my life choices” and do something else if you don’t get x, but that doesn’t mean it’s correct for the field to allocate x to you.[1]
I say this despite having learned more from your writings on minds than from all but a few people, and expecting you’re way above average in being able to help. $200k+ can buy a huge amount outside the Bay Area.