Of the bottlenecks I listed above, I am going to mostly ignore talent. IMO, talented people aren’t the bottleneck right now, and the other problems we have are more interesting.
Can you clarify what you mean by this? I see two main possibilities for what you might mean:
There are many talented people who want to work on AI alignment, but are doing something else instead.
There are many talented people working on AI alignment, but they’re not very productive.
If you mean the first one, I think it would be worth it to survey people who are interested in AI alignment but are currently doing something else—ask each of them, why aren’t they working on AI alignment? Have they ever applied for a grant or job in the area? If not, why not? Is money a big concern, such that if it were more freely available they’d start working on AI alignment independently? Or is it that they’d want to join an existing org, but open positions are too scarce?
I think that "There are many talented people who want to work on AI alignment, but are doing something else instead" is likely to be true. I have met at least two talented people who tried to get into AI safety but couldn't, because open positions and internships were too scarce. At least one of them tried hard (i.e. applied for many positions and couldn't find one, despite being one of the top French students in ML). If there were more money and positions available, I think there's a good chance he would work on AI alignment independently.
Connor Leahy mentions something similar in one of his podcasts as well.
That’s the impression I have.
I want to point out that cashing out "talented" might be tricky. My observation is that talent for technical alignment work is neither implied nor caused by talent in maths and/or ML. Those skills aren't bad to have, but I can think of many incredibly strong people in maths/ML who seem far less promising to me than someone with the right mindset and approach.
Yeah, I mean the first. Good survey question ideas :)