I think that being a good founder in AI safety is very hard, and I generally only recommend it after gaining some experience in the field. This strongly applies to research orgs, but also to, e.g., field-building. If you're founding something, you need to constantly make judgement calls about what is best, you don't really have mentors to defer to (unlike in many entry-level safety roles), and you often won't get clear feedback from reality when you get those calls wrong. These are very hard questions, and if you don't get them right, there's a good chance your org will be mediocre. I think this applies even to orgs within an existing research agenda (most attempts to found mech interp orgs seem doomed to me). Field-building is a bit less dicey, but even there you want strong community connections and a sense for what will and will not work.
I'm very excited for there to be more good founders in AI safety, but I don't think loudly signal-boosting this to junior people is a good way to achieve it. And IMO "founding an org" is already pretty high-status, at least if you're perceived to have some momentum behind you?
I'm also fine with people without much AI safety expertise partnering with people who do have it as co-founders, but I struggle to think of orgs that have gone well without at least one highly experienced and competent co-founder.
Did Apollo have anyone you'd consider highly experienced when first starting out?
I’d say Chris Akin (COO) was highly experienced, and he joined shortly after inception.
Neel was talking about AI safety expertise and experience in the AI safety field. I can't see that Chris had any such experience on his LinkedIn.
Of note: when I first approached you about becoming a MATS mentor, I don’t think you had significant field-building or mentorship experience and had relatively few papers. Since then, you have become one of the most impactful field-builders, mentors, and researchers in AI safety, by my estimation! This is a bet I would take again.
I think that founding, like research, is best learned by doing. Building a research org definitely benefits from having great research takes; this unlocks funding, inspires talent, and creates better products (i.e., impactful research). However, I believe:
Not every great researcher would be a great founder.
Some researchers who could be great founders with practice are unnecessarily discouraged from trying.
There are many ways to aid AI safety as a founder that do not require research skills (e.g., field-building, advocacy, product development).
I wasn't primarily trying to signal-boost this to "junior" people, and I think pairing strong ops and technical talent is a good way to start many orgs (though everyone typically contributes to everything in a small startup).
I think you are probably unusually good at spotting which mech interp orgs are doomed ex ante, but you aren’t infallible. And I think a situation where many small startups are being founded, even if most will be doomed, is what a functional startup ecosystem looks like! We don’t want people working on obviously bad ideas, but I naively expect the process of startup ideation and experimentation, aided by VC money, to yield good mech interp directions.
It's very difficult to come up with AI safety startup ideas that are VC-fundable. This seems like a recipe for coming up with nice-sounding but ultimately useless ideas, or for wasting a lot of effort on things that look good to VCs but don't advance AI safety in any way.
Maybe so! I don’t think Eric Ho’s ideas are terrible and I’ve seen for-profit AI safety startups that I like (e.g., Goodfire) and that I don’t like (e.g., Softmax, probably).
I disagree with this frame. Founders should deeply understand the area they are founding an organization to deal with. It’s not enough to be “good at founding”.
I completely agree with you! Where did you think I implied the opposite?
My bad, I read you as disagreeing with Neel’s point that it’s good to gain experience in the field or otherwise become very competent at the type of thing your org is tackling before founding an AI safety org.
That is, I read “I think that founding, like research, is best learned by doing” as “go straight into founding and learn as you go along”.
No worries! I think research startups should be founded by strong researchers. But there are lots of potentially impactful startups (field-building, advocacy, product, etc.) that don’t require founders with research skills, and these might be best served by learning on the job?
I think those other types of startups also benefit from expertise and a deep understanding of the relevant topics (for example, for advocacy: what are you advocating for and why, and how well do you understand the surrounding arguments and thinking?). You don't want someone who doesn't understand the "field" working on "field-building".
You're probably right that the best startups come from people with deep experience in the relevant area, but plenty of profitable startups get founded by kids straight out of college; the risk/reward tradeoff is probably different in tech. I think the best AI safety field-building startups were founded or scaled by people with field-building experience (e.g., my experience with EA UQ, Dewi's experience with EA Cambridge, Agus' experience with CEA, etc.), but the bar might be surprisingly low.