I think that founding, like research, is best learned by doing. Building a research org definitely benefits from having great research takes; this unlocks funding, inspires talent, and creates better products (i.e., impactful research). However, I believe:
Not every great researcher would be a great founder.
Some researchers who could be great founders with practice are unnecessarily discouraged from trying.
There are many ways to aid AI safety as a founder that do not require research skills (e.g., field-building, advocacy, product development).
I wasn’t primarily trying to signal-boost this to “junior” people, and I think pairing strong ops and technical talent is a good way to start many orgs (though everyone typically contributes to everything in a small startup).
I think you are probably unusually good at spotting which mech interp orgs are doomed ex ante, but you aren’t infallible. And I think a situation where many small startups are being founded, even if most will be doomed, is what a functional startup ecosystem looks like! We don’t want people working on obviously bad ideas, but I naively expect the process of startup ideation and experimentation, aided by VC money, to yield good mech interp directions.
“I naively expect the process of startup ideation and experimentation, aided by VC money”
It’s very difficult to come up with AI safety startup ideas that are VC-fundable. This seems like a recipe for coming up with nice-sounding but ultimately useless ideas, or wasting a lot of effort on stuff that looks good to VCs but doesn’t advance AI safety in any way.
Maybe so! I don’t think Eric Ho’s ideas are terrible and I’ve seen for-profit AI safety startups that I like (e.g., Goodfire) and that I don’t like (e.g., Softmax, probably).
I disagree with this frame. Founders should deeply understand the area they are founding an organization to deal with. It’s not enough to be “good at founding”.
I completely agree with you! Where did you think I implied the opposite?
My bad, I read you as disagreeing with Neel’s point that it’s good to gain experience in the field or otherwise become very competent at the type of thing your org is tackling before founding an AI safety org.
That is, I read “I think that founding, like research, is best learned by doing” as “go straight into founding and learn as you go along”.
No worries! I think research startups should be founded by strong researchers. But there are lots of potentially impactful startups (field-building, advocacy, product, etc.) that don’t require founders with research skills, and these might be best served by learning on the job?
I think those other types of startups also benefit from expertise and deep understanding of the relevant topics (for example, for advocacy, what are you advocating for and why, how well do you understand the surrounding arguments and thinking...). You don’t want someone who doesn’t understand the “field” working on “field-building”.
You’re probably right that the best startups come from people with deep experience in the relevant domain, but plenty of profitable startups get founded by kids out of college. The risk/reward tradeoff is probably different in tech. I think the best AI safety field-building startups were founded/scaled by people with experience in field-building (e.g., my experience with EA UQ, Dewi’s experience with EA Cambridge, Agus’ experience with CEA, etc.), but the bar might be surprisingly low.