I definitely think marginal founders should focus on low-hanging fruit for impact. Do you have a list of potential startup ideas you like?
I have a different opinion about the utility of red-teaming pitches/ToCs; based on experience, I think this can help spot blind spots in the ecosystem! I also think many AI safety founders, funders, etc. are walking around with a long list of things they want someone to build; I have one, at least, and I’ve read a few.
I’m also not so sure that another evals or auditing company would be bad. There are only 3-4 decent-sized AI safety evals orgs! That’s a small number of people to analyze large, ever-changing models with vast threat surfaces. There’s plenty of room for differentiation and specialization (e.g., biorisk, cyber-risk, AI control evals, AI elicitation evals, human manipulation risk, bio R&D capabilities, AI coordination risk, etc.).
Maybe this is irrelevant, but I’d be surprised if a tech founder were deterred from founding a startup just because a similar startup already exists, given high demand. In some cases, I might be concerned (e.g., regulatory capture of token government auditors), but I’m not concerned by a doubling of Apollo, Goodfire, METR, Transluce, MATS, etc. Competition can be good! Maybe not as good as filling a gap, but it doesn’t seem net harmful to have more orgs working on the same problem; there’s plenty of funding, space to differentiate, and problems to work on!