Even if AI safety founders should be risk averse, I think we should do better at supporting the relatively few competent founder-types who are deeply interested in AI safety.
I suspect that we disagree significantly on the potential downside risk of most AI safety startups. I think it’s relatively hard for them to have a significant negative impact, particularly one that outweighs the expected benefits, given how much optimization pressure is already being applied to advancing AI capabilities across the economy. Creating a new frontier AI company (e.g., one at Mistral's scale) or a toxic advocacy org would be notable exceptions. Maybe Mechanize and Calaveras are exceptions too?
Note that Anthropic, at least, has a hard time finding talent that is also mission-aligned, which it prefers, particularly for its safety teams.
Cheers, Cleo!