I thought, “AI safety conscious people will tend to give up too easily while trying to conceive of a net-positive alignment startup.”
This seems like a fairly important premise for your position, but I don’t think it’s true. Many safety-conscious people have started for-profit companies. As far as I can tell, every single one of those companies has been net negative. Safety-conscious people are starting too many companies, not too few.
I’m not confident that there are no net-positive AI startup ideas. But I’m confident that the median randomly chosen idea that someone thinks is net positive is actually net negative.
I think the statement in the parent comment is too general. What I should have said is that every generalist frontier AI company has been net negative. Narrow AI companies that provide useful services and have ~zero chance of accelerating AGI are probably net positive.
Got it. I was going to mention that Haize Labs and Gray Swan AI seem to be doing great work in improving jailbreak robustness.