Startups often pivot away from their initial idea when they realize that it won’t make money.
AI safety startups need to not only come up with an idea that makes money AND helps AI safety, but also ensure that the safety focus survives all future pivots.
[Crossposted from Twitter]
If you combine the fact that power corrupts your world model with the typical startup founder already being power-hungry, and add that AI safety is a hot topic, you get a bunch of well-meaning people doing things that will turn out net-negative. I’m personally not sure the VC model even makes sense for AI safety startups, given some of the things I’ve seen in the space.
Speaking from personal experience, I found it’s easy to skimp on operational infrastructure like a value-aligned board or a proper incentive scheme. You have no time, so you start prototyping a product instead. But that creates path dependence: if you succeed, you suddenly have even less time. The culture changes because the incentives are now different. You start hiring people, things become more capability-focused, and voilà, you’re now running a capabilities/AI safety startup and it’s unclear which one it is.
So get a good board, and don’t commit until you have it in contract form (or similar) that the underlying company will be at least a PBC, if not something even more binding. The main problem I’ve seen here: if your co-founder(s) are cagey about this, move on to new people, at least if you care about safety.
I think what you’re saying is that they need to be aligned.
The best way to start an AI safety startup is to get enough high-status credentials and track record that you can tell your investors to go fuck themselves if they ever ask you to make revenue. Only half-joking. Most AI research (not product) companies have no revenue today, or are trading at an insane P/S multiple.
Silicon Valley episode: No revenue