Combine the fact that power corrupts your world models with the typical startup founder's hunger for power and the fact that AI safety is a hot topic, and you get a lot of well-meaning people doing things that will turn out net-negative. Personally, given some of what I've seen in the space, I'm not sure the VC model even makes sense for AI safety startups.
Speaking from personal experience, it's easy to skimp on operational infrastructure like a value-aligned board or a properly designed incentive scheme. You have no time, so you start prototyping a product instead, and that creates a path dependence: if you succeed, you suddenly have even less time. The incentives change, and the culture changes with them. You start hiring people, things become more capabilities-focused, and voilà, you're now running a capabilities/AI-safety startup and it's unclear which one it is.
So get a good board, and don't commit to anything unless you have it in contract form (or similar) that the underlying company will be at least a PBC, if not something more restrictive. The main failure mode I've seen: if your co-founder(s) are cagey about this, move on and find new people, at least if you care about safety.