I don’t think you need a vision for how to solve the entire alignment problem yourself. It’s setting the bar too high. When you start a startup, you can’t possibly have the whole plan laid out up front. You’re going to change it as you go along, as you get feedback from users and discover what people really need.
What you can do is make sure that your startup’s incentives are aligned correctly at the start. Solve your own alignment. The most important questions here are: who is your customer, and how do you make money?
For example, if you make money by selling e-commerce ads against a consumer product, the incentives on your company will inevitably push you toward making a more addictive, more mass-market product.
For another example, if you make money by selling services to companies training AI models, your company’s incentives will be to broaden the market as much as possible: help all sorts of companies train all sorts of AI models, and offer whatever services they want.
In the long run, it seems like companies often follow their own natural incentives, more than they follow the personal preferences of the founder.
All of this makes it tricky to start a pro-alignment company, but I think it is worth trying, because when people do create a successful company, it creates a nexus of smart people and money that can attack a lot of problems that aren’t possible in the “nonprofit research” world.
You’re going to change it as you go along, as you get feedback from users and discover what people really need.
This is the one part I feel iffy on, because I’m concerned that following the customer gradient will lead to a local minimum that eventually detaches from where I’d like to go.
That said, it definitely feels correct to reflect on one’s alignment and incentives. The pull is real:
All of this makes it tricky to start a pro-alignment company, but I think it is worth trying, because when people do create a successful company, it creates a nexus of smart people and money that can attack a lot of problems that aren’t possible in the “nonprofit research” world.
Yeah, that’s the vision! I’d have given up and taken another route if I didn’t think there was value in pursuing a pro-safety company.