(No, “you need huge profits to solve alignment” isn’t a good excuse — we had nowhere near exhausted the alignment research that can be done without huge profits.)
This seems insufficiently argued: the existence of some alignment research that can be done without huge profits does not establish that huge profits are unnecessary to solve alignment (particularly once you consider things like how long timelines would be even absent your intervention).
To be clear, I agree that OpenAI is doing evil by creating AI hype.
But if you want huge profits in order to fund alignment work, and you are smart and capable enough to start a successful big AI lab, you are probably also smart and capable enough to make a lot of money some other way, without the side effect of increasing P(doom).