I’m personally skeptical that this work is better-optimized for improving AI capabilities than other work being done in industry. More generally, I’m skeptical of the view that work done by the rationalist/EA/alignment crowd Pareto-dominates the other work going on—that is, that it’s significantly better for both alignment and capabilities than standard work, such that others are simply making a mistake by not working on it, regardless of their goals or how much they care about alignment. This could occasionally be the case, but I wouldn’t bet on it being a large effect. In general, I expect work optimized to help with alignment to be worse on average at pushing forward capabilities, and vice versa.