However, to the extent that people want to stop people from doing empirical safety research on ML systems as they actually are in practice
Does anyone want to stop this? I think some people just contest the usefulness of improving RLHF / RLAIF / constitutional AI as safety research and also think that it has capabilities/profit externalities. E.g. see discussion here.
(I personally think this research is probably net positive, but typically not very important to advance at current margins from an altruistic perspective.)
Yes, there are a number of posts to that effect.

That said, “there exist such posts” is not really why I wrote this. The idea I really want to push back on is one that I have heard several times in IRL conversations, though I don’t know if I’ve ever seen it online. It goes like:
There are two cars in a race. One is alignment, and one is capabilities. If the capabilities car hits the finish line first, we all die, and if the alignment car hits the finish line first, everything is good forever. Currently the capabilities car is winning. Some things, like RLHF and mechanistic interpretability research, speed up both cars. Speeding up both cars brings us closer to death, so those types of research are bad and we should focus on the types of research that only help alignment, like agent foundations. Also we should ensure that nobody else can do AI capabilities research.
Maybe almost nobody holds that set of beliefs! I am noticing now that my list of articles arguing that prosaic alignment strategies are harmful in expectation is by a pretty short list of authors.