Could you clarify what you mean by “the primary point” here? As in: the primary actual effect? Or the primary intended effect? From whose perspective?
I think it’s the primary reason why OpenAI leadership cares about InstructGPT and is willing to dedicate substantial personnel and financial resources to it. I expect that when OpenAI leadership weighs tradeoffs between different types of training, the primary question is commercial viability, not safety.
Similarly, if InstructGPT hurt commercial viability, I expect it would not get deployed (individual researchers would likely still be able to work on it, though they would be unlikely to be able to hire others to work on it, or to get substantial financial resources to scale it).