Relevant: Alignment as a Bottleneck to Usefulness of GPT-3
Between alignment and capabilities, which is the main bottleneck to getting value out of GPT-like models, both in the short term and the long(er) term?