I strongly agree with the message in this post, but think the title is misleading. When I read it, it seemed to imply that alignment is distinct from near-term alignment concerns, while after having read it, it’s specifically about how AI is used in the near-term. A title like “AI Alignment is distinct from how it is used in the near-term” would feel better to me.
I’m concerned about this, because I think the long-term vs near-term safety distinction is somewhat overrated, and I really wish these communities would collaborate more and focus on their common ground! But the distinction is a common viewpoint, and it is what this title pattern-matched to.
(Partially inspired by Stephen Casper’s post)
I also interpreted it this way and was confused for a while. I think your suggested title is clearer, Neel.