We already filter a lot of comments by well-meaning internet citizens who are simply confused about what LessWrong is about and write only mostly coherent sentences. So I think we won't have much of a problem moderating this; our existing processes handle it pretty well, at least for this generation of GPT-3 without finetuning (I can imagine finetuned versions of GPT-3 being good enough to cause problems even for us). Karma also helps a lot.
I can imagine being concerned about the next generation of GPT though.
OpenAI seems to do enough due diligence that GPT-3 itself is not a concern. If, however, Yandex, Tencent, or Baidu were to create a similar model, things would look different; so the concern isn't so much GPT-3 itself.