confused claims that treat (base) GPT3 and other generative models as traditional rational agents
I’m pretty surprised to hear that anyone made such claims in the first place. Do you have examples of this?
I think this mainly comes up in person with people who've just read the intro AI Safety materials, but one example on LW is "What exactly is GPT-3's base objective?".