It’s not quite the same, because if you’re confused and you notice you’re confused, you can ask. “Is this in American or European date format?” For GPT-3 to do the same, you might need to give it some specific examples of resolving ambiguity this way, and it might only do so when imitating certain styles.
That doesn’t seem as good as a more built-in preference for noticing and wanting to resolve inconsistency. Choosing based on context is built in via attention, and choosing randomly is built in as part of the text generator.
It’s also worth noticing that the GPT-3 world is the corpus, and a web corpus is an inconsistent place.
It’s not quite the same, because if you’re confused and you notice you’re confused, you can ask.
You can if you notice, but most people never notice, and those who notice some confusion are still blissfully ignorant of the rest of their self-contradicting beliefs. And by most people I mean you, me, and everyone else. In fact, if someone pointed out a contradiction in something we hold dear, we would vehemently deny it and rationalize it to no end. And yet we consider ourselves to believe things. If anything, GPT-3’s beliefs are more belief-like than those of humans.
Yes, sometimes we don’t notice. We miss a lot. But there are also ordinary clarifications like “did I hear you correctly” and “what did you mean by that?” Noticing that you didn’t understand something isn’t rare. If we didn’t notice when something seems absurd, jokes wouldn’t work.