Saying ChatGPT is “lying” is an anthropomorphism— unless you think it’s conscious?
The issue is instantly muddied when using terms like “lying” or “bullshitting”[1], which imply levels of intelligence simply not in existence yet. Not even with models that were produced literally today. Unless my prior experiences and the history of robotics have somehow been disconnected from the timeline I’m inhabiting. Not impossible. Who can say. Maybe someone who knows me, but even then… it’s questionable. :)
I get the idea that “Real Soon Now, we will have those levels!” but we don’t, and using that language to refer to what we do have, which is not that, makes the communication harder— or less specific/accurate if you will— which is, funnily enough, sorta what you are talking about! NLP control of robots is neat, and I get why we want the understanding to be real clear, but neither of the links you shared to the latest and greatest implies we need to worry about “lying” yet. Accuracy? Yes, 100%.
If by “truth” (as opposed to lies) you mean something more like “accuracy” or “confidence”, you can instruct ChatGPT to also give its confidence level when it replies. Some have found that to be helpful.
If you think “truth” is some binary thing, I’m not so sure that’s the case once you get into even the mildest of complexities[2]. “It depends” is really the only bulletproof answer.
For what it’s worth, when there are, let’s call them binary truths, there is some recent-ish work[3] in having the response verified automatically by ensuring that the opposite of the answer is false, as it were.
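To illustrate the consistency idea (this is a toy sketch with made-up scores, not the cited paper’s actual method): if a model assigns a “probability true” to a claim and to its negation, the two should roughly sum to one; pairs that don’t can be flagged.

```python
def consistent(p_claim: float, p_negation: float, tol: float = 0.1) -> bool:
    """Negation should flip the probability: p_claim + p_negation ≈ 1."""
    return abs((p_claim + p_negation) - 1.0) <= tol

# Hypothetical scores for "Paris is in France" / "Paris is not in France":
print(consistent(0.97, 0.02))  # True: the scores are complementary
# A model that rates both a claim and its negation as likely fails the check:
print(consistent(0.9, 0.7))    # False
```

Real systems would get those scores from the model itself rather than by hand, but the check is the same shape.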
If a model rarely has literally “no idea”, then what would you expect? What’s the threshold for “knowing” something? Tuning responses is one of the hard things to do, but as I mentioned before, you can peer into some of this “thought process”, if you will[4], literally by just asking it to include that information in the response.
Which is bloody amazing! I’m not trying to downplay what we (the royal we) have already achieved. Mainly it would be good if we were all on the same page, as it were, at least as much as is possible (some folks think True Agreement is actually impossible, but I think we can get close).
[1]: The nature of “Truth” is one of the Hard Questions for humans— much less our programs.
[2]: Don’t get me started on the limits of provability in formal axiomatic theories!
[3]: Discovering Latent Knowledge in Language Models Without Supervision
[4]: But please don’t[5]. ChatGPT is not “thinking” in the human sense.
[5]: “Won’t”? That’s the opposite of “will”, right? Grammar is hard (for me, if not some programs =])