Kudos for a really interesting area of inquiry! You're investigating how language reveals what was happening in the mind that uttered it, and what that means for our relationship to LLM-generated text. Such text comes either from no mind at all or from a whole new kind of mind, depending on how you look at it, and it's interesting how that changes how language works and how we should engage with it.
Some parts of the article depend on which form of LLM utterance we're talking about. It's true, as the article states, that with an AI answer at the top of a Google search there is no way to ask follow-up questions. Each assertion is a one-off utterance, not necessarily connected to any larger conversation. (Though don't put anything past Google's engineering!)
There are other ways to use an LLM, though, in particular chat mode. In chat mode, a conversation thread accumulates both your statements and the LLM's, and the whole thread is fed back to the model on every turn. Later statements therefore reflect earlier ones, much like a dialog between two humans. If you used this mode in a courtroom, it would even be possible to cross-examine the AI and ask it follow-up questions.
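For concreteness, here is a minimal sketch of that accumulation. It uses the OpenAI Python client as one example of the pattern, but any chat API works the same way; the model name is arbitrary, and the sketch assumes an API key is configured.

```python
# Minimal sketch of "chat mode": the client keeps the entire conversation
# and resends it with every turn, so later answers can depend on earlier ones.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
messages = []      # the accumulated thread: every user and assistant turn

def ask(question: str) -> str:
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",     # arbitrary choice of chat model
        messages=messages,       # the model sees the entire history
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

# Later questions can refer back to earlier answers, because the full
# thread is part of every request:
ask("Who wrote Frankenstein?")
ask("What else did she write?")  # "she" resolves against the history
```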
Interestingly, an AI chat can be cloned, so someone who wanted to could develop the perfect line of questions to ask the AI. This is very different from interrogating a human, where you get only one shot at questioning the real person. That leads to something genuinely dangerous for a courtroom: you can practice asking questions of an AI until you get the answer you want, then delete all your practice attempts so no one ever sees them. You can even have a second AI drive the process, searching for ways to trick the first AI into saying what you want it to say.
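A rough sketch of what that rehearsal could look like, building on the same pattern as above. The names here (`fork`, `try_question`, `search_for_answer`) are hypothetical, and the naive substring test stands in for whatever the interrogator is really looking for.

```python
import copy
from openai import OpenAI

client = OpenAI()

def fork(thread: list[dict]) -> list[dict]:
    # A chat is just a list of messages, so "cloning" it is a deep copy.
    return copy.deepcopy(thread)

def try_question(thread: list[dict], question: str) -> str:
    # Ask a question on a private copy; the original thread is untouched.
    thread.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini", messages=thread
    ).choices[0].message.content
    thread.append({"role": "assistant", "content": reply})
    return reply

def search_for_answer(base: list[dict], candidates: list[str], wanted: str):
    # Rehearse different phrasings against clones of the same starting
    # point, keep the one transcript that says what we wanted, and let
    # every failed branch simply disappear.
    for question in candidates:
        trial = fork(base)
        if wanted.lower() in try_question(trial, question).lower():
            return trial  # the polished transcript, rehearsals discarded
    return None
```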
A similar thing happens on social media. Each of us sees only a minuscule fraction of everything people are saying to one another. The messages that do get through the filter are often interesting and convincing, but they've been cherry-picked to be exactly that. You shouldn't trust that process for anything you care about, and I suppose the same caution will apply to certain kinds of AI responses in the future.
Is this LLM-generated? My eyes glazed over in about 3 seconds.