I think it’s clear that when you read a book the meaning is a product of both you and the book, because if instead you read a different book you’d arrive at a different meaning, and different people reading the same book get to-some-extent-similar meanings from it. So “the meaning comes from you” / “the meaning comes from me” is too simple. It seems to me that generally you get more-similar meanings when you keep the book the same and change the reader than when you keep the reader the same and change the book, though of course it depends on how big a change you make in either case, so I would say more of the meaning is in the text than in the reader. (For the avoidance of doubt: no, I do not believe that there’s some literal meaning-stuff that we could distil from books and readers and measure. “In” there is a metaphor. Obviously.)
I agree that there are many questions to which we don’t have answers, and that more specific and concrete questions may be more illuminating than very broad and vague ones like “does the text emitted by an LLM have meaning?”.
I don’t know how well the GPT/Wolfram|Alpha integration works (I seem to remember reading somewhere that it’s very flaky, but maybe they’ve made it better). But to whatever extent it successfully gives users information that’s correct because Alpha’s databases were filled with data derived from how the world actually is, and its algorithms were designed to match how mathematics actually works, that’s an indication that in some useful sense some kind of meaning is being (yes, metaphorically) transmitted.
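To make that concrete, here is a minimal Python sketch of the delegation pattern I have in mind. To be clear: this is not OpenAI’s actual plugin protocol, and the routing heuristic and placeholder app ID are my own illustrative assumptions; only Wolfram|Alpha’s Short Answers endpoint is real. The point is that whatever correctness reaches the user flows from Alpha’s side of the call, not from the language model’s statistics.

    import urllib.parse
    import urllib.request

    WOLFRAM_APPID = "YOUR_APPID"  # hypothetical placeholder; a real key is required

    def ask_wolfram(query: str) -> str:
        """Ask Wolfram|Alpha's Short Answers API for a plain-text result."""
        url = ("https://api.wolframalpha.com/v1/result?"
               + urllib.parse.urlencode({"appid": WOLFRAM_APPID, "i": query}))
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8")

    def answer(question: str) -> str:
        # In the real integration the model itself decides when to delegate;
        # this crude digit/keyword heuristic merely stands in for that decision.
        if any(ch.isdigit() for ch in question) or "integral" in question.lower():
            # Any correctness here comes from Alpha's curated data and
            # algorithms, not from the language model's training distribution.
            return ask_wolfram(question)
        return "(ordinary LLM-generated text would go here)"

    print(answer("What is 2^100?"))

If this sketch captures the shape of the integration, then the transmitted “meaning” I’m gesturing at rides on the fact that Alpha’s answer is causally anchored in how the world and mathematics actually are.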
...the question of whether or not the language produced by LLMs is meaningful is up to us. Do you trust it? Do WE trust it? Why or why not?
That’s the position I’m considering. If you understand “WE” to mean society as a whole, then the answer is that the question is under discussion and is undetermined. But some individuals do seem to trust the text from certain LLMs, at least under certain circumstances. For the most part I trust the output of ChatGPT and of GPT-4, though I have considerably less experience with the latter. I know that both systems make mistakes of various kinds, including what is called “hallucination.” It’s not clear to me that that differentiates them from ordinary humans, who also make mistakes and often say things without foundation in reality.
I’ve just posted something at my home blog, New Savanna, in which I consider the idea that