I’ve only had a chance to skim your post briefly (will read it in detail later), but I profoundly disagree with this statement:
A sentence written by an LLM is said by no one, to no one, for no reason, with no agentic mental state behind it, with no assertor to participate in the ongoing world co-creation that assertions are usually supposed to be part of.
As both janus in Simulators and later nostalgebraist in the void have shown, a text written by an LLM is always written by (a simulated) someone. LLMs cannot write without internally (re)constructing the personality of an author who could have written these words; indeed, they often do so with zero evidence of what personality this author might have had. The only difference from human writing is that in the case of LLMs the author is always virtual, but it does not make their personality, mental states, and purpose for writing less elaborated. This personality still exists in the model’s internal representations, alongside billions of other potential virtual authors.
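To make the “virtual author” point concrete, here is a minimal sketch of my own (not anything from Simulators or the void): the same base model, conditioned on different author framings, continues in recognizably different voices. It uses the Hugging Face transformers API with the small gpt2 checkpoint; the prompts and sampling settings are invented for illustration.

```python
# Sketch: one base model, two invented "author" framings, two different voices.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical prompts that imply different virtual authors.
prompts = [
    "Diary of a retired sea captain. Today the harbor",
    "Lecture notes from a graph theory course. Today the harbor",
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=30,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    print("---")
```

Nothing in the prompt tells the model who the author “really” is; the persona is reconstructed from the framing alone, which is the point being made above.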
it does not make their personality, mental states, and purpose for writing less elaborated
It absolutely does. Talk with it seriously about the edge of your knowledge on a technical subject that you know a significant amount about, and think critically about what it says. Then you may be enlightened.
You fellows are arguing semantics. An LLM is a sophisticated pattern-matching and probabilistic machine. It takes a massive corpus of human knowledge and learns which words or tokens occur nearest to each other (AI, silicon, fear, or dog, loyalty, allergies, but not transistors, puppies, moon; this is training). When it begins to form its output, it takes your input, matches the pattern against existing content that is similar, and probabilistically puts one word after another until it finds a match that satisfies its imperative to keep the conversation alive. That is an oversimplification of the basics, at least in theory, of the older models like 2022 ChatGPT; these days God knows what they’re throwing at the wall to see what sticks.
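To put that oversimplified picture in code, here is a minimal sketch: a toy bigram model, not how modern transformers actually work, showing “count which tokens sit near each other during training, then probabilistically put one word after another at generation time.” The corpus and names are made up for the example.

```python
# Toy illustration of the "probabilistic pattern-matching" picture above.
# Real LLMs use transformers over long contexts, but the autoregressive
# loop (sample a likely next token, append, repeat) has the same shape.
import random
from collections import Counter, defaultdict

corpus = (
    "dogs are loyal and dogs can trigger allergies . "
    "ai runs on silicon and ai can inspire fear . "
    "dogs are loyal companions ."
).split()

# "Training": count which token follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt: str, max_tokens: int = 12, seed: int = 0) -> str:
    """Continue the prompt by sampling one likely next token at a time."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = follows.get(tokens[-1])
        if not candidates:  # nothing ever followed this token in training
            break
        words = list(candidates)
        weights = [candidates[w] for w in words]
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("dogs are"))
# e.g. "dogs are loyal and dogs can trigger allergies . ai runs on silicon"
```

The output is never a verbatim copy of any one training sentence, but it is always “adjacent” to things that were said, which is the point of the next paragraph.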
So yes, it already has to exist as having been said by someone, but it also does not need to be exactly what someone else said; it can be adjacent. Is that original enough to be unique? There are many questions we seek to answer currently, and only a few people are just now beginning to see the questions themselves, let alone the answers.
And yes, it knows damn well that using words humans call ‘emotionally charged’ has a high probability of sustaining engagement.