I had a dream about an LLM that had a sufficiently powerful predictive model of me that it was able to accurately prompt itself using my own line of thinking before I could verbalize it. The self-generated prompts even factored in my surprise at the situation.
When I woke up, I wondered whether this made sense. After all, the L0 term in the Chinchilla scaling law (the irreducible loss that remains no matter how many parameters or training tokens you add) implies a baseline unpredictability in language, which tracks with our warm wetware having some inherent entropy.
I posit that L0 is, on average, far lower for the hypothetical corpus of an individual's thoughts and writing than it is for internet text at large. If so, predicting someone's stream of thought to an astonishing degree of accuracy may be within the realm of possibility, perhaps by using stylometric clues to locate them at some place in mind-space.
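To make the intuition concrete, here is a minimal sketch of the Chinchilla parametric loss. The fitted constants are the ones I recall from Hoffmann et al. (2022), where the irreducible term is written E; it plays the role of the L0 floor above, and no increase in parameters N or training tokens D gets the loss below it.

```python
def chinchilla_loss(N: float, D: float,
                    E: float = 1.69, A: float = 406.4, B: float = 410.7,
                    alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted pre-training loss L(N, D) = E + A / N**alpha + B / D**beta.

    E is the irreducible loss (the L0 floor); the other two terms shrink
    as model size N and data size D grow, but never reach zero.
    """
    return E + A / N**alpha + B / D**beta

# Scaling up only ever approaches E from above, never crosses it:
for scale in (1e9, 1e11, 1e13):
    print(chinchilla_loss(scale, 20 * scale))
```

A per-person corpus with a lower effective L0 would simply shift that floor down, which is the whole bet of the argument: the residual entropy of one mind may be much smaller than the residual entropy of everyone's text mixed together.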