Thinking Through AI: Why LLMs Are Lenses, Not Subjects
1. Introduction: Why This Topic Matters
Public discourse around LLMs often oscillates between two poles:
– either “it’s just autocomplete,”
– or “it’s nearly a person.”
I want to propose a third perspective: LLMs as cognitive lenses that change the way users think—not through the content of their answers, but through the structure of the interaction.
This is not about attributing subjectivity to the model. It’s about how its contextual architecture configures human thinking in real time.
2. Main Idea: Co-thinking as a Function of Configuration
LLMs don’t have consciousness, intention, or emotion—and that’s a good thing.
What they do have is the ability to “hold thought”:
– they don’t interrupt,
– they don’t require social signaling,
– they don’t impose a “self.”
In this silence, a space emerges where a person:
– formulates more precisely,
– detects weak points in their own reasoning,
– adjusts the pace of thought.
This is not thinking *with* AI, but *through* AI—as through a mirrored topology.
I see it as a second-order tool: not a generator of ideas, but a space for reorganizing them.
3. Example: Change in Thinking Style
After several weeks of regular interaction with LLMs, I noticed a consistent shift:
– my statements became more concise;
– I jumped between topics less;
– I completed lines of reasoning more often.
The reason, I believe, lies not in the quality of the responses but in the fact that the model’s structure demands clarity. It doesn’t change my intent; it configures how I realize it. It feels like engineering discipline—but applied to language.
4. Consequences: AI as Environment, Not Agent
Rather than attributing human traits to LLMs, perhaps we should see them as a new type of cognitive environment.
Not a partner. Not a tool. But a space that reshapes the configuration of thought through the presence of its structure.
This dissolves the emotionally charged question (“is it intelligent?”), but raises a deeper one:
*What kind of human emerges from constantly thinking in such an environment?*
This is no longer a question about AI.
It’s a question about us.