Unexpected Conscious Entities
Epistemic status: This is an overly confident, unpolished draft that needs more research, but of a kind of research I’m not good at, I guess. I’d rather post it now and see if it resonates.
There could be many entities around us that are conscious without us noticing, because we don’t have a clear, testable theory of consciousness. How would we know if the Pando forest is conscious[1]? Consciousness is still a muddled concept with many competing theories that cover multiple layers of abstraction and involve multiple aspects or attributes an entity must have to be considered conscious. Here, I am interested in some aspects of consciousness that are common to many theories of consciousness[2] and want to illustrate how some unusual entities may actually possess them.
Perception: The ability to detect, interpret, and respond to stimuli from the environment.
Perceptual Processing and Attention: The filtering and focusing mechanism that determines which stimuli are prioritised for further processing.
Stable Awareness Patterns: Sustained cognitive focus and recognition of consistent patterns within perceived information.[3]
Self-Perception: The capacity to reflect on and recognise oneself as a distinct entity with unique characteristics.
Response to Events: The enactment of actions or changes in behavior in reaction to perceived events or stimuli.
(Episodic) Memory Formation: The process of encoding, storing, and retrieving personal experiences and events.
Intentionality and Goals: The conscious direction of thoughts and actions toward achieving desired outcomes.
Learning and Adaptation: The ability to incorporate new information and experiences to modify behaviors and understanding.
Communication: The exchange of information using verbal, written, or non-verbal methods to convey meaning.
Expressing Emotions: The articulation or display of feelings through various expressive means.
Theory of Mind: The capability to attribute mental states to oneself and others, understanding that others have perspectives and intentions distinct from one’s own.
It turns out that many aspects of consciousness can be found in entities that are very unlike human beings.
This post was originally inspired by a discussion about whether LLMs could be conscious. I see the behavioral analogs in LLM outputs, but had trouble reconciling the resulting claims with some elements that I intuitively felt were necessary for consciousness. I do think the structures that give rise to consciousness in humans have at least partial functional analogs in other agentic entities such as LLMs. But to be more thorough, let’s analyse the presence of a comprehensive set of attributes of consciousness in the following entities:
Persons: Humans possess cognitive abilities that allow for independent thought, perception, decision-making, and interaction with their environment and others.
Countries: Nations are complex socio-political entities with collective identities, governance systems, and mechanisms to interact both domestically and internationally.
Hofstadter’s Anthill (see page 164): An ant colony functions as a decentralized system where collaborative behavior emerges from simple interactions of individual ants of limited consciousness, resulting in collective decision-making and adaptation. This hypothetical entity is offered as an aid to intuition that distributed entities can be conscious without the constituent elements being conscious.
Large Language Models: LLMs that use vast datasets to interactively process and generate human-like text, performing tasks based on patterns and learned representations.
Attribute | Persons | Countries | Hofstadter’s Anthill | Large Language Models |
---|---|---|---|---|
Unit of processing | neurons | citizens | ants | artificial neurons
Perception | Sensory organs (eyes, ears, etc.) detect stimuli, processed by the nervous system. | Institutions gather information about events via intelligence and news services. | Ants collect environmental information via pheromones and interactions. | Textual input data as stimuli.
Perceptual Processing / Attention | Brain filters subconsciously, focuses attention on certain stimuli. | Reporters select news, movement formation, virality, decision-makers prioritize information. | Ants collectively focus on specific tasks through decentralized processing. | LLM attention mechanisms.
Stable Awareness Patterns | Neural networks support persistent awareness in the global workspace. | Issues gain national focus, becoming part of public discourse and policy making. | Stable patterns emerge from the collective behavior of ants. | Coherent responses through trained representations.
Self-Perception | Arises from higher-order brain functions. | National identity shaped by history and culture. Communicating the right to exist as a country. | (only for Hofstadter’s anthill): The colony has a name—Aunt Hillary—and many quirks. | Lacks self-awareness; processes input based on training data without self-reference.
Response to Events | Actions are executed based on decision-making. | Policy changes and diplomatic or military actions. | Collective actions respond to environmental changes. | Generates responses based on input data and learned patterns.
(Episodic) Memory Formation | Experiences encoded in the hippocampus for later retrieval. | Maintained through records and cultural narratives. | Colony retains memory through the distribution of tasks and pheromone trails. | No memory (unless RAGed in); past interactions may train new model updates.
Intentionality and Goals | Personal desires and decision-making processes. | Goals are set through political leadership or implicit in national interests. | Emergent goals arise from the ant colony’s collective survival needs. | Goals reflect training data. Agentic systems may reflect engineered goals.
Learning and Adaptation | Neural plasticity allows individuals to learn from positive or aversive states. | Countries implement policy changes and adapt to changing internal and external circumstances such as war or famine. | Pheromone trails collectively represent where food sources are available and which places are dangerous. | Learns patterns from large datasets, adapting responses accordingly (batch learning).
Communication | Verbal, written, and non-verbal forms. Leads to matching mental representations. | Diplomatic and media communications express policies and international expectations and are recognized by other countries. | (in Hofstadter’s anthill) Aunt Hillary communicates through ant trail patterns. | All text is communication with the user. The user understands patterns generated by the LLM.
Expressing Emotions | Emotions expressed through physical cues. | National sentiment conveyed through symbolic actions. Spontaneous public responses. | (in Hofstadter’s anthill) Aunt Hillary has quirks and needs psychological support. | Unclear
Theory of Mind | Ability to attribute mental states to others. | Diplomatic understanding of other countries’ intentions. | Unclear | Can represent and communicate the user’s intentions.
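To make the “LLM attention mechanisms” entry in the table more concrete, here is a minimal sketch of scaled dot-product attention (my own toy NumPy illustration, not the code of any particular model): the computed weights decide which inputs dominate the output, which is the filtering-and-prioritising role listed under Perceptual Processing / Attention.

```python
# Toy illustration of the "attention" analog: scaled dot-product attention
# over a handful of token embeddings. Names and sizes are arbitrary.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each query 'attends' to all keys; the resulting weights determine
    which inputs are prioritised in the output."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of queries to keys
    weights = softmax(scores, axis=-1)   # one probability distribution per query
    return weights @ V, weights          # weighted mixture of the values

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))         # 5 "stimuli", 8-dimensional embeddings
out, attn = scaled_dot_product_attention(tokens, tokens, tokens)
print(attn.round(2))                     # rows sum to 1: the filtering pattern
```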
Countries
If countries have self-consciousness, it is independent of the self-consciousness of their members. Countries frequently state their right to exist, and this is often recognized by other countries; this requires a self-model. The same follows from countries’ theory of mind: countries can model other countries as independent agents and negotiate on that level. Contrary to claims that a country’s intentions are purely a result of individuals’ goals, a country’s intentions can be independent of the self-consciousness of its citizens. There are many cases where the official agents (the voice) of a country expressed recognition of statehood that differed from the personal preferences of the individual speakers, including cases where these speakers were representatives of the state in question (examples are the Yugoslav breakup and the Spanish Civil War).
Gradual Group Consciousness
There is a corollary: Group consciousness must occur gradually in groups of increasing size.
There is a smooth transition from individuals to small groups to organisations and countries. At which point does consciousness appear? I think this continuum implies that there is no definite point. I think this is comparable to human consciousness development: At which point in the development of a child does consciousness start? Or at least: At which point does self-consciousness start?
ADDED 2025-05-07:
Two concrete worked examples[4].
1. Military Vessel in the Baltic Sea
A U.S. surveillance system detects a Russian naval vessel operating near NATO waters. The detection initiates a cascade of responses across U.S. defence and diplomatic systems: intelligence analysts prioritise the signal, military planners simulate intent, and public communications signal deterrence. The U.S., as a coherent state actor, interprets the vessel’s presence as a challenge to its strategic identity and alliance commitments. Its responses—military maneuvers, diplomatic messages, and strategic adjustments—are framed for peer entities (Russia, NATO), expressing its self-perceived role and expectations of others.
2. Wildfire in Los Angeles
A large wildfire devastates parts of Southern California, triggering not only a domestic emergency response but also international attention. California, depending on framing, interprets the disaster as part of its global climate identity. Communications emphasise leadership or failure, federal responses are symbolically amplified, and the event becomes embedded in long-term national and international memory. Messaging and actions are aimed at other nations, climate coalitions, and non-state actors, projecting values and expectations.
Attribute | Vessel in Baltic Sea | Wildfire in LA | Biological Analog |
---|---|---|---|
1. Perception | Surveillance systems detect vessel; awareness enters national security apparatus | Fire reported by satellites, media, and state agencies; enters federal/international view | Sensory receptors detect external stimuli |
2. Perceptual Processing / Attention | Vessel flagged as high-priority by intelligence, escalated by DoD | Fire flagged for scale and importance | Early filtering by thalamus and attentional gating |
3. Stable Awareness Patterns | Sustained discussion in defence circles, media, and diplomatic channels | Sustained attention by climate diplomats, federal agencies, and international media | Global neuronal workspace; working memory |
4. Self-Perception | U.S. models itself as NATO defender and maritime power, interprets presence as a sovereignty challenge | California sees itself as a climate leader, relates the fire to its climate identity | Self-model in prefrontal cortex and DMN regions |
5. Response to Events | Sends a destroyer or issues a diplomatic protest, conducts naval maneuvers | FEMA deployment, international coordination | Coordinated motor output (muscle movement, speech)
6. (Episodic) Memory Formation | Logged in intelligence reports, public memory | Institutionalised in climate policy, post-disaster reports, public memory | Encoding in the hippocampus and medial temporal lobe |
7. Intentionality / Goals | Aligns response with strategic objectives | Builds case for mitigation, international funding, or policy reforms | Goal representation in prefrontal cortex |
8. Learning / Adaptation | May adjust surveillance or military presence in region | Updates fire prevention policy, zoning laws, and international climate narratives | Synaptic plasticity; learning from feedback |
9. Communication | Pentagon and State Department communicate to Russia/NATO, “non-verbal” via maneuvers and statements | Statements at COP summits, global press releases, “non-verbal” via emergency declarations | Language generation (Broca’s/Wernicke’s areas), facial expression, body language, limbic tone |
10. Expressing Emotions | Public and official rhetoric conveys outrage, resolve, or concern; tone shapes diplomatic posture | Tragedy and moral urgency expressed in symbolic acts, speeches, global climate appeals | Affective tone via limbic system, especially amygdala |
11. Theory of Mind | Simulates Russia’s strategic intent, possible red lines, perception of NATO cohesion | Anticipates foreign perceptions of governance, resilience, and policy credibility | Representing others’ beliefs and intentions (TPJ, mPFC) |
- ^
I don’t know if the Pando forest is conscious. Some of the listed attributes are just not known and not easily testable for such entities. But I think it is a good example of a non-trivial entity that might be conscious.
- ^
I compiled attributes from different theories and also asked ChatGPT for additional suggestions.
- ^
- ^
The scenarios are mine, but the summary and formatting are largely by ChatGPT. I have reviewed and adapted most sentences.
Good exploration, but I don’t agree with some of the conclusions. I love Hofstadter, but remember his anthill is fiction, and one shouldn’t use it as evidence for anything. I don’t think an anthill or a country fits my model of “conscious entity”. Though I suspect my main sticking point is “entity” rather than “conscious”. There’s a missing element of coherence that I think matters quite a bit. LLMs are missing coherence-over-time and coherence-across-executions. Countries are missing coherence-between-subsets.
When you say “countries do X”, it’s always the case that actually, some numbers of individual humans do it, and other numbers either don’t participate or don’t stop it. Countries do NOT state their right to exist. Humans state their right to be collectively recognized as a country. The preferences of the individual speakers may differ, but the actions don’t.
This is quite a different thing than noting “a human doesn’t say something, their chest, throat, and mouth muscles say the thing”. There are almost no muscle groups that act coherently without a brain to help coordinate.
Yeah, I’m not happy that the anthill is fictional. I considered putting it into a footnote, but then I would have to put all the table entries there too, the comparison in a table would be lost, and I think the anthill helps drive the intuition that the elements of computation could be distributed.
I agree with that. In fact, it is one reason I don’t see LLMs currently as conscious. An earlier version of this post had a combined system of an LLM and a human interacting with it as another example, but I felt that was too difficult and not core to the thesis. A human, by continuously interacting, can provide the coherence-over-time. Stable awareness patterns and self-perception might still be missing or weak, though.
Yes—and I think that’s the most fragile part of the analogy. There is coherence, but it’s definitely not as robust as in a nervous system. Still, we do see subsets (e.g., ministries, branches of government, political blocs) coordinating through shared norms, procedures, and mutual modelling. They’re noisy, error-prone, often adversarial, but they’re not completely incoherent. At times, especially under external threat or during major events, countries do behave in surprisingly unified ways. These aren’t mere aggregations of individual actions; they require and ensure a degree of coordination that maintains a whole over time.
If we take that critique seriously, we have to stop saying that corporations launch products, or that teams win matches. There’s always an underlying substrate of individual action. But we regularly model higher-level entities as agents when doing so improves prediction or explanation. From a functionalist perspective, if “Country X believes Y” helps us model diplomatic behaviour more accurately than tracking all individuals, that’s meaningful—even if we know that it is an abstraction.
Yes, but I think this is too strict a reading. The same could be said about any distributed system. When a program outputs “Hello world,” it’s really just electrons doing things. When a person speaks, it’s really just muscles and neural impulses. The distinction is in the coordination and interpretation. When a state department issues a formal diplomatic communication, it’s acting as the voice of an institution that maintains internal models, makes predictions, and responds to feedback. That is, in all the functional ways that matter, it is the country speaking.
Exactly, and we can extend the analogy to institutions that are the coordinating organs of a country’s body. They can fail, conflict, or contradict each other, which is comparable to a neurological disorder. But that doesn’t mean there is no coherence. It just means the coherence is partial and susceptible to breakdown. One could say that is also true of human consciousness in pathological states.
So yes, I take the point that coherence is crucial. But I don’t think the lack of perfect coherence disqualifies countries from being modelled as agents or even from being on some continuum toward consciousness. The better question might be: Under what conditions does it become useful or predictive to model a system as being conscious?
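To make that last question a bit more concrete, here is a toy sketch (the data, situations, and action names are entirely made up): the country-level abstraction is “useful” exactly when it predicts the next action better than aggregating the stated preferences of individuals.

```python
# Toy comparison of two modelling levels for a country's behaviour.
# All data here is invented for illustration only.
from collections import Counter

# Hypothetical log of past (situation, country_action) pairs.
history = [
    ("border_incident", "issue_protest"),
    ("border_incident", "issue_protest"),
    ("trade_dispute",   "impose_tariff"),
    ("border_incident", "issue_protest"),
]

def country_level_model(situation):
    """Treat the country as one agent: predict its most common past response."""
    responses = [a for s, a in history if s == situation]
    return Counter(responses).most_common(1)[0][0] if responses else None

def individual_level_model(situation, citizen_preferences):
    """Aggregate individuals: predict whatever most citizens say they prefer
    (the situation is ignored; the poll is all we have)."""
    return Counter(citizen_preferences).most_common(1)[0][0]

# Hypothetical poll: most citizens individually prefer de-escalation...
citizens = ["do_nothing"] * 60 + ["issue_protest"] * 40

observed = "issue_protest"  # ...but the state apparatus protests anyway.
print(country_level_model("border_incident") == observed)               # True
print(individual_level_model("border_incident", citizens) == observed)  # False
```

In this toy case the country-level model predicts the observed action and the individual-level aggregate does not, which is the functionalist sense in which the abstraction earns its keep.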
I don’t mean to require perfect coherence—humans don’t have it, and if that’s required, NOTHING is conscious (note: this is a defensible position, but not particularly interesting to me). There’s enough difference between humans and the other examples that I’m not convinced by the analogies I’ve seen, and I believe this is the one important dimension of difference, but since this is all abstraction and intuition anyway, others are free to disagree.
In humans, there’s a lack of viability of independent subsets. It’s almost certainly the case that a partial brain still has some consciousness, and likely some differences from the whole being, but it’s not very divisible into truly independent segments. This is a kind of coherence that I don’t see in the other examples. Organs are not a good analogy for constituents or sub-organizations of a country, as organs DON’T have volition and world-models.
No, that’s an isolated demand for rigor. We can definitely make different entity-analogies for different questions. When it matters, as it sometimes does, we can break the simplification and prosecute the officers or employees who are ACTUALLY responsible for a corporate action.
This is a GREAT framing for the question. Let’s not talk about “consciousness” as if it were a useful label that we agree on. Taboo the word, and the holistic concept, and let’s ask “when is it more useful to model a country as an entity that thinks and plans, as opposed to modeling it as a collection of groups of humans, who individually influence each other in thinking and planning”?
I still owe you a response to this. I’m esp. thinking about predictions.
The countries are an interesting example. Yes, they can model other countries as agents and negotiate with them. But they can also negotiate with individual people in other countries (e.g. recruit them as spies). So we potentially have two levels of consciousness here, with the ability to communicate across layers.
As an analogy, imagine being able to communicate with other people’s cells. For example, you can’t defeat your rival, but you can convince some of his cells to give him cancer.
I guess something similar already happens, except instead of on the level of cells, it is on the level of sub-agents. For example, by being attractive, you kinda subvert a part of the other person’s brain to argue in your favor. You can subvert the other person even more strongly by becoming their drug dealer.
Sure, there are differences between countries and people. Not everything that cells can do in a person can be done correspondingly by people in a country, and vice versa. The individual units of a country—people—are much more mobile than the cells in a person. They can even change hosts. I think this is related to the coherence that Dogan mentioned. The coherence of countries is lower than that of persons. On the other hand, countries exist for longer (but “think” slower).