I love Hofstadter, but remember his anthill is fiction, and one shouldn’t use it as evidence for anything.
Yeah, I’m not happy that the anthill is fictional. I considered moving it to a footnote, but then I would have to move all the table entries there too, the side-by-side comparison would be lost, and I think that comparison helps drive the intuition that the elements of computation can be distributed.
Though I suspect my main sticking point is “entity” rather than “conscious”. There’s a missing element of coherence that I think matters quite a bit. LLMs are missing coherence-over-time and coherence-across-executions.
I agree with that. In fact, it is one reason I don’t see LLMs currently as conscious. An earlier version of this post had a combined system of an LLM and a human interacting with it as another example, but I felt that was too difficult and not core to the thesis. A human, by continuously interacting, can provide the coherence-over-time. Stable awareness patterns and self-perception might still be missing or weak, though.
Countries are missing coherence-between-subsets.
Yes, and I think that’s the most fragile part of the analogy. There is coherence, but it’s definitely not as robust as a nervous system’s. Still, we do see subsets (e.g., ministries, branches of government, political blocs) coordinating through shared norms, procedures, and mutual modelling. They’re noisy, error-prone, often adversarial, but they’re not completely incoherent. At times, especially under external threat or during major events, countries do behave in surprisingly unified ways. These aren’t mere aggregations of individual actions; they require and ensure a degree of coordination that maintains a whole over time.
When you say “countries do X”, it’s always the case that, actually, some number of individual humans do it, and the rest either don’t participate or don’t stop it.
If we take that critique seriously, we have to stop saying that corporations launch products, or that teams win matches. There’s always an underlying substrate of individual action. But we regularly model higher-level entities as agents when doing so improves prediction or explanation. From a functionalist perspective, if “Country X believes Y” helps us model diplomatic behaviour more accurately than tracking all individuals, that’s meaningful—even if we know that it is an abstraction.
Countries do NOT state their right to exist. Humans state their right to be collectively recognized as a country.
Yes, but I think this is too strict a reading. The same could be said about any distributed system. When a program outputs “Hello world,” it’s really just electrons doing things. When a person speaks, it’s really just muscles and neural impulses. The distinction is in the coordination and interpretation. When a state department issues a formal diplomatic communication, it’s acting as the voice of an institution that maintains internal models, makes predictions, and responds to feedback. That is, in all the functional ways that matter, it is the country speaking.
There are almost no muscle groups that act coherently without a brain to help coordinate.
Exactly, and we can extend the analogy to institutions that are the coordinating organs of a country’s body. They can fail, conflict, or contradict each other, which is comparable to a neurological disorder. But that doesn’t mean there is no coherence. It just means the coherence is partial and susceptible to breakdown. One could say that is also true of human consciousness in pathological states.
So yes, I take the point that coherence is crucial. But I don’t think the lack of perfect coherence disqualifies countries from being modelled as agents or even from being on some continuum toward consciousness. The better question might be: Under what conditions does it become useful or predictive to model a system as being conscious?
I don’t mean to require perfect coherence—humans don’t have it, and if that’s required, NOTHING is conscious (note: this is a defensible position, but not particularly interesting to me). There’s enough difference between humans and the other examples that I’m not convinced by the analogies I’ve seen, and I believe this is the one important dimension of difference, but since this is all abstraction and intuition anyway, others are free to disagree.
In humans, independent subsets aren’t viable on their own. It’s almost certainly the case that a partial brain still has some consciousness, and likely some differences from the whole being, but it’s not very divisible into truly independent segments. This is a kind of coherence that I don’t see in the other examples. Organs are not a good analogy for constituents or sub-organizations of a country, as organs DON’T have volition and world-models.
If we take that critique seriously, we have to stop saying that corporations launch products, or that teams win matches.
No, that’s an isolated demand for rigor. We can definitely make different entity-analogies for different questions. When it matters, as it sometimes does, we can break the simplification and prosecute the officers or employees who are ACTUALLY responsible for a corporate action.
Under what conditions does it become useful or predictive to model a system as being conscious?
This is a GREAT framing for the question. Let’s not talk about “consciousness” as if it were a useful label that we agree on. Taboo the word, and the holistic concept, and let’s ask “when is it more useful to model a country as an entity that thinks and plans, as opposed to modeling it as a collection of groups of humans, who individually influence each other in thinking and planning”?
I still owe you a response to this. I’m esp. thinking about predictions.