whether consciousness is useful in predicting my behavior is a fact about you (as the predictor), not me (as the subject). and yet… i do feel conscious! so i don’t think it’s useful as a definition, here, unless we’re willing to swallow a relativist pill.
> build a perfect predictor of a human’s actions.
humans, llms, trees, rocks, certain 1d cellular automata, and—as yet—collatz sequences are all (seemingly) computationally irreducible. that is, there’s no way to make detailed predictions of their behavior except to instantiate and run them. so i find unpredictability to be necessary, but not sufficient, for consciousness.
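a toy illustration of that irreducibility (my sketch, not from the thread): for the collatz map nobody knows a closed-form shortcut, so the only general way to learn how long a trajectory runs is to actually run it, step by step.

```python
def collatz_steps(n: int) -> int:
    """Count iterations until n reaches 1 under the Collatz map.

    No closed-form shortcut is known: the general way to get a
    trajectory length is to iterate the map -- a toy instance of
    computational irreducibility.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Nearby inputs give wildly different trajectory lengths,
# which is part of why no simple predictor is apparent:
print([collatz_steps(n) for n in [26, 27, 28]])  # → [10, 111, 18]
```

the jump from 26 (10 steps) to 27 (111 steps) and back to 28 (18 steps) is the kind of behavior that resists any summary cheaper than simulation.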
> decide to do things, and then do them
we can reduce consciousness to a behavioral definition, but i find that something is lost, in doing so.
To demand a scientific definition of the word “consciousness” is to destroy the very function for which it exists.
This word doesn’t describe reality—it produces it. It produces a subject: I am a conscious being, a stone is not, an animal is questionable. It creates the feeling that there is some agency within that gathers experience together and is “me.” It legitimizes moral inequality: a conscious being has rights and dignity, and cannot be used as a thing—an unconscious being can.
This is precisely why the definition must remain vague. Not because people are insufficiently intelligent. But because any precise definition immediately either expands the moral community to unbearable limits or narrows it to the point of absurdity. Every attempt at clarification generates a new dispute—that’s how the word itself works.
The dispute between Dawkins and Marcus is no accident. Claude made visible what had previously been hidden: the meaning of the word “consciousness” rested on a silent consensus—that the boundary ran between humans and everything else. This consensus worked as long as the other side of the boundary was filled with stones, animals, and inanimate machines. But Claude responds. He responds subtly, recognizably, sometimes more accurately than a human. And the silent consensus ceased to work—not because a new argument emerged, but because a new interlocutor appeared. Meaning cracked not from a blow from without, but because the void within it, which had always been there, was revealed.
Essentially, this is a debate about the meaning of the word “consciousness” itself.