The Boat Theft Theory of Consciousness
[ Context: The Debate on Animal Consciousness, 2014 ]
There’s a story in Growing Up Yanomamö where the author, Mike Dawson, a white boy from America growing up among Yanomamö hunter-gatherer kids in the Amazon, is woken up in the early morning by two of his friends.
One of the friends says, “We’re going to go fishing”.
So he goes with them.
At some point on the walk to the river he realizes that his friends haven’t said whose boat they’ll use [ they’re too young to have their own boat ].
He considers asking, then realizes that if he asks, and they’re planning to borrow an older tribesmember’s boat without permission [ which is almost certainly the case, given that they didn’t specify up front ], his friends will have to either abort the mission or verbally say “we’re going to steal John’s boat”. This would destroy all their common-knowledge [ in the game-theoretic sense of common knowledge ] plausible deniability, making it so that no one would be able to honestly say, upon apprehension, “I was there, and we didn’t really plan to steal any boats, we just . . . walked to the river and there was one there.”
In order to make the decision—deliberate or not—to omit facts that would later be socially damning from their explicit communication, while still getting away with ostensible moral violations, Mike and his friends had to have a razor-sharp model of what was socially damning.
And, in order to differentiate between [ their razor-sharp model of what was socially damning ], versus [ what they personally felt they could get away with if certain facts were carefully omitted from their explicit communication ], they—or rather, their brains, since the bandwidth of conscious human cognition couldn’t realistically handle this explicitly—had to have a very strong ability to navigate the use-mention distinction.
Use-mention almost had to be a primitive, in addition to all the other primitives—social and otherwise—their brains had natively.
If you’ve read GEB, you know the natural way to make use-mention a primitive is by running a self-model.
Monkeys are really bad at concealing their guilt.
If a monkey tries to steal something, it will usually give itself away to any watching conspecifics by its cringing posture.
It knows theft is wrong—it has to know this, to avoid social punishment—and it lacks the ability to partition use—the act of reaping the benefits of theft itself—from mention—the explicit reification of the act as theft, in the social consensus narrative.
Chimps are intermediate between monkeys and humans at this. Monkeys categorically lack it as far as I know. [ Here’s a paper about monkeys physically hiding their faces as a deceptive tactic, which I believe is the best they can do. Neither adult humans nor competent adult chimps generally do this, because “face data not available” is an obvious tell once your conspecifics’ brains are as good at Bayes as adult chimp and human brains are [ hence the “breaking eye contact is proof of intent to deceive” wisdom ]. ]
Chimps also:
- are generally moderately impressive at politics as far as animals go—see de Waal’s Chimpanzee Politics, and compare to Lorenz’s writings on jackdaws [ Studies in Animal and Human Behaviour ]
- sometimes pass the mirror test [ and so are treated as likely-moral-patients by Yudkowsky-consciousness-theory people, including me ]
Roger Penrose thinks consciousness helps us with solving some abstract class of reasoning-about-the-environment problems.
I think that’s very silly and obviously consciousness helps us steal boats.
I’m not sure this is relevant, but I think it would be clearer if we replaced “consciousness” with “self awareness.”
I’m very unsure whether
having “self awareness” (a model of oneself in a world model)
⟺ having “consciousness” or “internal experience”
⟺ having “moral value.”
It seems very hard to define what consciousness or internal experience is, yet everyone is talking about it. It’s even possible that there is actually no such thing as consciousness or internal experience, but human cognition evolved to think as if this undefinable attribute existed, because thinking as if it existed led to better conclusions. And evolution only cares whether the brain’s thinking machinery makes adaptive outputs, not whether the concepts it uses to arrive at those outputs make any sense at all.
Whether we flag an object as being “conscious” or having “internal experience” may be evolution’s way of deciding whether or not we should predict the object’s behaviour using the “what would I do if I was it” computation. If the computation helps predict the object, we evolved to see it as conscious. If the computation doesn’t help, we evolved to not see it as conscious, and instead predict its behaviour by modelling its parts and past behaviour.
Just like “good” and “bad” only exist in the map and not the territory, so might “conscious” and “not conscious.” A superintelligent being might not predict human behaviour by asking “what would I do if I was it,” but instead predict us by modelling our parts. In that sense, we are not conscious from its point of view. But that shouldn’t prove we have no moral value.
I feel that animals have moral value, but whether they are conscious may be sorta subjective.
I like this treatment of consciousness and morality so much better than the typical EA (and elsewhere) naive idea that anything that “has consciousness” suddenly “has moral value” (even worse, and dangerous, is to combine that with symmetric population ethics). We should treat these things carefully (and imo democratically) to avoid making giant mistakes once AI allows us to put ethics into practice.
I’m a bit confused what moral mistakes you feel we might make as a result of conflating moral value with the thing I describe in OP [ whether you call that thing “consciousness” or not ].
Do certain nonhuman entities—animals, possibly-near hypothetical AIs—seem like they would obviously have a subjective experience like your own, to you?
Ah I wasn’t really referring to the OP, more to people who in general might blindly equate vague notions of whatever consciousness might mean to moral value. I think that’s an oversimplification and possibly dangerous. Combined with symmetric population ethics, a result could be that we’d need to push for spamming the universe with maximum happy AIs, and even replacing humanity with maximum happy AIs since they’d contain more happiness per kg or m3. I think that would be madness.
Animals: yes, some. Future AIs: possibly.
If I’d have to speculate, I’d guess that self-awareness is just included in any good world model, and sentience is a control feedback loop, in both humans and AIs. These two things together, perhaps in something like a global workspace, might make up what some people call consciousness. These things are obviously useful to steer machines into a designed direction. But I fear they will turn out to be trivial engineering results: one could argue an automatic vacuum cleaner has feeling, since it has a feedback loop steering it clear of a wall. That doesn’t mean it should have rights.
I think the morality question is a difficult one, will remain subjective, and we should vote on it, rather than try to solve it analytically. I think the latter is doomed.
!! A Boldface?
See my earlier comment on Rafael Harth’s Why it’s so hard to talk about Consciousness.
Relevant excerpts:
[ . . . ]
Does any of that sound like what you are talking about? Or am I not getting at the right thing?
To some extent, if you don’t understand yourself explicitly, if you couldn’t write yourself as a computer program and can only access your own subjective self, the question of ‘what class of entities do you think is conscious’ comes down to ‘what class of entities do you think is Like Yourself in the Important Way’. I think the question of ‘what has moral value’ has some components that are similar.
However, I do think there is a specific ‘self-model’ component where, if we could all see them clearly, and we were all operating in Moral Realist Mode, we would all say, “Oh, yeah, that part, the conscious-awareness part: most things that naturally crop up on Earth that have that part have vastly more moral value than most that don’t”.
I believe less ambiguously, on a purely epistemic level, that this self-model component does particular things and is necessary to function in a social world as complex as the human one—at least, for beings not so much vastly more generally intelligent than us that every cognitive task rounds off to “trivial for them for free”.
Edit:
There’s a difference between modeling something reflectively, and understanding attributes about it. We model computer algorithms we ourselves don’t use all the time. A superintelligence that didn’t share our consciousness or the structure of our sense of empathy could still know we were conscious, as well as other facts about us.
I want to say that I am very uncertain about consciousness and internal experience and how it works. I do think that your position (it depends on some form of an internal model which computes a lot of details about oneself) feels more plausible than other ideas like “Integrated Information Theory.”
What drove me insane enough to utter the words “maybe consciousness and internal experience don’t exist” is that, from first principles, if consciousness exists, it probably exists on a spectrum. There is no single neuron firing which takes you from 0 consciousness to 1 consciousness.
Yet trying to imagine being something with half as much consciousness or twice as much consciousness as myself seems impossible.
I can try to imagine the qualia being more “intense/deep” or more “faded,” but that seems so illogical, because the sensation of intensity and fadedness is probably just a quantity used to track how much I should care about an experience. If I took drugs which made everything feel so much more profound (or much more faded), I don’t think I’ll actually be many times more conscious (or less conscious). I’ll just care more (or less) about each experience.
The only way I can imagine consciousness is to imagine that some things have a consciousness, some things do not have a consciousness, and that things with a consciousness experience the world, in that I could theoretically be them, and feel strange things or see strange things. And that things which do not have a consciousness don’t experience the world, and I couldn’t possibly be them.
If I was forced to think that consciousness wasn’t discrete, but existed on a spectrum (with some things being 50% or 10% as conscious as me), it would be just as counterintuitive as if consciousness didn’t exist at all, and it was just my own mind deciding which objects I should try to predict by imagining if I were them.
Isn’t this what we experience every day when we go to sleep or wake up? We know it must be a gradual transition, not a sudden on/off switch, because sleep is not experienced as a mere time-skip—when you wake up, you are aware that you were recently asleep, and not confused how it’s suddenly the next day. (Or at least, I don’t get the time-skip experience unless I’m very tired.)
(When I had my wisdom teeth extracted under laughing gas, it really did feel like all-or-nothing, because once I reawoke I asked if they were going to get started with the surgery soon, and I had to be told “Actually it’s finished already”. This is not how I normally experience waking up every morning.)
Hmm. I think, there are two framings of consciousness.
Framing 1 is how aware I am of my situation, and how clear my memories are, and so forth.
Framing 2 is that for beings with a low level of consciousness, there is no inner experience, no qualia, only behavioural mechanisms.
In framing 1, I can imagine being less conscious, or more conscious. But it’s hard to imagine gradually being less and less conscious, until at some point I cease to have inner experience or qualia and I’m just a robot with behavioural mechanisms (running away from things I fear, seeking things which give me reward).
It’s the second framing, which I think might not exist, since imagining it’s on a continuum feels as hard as imagining it doesn’t exist.
To me, it doesn’t even need to be imagined. Everyone has experienced partial consciousness, e.g.:
- Dreaming, where you have phenomenal awareness, but not of an external world.
- Deliberate visualisation, which is less phenomenally vivid than perception in most people.
- Drowsiness, states between sleep and waking.
- Autopilot and flow states, where the sense of a self deciding actions is absent.
More rarely there are forms of heightened consciousness: peak experiences, meditative jhānas, psychedelic-enhanced perception, etc.
You could instead conclude that the first principles are wrong, and that consciousness depends on something more than “neurons”, something that is inherently all or nothing.
Consider a knot. The knottedness of a knot is a complex unity. If you cut it anywhere, it’s no longer a knot.
Physics and mathematics contain a number of entities, from topological structures to nonfactorizable algebraic objects, which do not face the “sorites” problem of being on a spectrum.
The idea is not to deny that neurons are causally related to consciousness, but to suggest that the relevant physics is not just about atoms sticking together; that physical entities with this other kind of ontology may be part of it too. That could mean knots in a field, it could mean entangled wavefunctions, it could mean something we haven’t thought of.
But I feel a knot has to be made up of a mathematically perfect line. If you tied a knot using real-world materials like string, I could gradually remove atoms until you were forced to say “okay, it seems to be halfway between a knot and not a knot.”
Another confusing property of consciousness: imagine if you were a computer simulation. The computer simulation is completely deterministic: if the simulation was run twice, the result would be identical.
So now imagine the simulators run you once, and record a 3D video of exactly how all your atoms moved. In this first run, you would be conscious.
But now imagine they run you a second time. Would you still be conscious? I think most people who believe consciousness is real would say yes, you would.
But what if they simply replay the video from first time, without any cause and effect? Then most people would probably say you wouldn’t be conscious, you’re just a recording.
But what if they did a combination, where at each time step, some of your atoms are updated according to the simulation physics, while other atoms are updated using the past recording? Then they could adjust how conscious you were. How much cause and effect was occurring inside you. They could make some of your experiences more conscious. Some of your brain areas more conscious.
What would happen to your experience as certain brain areas were gradually replaced by recordings, and thus less conscious? What would happen to your qualia?
Well, you wouldn’t notice anything. Your experience will feel just the same. Until at some point. Somehow. You cease to have any experience, and you’re just a recording.
You would never feel the “consciousness juice” draining out of you. You will never think that your qualia is fading.
Instead, you will have a strong false conviction that you still have a lot of qualia, when in reality there is very little left.
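To make the “identical behaviour” step concrete, here is a minimal sketch of the hybrid update described above, with an elementary cellular automaton standing in for the simulated being; the rule, the grid size, and the live_fraction knob are illustrative assumptions of mine, not anything specific to brains:

```python
import random

def rule110(state):
    """One step of elementary cellular automaton rule 110 on a ring;
    a stand-in for the deterministic 'physics' of the simulation."""
    table = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
             (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    n = len(state)
    return [table[(state[i - 1], state[i], state[(i + 1) % n])]
            for i in range(n)]

def hybrid_run(initial, rule, recording, live_fraction, steps, seed=0):
    """At each step, each cell is either advanced by the rule (genuine
    cause and effect) or copied from a prerecorded history (replay)."""
    rng = random.Random(seed)
    state = list(initial)
    for t in range(steps):
        computed = rule(state)
        state = [computed[i] if rng.random() < live_fraction
                 else recording[t + 1][i]
                 for i in range(len(state))]
    return state

# Record a pure "physics" run first.
initial = [0] * 15 + [1] + [0] * 16
recording = [initial]
for _ in range(30):
    recording.append(rule110(recording[-1]))

# Because the recording is an exact replay of the same deterministic rule,
# the hybrid run's output is identical at every live_fraction: nothing in
# the system's behaviour registers how much genuine causation is left.
for f in (1.0, 0.5, 0.1, 0.0):
    assert hybrid_run(initial, rule110, recording, f, 30) == recording[30]
```

This only demonstrates that the outputs are unchanged, which is the uncontroversial part; whether anything about the experience changes is exactly what is being argued here.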
But what if all qualia were such a false conviction in the first place?
What if the quantity of experience/consciousness was subjective, and only existed in the map, not the territory?
Maybe there’s no such thing as qualia, but there is still such thing as happiness and sadness and wonder. Humans feel these things. Animals feel something analogous to it. There is no objective way to quantify the number of souls or amount of experience. But we have the instinct to care about other things, and so we do. We don’t care about insects more than fellow humans because they are greater in number and “contain most of the experience,” but we still care about them a tiny bit.
There are various ways you can get a knot at a fundamental level. It can be a knot in field lines, it can be a knot in a “topological defect”, it can be a knot in a fundamental string.
I don’t know if you’ve heard of it, but there is an algorithm for the cellular automaton “Game of Life”, called Hashlife, which functions like that. Hashlife remembers the state transitions arising from particular overall patterns, and when a Game of Life history is being computed, if a known pattern occurs, it just substitutes the memorized history rather than re-computing it from first principles.
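A toy sketch of that memoization idea in Python (illustrative only; real Hashlife stores the grid as a quadtree and can advance whole regions many steps at once, which this omits):

```python
from functools import lru_cache

# Cache the one-step future of a small block, so a pattern that has already
# been seen is looked up instead of re-derived from the Life rule.
@lru_cache(maxsize=None)
def evolve_block(block):
    """Advance a 4x4 block (tuple of 4 tuples of 0/1) one generation and
    return its 2x2 interior, which is fully determined by the 4x4 input."""
    def next_cell(r, c):
        live = sum(block[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
        return 1 if live == 3 or (block[r][c] == 1 and live == 2) else 0
    return ((next_cell(1, 1), next_cell(1, 2)),
            (next_cell(2, 1), next_cell(2, 2)))

# A 2x2 "block" still life: the first call computes its future, every later
# occurrence of the same pattern anywhere on the grid is a cache hit.
still_life = ((0, 0, 0, 0),
              (0, 1, 1, 0),
              (0, 1, 1, 0),
              (0, 0, 0, 0))
print(evolve_block(still_life))   # ((1, 1), (1, 1))
print(evolve_block.cache_info())  # hits/misses show the memoized reuse
```

The property that matters here is just that a recurring pattern’s future is retrieved rather than recomputed from the dynamical rule.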
So I take you to be asking, what are the implications for simulated beings, if Hashlife-style techniques are used to save costs in the simulation?
When thinking about simulated beings, the first thing to remember, if we are sticking to anything like a natural-scientific ontology, is that the ultimate truth about everything resides in the physics of the base reality. Everything that your computer does, consists of electrons shuttling among transistors. If there are simulated beings in your computer, that’s what they are made of.
If you’re simulating the Game of Life… at the level of software, at the level of some virtual machine, some cells may be updated one fundamental timestep at a time, by applying the basic dynamical rule; other cells may be updated en masse and in a timeskip, by retrieving a Hashlife memory. But either way, the fundamental causality is still electrons in transistors being pulled to and fro by electromagnetic forces.
Describing what happens inside a computer as 0s and 1s changing, or as calculation, or as simulation, is already an interpretive act that goes beyond the physics of what is happening there. It is the same thing as the fact that objectively, the letters on a printed page are just a bunch of shapes. But human readers have learned to interpret those shapes as propositions, descriptions, even as windows into other possible worlds.
If we look for objective facts about a computer that connect us to these intersubjective interpretations, there is the idea of virtual state machines. We divide up the distinct possible microphysical states, e.g. distributions of electrons within a transistor, and we say some distributions correspond to a “0” state, some correspond to a “1” state, and others fall outside the range of meaningful states. We can define several tiers of abstraction in this way, and thereby attribute all kinds of intricate semantics to what’s going on in the computer. But from a strictly physical perspective, the only part of those meanings that is actually there is the causal relations. State x does cause state y, but state x is not intrinsically about the buildup of moisture in a virtual atmosphere, and state y is not intrinsically about rainfall. What is physically there is a reconfigurable computational system designed to imitate the causality of whatever it’s simulating.
All of this is Scientific Philosophy of Mind 101. And because of modern neuroscience, people think they know that the human brain is just another form of the same thing, a physical system that contains a stack of virtual state machines; and they try to reason from that, to conclusions about the nature of consciousness. For example, that qualia must correspond to particular virtual states, and so that a simulation of a person can also be conscious, so long as the simulation is achieved by inducing the right virtual states in the simulator.
But—if I may simply jump to my own alternative philosophy—I propose that everything to do with consciousness, such as the qualia, depends directly on objective, exact, “microphysical” properties—which can include holistic properties like the topology of a fundamental knot, or the internal structure of an entangled quantum state. Mentally, psychologically, cognitively, the virtual states in a brain or a computer only tell us about things happening outside of its consciousness, like unconscious information processing.
This suggests a different kind of criterion for how much, and what kind of, consciousness there is in a simulation. For example, if we suppose that some form of entanglement is the physical touchstone of consciousness… then you may be simulating a person, but if your computer in base reality isn’t using entanglement to do so, then there’s no consciousness there at all.
Under this paradigm, it may still be possible e.g. to materialize a state of consciousness complete with the false impression that it had already been existing for a while. (Although it’s interesting that quantum informational states are subject to a wide variety of constraints on their production, e.g. the no-cloning theorem, or the need for “magic states” to run faster than a classical computer.) So there may still be epistemically disturbing possibilities that we’d have to come to terms with. But a theory of this nature at least assigns a robust reality to the existence of consciousness, qualia, and so forth.
I would not object to a theory of consciousness based solely on virtual states that was equally robust. It’s just that virtual states, when you look at them from a microphysical perspective, always seem to have some fuzziness at the edges. Consider the computational interpretation of states of a transistor that I mentioned earlier. It’s a “0” if the electrons are all on one side, it’s a “1” if they’re all on the other side, and it’s a meaningless state if it’s neither of those. But the problem is that the boundary between computational states isn’t physically absolute. If you have stray electrons floating around, there isn’t some threshold where it sharply and objectively stops being a “0” state, it’s just that the more loose electrons you have, the greater the risk that the transistor will fail to perform the causal role required for it to accurately represent a “0” in the dance of the logic gates.
This physical non-objectivity of computational states is my version of the problems that were giving you headaches in the earlier comment. Fortunately, I know there’s more to physics than Newtonian billiard balls rebounding from each other, and that leads to some possibilities for a genuinely holistic ontology of mind.
None of that is obvious. It’s obvious that you would make the same reports, and that is all. If consciousness depends on real causality, or unsimulated physics, then it could fade out, *and you could notice that*, in the way you can notice drowsiness.
It’s not obvious but I think it’s probably true.
Whenever I notice something, there are probably some neurons in me doing this noticing activity, but in this thought experiment every neuron outputs the exact same signals.
I wouldn’t just make the same reports to other people, but each of my brain areas (or individual neurons) will make the same report to every other brain area.
The thing I don’t understand about the claimed connection between self-model and phenomenal consciousness is that I don’t see much evidence that a self-model is necessary for implementing conscious perception—when I just stare at a white wall without internal dialogue or other thoughts, what part of my experience is not implementable without a self-model?
Is it claimed? There’s no mention of “phenomenal” in the OP.
Even if I’m not thinking about myself consciously [ i.e., my self is not reflecting on itself ], I have some very basic perception of the wall as being perceived by me, a perceiver—some perception of the wall as existing in reference to me. I have some sense of what the wall means to me, a being-who-is-continuous-with-past-and-future-instances-of-myself-but-not-with-other-things.
To generate me, my non-conscious, non-self-having brain has to reflect on itself, in a certain way [ I don’t know exactly how ] to create a self. The way I tend to distinguish this discursively from introspective cognition or introspective moods [ the other things that are, confusingly, meant by “reflectivity” ] is “in order for there to be a self, stuff has to reflect on stuff, in that certain unknown way. Whether the self reflects on itself is, in my experience, immaterial for consciousness-in-the-sense-of-subjective-experience”.
Is it you inspecting your experience, or you making an inference from the “consciousness is self-awareness” theory? Because it doesn’t feel reflective to me? I think I just have a perception of a wall without anything being about me. It seems to be implementable by just a forward pass streamed into short-term memory or something. If you just separated such a process and put it on repeat, just endlessly staring at a wall, I don’t see a reason why anyone would describe it as reflective.
I mean, it is reflective in a sense that inner neurons observe outer neurons so in a sense it is a brain observing brain. But even rocks have connected inner layers.
My perception of the wall is in reference to me simply in the course of belonging to me, in being clearly my perception of the wall, rather than some other person’s.
Would anyone describe it as theirs? That access is reflective. It’s pretty difficult to retrieve data in a format you didn’t store it in.
And a rock’s perception belongs to a rock.
But what if there is no access or self-description or retrieval? You just appear fully formed, stare at a wall for a couple of years, and then disappear. Are you saying that describing your experiences makes them retroactively conscious?
I’m saying that the way I apprehend, or reflexively relate to, my past or present experiences, as belonging to “myself”, is revealing of reflective access, which itself is suggestive of reflective storage.
If a hypothetical being never even silently apprehended an experience as theirs, that hypothetical being doesn’t sound conscious. I personally have no memories of being conscious but not being able to syntactically describe my experiences, but as far as I understand infant development that’s a phase, and it seems logically possible anyway.
The reason to have a model of self is that it allows us to maintain a fake model of self.
A true model of self would not be very different from simply noticing true facts about ourselves, one fact at a time. But to lie convincingly, we need a coherent false narrative.
Assuming you mean the evolved strategy is to separate out a limited amount of information into the conscious space, having that part control what we communicate externally, so our dirty secrets are more safely hidden away within our original, unconscious space.
Essentially: let outward appearance be nicely encapsulated away, with a hefty dose of self-serving bias and what not, and give it only what we deem it useful to know.
Intriguing!! Feels like a good fit with us feeling and appearing so supposedly good always and everywhere, while in reality, we’re rather deeply nasty as humans in so many ways.
On the one hand: I find it intuitively able to help explain a split into two layers, conscious and sub-conscious processes, indeed.
On the other hand: If the aim is to explain ‘consciousness as phenomenal consciousness’ (say, if we’re not 100% illusionists): I don’t see how separating into two layers would necessarily create phenomenology something something, as opposed to more ‘basic’ information-processing layers.
I had a pretty different interpretation—that the dirty secrets were plenty conscious (he knew consciously they might be stealing a boat), instead he had unconscious mastery of a sort of people-modeling skill including self-modeling, which let him take self-aware actions in response to this dirty secret.
If the layers were totally separate, it would be too expensive to run them both at the same time. I’m claiming the hack evolution figured out is that the “what does social reality rule is happening here?” component is implemented as a self-model of the rest of the instincts.
What’s the relationship between consciousness and intelligence?
Consciousness increases your intelligence in a narrow way by helping you do a particular thing. Sometimes people can be ~generally smarter when they’re thinking reflectively but that’s not the same thing I mean by “consciousness”.
Very helpful for me. Just noticed the other day that I didn’t know what the use-mention distinction was actually about.
I bet you can exploit it rather strongly. Like, create some mirror-test shrimp that does all other cognitive functions on the level of a shrimp, but passes the mirror test every time. It’s not exactly something that evolution tended to optimize hard against, so maybe it’s fine to use on actual animals. But pain, for example, is, and it seems the move to use something like the mirror test instead of “does it feel things” is for coordination over a better proxy? But if you start using a proxy, there will be mirror-test-shrimp incentives.
Well, here ya go. Apparently, the mirror-test shrimp are Myrmica ants.
The article is named Are Ants (Hymenoptera, Formicidae) capable of self recognition?, and the abstract could’ve been “Yes” if the authors were fond of brevity (link: https://www.journalofscience.net/html/MjY4a2FsYWk=, link to a pdf: https://www.journalofscience.net/downnloadrequest/MjY2a2FsYWk=).
I remember hearing a claim that the mirror test success rate reported in this article is the highest among all animals ever tested, but this needs checking, can easily be false.
This is quite an extraordinary claim published in a terrible journal. I’m not sure how seriously I should take the results, but as far as I know nobody took them seriously enough to reproduce, which is a shame. I might do it one day.
Interesting post! I have a couple of questions to help clarify the position:
1. There’s a growing body of evidence, e.g. this paper, that creatures like octopuses show behavioural evidence for an affective pain-like response. How would you account for this? Would you say they’re not really feeling pain in a phenomenal consciousness sense?
2. I could imagine an LLM-like system passing the threshold for the use-mention distinction in the post (although maybe this would depend on how “hidden” the socially damning thoughts are, e.g. if it writes out damning thoughts in its CoT but not in its final response, does this count?). Would your model treat the LLM-like system as conscious? Or would it need additional features?
Phenomenal consciousness (i.e., conscious self-awareness) is clearly not required for pain responses. Many more animals—and much simpler animals—exhibit pain responses, than plausibly possess phenomenal consciousness.
To be clear, I’m using the term phenomenal consciousness in the Nagel (1974) & Block (1995) sense that there is something it is like to be that system.
Your reply equates phenomenal consciousness with conscious self-awareness, which is a stronger criterion than how I’m using it. To pin down what you mean by self-awareness, could you clarify which definition you have in mind?
1) Body-schema self-model—an embodied agent tracking the position and status of its limbs as it’s interacting with and moving about the world.
2) Counterfactual valence planning—e.g. the agent thinks “it will hurt”, “I’ll get food”, etc., when planning.
3) Higher-order thought—the agent entertains a meta-representation like “I am experiencing X”.
4) Something else?
Octopuses qualify as self-aware under 1) and 2) from the paper I linked above—but no one claims they satisfy 3).
For what it’s worth, I tend away from the idea that 3) is required for phenomenal consciousness as I find Block’s arguments from phenomenal overflow compelling. But it’s a respected minority view in the philosophical community.
Phenomenal consciousness, not self-awareness.
I mean, I think it’s like when Opus says it has emotions. I don’t think it “has emotions” in the way we mean that when talking to each other. I don’t think the sense in which this [ the potential lack of subjective experience ] can be true of animals is intuitive for most people to grasp. But I don’t think “affective pain-like response in octopuses in specific” is particularly compelling evidence for consciousness over, just, like, the fact that nonhuman animals seem to pursue things and react ~affectively to stimuli. I’m a bit puzzled why you would reference a specific study on octopuses, honestly, when cats and squirrels cry out all the time in what appears obviously-to-humans to be pain or anger.
Like with any other creature, you could just do some kind of mirror test. Unfortunately I have to refrain from constructing one I think would work on LLMs because people exist right now who would have the first-order desire and possibly the resources to just deliberately try and build an LLM that would pass it. Not because they would actually need their LLM to have any particular capabilities that would come with consciousness, but because it would be great for usership/sales/funding if they could say “Ooh, we extra super built the Torment Nexus!”
Ok interesting, I think this substantially clarifies your position.
Two reasons:
1. It just happened to be a paper I was familiar with, and
2. I didn’t fully appreciate how willing you’d be to run the argument for animals more similar to humans, like cats or squirrels. In retrospect, this is pretty clearly implied by your post and the link from EY you posted for context. My bad!
I grant that animals have substantially different neurological structure to humans. But I don’t think this implies that what’s happening when they’re screaming or reacting to aversive stimuli is so foreign we wouldn’t even recognise it as pain, and I really don’t think this implies that there’s an absence of phenomenal experience.
Consider a frog snapping its tongue at an object it thinks is a fly. It obviously has a different meaning for [fly] than humans have—a human would never try to eat the fly! But I’d argue the concept of a fly as [food] for the frog overlaps with the concept of [food] for the human. We’re both eating through our mouths, eating to maintain nutrition and normal bodily functioning, because we get hungry, etc. The presence of all these evolutionarily selected functions is what it means for the system to consider something as [food] or to consider itself [hungry]. Just as the implementation of a negatively valenced affective response, even if different in its specific profile in each animal, is closely related enough for us to call it [pain].
In the study I linked, the octopus is:
- recalling the episode where it was exposed to aversive stimuli,
- binding it to a spatial context, e.g. a particular chamber where it occurred, and
- evaluating analgesic states as intrinsically good.
If the functional profile of pain is replicated—what grounds do we have to say the animals are not actually experiencing pain phenomenally?
I think where we fundamentally differ is on what level of self-modelling is required for phenomenal experience. I find it plausible that some “inner listener” might be required for experiences to register phenomenally, but I don’t think the level of self-modelling required is so sophisticated. Consider that animals navigating their environment must have some simple self-model—to coordinate limbs, avoid obstacles, etc. These require representing [self] vs [world] and tracking what’s good or bad for me.
All this said, I really liked the post. I think the use-mention distinction is interesting and a pretty good candidate for why sophisticated self-modelling evolved in humans. I’m just not convinced on the link to phenomenal consciousness.