Quick general thoughts on suffering and consciousness

Below, I’ve collected some of my thoughts on consciousness. Topics covered (in the post and/​or the comments below) include:

  • To what extent did subjective pain evolve as a social signal?

  • Why did consciousness evolve? What function(s) did it serve?

  • What would the evolutionary precursors of ‘full’ consciousness look like?

  • What sorts of human values are more or less likely to extend to unconscious things?

  • Is consciousness more like a machine (where there’s a sharp cutoff between ‘the machine works’ and ‘the machine doesn’t’), or is it more like a basic physical property such as mass (where there’s a continuum from very small fundamental things that have tiny amounts of the property, all the way up to big macroscopic objects that have far more of it)?

  • How should illusionism (the view that in an important sense we aren’t conscious, but non-phenomenally ‘appear’ to be conscious) change our answers to the questions above?


1. Pain signaling

In September 2019, I wrote on my LW shortform:

Rolf Degen, summarizing part of Barbara Finlay’s “The neuroscience of vision and pain”:

Humans may have evolved to experience far greater pain, malaise and suffering than the rest of the animal kingdom, due to their intense sociality giving them a reasonable chance of receiving help.

From the paper:

Several years ago, we proposed the idea that pain, and sickness behaviour had become systematically increased in humans compared with our primate relatives, because human intense sociality allowed that we could ask for help and have a reasonable chance of receiving it. We called this hypothesis ‘the pain of altruism’ [68]. This idea derives from, but is a substantive extension of Wall’s account of the placebo response [43]. Starting from human childbirth as an example (but applying the idea to all kinds of trauma and illness), we hypothesized that labour pains are more painful in humans so that we might get help, an ‘obligatory midwifery’ which most other primates avoid and which improves survival in human childbirth substantially ([67]; see also [69]). Additionally, labour pains do not arise from tissue damage, but rather predict possible tissue damage and a considerable chance of death. Pain and the duration of recovery after trauma are extended, because humans may expect to be provisioned and protected during such periods. The vigour and duration of immune responses after infection, with attendant malaise, are also increased. Noisy expression of pain and malaise, coupled with an unusual responsivity to such requests, was thought to be an adaptation.

We noted that similar effects might have been established in domesticated animals and pets, and addressed issues of ‘honest signalling’ that this kind of petition for help raised. No implication that no other primate ever supplied or asked for help from any other was intended, nor any claim that animals do not feel pain. Rather, animals would experience pain to the degree it was functional, to escape trauma and minimize movement after trauma, insofar as possible.

Finlay’s original article on the topic: “The pain of altruism”.

[Epistemic status: Thinking out loud]

If the evolutionary logic here is right, I’d naively also expect non-human animals to suffer more to the extent they’re (a) more social, and (b) better at communicating specific, achievable needs and desires.

There are reasons the logic might not generalize, though. Humans have fine-grained language that lets us express very complicated propositions about our internal states. That puts a lot of pressure on individual humans to have a totally ironclad, consistent “story” they can express to others. So I’d expect a lot more evolutionary pressure on humans to actually experience suffering, rather than merely simulate it, since a human will be better at spotting holes in the narrative of a human who fakes it (compared to, e.g., a bonobo trying to detect whether another bonobo is really in that much pain).

It seems like there should be an arms race across many social species to give increasingly costly signals of distress, up to the point where the costs outweigh the expected value of the help they can hope to receive. But if you don’t have the language to express concrete propositions like “Bob took care of me the last time I got sick, six months ago, and he can attest that I had a hard time walking that time too”, then those costly signals might be mostly or entirely things like “shriek louder in response to percept X”, rather than things like “internally represent a hard-to-endure pain-state so I can more convincingly stick to a verbal narrative going forward about how hard-to-endure this was”.


2. To what extent is suffering conditional or complex?

In July 2020, I wrote on my shortform:

[Epistemic status: Piecemeal wild speculation; not the kind of reasoning you should gamble the future on.]

Some things that make me think suffering (or ‘pain-style suffering’ specifically) might be surprisingly neurologically conditional and/​or complex, and therefore more likely to be rare in non-human animals (and in subsystems of human brains, in AGI subsystems that aren’t highly optimized to function as high-fidelity models of humans, etc.):

1. Degen and Finlay’s social account of suffering above.

2. Which things we suffer from seems to depend heavily on mental narratives and mindset. See, e.g., Julia Galef’s Reflections on Pain, from the Burn Unit.

Pain management is one of the main things hypnosis appears to be useful for. The ability to cognitively regulate suffering is also one of the central claims made by meditators, and seems related to existential psychotherapy’s claim that narratives are more important for well-being than material circumstances.

Even if suffering isn’t highly social (pace Degen and Finlay), its dependence on higher cognition suggests that it is much more complex and conditional than it appears on initial introspection. That on its own reduces the probability of its showing up elsewhere: complex things are relatively unlikely a priori, are especially hard to evolve, and demand especially strong selection pressure both to evolve and to be maintained.

(Note that suffering introspectively feels relatively basic, simple, and out of our control, even though it’s not. Note also that what things introspectively feel like is itself under selection pressure. If suffering felt complicated, derived, and dependent on our choices, then the whole suite of social thoughts and emotions related to deception and manipulation would be much more salient, both to sufferers and to people trying to evaluate others’ displays of suffering. This would muddle and complicate attempts by sufferers to consistently socially signal that their distress is important and real.)

3. When humans experience large sudden neurological changes and are able to remember and report on them, their later reports generally suggest positive states more often than negative ones. This seems true of near-death experiences and drug states, though the case of drugs is obviously filtered: the more pleasant and/​or reinforcing drugs will generally be the ones that get used more.

Sometimes people report remembering that a state change was scary or disorienting. But they rarely report feeling agonizing pain, and they often either endorse having had the experience (with the benefit of hindsight), or report having enjoyed it at the time, or both.

This suggests that humans’ capacity for suffering (especially more ‘pain-like’ suffering, as opposed to fear or anxiety) may be fragile and complex. Many different ways of disrupting brain function seem to prevent suffering, suggesting that suffering is the more difficult and conjunctive state for a brain to get itself into: you need more of the brain’s machinery to be in working order to pull it off.

4. Similarly, I frequently hear about dreams that are scary or disorienting, but I don’t think I’ve ever heard of someone recalling having experienced severe pain from a dream, even when they remember dreaming that they were being physically damaged.

This may be for reasons of selection: if dreams were more unpleasant, people would be less inclined to go to sleep and their health would suffer. But it’s interesting that scary dreams are nonetheless common. This again seems to point toward ‘states that are further from the typical human state are much more likely to be capable of things like fear or distress, than to be capable of suffering-laden physical agony.’


3. Consciousness and suffering

Eliezer recently criticized “people who worry that chickens are sentient and suffering” but “don’t also worry that GPT-3 is sentient and maybe suffering”. (He thinks chickens and GPT-3 are both non-sentient.)

Jemist responded on LessWrong, and Nate Soares wrote a reply to Jemist that I like:

Instrumental status: off-the-cuff reply, out of a wish that more people in this community understood what the sequences have to say about how to do philosophy correctly (according to me).

> EY’s position seems to be that self-modelling is both necessary and sufficient for consciousness.

That is not how it seems to me. My read of his position is more like: “Don’t start by asking ‘what is consciousness’ or ‘what are qualia’; start by asking ‘what are the cognitive causes of people talking about consciousness and qualia’, because while abstractions like ‘consciousness’ and ‘qualia’ might turn out to be labels for our own confusions, the words people emit about them are physical observations that won’t disappear. Once one has figured out what is going on, they can plausibly rescue the notions of ‘qualia’ and ‘consciousness’, though their concepts might look fundamentally different, just as a physicist’s concept of ‘heat’ may differ from that of a layperson. Having done this exercise at least in part, I (Nate’s model of Eliezer) assert that consciousness/​qualia can be more-or-less rescued, and that there is a long list of things an algorithm has to do to ‘be conscious’ /​ ‘have qualia’ in the rescued sense. The mirror test seems to me like a decent proxy for at least one item on that list (and the presence of one might correlate with a handful of others, especially among animals with similar architectures to ours).”

> An ordering of consciousness as reported by humans might be:

> Asleep Human < Awake Human < Human on Psychedelics/​Zen Meditation

> I don’t know if EY agrees with this.

My model of Eliezer says “Insofar as humans do report this, it’s a fine observation to write down in your list of ‘stuff people say about consciousness’, which your completed theory of consciousness should explain. However, it would be an error to take this as much evidence about ‘consciousness’, because it would be an error to act like ‘consciousness’ is a coherent concept when one is so confused about it that they cannot describe the cognitive antecedents of human insistence that there’s an ineffable redness to red.”

> But what surprises me the most about EY’s position is his confidence in it.

My model of Eliezer says “The type of knowledge I claim to have, is knowledge of (at least many components of) a cognitive algorithm that looks to me like it codes for consciousness, in the sense that if you were to execute it then it would claim to have qualia for transparent reasons and for the same reasons that humans do, and to be correct about that claim in the same way that we are. From this epistemic vantage point, I can indeed see clearly that consciousness is not much intertwined with predictive processing, nor with the “binding problem”, etc. I have not named the long list of components that I have compiled, and you, who lack such a list, may well not be able to tell what consciousness is or isn’t intertwined with. However, you can still perhaps understand what it would feel like to believe you can see (at least a good part of) such an algorithm, and perhaps this will help you understand my confidence. Many things look a lot more certain, and a lot less confusing, once you begin to see how to program them.”

Some conversations I had on Twitter and Facebook, forking off of Eliezer’s tweet (somewhat arbitrarily ordered, and with ~4 small edits to my tweets):

Bernardo Subercaseux: I don’t understand this take at all. It’s clear to me that the best hypothesis for why chickens have reactions to physical damage that are consistent with our models of expressions of suffering, as also do babies, is bc they can suffer in a similar way that I do. Otoh, best hypothesis for why GPT-3 would say “don’t kill me” is simply because it’s a statistically likely response for a human to have said in a similar context. I think claiming that animals require a certain level of intelligence to experience pain is unfalsifiable...

Rob Bensinger: Many organisms with very simple nervous systems, or with no nervous systems at all, change their behavior in response to bodily damage—albeit not in the specific ways that chickens do. So there must be some more specific behavior you have in mind here.

As for GPT-3: if you trained an AI to perfectly imitate all human behaviors, then plausibly it would contain suffering subsystems. This is because real humans suffer, and a good way to predict a system (including a human brain) is to build a detailed emulation of it.

GPT-3 isn’t a perfect emulator of a human (and I don’t think it’s sentient), but there’s certainly a nontrivial question of how we can know it’s not sentient, and how sophisticated a human-imitator could get before we’d start wanting to assign non-tiny probability to sentience.

Bernardo Subercaseux: I don’t think it’s possible to perfectly imitate all human behavior for anything non-human, in the same fashion as we cannot perfectly imitate all chicken behaviors, or plant behaviors… I think being embodied as a full X is a requisite to perfectly imitate the behavior of X

Rob Bensinger: If an emulated human brain won’t act human-like unless it sees trees and grass, outputs motor actions like walking (with the associated stable sensory feedback), etc., then you can place the emulated brain in a virtual environment and get your predictions about humans that way.

Bernardo Subercaseux: My worry is that this converges to the “virtual environment” having to be exactly the real world with real trees and real grass and a real brain made of the same as ours and connected to as many things as ours is connected...

Rob Bensinger: Physics is local, so you don’t have to simulate the entire universe to accurately represent part of it.

E.g., suppose you want to emulate how a human would respond to being slipped notes inside a locked white room. You might have to simulate the room in some sensory detail, but you wouldn’t need to simulate anything outside the room in any detail. You can just decide what you want the note to say, and then simulate a realistic-feeling, realistic-looking note coming into existence in the white room’s mail chute.

Bernardo Subercaseux:

[Physics is local, so you don’t have to simulate the entire universe to accurately represent part of it.

E.g., suppose you want to emulate how a human would respond to being slipped notes inside a locked white room. You might have to simulate the room in some sensory detail...]

i) not sure about the first considering quantum, but you might know more than I do.

ii) but I’m not saying the entire universe, just an actual human body.

In any case, I still think that a definite answer relies on understanding the physical processes of consciousness, and yet it seems to me that no AI at the moment is close to posing a serious challenge in terms of whether it has the ability to suffer. This in opposition to animals like pigs or chickens...

Rob Bensinger: QM allows for some nonlocal-looking phenomena in a sense, but it still has a speed-of-light limit.

I don’t understand what you mean by ‘actual human body’ or ‘embodied’. What specific properties of human bodies are important for human cognition, and expensive to simulate?

I think this is a reasonable POV: ‘Humans are related to chickens, so maybe chickens have minds sort of like a human’s and suffer in the situations humans would suffer in. GPT-3 isn’t related to us, so we should worry less about GPT-3, though both cases are worth worrying about.’

I don’t think ‘There’s no reason to worry whatsoever about whether GPT-3 suffers, whereas there are major reasons to worry animals might suffer’ is a reasonable POV, because I haven’t seen a high-confidence model of consciousness grounding that level of confidence in all of that.

Bernardo Subercaseux: doesn’t your tweet hold the same if you replace “GPT-3” by “old Casio calculator”, or “rock”?

Rob Bensinger: I’m modeling consciousness as ‘a complicated cognitive something-we-don’t-understand, which is connected enough to human verbal reporting that we can verbally report on it in great detail’.

GPT-3 and chickens have a huge number of (substantially non-overlapping) cognitive skills, very unlike a calculator or a rock. GPT-3 is more human-like in some (but not all) respects. Chickens, unlike GPT-3, are related to humans. I think these facts collectively imply uncertainty about whether chickens and/​or GPT-3 are conscious, accompanied by basically no uncertainty about whether rocks or calculators are conscious.

( Also, I agree that I was being imprecise in my earlier statement, so thanks for calling me out on that. 🙂 )

Bernardo Subercaseux: the relevance of verbal reporting is an entire conversation on its own IMO haha! Thanks for the thought-provoking conversation :) I think we agree on the core, and your comments made me appreciate the complexity of the question at hand!

Eli Tyre: I mean, one pretty straightforward thing to say:

IF chickens are sentient, then the chickens in factory farms are DEFINITELY in a lot of pain.

IF GPT-3 is sentient, I have no strong reason to think that it is or isn’t in pain.

Rob Bensinger: Chickens in factory farms definitely undergo a lot of bodily damage, illness, etc. If there are sentient processes in those chickens’ brains, then it seems like further arguments are needed to establish that the damage is registered by the sentient processes.

Then another argument for ‘the damage is registered as suffering’, and another (if we want to establish that such lives are net-negative) for ‘the overall suffering outweighs everything else’. This seems to require a model of what sentience is /​ how it works /​ what it’s for.

It might be that the explanation for all this is simple—that you get all this for free by positing a simple mechanism. So I’m not decomposing this to argue the prior must be low. I’m just pointing at what has to be established at all, and that it isn’t a freebie.

Eli Tyre: We have lots of intimate experience of how, for humans, damage and nociception leads to pain experience.

And the mappings make straightforward evolutionary sense. Once you’re over the hump of positing conscious experience at all, it makes sense that damage is experienced as negative conscious experience.

Conditioning on chickens being conscious at all, it seems like the prior is that their [conscious] experience of [nociception] follows basically the same pattern as a human’s.

It would be really surprising to me if humans were conscious and chickens were conscious, but humans were conscious of pain, while chickens weren’t?!?

That would seem to imply that conscious experience of pain is adaptive for humans but not for chickens?

Like, assuming that consciousness is that old on the phylogenetic tree, why is conscious experience of pain a separate thing that comes later?

I would expect pain to be one of the first things that organisms evolved to be conscious of.

Rob Bensinger: I think this is a plausible argument, and I’d probably bet in that direction. Not with much confidence, though, ‘cause it depends a lot on what the function of ‘consciousness’ is over and above things like ‘detecting damage to the body’ (which clearly doesn’t entail ‘conscious’).

My objection was to “IF chickens are sentient, then the chickens in factory farms are DEFINITELY in a lot of pain.”

I have no objection to ‘humans and chickens are related, so we can make a plausible guess that if they’re conscious, they suffer in situations where we’d suffer.’

Example: maybe consciousness evolved as a cog in some weird specific complicated function like ‘remembering the smell of your kin’ or, heck, ‘regulating body temperature’. Then it later developed things like globalness / binding / verbal reportability, etc.

My sense is there’s a crux here like

Eli: ‘Conscious’ is a pretty simple, all-or-nothing thing that works the same everywhere. If a species is conscious, then we can get a good first-approximation picture by imagining that we’re inside that organism’s skull, piloting its body.

Me: ‘Conscious’ is incredibly complicated and weird. We have no idea how to build it. It seems like a huge mechanism hooked up to tons of things in human brains. Simpler versions of it might have a totally different function, be missing big parts, and work completely differently.

I might still bet in the same direction as you, because I know so little about which ways chicken consciousness would differ from my consciousness, so I’m forced to not make many big directional updates away from human anchors. But I expect way more unrecognizable-weirdness.

More specifically, re “why is conscious experience of pain a separate thing that comes later”: https://www.lesswrong.com/posts/HXyGXq9YmKdjqPseW/rob-b-s-shortform-feed?commentId=mZw9Jaxa3c3xrTSCY#mZw9Jaxa3c3xrTSCY [section 2 above] provides some reasons to think pain-suffering is relatively conditional, complex, social, high-level, etc. in humans.

And noticing + learning from body damage seems like a very simple function that we already understand how to build. If a poorly-understood thing like consciousness is going to show up in surprising places, it would probably be more associated with functions that are less straightforward.

E.g., it would be shocking if consciousness evolved to help organisms notice body damage, or learn to avoid such damage.

It would be less shocking if some weird, esoteric aspect of intelligence (e.g., ‘aggregating neural signals in a specific efficiency-improving way’) caused consciousness.

But in that case we should be less confident, assuming chickens are conscious, that their consciousness is ‘hooked up’ to trivial-to-implement stuff like ‘learning to avoid bodily damage at all’.

silencenbetween:

[This seems to require a model of what sentience is.]

I think I basically agree with you that there is a further question of whether pain = suffering, and that that would ideally be established.

But I feel unsure of this claim. Like, I have guesses about consciousness and I have priors and intuitions about it, and that leads me to feeling fairly confident that chickens experience pain.

But to my mind, we can never unequivocally establish consciousness, it’s always going to be a little bit of guesswork. There’s always a further question of first hand experience.

And in that world, models of consciousness refine our hunches of it and give us a better shared understanding, but they never conclusively tell us anything.

I think this is a stronger claim than just using Bayesian reasoning. Like, I don’t think you can have absolute certainty about anything…

but I also think consciousness inhabits a more precarious place. I’m just making the hard problem arguments, I guess, but I think they’re legit.

I don’t think the hard problem implies that you are in complete uncertainty about consciousness. I do think it implies something trickier about it relative to other phenomena.

Which to me implies that a model of the type you’re imagining wouldn’t conclusively solve the problem any more than models of the sort “we’re close to each other evolutionarily” do.

I think models of it can help refine our guesses about it, give us clues, I just don’t see any particular model being the final arbiter of what counts as conscious.

And in that world I want to put more weight on the types of arguments that Eli is making. So, I guess my claim is that these lines of evidence should be about as compelling as other arguments.

Rob Bensinger: I think we’ll have a fully satisfying solution to the hard problem someday, though I’m pretty sure it will have to route through illusionism—not all parts of the phenomenon can be saved, even though that sounds paradoxical or crazy.

If we can’t solve the problem, though (the phil literature calls this view ‘mysterianism’), then I don’t think that’s a good reason to be more confident about which organisms are conscious, or to put more weight on our gut hunches.

I endorse the claim that the hard problem is legit (and hard), btw, and that it makes consciousness trickier to think about in some ways.

David Manheim:

[It might be that the explanation for all this is simple—that you get all this for free by positing a simple mechanism. So I’m not decomposing this to argue the prior must be low. I’m just pointing at what has to be established at all, and that it isn’t a freebie.]

Agreed—but my strong belief on how any sentience /​ qualia would need to work is that it would be beneficial to evolutionary fitness, meaning pain would need to be experienced as a (fairly strong) negative for it to exist.

Clearly, the same argument doesn’t apply to GPT-3.

Rob Bensinger: I’m not sure what you mean by ‘pain’, so I’m not sure what scenarios you’re denying (or why).

Are you denying the scenario: part of an organism’s brain is conscious (but not tracking or learning from bodily damage), and another (unconscious) part is tracking and learning from bodily damage?

Are you denying the scenario: a brain’s conscious states are changing in response to bodily damage, in ways that help the organism better avoid bodily damage, and none of these conscious changes feel ‘suffering-ish’ /​ ‘unpleasant’ to the organism?

I’m not asserting that these are all definitely plausible, or even possible—I don’t understand where consciousness comes from, so I don’t know which of these things are independent. But you seem to be saying that some of these things aren’t independent, and I’m not sure why.

David Manheim: Clearly something was lost here—I’m saying that the claim that there is a disconnect between conscious sensation and tracking bodily damage is a difficult one to believe. And if there is any such connection, the reason that physical damage is negative is that it’s beneficial.

Rob Bensinger: I don’t find it difficult to believe that a brain could have a conscious part doing something specific (eg, storing what places look like), and a separate unconscious part doing things like ‘learning which things cause bodily damage and tweaking behavior to avoid those’.

I also don’t find it difficult to believe that a conscious part of a brain could ‘learn which things cause bodily damage and tweak behavior to avoid those’ without experiencing anything as ‘bad’ per se.

Eg, imagine building a robot that has a conscious subsystem. The subsystem’s job is to help the robot avoid bodily damage, but the subsystem experiences this as being like a video game—it’s fun to rack up ‘the robot’s arm isn’t bleeding’ points.

David Manheim: That is possible for a [robot]. But in a chicken, you’re positing a design with separable features and subsystems that are conceptually distinct. Biology doesn’t work that way—it’s spaghetti towers the whole way down.

Rob Bensinger: What if my model of consciousness says ‘consciousness is an energy-intensive addition to a brain, and the more stuff you want to be conscious, the more expensive it is’? Then evolution will tend to make a minimum of the brain conscious—whatever is needed for some function.

[people can be conscious about almost any signal that reaches their brain, largely depending on what they have been trained to pay attention to]

This seems wrong to me—what do you mean by ‘almost any signal’? (Is there a paper you have in mind operationalizing this?)

David Manheim: I mean that people can choose /​ train to be conscious of their heartbeat, or pay attention to certain facial muscles, etc, even though most people are not aware of them. (And obviously small children need to be trained to pay attention to many different bodily signals.)

[Clearly something was lost here—I’m saying that the claim that there is a disconnect between conscious sensation and tracking bodily damage is a difficult one to believe. And if there is any such connection, the reason that physical damage is negative is that it’s beneficial.]

(I see—yeah I phrased my tweet poorly.)

I meant that pain would need to be experienced as a negative for consciousness to exist—otherwise it seems implausible that it would have evolved.

Rob Bensinger: I felt like there were a lot of unstated premises here, so I wanted to hear what premises you were building in (eg, your concept of what ‘pain’ is).

But even if we grant everything, I think the only conclusion is “pain is less positive than non-pain”, not “pain is negative”.

David Manheim: Yeah, I grant that the only conclusion I lead to is relative preference, not absolute value.

But for humans, I’m unsure that there is a coherent idea of valence distinct from our experienced range of sensation. Someone who’s never missed a meal finds skipping lunch painful.

Rob Bensinger: ? I think the idea of absolute valence is totally coherent for humans. There’s such a thing as hedonic treadmills, but the idea of ‘not experiencing a hedonic treadmill’ isn’t incoherent.

David Manheim: Being unique only up to linear transformations implies that utilities don’t have a coherent notion of (psychological) valence, since you can always add some number to shift it. That’s not a hedonic treadmill, it’s about how experienced value is relative to other things.

Rob Bensinger: ‘There’s no coherent idea of valence distinct from our experienced range of sensation’ seems to imply that there’s no difference between ‘different degrees of horrible torture’ and ‘different degrees of bliss’, as long as the organism is constrained to one range or the other.

Seems very false!

David Manheim: It’s removed from my personal experience, but I don’t think you’re right. If you read Knut Hamsun’s “Hunger”, it really does seem clear that even in the midst of objectively painful experiences, people find happiness in slightly less pain.

On the other hand, all of us experience what is, historically, an unimaginably wonderful life. Of course, it’s my inside-view /​ typical mind assumption, but we experience a range of experienced misery and bliss that seems very much comparable to what writers discuss in the past.

Rob Bensinger: This seems like ‘hedonic treadmill often happens in humans’ evidence, which is wildly insufficient even for establishing ‘humans will perceive different hedonic ranges as 100% equivalent’, much less ‘this is true for all possible minds’ or ‘this is true for all sentient animals’.

“even in the midst of objectively painful experiences, people find happiness in slightly less pain” isn’t even the right claim. You want ‘people in objectively painful experiences can’t comprehend the idea that their experience is worse, seem just as cheerful as anyone else, etc’

David Manheim:

[This seems like ‘hedonic treadmill often happens in humans’ evidence, which is wildly insufficient even for establishing ‘humans will perceive different hedonic ranges as 100% equivalent’, much less ‘this is true for all possible minds’ or ‘this is true for all sentient animals’.]

Agreed—I’m not claiming it’s universal, just that it seems at least typical for humans.

[“even in the midst of objectively painful experiences, people find happiness in slightly less pain” isn’t even the right claim. You want ‘people in objectively painful experiences can’t comprehend the idea that their experience is worse, seem just as cheerful as anyone else, etc’]

Flip it, and it seems trivially true - ‘people in objectively wonderful experiences can’t comprehend the idea that their experience is better, seem just as likely to be sad or upset as anyone else, etc’

Rob Bensinger: I don’t think that’s true. E.g., I think people suffering from chronic pain acclimate a fair bit, but nowhere near completely. Their whole life just sucks a fair bit more; chronic pain isn’t a happiness- or welfare-preserving transformation.

Maybe people believe their experiences are a lot like other people’s, but that wouldn’t establish that humans (with the same variance of experience) really do have similar-utility lives. Even if you’re right about your own experience, you can be wrong about the other person’s in the comparison you’re making.

David Manheim:

Agreed—but I’m also unsure that there is any in-principle way to unambiguously resolve claims about how good/bad relative experiences are, so I’m not sure how to move forward in discussing this.

[I also don’t find it difficult to believe that a conscious part of a brain could ‘learn which things cause bodily damage and tweak behavior to avoid those’ without experiencing anything as ‘bad’ per se.]

That involves positing two separate systems which evidently don’t interact that happen to occupy the same substrate. I don’t see how that’s plausible in an evolved system.

Rob Bensinger: Presumably you don’t think in full generality ‘if a conscious system X interacts a bunch with another system Y, then Y must also be conscious’. So what kind of interaction makes consciousness ‘slosh over’?

I’d claim that there are complicated systems in my own brain that have tons of causal connections to the rest of my brain, but that I have zero conscious awareness of.

(Heck, I wouldn’t be shocked if some of those systems are suffering right now, independent of ‘my’ experience.)

Jacy Anthis:

[But in that case we should be less confident, assuming chickens are conscious, that their consciousness is ‘hooked up’ to trivial-to-implement stuff like ‘learning to avoid bodily damage at all’.]

I appreciate you sharing this, Rob. FWIW you and Eliezer seem confused about consciousness in a very typical way, No True Scotsmanning each operationalization that comes up with vague gestures at ineffable qualia. But once you’ve dismissed everything, nothing meaningful is left.

Rob Bensinger: I mean, my low-confidence best guess about where consciousness comes from is that it evolved in response to language. I’m not saying that it’s impossible to operationalize ‘consciousness’. But I do want to hear decompositions before I hear confident claims ‘X is conscious’.

Jacy Anthis: At least we agree on that front! I would extend that for ‘X is not conscious’, and I think other eliminativists like Brian Tomasik would agree that this is a huge problem in the discourse.

Rob Bensinger: Yep, I agree regarding ‘X is not conscious’.

(Maybe I think it’s fine for laypeople to be confident-by-default that rocks, electrons, etc are unconscious? As long as they aren’t so confident they could never update, if a good panpsychism argument arose.)

Sam Rosen: It’s awfully suspicious that:

  • Pigs in pain look and sound like what we would look and sound like if we were in pain.

  • Pigs have a similar brain to us with similar brain structures.

  • The parts of the brain that light up in pigs’ brains when they are in pain are the same as the parts of the brain that light up in our brains when we are in pain. (I think this is true, but could totally be wrong.)

  • Pain plausibly evolved as a mechanism to deter physical damage, a mechanism pigs need just as much as humans do.

  • Pain feels primal and simple—something a pig could understand. It’s not like counterfactual reasoning, abstraction, or complicated emotions like sonder.

It just strikes me as plausible that pigs can feel lust, thirst, pain and hunger—and humans merely evolved to learn how to talk about those things. It strikes me as less plausible that pigs unconsciously have mechanisms that control “lust”, “thirst,” “pain,” and “hunger” and humans became the first species on Earth that made all those unconscious functional mechanisms conscious.

(Like why did humans only make those unconscious processes conscious? Why didn’t humans, when emerging into consciousness, become conscious of our heart regulation and immune system and bones growing?)

It’s easier to evolve language and intelligence than it is to evolve language and intelligence PLUS a way of integrating and organizing lots of unconscious systems into a consciousness-producing system where the attendant qualia of each subsystem incentivize the correct functional response.

Rob Bensinger: Why do you think pigs evolved qualia, rather than evolving to do those things without qualia? Like, why does evolution like qualia?

Sam Rosen: I don’t know if evolution likes qualia. It might be happy to do things unconsciously. But thinking non-human animals aren’t conscious of pain or thirst or hunger or lust means adding a big step from apes to humans. Evolution prefers smooth gradients to big steps.

The last paragraph of my original comment is important for my argument.

Rob Bensinger: My leading guess about where consciousness comes from is that it evolved in response to language.

Once you can report fine-grained beliefs about your internal state (including your past actions, how they cohere with your present actions, how this coherence is virtuous rather than villainous, how your current state and future plans are all the expressions of a single Person with a consistent character, etc.), there’s suddenly a ton of evolutionary pressure for you to internally represent a ‘global you state’ to yourself, and for you to organize your brain’s visible outputs to all cohere with the ‘global you state’ narrative you share with others; where almost zero such pressure exists before language.

Like, a monkey that emits different screams when it’s angry, hungry, in pain, etc. can freely be a Machiavellian reasoner: it needs to scream in ways that at least somewhat track whether it’s really hungry (or in pain, etc.), or others will rapidly learn to distrust its signals and refuse to give aid. But this is a very low-bandwidth communication channel, and the monkey is free to have basically any internal state (incoherent, unreflective, unsympathetic-to-others, etc.) as long as it ends up producing cries in ways that others will take sufficiently seriously. (But not maximally seriously, since never defecting/​lying is surely not going to be the equilibrium here, at least for things like ‘I’m hungry’ signals.)

The game really does change radically when you’re no longer emitting an occasional scream, but are actually constructing sentences that tell stories about your entire goddamn brain, history, future behavior, etc.

Nell Watson: So, in your argument, would it follow then that feral human children or profoundly autistic human beings cannot feel pain, because they lack language to codify their conscious experience?

Rob Bensinger: Eliezer might say that? Since he does think human babies aren’t conscious, with very high confidence.

But my argument is evolutionary, not developmental. Evolution selected for consciousness once we had language (on my account), but that doesn’t mean consciousness has to depend on language developmentally.