What if The Gift of Pain is true of mental suffering?
Meditators often describe “dissolving pain into vibrations”. When this happens, you still get the sensory inputs that (normally) cause pain. You still take action to prevent the damage they cause to your body. You just don’t create the conscious experience of pain-suffering.
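Here is a toy sketch of that claim in Python (my own framing for illustration; names like construct_suffering are hypothetical, not any real model of nociception): the protective response reads the raw signal directly, so removing the suffering layer leaves behavior unchanged.

```python
def protective_response(nociceptive_signal: float) -> str:
    """Reflexive action depends only on the raw signal, not on suffering."""
    return "withdraw hand" if nociceptive_signal > 0.7 else "no action"

def construct_suffering(nociceptive_signal: float, dissolved: bool) -> float:
    """The conscious pain-suffering layer. 'Dissolving into vibrations' is
    modeled here as simply not building this representation."""
    return 0.0 if dissolved else nociceptive_signal

signal = 0.9  # hypothetical burn-level stimulus
for dissolved in (False, True):
    action = protective_response(signal)  # identical either way
    suffering = construct_suffering(signal, dissolved)
    print(f"dissolved={dissolved}: action={action!r}, suffering={suffering}")
```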
Does being liberated from mental suffering prevent people from adjusting/protecting themselves in healthy ways?
I feel it’s like surgery. In the immediate aftermath there can be a vulnerability period, but after it integrates you’re in a healthier place.
I met one prominent rationalist figure who struck me as very obviously a psychopath. He claimed to be enlightened. Is enlightenment incompatible with psychopathy?
If enlightenment is not incompatible with psychopathy: it is said that psychopaths don’t feel empathy for their future selves. I would imagine this frees them from those forms of mental suffering in which you embody (in the present) the pain of hypothetical future yous. Are psychopaths born partially enlightened?
I don’t know, but considering the base rates of “enlightenment is rare” and “psychopaths often lie”, I would be skeptical of his claim to enlightenment. Probably the psychopath is wrong but not lying. Psychopaths can do things without noticeably harming themselves that normal people can’t, and this person is probably mistaking that for enlightenment.
According to Steven Byrnes’s model of awakening, awakening ought to be possible for psychopaths. How such people would behave, and whether awakening would affect their psychopathy, I have no idea.
Does it only count if achieved through mental introspection? Brandon Sanderson is famously incapable of feeling mental pain. Is he enlightened?
Some people end up awakened without formal introspective practice, so no, it doesn’t only count if achieved through mental introspection. There’s more to it than not feeling mental pain. Given that, plus base rates, Brandon Sanderson is probably not enlightened.
One theory of enlightenment I have had is that it is the opposite of what happened to me on LSD: de-identification from those parts of one’s mind that can suffer. Is this true? If true, is the suffering really gone?
Identification is a cause of suffering, but it is not the only cause. Consequently, ending identification eliminates identification-originated suffering, but not other kinds of suffering.
The suffering is really gone. You’re not just dissociating from it.
You seem to be missing the point here. Presumably we have the capacity to suffer because it facilitated our survival somehow. How are you so sure you don’t need to hear the message suffering was sending you?
FWIW, my current model is that “meditation removes your capacity to suffer” is not quite right. Rather, suffering is an error signal indicating that something like a prediction error is happening. Meditation reduces suffering by causing those prediction errors to get fixed.
You know how, when trying to understand a complicated phenomenon, it’s often too hard to start out with a full model of it? So you start with one that has lots of simplifying assumptions, and then see if you could gradually drop those?
I think that it’s similar with the brain. Evolution has hardwired it with some priors about the nature of reality that help bootstrap its reality- and self-modeling and motivation. Those assumptions are reasonably close to correct, but somewhat off. Among other things, what lsusr calls “desire” seems to involve creating temporary false beliefs. E.g., if you think “ugh, I really want this boring day to be over”, I think that is on some level implemented by the brain trying to overwrite the “the day isn’t over yet” data with an experience of the day finally ending.
Since these modeling assumptions cause the brain to make flawed predictions, they generate errors as the prediction is compared against the data and mismatches are detected. Whenever this happens, either on a low level (in the case of assumptions like “objects are discrete and have clear boundaries rather than being continuous in the raw data”) or on a higher level (with conscious desires like “I want this boring day to be over”), it creates a tension or error signal that is subjectively experienced as suffering.
One obvious way by which brains try to minimize that error signal is by avoiding situations that would trigger it. If something frequently causes you to suffer a lot, you likely don’t want to keep repeating it (unless this is saving you from an even greater suffering).
Alternatively, if one manages to attend to the error signal without trying to change it, it will somehow (I don’t have a good model of what exactly happens here, maybe it’s some more biologically realistic analogue of backpropagation) cause the brain to shift its priors to better match the data. This is what happens in meditation. When the priors get adjusted or moved from a category like “absolute constraints” to “intermediary modeling assumptions”, this removes a source of error and therefore also reduces suffering.
So meditation doesn’t break the capacity to suffer. The capacity for suffering, as in the ability for the brain to notice when its predictions are off, is retained or even enhanced. It’s just that as the brain refactors itself to make more accurate predictions, the number of things that would create that error signal drops drastically.
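To make the shape of this concrete, here is a minimal toy sketch (my own formalization for illustration, with made-up numbers; LEARNING_RATE stands in for whatever update process the brain actually uses). Avoidance keeps the data away from the prior, so the error never gets fixed; meditation sits with the mismatch until the prior updates:

```python
LEARNING_RATE = 0.5  # stand-in for the brain's backprop-like update process

def prediction_error(prior: float, data: float) -> float:
    """The error signal that, on this model, is experienced as suffering."""
    return abs(prior - data)

def update_prior(prior: float, data: float) -> float:
    """Attending to the error without resisting it nudges the prior toward
    the data: the meditation route, as opposed to avoiding the data."""
    return prior + LEARNING_RATE * (data - prior)

prior, data = 1.0, 0.0  # e.g. "this day should be over" vs. "it isn't yet"
print("initial suffering:", prediction_error(prior, data))
for step in range(5):
    prior = update_prior(prior, data)
    print(f"step {step}: prior={prior:.2f}, "
          f"suffering={prediction_error(prior, data):.2f}")
```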
I’m still not convinced it’s a good idea to get enlightened, but thanks for the detailed explanation.
According to predictive coding, believing you’ll take an action just is how you take it, and believing you’ll achieve a goal just is how you intend it. This would mean if you desire more than you can achieve, you experience prediction error, but if you desire less than you can achieve, you just underachieve with no psychological warning.
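A toy illustration of that asymmetry (my own construction, with hypothetical numbers): overshooting capacity produces an error signal, while undershooting produces silence.

```python
def warning_signal(desired: float, achieved: float) -> float:
    """Prediction error fires only when the believed outcome fails to occur."""
    return max(0.0, desired - achieved)

capacity = 10.0  # what you could actually achieve
for desired in (15.0, 5.0):
    achieved = min(desired, capacity)  # you get what you aim for, capped by ability
    print(f"desired={desired}: achieved={achieved}, "
          f"warning={warning_signal(desired, achieved)}")
# desired=15.0 -> warning fires (5.0); desired=5.0 -> silence, though 10.0 was possible
```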
Because suffering isn’t a raw sensory input. It’s downstream of raw sensory inputs. I still get the raw sensory inputs.
Why do you believe that only raw sensory inputs are adaptive or have some evolved function? That seems extremely improbable—indeed, it contradicts large swaths of what we know from evolutionary psychology.
Of course not. Lots of neural action beyond raw sensory inputs has a real function.
We could try to figure this out by teaching lots of people to get rid of suffering and then watching them to see if it fucks them up.