My sense is that “enlightenment” is a perceptual-emotional shift rather than any change of belief or judgment, and this makes it difficult to communicate, just as with any other qualia a person hasn’t experienced. It’s not unlike trying to communicate what a hypothetical novel color looks like to someone who hasn’t seen it.
Of course, if I can see ultraviolet colors (due to some novel CRISPR treatment or something), I can offer a good description of the mechanics producing my unique experience, i.e. “I can see a wavelength you can’t.” In the case of enlightenment, however, we don’t have commonly accepted and understood models like the wavelength of light. If we had such models for qualia too, I think Val could communicate in an understandable way what was going on in his mind, even if the mechanical description couldn’t convey the actual experience. (I’m reminded of the Mary’s Room thought experiment.)
In the case of Val’s Kensho, I don’t think I’ve ever occupied that mental state, but I’ve experienced enough variation along the relevant dimensions of perception, emotion, and relation to reality that I get that he’s moved in a certain direction within a coordinate system of sorts. My understanding alone doesn’t put me in the same perceptual-mental state, but I feel like I could follow if I did the right things.
I think the advice to get used to using fake frames is on point as a path toward this, since it’s close to the skill of shifting one’s perceptual-emotional state. Rationalists focus on having a map which matches the territory and are therefore constantly drawing in new lines and editing old ones; Val’s pointing at the skill of reconsidering the ontology of the representation itself. What if roads, houses, and trees weren’t the basic units of a map? This thought maneuver requires pulling back from one’s “object-level models”, and I see that pulling back generalizing to pulling back from models entirely and being able to see “raw perception-emotion”. At that level, mental transformations are possible which aren’t about beliefs or judgments. You don’t shift to consider death less bad, but your relationship to it is changed, even if it is still horrific.
“Okay” is such an underqualified word for what I think Val is trying to convey. At least if it’s the same thing I have a sense of.
Of course, if I can see ultraviolet colors (due to some novel CRISPR treatment or something), I can offer a good description of the mechanics producing my unique experience, i.e. “I can see a wavelength you can’t.” In the case of enlightenment, however, we don’t have commonly accepted and understood models like the wavelength of light. If we had such models for qualia too, I think Val could communicate in an understandable way what was going on in his mind, even if the mechanical description couldn’t convey the actual experience.
If you could see ultraviolet colors, you could use that perception to, e.g., distinguish an object that is radiating in UV from one which isn’t (which normal humans cannot do). It would be trivial to verify that you had some perception that others lacked (using any of myriad reliable, repeatable, unambiguous experiments). No description of your internal state, or of the true mechanism of your new power, would be necessary.
What is the analogous trivially-verifiable power that is bestowed by enlightenment?
I haven’t achieved any state profound enough that I’d consider it enlightenment, but I’ll answer based on my understanding and what I’ve experienced so far.
I don’t think there is a trivially-verifiable power conferred by enlightenment, but I would wager that people who have experienced enlightenment will perform systematically better at certain tasks, including:
Maintaining emotional stability and wellbeing regardless of circumstance, e.g. intense stress, uncertainty, tragic loss.
A better ability to stare directly at uncomfortable truths, and, as a result, less motivated cognition.
It’s a useful state to achieve if you plan to wake up each day, confront the sheer magnitude of the suffering that exists in the world, or carry the burden of trying to ensure the far future is as good as it could be, while hoping to be a psychologically well-adjusted and effective human. All the more so if the tasks you carry out push you to your limits[1].
It’d take resource-intensive experiments to measure these effects, but I’d still wager on their existence. Much of my confidence comes from the fact that each time I feel myself move along these dimensions, I reap marginal benefits.
[1] I think many EAs suffer because they take on these tasks without the mental infrastructure required to bear them and still flourish.
Interesting! This is starting to sound quite a bit like something resembling verifiable claims (not quite, but much closer than most other stuff in this vein!).
Could you say a bit more about what sorts of experiments you envision, that could verify the effects you allude to? (Or, to put it another way: you said you’d wager on the existence of these effects—do you mind sketching out in more detail how we might construct the conditions of such a bet, with sufficient rigor to make it definitely resolvable?)
In any case, I very much appreciate this sort of response, thanks.
Likewise, I really appreciate Ruby’s replies here. I haven’t reflected deeply enough on the “perceptual-emotional shift” thing to know whether I fully agree, but it seems very plausible to me, and the claims he’s putting forward sound right to me.
Psychological resilience and motivated cognition are difficult to measure, but I’m very certain they’re real. Not everything that is real and has a large causal effect on the world is easily measured. I’m not inclined to sketch out measurement protocols in this comment thread, but I’d recommend How to Measure Anything as the book I’d turn to if I were to try.
Glad it’s helpful!