I’ve been meditating since I was about 19, since before I came across rationality / effective altruism. There is quite a bit of overlap between the sets of things I’ve been able to learn from both schools of thought, but I think there are still a lot of very useful (possibly even necessary) things that can only be learned from meditative practices right now. This is not because rationality is inherently incapable of reaching the same conclusions, but because within rationality it would take very strong, well-developed theories, perhaps built on large-scale empirical observations of human behavior, to arrive at them. With meditation, on the other hand, a lot of these same conclusions are just “obvious.”
Most of these things have to do with subtle issues of psychology, particularly with values and morality. For example, before I began meditating, I generally believed that:
Moral principles could be determined logically from a set of axioms that were “self-evidently true”, and once I had deduced those principles, I would simply follow them.
The things that seemed to make me happy, like having friends, being in love, and feeling accomplished, were not incompatible with true moral principles, and were in fact instrumentally helpful in achieving terminal moral goals.
I intrinsically valued what was moral. If it ever seemed like I valued what was not moral, I could chalk it up to temporary or easily surmountable issues, like vestigial animal instincts or lack of willpower: basically, desires that could be easily overridden.
Pleasure, pain, and emotions were more like guidelines, things that made it possible to act quickly in certain situations. Insofar as certain forms of pleasure (like love) were “intrinsic values”, they did not interfere with moral goals. They were not things that determined my behavior very strongly, and they certainly didn’t have subtle cascading effects on the entire set of my beliefs.
After meditating for a long time, I found that many of these beliefs had been eradicated. Right now it seems more likely that:
My values are not even consistent, let alone determined by moral principles. It’s not clear that deducing a good set of moral principles could even change my values.
My values are malleable, but not easily malleable in a direction that can be controlled by me (not without a ton of meditation, anyway).
The formalization of my values in my mind is not a good predictor of what my actions will be. A better predictor involves far more short-term mechanisms in my psyche.
The beliefs I had prior to meditating were more likely constructed so that I could report them to other people in a way that would make those people more likely to value me and approve of me.
Values that truly do seem hard to deconstruct are surprisingly selfish. For example, I assumed that I valued approval from other humans because it was an instrumental goal that helped me judge the quality of my actions. It now seems more likely that social approval is in fact an intrinsic goal, which is very worrying with regard to my ability to attain my altruistic goals.
If it turns out that meditating has given me better self-reflective capabilities, and the things I’ve observed are accurate, then this has some pretty far-reaching implications. If I’m not extremely atypical, then most people are probably very blind to their own intrinsic values. This is a worrying prospect for the long-term efficacy of effective altruism.
Hopefully this isn’t too controversial to say, but it seems to me like a lot of the main currents within EA operate more or less along the lines of my prior-to-meditating beliefs. Here I’m thinking about the type of ethics where you are encouraged to maximize your altruistic output: things like “earn to give”, “choose only the career that maximizes your ability to be altruistic”, “donate as much of your time and energy as you can to being altruistic”, etc. Of course EA thought is very diverse, so this doesn’t represent all of it. But given the way my values currently seem to be structured, it’s probably unrealistic that I could actually live up to these standards, unless each altruistic act produced an abnormally large amount of happiness that outweighed most of my other values. It’s of course possible that I’m unusually selfish or even a sociopath, but my prior on that is very low.
On the other hand, if my values really are malleable, and it is possible to influence them, then it makes sense for me to spend a lot of time deciding how that process should proceed. This is only possible because my values are inconsistent: if they were consistent, it would be against my values to change them, but once a set of values is inconsistent, it can actually make sense to try to alter it. And meditation might turn out to be one of the ways to make this kind of change to your own mind.