Thanks for this.
Another question...
Did those involved with CEA study the literature on human value drift — if so, what did they find? What is CEA’s own experience with it?
Examples I’ve witnessed several times each: someone plans to practice only environmental law but ends up in corporate law. Another person plans to become a professional philanthropist, but later fails to donate and instead spends the money keeping up with the Joneses. Someone else plans to be a genuine, pleasant person, but then studies “pickup artistry,” finds that being a manipulative, cocky jerk actually does increase their success with women, and a bit later I discover they’re a cocky, manipulative jerk to everyone. (Note to everyone: there are lots of ways to increase one’s romantic success without becoming a cocky, manipulative jerk!)
I wish I knew how often this kind of value drift happens. Value drift with regard to professional philanthropy seems to happen a lot in the SI community; maybe it happens less often in communities focused on more “ground-level” causes like poverty reduction? What can be done to prevent it?
Of course, we probably don’t want to prevent some kinds of value drift, e.g. value drift that occurs strictly due to encountering new and better information. I used to care a lot about God’s will, until I gained information indicating God’s non-existence.
Hm. It seems like a couple of your examples may involve value drift due to behavioral reinforcement. The behavior of buying expensive stuff gets reinforced when friends act impressed, or the behavior of being a jerk gets reinforced when it gets you laid. If it’s an entirely behavioral phenomenon, it seems possible that the “value drift” is only a drift in revealed preferences, not reflective ones.
Schelling fences or similar come to mind as a way to prevent this sort of behavior change in oneself (“be a decent person always”, “donate 30% of my income”).
I’ve had a fair amount of success drawing up policies like these for myself to follow. My policies typically have built-in lag times before any change to them takes effect, which I’ve found to be key: it lets me experiment to see what’s convenient, workable, and a reasonable compromise, without ditching the entire policy whenever I encounter problems. I don’t think the effectiveness of these policies is best explained in terms of game theory, however. The way it feels from the inside is that if I draw up a policy while in a relatively high-willpower state, I can “lock in” a bunch of future policy-related decisions that I’ll later make using minimal willpower.
If people are interested in the details of what I’ve learned about administering policies like these effectively, I could probably write a discussion post about it.
I’m interested.
Also interested
I know it’s days later... but I’m interested.
http://lesswrong.com/lw/flr/thoughts_on_designing_policies_for_oneself/
Of course, we probably don’t want to prevent some kinds of value drift, e.g. value drift that occurs strictly due to encountering new and better information. I used to care a lot about God’s will, until I gained information indicating God’s non-existence.

Why do you think that doesn’t describe the other examples you cited? Those examples are about other people, while this one is about yourself. From the inside, change feels like finding the truth; from the outside, it looks like value drift. What do those other people say about themselves? What do the religious people you know say about you?
What you do changes who you are, and few young people’s plans for their lives will survive longer than a tiny fraction of them.
I don’t know that that’s the whole story.
For example, during the time I worked at Google, I observed that my coworkers’ values, including those of people I otherwise respected, were drifting towards the values of the organization (or the value of loyalty toward the organization). This was part of the reason I left. I suspect this kind of value drift happens to most people in any organization or job. It’s hard not to absorb the values you spend most of your time around, let alone the values that pay your salary.
So this is a case where I anticipated that my values were likely to drift in a way incompatible with my current values, and removed myself from a situation I thought likely to cause that drift. (Mind you, there were plenty of other reasons.)
Not to say that you weren’t right to do that, but I notice that the religious will sometimes avoid consorting with the non-religious for the same reason.
Faced with an experience that one foresees will change one’s values, how should one decide whether to go ahead with it? And given several sets of values, such that from the standpoint of any one of them all the others look worse, how should one decide which to adopt?
Hi Luke,
This is certainly important for 80k; it’s on our list of strategic considerations to investigate.
We haven’t yet looked into it in depth, beyond some knowledge of the relevant psychology literature (e.g. a couple of probably-dodgy studies have found that priming people with images of money makes them more selfish).
We’ve put a couple of measures in place which seem like they might help to mitigate the types of drift that don’t involve updating on new information. First, because people want to be consistent, making a public commitment to make the world a better place in an effective way encourages people not to drift towards being non-altruistic, while being sufficiently broad not to commit people to moral beliefs they might well want to change (e.g. that animal suffering doesn’t matter). Second, participating in the 80k community could help to counteract destructive social pressure from workplace communities. It remains to be seen how well these measures work; we’ll be keeping a close eye.
Ben