I don’t have the internal capacity to feel large numbers as deeply as I should, but I do have the capacity to feel that prioritizing my use of resources is important, which amounts to a similar thing. I don’t have an internal value assigned for one million birds or for ten thousand, but I do have a value that says maximization is worth pursuing.
Because of this, and because I’m basically an ethical egoist, I disagree with your view that effective altruism requires ignoring our care-o-meters. I think it requires only their training and refinement, not complete disregard. Saying that we should ignore our actual values and focus on “more rational” values we could counterfactually have is disquieting to me because it seems to involve an underlying nihilism of sorts. Values are orthogonal to rationality; I’m not sure why many people here accept that idea in some cases but ignore it in others. If we’re going to get rid of values for not being sufficiently rational or consistent, we might as well delete them all.
Gunnar Zarncke makes a good point as well, one I think complements my argument. There’s no standard with which to choose between helping all the birds and helping none, once you’ve thrown the care-o-meter away.
I understand what you mean by saying values and rationality are orthogonal. If I had a known, stable, consistent utility function, you would be absolutely right.
But 1) my current (supposedly terminal) values are certainly not orthogonal to each other, and may be (in fact, probably are) mutually inconsistent some of the time. And 2) there are situations where I may want to change, adopt, or delete some of my values in order to better achieve the ones I currently espouse (http://lesswrong.com/lw/jhs/dark_arts_of_rationality/).
I worry that such consistency isn’t possible. If you prefer chocolate over vanilla after exposure to one set of persuasion techniques, and vanilla over chocolate after another, it seems you have no consistent preference at all. If all our values are context-sensitive in this way, then trying to enforce consistency could simply delete everything. Alternatively, it could mean that CEV will ultimately worship Moloch rather than humans, valuing whatever leads to amassing as much power as possible. If inefficiency or irrationality is somehow essential to or presupposed by human values, I want the values to stay and the rationality to go. Given all the weird results in the behavioral economics literature, and the poor optimization of the evolutionary processes from which our values emerged, such inconsistency seems probable.
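To make that worry concrete, here’s a minimal toy sketch (all names hypothetical, not anyone’s actual decision procedure): a choice function whose output depends on which persuasion context you’re in, plus a brute-force check that no single context-free utility assignment reproduces both observed choices. With context-dependent choices, the representation already fails at two data points.

```python
from itertools import permutations

FLAVORS = ["chocolate", "vanilla"]

def choose(pair, context):
    # Context-dependent preference: which persuasion technique you were
    # exposed to flips the choice between the same two options.
    if context == "persuasion_A":
        return "chocolate"
    return "vanilla"

def consistent_utility_exists(observations):
    # A context-free utility function must reproduce every observed choice
    # regardless of context. Try every strict ordering of the flavors.
    for ordering in permutations(FLAVORS):
        utility = {flavor: rank for rank, flavor in enumerate(ordering)}
        if all(max(pair, key=utility.get) == choice
               for pair, _context, choice in observations):
            return True
    return False

pair = ("chocolate", "vanilla")
observations = [(pair, ctx, choose(pair, ctx))
                for ctx in ("persuasion_A", "persuasion_B")]

print(consistent_utility_exists(observations))  # False: no one ordering fits both
```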