First, our environment is still shaping our values and preferences, and thus the sort of world that we most want to live in might not be a world that would be mostly populated by us.
I simply have to ask: so what? I place no particular terminal value on evolution itself. I see nothing wrong, either aesthetically or morally, with simply overriding evolution through human deeds, the better to create the kind of world that, indeed, we living humans most want to live in. Who cares how probable it was, a priori, that evolution should spawn our sort of people in our preferred sort of environment?
Well, I suppose you do, for some reason, but I’m really confused as to why.
Second, if we have any conflicts about preferences, typically we would go up a level to resolve those conflicts.
Actually, I disagree: we usually just negotiate from a combination of heuristics for morally appropriate power relations (picture something Rawlsian; there are complex but, IMHO, well-investigated sociological arguments for why a Rawlsian approach to power relations is rational for the people involved) and morally inappropriate power relations (i.e., compulsion and brute force).
I suppose you could call the former component “going up a level”, but ultimately I think it grounds itself in the Rawls-esque dynamics of creating, out of social creatures who share only a little personality and experience in common, a common society that improves life for all its members and maximizes the expected yield of individual efforts. That matters particularly because many causally relevant attributes of individuals are high-entropy random variables, so we have to optimize expected values rather than plan around known positions. Ultimately, human individuals do not enter into society because some ontologically, metaphysically special Fundamental Particle of Morals collides with them and causes them to do so, but simply because people need other people to help each other out and to feel at all OK about being people: solidarity is a basic sociological force.
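To make the expected-value reading of that concrete, here is a minimal sketch in which the payoff numbers, the uniform prior over positions, and the two named arrangements are all invented assumptions of mine. It contrasts the expected-utility criterion I’m appealing to with Rawls’s own maximin rule:

```python
# Two stylized social arrangements, each mapping "positions" in society
# to a utility for whoever occupies that position. All payoff numbers
# are invented purely for illustration.
arrangements = {
    "laissez_faire": [12.0, 6.0, 2.0, 0.5],   # high peaks, harsh bottom
    "redistributive": [6.0, 5.0, 4.0, 3.0],   # flatter payoff profile
}

def expected_utility(payoffs):
    """Behind the veil: your position is a high-entropy random variable,
    so with a uniform prior over positions, rank arrangements by E[u]."""
    return sum(payoffs) / len(payoffs)

def maximin(payoffs):
    """Rawls's own criterion: judge an arrangement by its worst-off slot."""
    return min(payoffs)

for name, payoffs in arrangements.items():
    print(f"{name:15s}  E[u] = {expected_utility(payoffs):.2f}  "
          f"min(u) = {maximin(payoffs):.2f}")
# The two criteria disagree here: expected utility favors laissez_faire
# (5.12 vs 4.50), while maximin favors redistributive (3.00 vs 0.50).
```

Note that Rawls himself argued from maximin, not expectation; the example just shows that either way, the grounding is decision-theoretic, not metaphysical.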
So we can’t ground our conflict-resolution process in something moral instead of practical.
As you can see above, I think the conflict-resolution process is the most practical part of the morals of human life.
It seems to me that near-mode values are strongly biodetermined, but far-mode values are almost entirely culturally determined. Since most moral philosophy takes place in far mode, cultural determination is far more relevant.
Frankly, I think it is simply an error on the part of most so-called moral philosophy that it is conducted largely in a cognitive mode governed by secondary ideas-about-ideas, beliefs-in-beliefs, and impressions-about-impressions: a realm almost entirely devoid of experiential data.
While I don’t think “Near Mode/Far Mode” is a map that fully matches the psychological territory, insofar as we’re going to use it, I would consider Near Mode far more morally significant, precisely because it is informed directly by the actual experiences of the actual individuals involved. The social signals that convey “ideas” as we usually conceive of them in “Far Mode” carry a tiny fraction of the bandwidth of raw sensory experience and conscious ideation, and should accordingly be weighted far more lightly by those of us looking to ground our moral and aesthetic evaluations in data the same way we ground our factual evaluations in data.
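For a sense of scale on that bandwidth claim: the figures below are order-of-magnitude estimates of the sort sometimes cited in the psycholinguistics and neuroscience literature, and should be treated as assumptions rather than settled numbers.

```python
# Back-of-envelope comparison of channel widths. Both figures are rough
# order-of-magnitude estimates, not precise measurements: spoken language
# is often estimated at a few tens of bits per second of information,
# while raw retinal output is often put near ten megabits per second.
speech_bits_per_sec = 40            # ballpark for spoken language
sensory_bits_per_sec = 10_000_000   # ballpark for the optic nerve alone

ratio = sensory_bits_per_sec / speech_bits_per_sec
print(f"raw sensory channel ~ {ratio:,.0f}x wider than the speech channel")
# Even if both estimates are off by an order of magnitude, socially
# transmitted "ideas" remain a tiny fraction of experiential bandwidth.
```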
The first rule of bounded rationality is that data and compute-power are scarce resources, and you should broadly assume that inferences based on more of each are very probably better than inferences in the same domain performed with less of each—and one of these days I’ll have the expertise to formalize that!
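In lieu of that formalization, the data half of the claim is easy to show as a toy simulation; the Gaussian data and mean-estimation task are my own assumptions, chosen because the expected error there shrinks like 1/sqrt(n).

```python
import random
import statistics

random.seed(0)

def mean_abs_error(n_samples, n_trials=2000, true_mean=0.0):
    """Average |sample mean - true mean| over many trials: a direct
    measure of how inference quality scales with the amount of data."""
    errors = []
    for _ in range(n_trials):
        sample = [random.gauss(true_mean, 1.0) for _ in range(n_samples)]
        errors.append(abs(statistics.fmean(sample) - true_mean))
    return statistics.fmean(errors)

# More data -> smaller expected error, roughly as 1/sqrt(n):
# quadrupling the sample size about halves the error.
for n in (4, 16, 64, 256):
    print(f"n = {n:3d}   mean |error| = {mean_abs_error(n):.3f}")
```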
I simply have to ask: so what? I place no particular terminal value on evolution itself. I see nothing wrong, either aesthetically or morally, with simply overriding evolution through human deeds, the better to create the kind of world that, indeed, we living humans most want to live in.
I don’t think I was clear enough. I’m not stating that it is value-wrong to alter the environment; indeed, that’s what values push people to do. I’m saying that while the direct effect is positive, the indirect effects can be negative. For example, we might want casual sex to be socially accepted because casual sex is fun, and then discover that this means unpleasant viruses infect a larger proportion of the population, and if they’re suitably lethal, the survivors will, by selection if not experience, be those who are less accepting of casual sex. Or we might want to avoid a crash now and so transfer wealth from good predictors to poor predictors, and then discover that this has weakened the incentive to predict well, leading to worse predictions overall and more crashes. Both of those are mostly cultural examples, and I suspect the genetic examples will suggest themselves.
That is, one of the ways that values drift is that the environmental change brought on by one period’s exertion of its morals may lead to the destruction of those morals in the next period. If you care about value preservation, this is one of the forces changing values that needs to be counteracted or controlled.
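That feedback loop is easy to caricature in a few lines. Below is a toy replicator-style model, with every parameter invented for illustration: a “permissive” value pays a direct benefit to its carriers, but its prevalence raises an environmental hazard that falls on those same carriers, so the value erodes the conditions of its own success.

```python
# Toy model of value drift via environmental feedback (all numbers
# invented). A widely held "permissive" value is directly rewarding,
# but the environment it creates selects against its own carriers.
def simulate(periods=31, permissive=0.8,
             direct_benefit=0.10, hazard_strength=0.25):
    for t in range(periods):
        hazard = hazard_strength * permissive        # scales with prevalence
        fitness_permissive = 1.0 + direct_benefit - hazard
        fitness_restrictive = 1.0
        total = (permissive * fitness_permissive
                 + (1.0 - permissive) * fitness_restrictive)
        permissive = permissive * fitness_permissive / total
        if t % 5 == 0:
            print(f"period {t:2d}: permissive share = {permissive:.3f}")

# The share declines from 0.8 toward the point where the hazard exactly
# cancels the direct benefit (0.10 / 0.25 = 0.4): the previous period's
# values reshape the environment that selects the next period's values.
simulate()
```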