I am having difficulty seeing what you don’t understand about PhilGoetz’s point. You read like you’re reacting to overstatements on his part, but it looks to me like you’re straying much further from reality than he is, or interpreting his statements uncharitably.
We can abstract from our values to principles, and so on, but what makes the difference between an instrumental value and a terminal value is that a terminal value is one that exists for its own sake. Inclusive genetic fitness does match that definition, because natural selection is a thing that slowly replaces things with lower inclusive genetic fitness with things with higher inclusive genetic fitness. This is what biologists mean by ‘maximize,’ and it’s different from what numerical optimization / math people mean by ‘maximize.’
Is it true that you are doing the most you can to maximize your inclusive genetic fitness (IGF)? No, you’re clearly suboptimal. But it is clearly true that your ancestors reproduced, and thus your genes are a product of the evolutionary project to gradually replace lower IGF with higher IGF, and in that sense you are doing more on average to increase your IGF than the counterfactual yous that do not exist because their ancestors failed to reproduce. That seems to be what PhilGoetz is arguing for on the object level (and he should correct me if that’s not the case.)
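To make the biologists’ sense of ‘maximize’ concrete, here is a toy hill-climbing sketch of my own (the landscape and all the numbers are invented purely for intuition): selection plus mutation reliably raises average fitness relative to the lineages that died out, but it never performs a global argmax.

```python
# A toy illustration (mine, not PhilGoetz's): selection's "maximize" is
# local hill-climbing under differential reproduction, not a global argmax.
import random

random.seed(0)

# A rugged fitness landscape over genotypes 0..99, with an isolated
# global peak at genotype 90 that the population is unlikely to find.
def fitness(g):
    local_peaks = [3, 7, 2, 6, 4, 8, 1, 5, 3, 6]
    return local_peaks[g // 10] + (5 if g == 90 else 0)

# Start the whole population far away from the global peak.
population = [random.randint(0, 30) for _ in range(200)]
initial_mean = sum(fitness(g) for g in population) / len(population)

for generation in range(200):
    # Mutation: each offspring drifts at most one step in genotype space.
    offspring = [min(99, max(0, g + random.choice([-1, 0, 1]))) for g in population]
    # Selection: offspring reproduce in proportion to their fitness.
    population = random.choices(offspring, weights=[fitness(g) for g in offspring], k=200)

final_mean = sum(fitness(g) for g in population) / len(population)
print("mean fitness: start", round(initial_mean, 2), "-> end", round(final_mean, 2))
print("global optimum:", fitness(90))
# Typical run: mean fitness climbs well above its starting value, yet stays
# well short of the isolated optimum of 11. "Maximized" in the selection
# sense; clearly suboptimal in the optimization-theory sense.
```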
So now we take a step back to talk about values. When we look at possible values, we see lots of things that are held up as existing for their own sake (witness how people talk about truth, justice, equality, and so on), but humans only seem to desire them for their effects (witness how people act regarding truth, justice, equality, and so on). It looks like people choose baskets of values and make tradeoffs between them, but in order to make tradeoffs between two instrumental values, there must be some terminal value that can look at the options and say “option A is better than option B.”
It looks like the historical way this happens is that people have values, and some people reproduce more / spread their memes more, and this shifts the gene and meme (here, read as “value”) distributions. Broadly, it seems like genes and memes that are good for IGF are somewhat more popular than genes and memes that are bad for IGF, probably for the obvious reason.
That is, it looks like the universe judges value conflicts by existence. If there are more Particularists than Universalists, that seems to be because Particularists are out-existing the Universalists. To the extent that humans have pliable value systems, they seem to look around, decide what values will help them exist best, and then follow those values. (They also pick up their definition of “exist” from the environment around them, leading to quite a bit of freedom in how humans value things, though there seem to be limits on pliability.)
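Here is a minimal sketch of that ‘judged by existence’ dynamic, with growth numbers I have simply made up: give one value-bundle even a small per-generation edge in retaining and producing adherents, and it dominates the distribution regardless of what either bundle actually says.

```python
# A made-up toy model: two value systems, differing only in how fast their
# adherents replicate (biologically or memetically). Content plays no role.
growth = {"Particularist": 1.03, "Universalist": 1.01}   # per-generation factors (invented)
adherents = {"Particularist": 1000.0, "Universalist": 1000.0}

for generation in range(200):
    for value_system in adherents:
        adherents[value_system] *= growth[value_system]

total = sum(adherents.values())
for value_system, count in adherents.items():
    print(f"{value_system}: {count / total:.1%} of the population")
# From a 50/50 start, a 2-point growth edge leaves Particularists at roughly
# 98% after 200 generations: the "verdict" is differential existence,
# not any judgment about the values themselves.
```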
Moving forward, we seem to have some control over how the economic and military frontiers will change, and thus some control over what values will promote more or less existence. We probably want to exert that control in order to ensure the ‘right’ morality is favored.
But… if the practical determines the moral, and we want to decide what is practical using the moral, we now have a circular situation that it’s difficult to escape.
Our deeply held values are not “deeply held” in the sense that we can go meta and justify them to someone who doesn’t have them, but does share our meta-level value generating process. If we put a hypothetical twin you into a Comanche tribe to be raised, and then once he reached your current age you and he tried to come up with the list of human values and optimal arrangement of power, there would probably be significant disagreement. So PhilGoetz is pessimistic about a plan that looks at humans and comes up with the right values moving forward, because the system that determines those values is not a system we trust.
We can abstract from our values to principles, and so on, but what makes the difference between an instrumental value and a terminal value is that a terminal value is one that exists for its own sake.
“Sakes” are mental concepts. Reality does not contain extra-mental sakes to exist for.
Inclusive genetic fitness does match that definition, because natural selection is a thing that slowly replaces things with lower inclusive genetic fitness with things with higher inclusive genetic fitness.
Again: since evolution does not have a mind, I don’t see how you could label inclusive genetic fitness as “terminal”. It is the criterion for which evolution optimizes, but that’s not nearly the same thing as a “terminal value” in any ethical or FAI sense.
(And, as I mentioned, it is very definitely not terminal, in the sense that it is a sub-optimizer for the Second Law of Thermodynamics.)
Is it true that you are doing the most you can to maximize your inclusive genetic fitness (IGF)? No, you’re clearly suboptimal. But it is clearly true that your ancestors reproduced, and thus your genes are a product of the evolutionary project to gradually replace lower IGF with higher IGF, and in that sense you are doing more on average to increase your IGF than the counterfactual yous that do not exist because their ancestors failed to reproduce. That seems to be what PhilGoetz is arguing for on the object level (and he should correct me if that’s not the case.)
While your statement about my being more genetically “fit” than, say, the other sperm and egg cells that I killed off in the womb is entirely correct, that has basically nothing to do with the concept of “terminal values”, which are strictly a property of minds (and which evolution simply does not have).
So now we take a step back to talk about values. When we look at possible values, we see lots of things that are held up as existing for their own sake (witness how people talk about truth, justice, equality, and so on), but humans only seem to desire them for their effects (witness how people act regarding truth, justice, equality, and so on). It looks like people choose baskets of values and make tradeoffs between them, but in order to make tradeoffs between two instrumental values, there must be some terminal value that can look at the options and say “option A is better than option B.”
Or a person must simply trade off their terminal values against each other, with some weighting deciding the final total utility.
It seems to me like we need a word to use in Evaluative Cognitive Algorithms Theory other than “values”, since people like you and PhilGoetz are confusing “values” in the Evaluative Cognitive Algorithms Theory sense of the word with “values” in the sense of what a non-naturalist ethicist or a politician talks about.
Moving forward, we seem to have some control over how the economic and military frontiers will change, and thus some control over what values will promote more or less existence. We probably want to exert that control in order to ensure the ‘right’ morality is favored.
If you are thinking in terms of promoting the “right” morality in an evolutionary sense, i.e. that the “right” morality is a program of which you “must” make copies, then you are not using the term “morality” in the sense that Evaluative Cognitive Algorithms Theory people use it either.
(And certainly not in any sense that would invoke moral realism, but you don’t seem to have been claiming moral realism in the first place. On a side note, I think that trying to investigate what theories allow you to be “realist about X” is a useful tool for understanding what you mean by X, and morality is no exception.)
But… if the practical determines the moral, and we want to decide what is practical using the moral, we now have a circular situation that it’s difficult to escape.
No, we don’t. One optimizer can be stronger than another. For instance, at this point, humanity is stronger than evolution: we are rapidly destroying life on this planet, including ourselves, faster than anything can evolve to survive having our destructive attentions turned its way. Now personally I think that’s bloody stupid, but it certainly shows that we are the ones setting the existence pressures now, we are the ones deciding precisely where the possible-but-counterfactual gives way to the actual.
And unfortunately, we need modal logic here. The practical does not determine what the moral modality we already possess will output. That modality is already a fixed computational structure (unless you’re far more of a cultural determinist than I consider reasonable).
Our deeply held values are not “deeply held” in the sense that we can go meta and justify them to someone who doesn’t have them, but does share our meta-level value generating process.
I am confused by what you think a “meta-level value-generating process” is, or even could be, at least in the realms of ethics or psychology. Do you mean evolution when you say “meta-level value generating process”?
And additionally, why on Earth should we have to justify our you!“values” to someone who doesn’t have them? Seeking moral justification is itself an aspect of human psychology, so the average non-human mind would never expect any such thing.
If we put a hypothetical twin you into a Comanche tribe to be raised, and then once he reached your current age you and he tried to come up with the list of human values and optimal arrangement of power, there would probably be significant disagreement.
There would be a significant difference in preference of lifestyles. Once we explained each other to each other, however, what we call “values” would be very, very close, and ways to arrange to share the world would be invented quite quickly.
(Of course, this may simply reflect that I put more belief-weight on biodeterminism, whereas you place it on cultural determinism.)
Again: since evolution does not have a mind, I don’t see how you could label inclusive genetic fitness as “terminal”.
Perhaps it would be clearer to discuss “exogenous” and “endogenous” values. The relevant distinction between terminal and instrumental values is that terminal values are internally uncaused, while instrumental values are those pursued because they will directly or indirectly lead to an improvement in the terminal values, and this maps fairly cleanly onto exogenous and endogenous.
That is, of course this is a two-place word. IGF is exogenous to humans, but endogenous to evolution (and, as you put it, entropy is exogenous to evolution).
So my statement is that we have a collection of values and preferences that are moderately well-suited to our environment, because there is a process by which environments shape their inhabitants. As we grow more powerful, we shape our environment to be better suited to our values and preferences, because that is how humans embody preferences.
But we have two problems. First, our environment is still shaping our values and preferences, and thus the sort of world that we most want to live in might not be a world that would be mostly populated by us. Second, if we have any conflicts about preferences, typically we would go up a level to resolve those conflicts—but it is obvious that the level “above” us doesn’t have any desirable moral insights. So we can’t ground our conflict-resolution process in something moral instead of practical.
Of course, this may simply reflect that I put more belief-weight on biodeterminism, whereas you place it on cultural determinism.
It seems to me that near-mode values are strongly biodetermined, but far-mode values are almost entirely culturally determined. Since most moral philosophy takes place in far mode, cultural determination is far more relevant. You and your Comanche twin might be equally anxious, say, but are probably anxious about very different things and have different coping strategies and so on.
ways to arrange to share the world would be invented quite quickly.
I picked Comanche specifically because they were legendary raiders with a predatory morality.
First, our environment is still shaping our values and preferences, and thus the sort of world that we most want to live in might not be a world that would be mostly populated by us.
I simply have to ask: so what? I place no particular terminal value on evolution itself. I see nothing wrong, neither aesthetically nor morally, with simply overriding evolution through human deeds, the better to create the kind of world that, indeed, we living humans most want to live in. Who cares how probable it was, a priori, that evolution should spawn our sort of people in our preferred sort of environment?
Well, I suppose you do, for some reason, but I’m really confused as to why.
Second, if we have any conflicts about preferences, typically we would go up a level to resolve those conflicts
Actually, I disagree: we usually just negotiate from a combination of heuristics for morally appropriate power relations (picture something Rawlsian, and there are complex but, IMHO, well-investigated sociological arguments for why a Rawlsian approach to power relations is a rational idea for the people involved) and morally inappropriate power relations (i.e. compulsion and brute force).
I suppose you could call the former component “going up a level”, but ultimately I think it grounds itself in the Rawls-esque dynamics of creating, out of social creatures who only share a little personality and experience in common among everyone, a common society that improves life for all its members and maximizes the expected yield of individual efforts, particularly in view of the fact that many causally relevant attributes of individuals are high-entropy random variables and so we need to optimize the expected values, blah blah blah. In the end, human individuals do not enter into society because some kind of ontologically, metaphysically special Fundamental Particle of Morals collides with them and causes them to do so, but simply because people need other people to help each other out and to feel at all OK about being people—solidarity is a basic sociological force.
So we can’t ground our conflict-resolution process in something moral instead of practical.
As you can see above, I think the conflict-resolution process is the most practical part of the morals of human life.
It seems to me that near-mode values are strongly biodetermined, but far-mode values are almost entirely culturally determined. Since most moral philosophy takes place in far mode, cultural determination is far more relevant.
Frankly, I think this is just an error on the part of most so-called moral philosophy, that it is conducted largely in a cognitive mode governed by secondary ideas-about-ideas, beliefs-in-beliefs, and impressions-about-impressions, a realm almost entirely without experiential data.
While I don’t think “Near Mode/Far Mode” is entirely a map that matches the psychological territory, insofar as we’re going to use it, I would consider Near Mode far more morally significant, precisely because it is informed directly by the actual experiences of the actual individuals involved. The social signals that convey “ideas” as we usually conceive of them in “Far Mode” actually have a tiny fraction of the bandwidth of raw sensory experience and conscious ideation, and as such should be weighted far more lightly by those of us looking to make our moral and aesthetic evaluations on data the same way we make factual evaluations on data.
The first rule of bounded rationality is that data and compute-power are scarce resources, and you should broadly assume that inferences based on more of each are very probably better than inferences in the same domain performed with less of each—and one of these days I’ll have the expertise to formalize that!
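Until someone formalizes it properly, here is the toy version of that claim (just the standard sample-mean example, nothing deeper, and the numbers are arbitrary): the same inference procedure, fed more data, lands closer to the truth on average.

```python
# A toy sketch only: error of a sample-mean estimate shrinks as the data grows.
import random

random.seed(0)
TRUE_MEAN = 0.7  # the quantity we are trying to infer

def mean_abs_error(n_samples, n_trials=2000):
    """Average absolute error of estimating TRUE_MEAN from n_samples noisy draws."""
    total = 0.0
    for _ in range(n_trials):
        draws = [random.gauss(TRUE_MEAN, 1.0) for _ in range(n_samples)]
        total += abs(sum(draws) / n_samples - TRUE_MEAN)
    return total / n_trials

for n in (5, 50, 500):
    print(f"n = {n:3d}: mean absolute error ~ {mean_abs_error(n):.3f}")
# The error falls roughly like 1/sqrt(n); with high probability the
# inference built on more data is the better one.
```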
I simply have to ask: so what? I place no particular terminal value on evolution itself. I see nothing wrong, neither aesthetically nor morally, with simply overriding evolution through human deeds, the better to create the kind of world that, indeed, we living humans most want to live in.
I don’t think I was clear enough. I’m not stating that it is value-wrong to alter the environment; indeed, that’s what values push people to do. I’m saying that while the direct effect is positive, the indirect effects can be negative. For example, we might want casual sex to be socially accepted because casual sex is fun, and then discover that this means unpleasant viruses infect a larger proportion of the population, and if they’re suitably lethal, the survivors will, by selection if not experience, be those who are less accepting of casual sex. Or we might want to avoid a crash now and so transfer wealth from good predictors to poor predictors, and then discover that this has weakened the incentive to predict well, leading to worse predictions overall and more crashes. Both of those are mostly cultural examples, and I suspect the genetic examples will suggest themselves.
That is, one of the ways that values drift is that the environmental change brought on by the previous period’s exercise of its morals may lead to the destruction of those morals in the next period. If you care about value preservation, this is one of the forces changing values that needs to be counteracted or controlled.