By “manufactured values” I meant artificial values coming from nurture rather than innate human nature.
I don’t think that this distinction really cuts reality at the joints. In general, it’s my impression that researchers have been moving towards rejecting the whole nature/nurture distinction, as e.g. hinted at in the last paragraph of the Wikipedia article that you linked.
More specifically, as the Hanson article you linked to notes, the human mind seems built for a large degree of value plasticity: it is capable of adopting a wide range of values depending on its environment. That by itself makes the distinction suspect. If it’s easy for us to acquire new terminal values via nurture precisely because our nature is one that readily adopts values coming from nurture, then how do you tell whether some value came more from nurture or from nature? If both were integral to the acquisition of the value, it’s unclear whether the distinction makes any sense.
One way of looking at it: an artificial neural network can in principle learn to approximate an enormous range of functions. So take an untrained network and teach it to classify points based on which side of the line y = 2x + 6 they fall on. Does the property of classifying things by that line come from nature or nurture? Arguably from nurture, since without that particular training data, the network wouldn’t have learned to classify things according to that specific function. But on the other hand, “being able to learn almost any function” is in the untrained network’s nature. And nurture here didn’t displace anything: there was no default function that the network would have computed in the absence of intervention. Without any input from nurture, the network wouldn’t have learned to discriminate anything at all.
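To make the analogy concrete, here is a hypothetical minimal sketch (a bare perceptron rather than a full neural network; the line, data ranges, and learning rate are all invented for illustration). The generic update rule plays the role of “nature”; the labeled examples play the role of “nurture”; the specific boundary 2x + 6 exists only in their combination:

```python
import random

random.seed(0)

def target(x, y):
    # "Nurture": the environment labels points by which side of the
    # line y = 2x + 6 they fall on.
    return 1 if y > 2 * x + 6 else 0

# "Nature": an untrained perceptron. The zero weights carry no
# information about any particular line; only the update rule is innate.
w0, w1, b = 0.0, 0.0, 0.0

points = [(random.uniform(-10, 10), random.uniform(-30, 30))
          for _ in range(3000)]
# Keep a margin around the line so the perceptron converges quickly.
data = [(x, y) for x, y in points if abs(y - (2 * x + 6)) > 1.0]

# A few passes of the classic perceptron rule.
for _ in range(20):
    for x, y in data:
        pred = 1 if w0 * x + w1 * y + b > 0 else 0
        err = target(x, y) - pred  # -1, 0, or +1
        w0 += 0.1 * err * x
        w1 += 0.1 * err * y
        b += 0.1 * err

# Before training the network discriminated nothing; after training it
# classifies by (an approximation of) the line supplied by its "nurture".
acc = sum((1 if w0 * x + w1 * y + b > 0 else 0) == target(x, y)
          for x, y in data) / len(data)
print(f"training accuracy: {acc:.2f}")
```

The point of the sketch is that asking whether the learned boundary “came from” the update rule or the data has no clean answer: remove either one and no boundary is learned at all.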
Similarly, without a surrounding culture we’ll just end up as feral children (though arguably even feral children grow up in some culture, even if an animal one). We’re clearly born with tendencies that make some values more likely to manifest than others, but for those tendencies to manifest, we also need a culture that manufactures something on top of them. This is similar to how different neural net architectures predispose a network towards learning certain functions more easily, while the environmental training data still determines which function is actually learned.
Extending the neural net analogy, where the network has the potential to learn any of countless different functions and the training data selects which of them it actually learns, Jonathan Haidt has argued that different cultures select parts of a shared pre-existing potential for morality, so that the latent “potential morality” becomes an actual, concrete morality:
The acquisition of phonology provides a useful analogy for the acquisition of morality. Children are born with the ability to distinguish among hundreds of phonemes, but after a few years of exposure to a specific language they lose the ability to make some unexercised phoneme contrasts (Werker & Tees, 1984). Likewise, Ruth Benedict (1934/1959) suggested, we can imagine a great “arc of culture” on which are arrayed all the possible aspects of human functioning. “A culture that capitalized even a considerable proportion of these would be as unintelligible as a language that used all the clicks, all the glottal stops, all the labials” (Benedict, 1934/1959, p. 24).

Similarly, a culture that emphasized all of the moral intuitions that the human mind is prepared to experience would risk paralysis as every action triggered multiple conflicting intuitions. Cultures seem instead to specialize in a subset of human moral potential. For example, Shweder’s theory of the “big three” moral ethics (Shweder, Much, Mahapatra, & Park, 1997; see also Jensen, 1997) proposes that moral “goods” (i.e., culturally shared beliefs about what is morally admirable and valuable) generally cluster into three complexes, or ethics, which cultures embrace to varying degrees: the ethic of autonomy (focusing on goods that protect the autonomous individual, such as rights, freedom of choice, and personal welfare), the ethic of community (focusing on goods that protect families, nations, and other collectivities, such as loyalty, duty, honor, respectfulness, modesty, and self-control), and the ethic of divinity (focusing on goods that protect the spiritual self, such as piety and physical and mental purity). A child is born prepared to develop moral intuitions in all three ethics, but her local cultural environment generally stresses only one or two of the ethics. Intuitions within culturally supported ethics become sharper and more chronically accessible (Higgins, 1996), whereas intuitions within unsupported ethics become weaker and less accessible.

Such “maintenance-loss” models have been documented in other areas of human higher cognition. It seems to be a design feature of mammalian brains that much of neural development is “experience expectant” (Black, Jones, Nelson, & Greenough, 1998). That is, there are developmentally timed periods of high neural plasticity, as though the brain “expected” certain types of experience to be present at a certain time to guide its final wiring.
To apply your proposed test of taking a value and seeing how cross-cultural it is: consider the appreciation of novels, movies, and video games. On one hand, you could argue that an appreciation of these things is clearly not a human universal: cultures that haven’t yet invented them don’t value them, and there are cultures, such as the Amish, that reject at least some of them. On the other hand, you could argue that an appreciation of these things comes naturally to humans, because they are all art forms that tap into our pre-existing appreciation of stories and storytelling. But then, that still doesn’t prevent some cultures from rejecting them...