Good question. First of all, is it even possible to change an individual’s terminal values? My guess is that the answer is “no”; that’s why they are called “terminal values”. Or, rather, even if it were technologically possible to change a person’s terminal values, doing so would probably amount to murder. It would be akin to reprogramming Clippy to care about butterflies instead of paperclips.
How should I behave to try to change the terminal values that society as a whole implements/tolerates?
If changing an individual’s terminal values is impossible, and if you are committed to a very low level of violence, my guess is that you should attempt to instill your desired values in as many young children as possible—and let time take care of the rest.
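The “instill values in the young and let time take care of the rest” strategy can be sketched as a toy population model. All the rates below are invented for illustration, and the model makes a strong simplifying assumption (adults never change, and departures are spread evenly across the population):

```python
# Toy sketch (not from the discussion above): if adults never change their
# values but each year a small fraction of the population is replaced by
# newcomers who adopt the new value at some rate, the population share of
# the new value still converges over time. All parameters are made up.

def share_after(years, adoption_rate, turnover=0.02, start=0.0):
    """Fraction of the population holding the new value after `years`,
    where `turnover` is the yearly fraction of the population replaced
    by newcomers, each adopting the value with probability
    `adoption_rate`. Departures are assumed to hold the new value at
    the current population rate."""
    share = start
    for _ in range(years):
        share = (1 - turnover) * share + turnover * adoption_rate
    return share

# The share approaches adoption_rate, but only over many decades:
print(round(share_after(50, 0.8), 3))   # roughly halfway there
print(round(share_after(500, 0.8), 3))  # essentially converged
```

The point of the sketch is only that “let time take care of the rest” is a slow strategy: even with a high adoption rate among the young, convergence takes generations.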
Good question. First of all, is it even possible to change an individual’s terminal values? My guess is that the answer is “no”; that’s why they are called “terminal values”.
That’s not what “terminal values” means. It simply means the values from which all of a person’s other values can be derived. It is perfectly possible to change one’s terminal values—for instance, a young child cares only about itself, while almost no adults care only about themselves.
That’s a good point. Another example is going through puberty. Although one could imagine an AI whose terminal values never change, and which therefore changes its behavior only by acquiring new knowledge, it seems that humans are literally built for their terminal values to shift in particular ways.
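The idea of an agent whose terminal values stay fixed while its behavior changes can be made concrete with a small sketch. Everything here (the action names, the outcome numbers) is invented for illustration, not taken from any real system:

```python
# Hypothetical sketch: an agent with a fixed terminal value whose behavior
# still changes, purely because its beliefs about the world change.

def choose_action(actions, world_model, terminal_value):
    """Pick the action whose predicted outcome scores highest
    under the (unchanging) terminal value."""
    return max(actions, key=lambda a: terminal_value(world_model(a)))

# Terminal value: the number of paperclips is all that matters.
terminal_value = lambda outcome: outcome["paperclips"]

# Early world model: the agent believes mining yields the most paperclips.
naive_model = {
    "mine_ore": {"paperclips": 10},
    "recycle":  {"paperclips": 3},
}
# Later world model: new knowledge says recycling yields far more.
informed_model = {
    "mine_ore": {"paperclips": 10},
    "recycle":  {"paperclips": 50},
}

actions = ["mine_ore", "recycle"]
print(choose_action(actions, naive_model.__getitem__, terminal_value))     # mine_ore
print(choose_action(actions, informed_model.__getitem__, terminal_value))  # recycle
```

The terminal value never changes between the two calls; only the world model does, yet the chosen action flips. The human case discussed above is the contrast: in humans, the valuation function itself appears to shift.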
That’s a good point about children (and puberty, as Crux said); it’s possible (and IMO likely) that some of their terminal values are malleable. But I also agree with what Villiam_Bur said on a sibling thread: issues like racism and segregation are instrumental values, not terminal ones.
I don’t think that’s necessarily true—see for instance Haidt’s work on moral foundations. Plenty of people who opposed interracial marriage framed it as a matter of purity/contamination.
There have been many historical examples that look like changes in terminal values. For example, George Wallace appears to have changed his position on racism in government (going from pro to anti).
As you yourself note, the US Civil Rights movement appears to have successfully changed US society’s terminal values, although I’m not sure “getting offended” is an accurate paraphrase of the Southern Christian Leadership Conference’s strategy or tactics.
I don’t believe those were terminal values. If someone changes their position from pro-segregation to anti-segregation, I would expect that in their previous model, segregation seemed better, but later they updated their model, and under the new model, segregation seemed worse. “Better” and “worse” in terms of making everyone happy, for example.
Some people sincerely believed that segregation brings better results for everyone. Whether they were right or wrong, segregation was just a means of achieving some other value.
EDIT: I think we should be careful about assuming that our opponents have different terminal values. More likely, they have a different model of reality.
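The model-update point above can be sketched in code: two agents share the same terminal value but hold different empirical models of what a policy does, so they take opposite instrumental positions. The policy names and numbers are abstract placeholders, invented purely for illustration:

```python
# Hypothetical sketch: same terminal value, different world models,
# opposite instrumental positions. All numbers are made up.

def position(policy_effects, terminal_value):
    """Support whichever option the agent's model predicts scores higher."""
    return max(policy_effects, key=lambda p: terminal_value(policy_effects[p]))

# Shared terminal value: total well-being across all groups.
terminal_value = lambda outcome: sum(outcome.values())

# Agent 1's (mistaken) model of what each policy does:
model_1 = {"keep_policy": {"group_a": 5, "group_b": 5},
           "end_policy":  {"group_a": 4, "group_b": 3}}
# Agent 2's model, updated with better evidence:
model_2 = {"keep_policy": {"group_a": 5, "group_b": 1},
           "end_policy":  {"group_a": 6, "group_b": 6}}

print(position(model_1, terminal_value))  # keep_policy
print(position(model_2, terminal_value))  # end_policy
```

Neither agent’s terminal value moved; convincing Agent 1 to switch positions requires changing its model of reality, not its values.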
I agree with what Villiam_Bur said. I think our terminal values are more along the lines of “seek pleasure, avoid pain”, with the possible addition of “...for myself as well as my descendants”. Issues like racism matter to people because, in the long run (and from a very general perspective), they cause pain, or inhibit pleasure.
I think our terminal values are more along the lines of “seek pleasure, avoid pain”, with the possible addition of “...for myself as well as my descendants”.
My first thought here was, “No, we don’t know what our terminal values are yet” and to nitpick the particular ones you proposed.
Then I realized: “Terminal values” are an idea in a mathematical theory. They’re not things out in the world. They’re a data object in a model that is a deliberate simplification of behavior for the purpose of highlighting particular features of it. “Terminal values” exist in the map, next to Homo economicus and markets in equilibrium — not out here in the territory where there’s mud on our boots.
Nobody does something because of a terminal value. Rather, people do things, and some folks (very few) explain those doings in terms of terminal values. Others explain them in terms of dramaturgy or other maps, each of which highlights different things.
Moreover, if we expand the mathematical map to the level needed to encompass something as complicated as people … well, the sorts of things that people think of and say, “These could be human terminal values!” are usually not even the sorts of things that we should expect could be human terminal values. The difference is as big as the difference between the syntax accepted by a text adventure game’s command parser and the syntax understood by a natural language speaker.