Although you don’t explicitly mention it, I feel like this whole post is about value drift. The doomers are generally right on the facts (and often on the causal pathways), and we do nonetheless consider the post-doom world better, but the first- through nth-order effects of these new technologies reciprocally change our preferences and worldviews to favor the (doomed?) world those same technologies created.
The question of value drift is especially strange given that we have a “meta-intuition” that moral/social values evolving and changing is good in human history. BUT, at the same time, we know from historical precedent that we ourselves will not approve of the value changes. One might attempt to square the circle here by arguing that perhaps if we were, hypothetically, able to see and evaluate future changed values, we would in reflective equilibrium accept these new values. Sadly, from what I can gather this is just not borne out by the social science: when it comes to questions of value drift, society advances by the deaths of the old-value-havers and the maturation of a next generation with “new” values.
For a concrete example, consider that most Americans have historically been Christians. In fact, the history of the early United States was deeply influenced by Christianity, with religious fervor swelling in certain periods to fanatical levels. If those Americans could see the secular American republic of 2025, with little religious belief and no respect for the moral authority of Christian scripture, they would most likely be morally appalled. They might even view the loss of “traditional God-fearing values” as a harm that in itself outweighs the cumulative benefits of industrial modernity. As a certain Nazarene said: “For what shall it profit a man, if he shall gain the whole world, and lose his own soul?” (Mark 8:36)
With this in mind, as a final exercise I’d like you, dear reader, to imagine a future where humanity has advanced enormously technologically, but has undergone such profound value shifts that every central moral and social principle that you hold dear has been abandoned, replaced with mores which you find alien and abhorrent. In this scenario, do you obey your moral intuitions that the future is one of Lovecraftian horror? Or do you obey your historical meta-intuitions that future people probably know better than you do?
Except that there arguably exist technologies hated even by the humans who grew up with them. For example, nuclear weapons.[1] Or, according to a critic, social media. Suppose that AI establishes some kind of future where humans can’t even usefully help each other, or are so spoiled by, say, AI girlfriends or boyfriends that they find it hard to relate to one another. If humans don’t become fine with it, then your case for the future and its futuristic mores would break, but the case against futuristic mores would hold.
[1] While nuclear weapons are hard to separate from nuclear power plants, thermonuclear fusion has yet to produce a peaceful application.
I agree with your sentiment — I suppose I was implicitly presenting the bull case (or paradigmatic case) of cultural drift, wherein the future values are supported by future people but despised by their ancestors.
I think your example is closer to the familiar “Moloch” dynamic, where social and material technology leads to collective outcomes that are obviously undesirable to all involved. Moloch is certain to be a possible issue in any future world!
The question of value drift is especially strange given that we have a “meta-intuition” that moral/social values evolving and changing is good in human history. BUT, at the same time, we know from historical precedent that we ourselves will not approve of the value changes. One might attempt to square the circle here by arguing that perhaps if we were, hypothetically, able to see and evaluate future changed values, we would in reflective equilibrium accept these new values. Sadly, from what I can gather this is just not borne out by the social science: when it comes to questions of value drift, society advances by the deaths of the old-value-havers and the maturation of a next generation with “new” values.
I feel like this is sweeping a bit under the rug. First, there’s a reason why there are people who label themselves politically as “conservatives”—some people do think that our current values are just fine and should, in fact, be preserved unchanged forever! Some even want to go back to previous values (however impractical and unfeasible that tends to be; usually what happens is that you make up some new thing that is merely a bastardised modern caricature of the old values). As far as people who instead want the values to change go, they usually have an idea of a good direction for them to change—usually they’re people who are far from the median of society and so they would like society to become more like them.
Of course, push far enough into the future and all ideology might seem entirely incomprehensible to us. I don’t really have a clean answer for what we should think of that, except that maybe it’s a big discounting factor on longtermist thinking (after all, suppose that all humans 500,000 years hence have agreed that slavery is fine and genociding aliens is desirable—should we feel particularly proud of ensuring there’s more of those people around?).
As far as people who instead want the values to change go, they usually have an idea of a good direction for them to change—usually they’re people who are far from the median of society and so they would like society to become more like them.
I have another conjecture in mind: even median humans value humans whose values are, in their minds, at least as moral as the median human’s, and ideally[1] more moral.
On the other hand, I have seen conservatives build cases that SOTA liberal values are damaging to people’s minds or outright incompatible with sustaining the civilisation (e.g. too large a share of Gen Z women being against motherhood). In the past, if some twisted moral reflection led to destructive values, those values were likely to be outcompeted.
The third option, a group of humans forcibly establishing their values[2] over another system of values compatible with progress, is considered amoral.
So I think that people are likely to value a future whose values keep the civilisation afloat and can be accepted upon thorough reflection on how those values were reached and on their consequences.
The degree of extra morality which humans value can vary between cultures. For example, we place less value on the reasons that led people to enter monasteries, but not on acts like sustaining knowledge.
[1] Or values that they would like others to follow, but in this case the group is far easier to denounce as manipulators.