I used to hold the opinion that colonialism was justified, and over time have come to believe that exercising a kind of agency that violates other people’s sovereignty is not self-justified by the winner, according to the winner’s own values.
If an SI came to America now, nuked it Truman-style, and replaced every human being with an a-sentient robotic mimic convinced it loved the new flag—we might get these kinds of articles too. The actions wouldn’t be justified, and we wouldn’t be wrong to call them wrong simply because we can’t oppose them.
The essay blurs the line between defender and aggressor, and I think that’s something that can’t be done tacitly. I get the point you are making about values that encourage agency rather than hold it in contempt. And ways of life are absolutely worth defending. But I struggle immensely with the notion that we can derive any kind of normative claim about the goodness of imposing group values when those very group values are being applied as the retrospective rubric.
You can love your life, your society, its norms, and the freedoms they afford you. But claims about the intrinsic goodness of your system, without a shared basis of evaluation with whatever you aim to compare it against (under its own criteria), are epistemically as thin as a Viking screaming of his love for Valhalla. And while I think it is that Viking’s right to live and die for Valhalla if that way of life is threatened, that love does not rise to anything like an excuse for external imposition of the Viking way.
I struggle specifically here because of the problem of sovereignty. If I were reasonably confident I knew better than you how you should live, on what basis would I be justified in taking away your agency and overriding your own preferences? Or those of the new set of preference-makers in any society? Even if I think I could do both better?
This I do not know and for me the answer underpins all such moral evaluations of colonialism, present and future. Human and AI.
I might make a follow-up post that argues against postmodernism (which I feel like you are espousing here). I think there are a bunch of pretty solid ways you can compare value systems (e.g. you can just ask people which society they would like to switch to), and that this provides pretty strong arguments in favor of the colonization of North America.
I think there are deeper challenges here that could exist, but I don’t think this example provides such a challenge (I am not like 90%+ confident, but I am like reasonably confident).
I don’t understand how “if it looks like the highest magnitude feature in describing this behavior pattern is ‘conquer’, you’re probably doing a bad thing” is postmodernism? That seems pretty compatible with modernism to me. Like, I think we can hope for better than “let’s do the same mix of maybe some good but mostly motivated by bad”! It feels like you’ve already ideologically written your bottom line here; I have low P(habryka’s values after this conversation converge to being asymptotically aligned with mine more than epsilon) at the moment, but it might rapidly go up if it turns out that habryka actually does disvalue mass suffering and deletion, something I expect on the order of 30% of humanity simply is asymptotically aligned with me about, in asymptotically disvaluing this sort of behavior.
Like, come on, surely you can see how “goodness conquer” and “goodness achieve” are different referents? is that also postmodernism? I thought postmodernism was when you don’t treat words as having referents or something. I’m pretty sure these have referents! I don’t want there to be much if any conquering going on in utopia, conquering just seems bad, surely a goodness that contains conquering as a good is not good at all? Or maybe you meant something more complex by your example that will be obvious to me when my brain becomes less quantized; I feel like reading this post decreased my brain’s bit precision from the normal 8 down to 2, or something, due to emotional content. It’s a pretty emotionally activating post for people who are sufficiently near me, for an unknown value of “sufficiently”. Did it really need to be?
I don’t understand how “if it looks like the highest magnitude feature in describing this behavior pattern is ‘conquer’, you’re probably doing a bad thing” is postmodernism?
“Postmodernism” is a famously confusing term, but I am here using it to refer to the position of “you cannot compare goodness across different societal perspectives, you always have to evaluate a moral system from within that society and can’t make comparisons that aggregate across multiple moral perspectives”. This is of course only one of the 15 things that “postmodernism” means, but it’s the one I was referring to here.
I think you can! Though it’s of course tricky.
but it might rapidly go up if it turns out that habryka actually does disvalue mass suffering and deletion
Huh, I am very confused. Of course those things are very bad. The whole reason why I chose American colonialism as an example is because it’s so bad, and so poses the greatest challenge to a position of “when you see bad things happening as part of your efforts to do good, nope out”, which I think was a reasonable interpretation of my first post.
So we are obviously on the same page here! I mention many times that things were really bad, and they continue to be really bad! But I also find it extremely interesting that a surprising fraction of modern western democratic institutions were birthed in that mess, and that even despite all the badness it seems more likely than not to have been the right call, and that it would have been a moral mistake to nope out.
“Postmodernism” is a famously confusing term, but I am here using it to refer to the position of “you cannot compare goodness across different societal perspectives, you always have to evaluate a moral system from within that society and can’t make comparisons that aggregate across multiple moral perspectives”. This is of course only one of the 15 things that “postmodernism” means, but it’s the one I was referring to here.
The thing you are describing here is more typically called moral and cultural relativism. Cultural relativism in the social sciences largely originates with Franz Boas (pioneer of modern anthropology) in the 19th century; moral relativism in philosophy goes back to antiquity. It is in any event much older than the various movements in 20th-century anthropology, art, and other fields that attracted the “postmodernism” label.
Sure! I think cultural relativism is a major strand of postmodernism, and the “postmodernist” version of it is the one I am interested in responding to and engaging with. I certainly agree that aspect of postmodernism is much older!
This post is in a meaningful sense a defense of modernism, and so it seems natural to engage with postmodernist critiques of it, of which this is one of the standard big ones.
My understanding is that the way these words are used in sociology, anthropology, etc., cultural relativism is very much present in modernism. The thing you are calling “modernism” seems to be something else; something more connected to naïve realism, traditionalism, conservatism, reaction, etc.
I am confused what you mean by “modernism” here? I mean this thing that Wikipedia is talking about:
It is also often perceived, especially in the West, as a socially progressive movement that affirms the power of human beings to create, improve, and reshape their environment with the aid of practical experimentation, scientific knowledge, or technology.[c] From this perspective, modernism encourages the re-examination of every aspect of existence. Modernists analyze topics to find the ones they believe to be holding back progress, replacing them with new ways of reaching the same end.
To be clear, I am maximally sympathetic to all of these words being super vague and abstract and hard to use, so I am very happy to use different words. But I do also find it helpful to have handles for this kind of stuff.
This comment does seem to be arguing against one thing gears is saying, but I think gears is also saying (and I kind of agree, at least as an isolated point) that you had a choice of what to call the post, and “let goodness conquer all it can defend” is a phrasing that leans into the bad-parts-specifically of the American project.
(Choosing good titles is hard, though. I have different titles for the somewhat different posts I might have written instead of both this post and the last, but they would have been fairly different posts.)
The choice of “conquering” in the title is important because it shields against the usual kumbaya aspects of people thinking in the space.
Like, man, yes, if you want to create good things you will have a lot of fighting to do, and while under the umbrella of the modern world individuals can largely get away with not having to do any literal fighting, I find myself similarly frequently frustrated when people sneer at creating successful companies and taking the appropriate competitive zero-sum-contest-winning-actions that are necessary for good things to exist in that space.
The “conquering” part, or something of its kind, feels load-bearing to me. Of course, title space is deep and wide, and any title puts emphasis on something, but I don’t regret the emphasis on this point (and, as I said above, the whole point of choosing the American colonization is for it to be the most far-out example of something to analyze).
I don’t mean to follow you around and pester you, but this:
Like, man, yes, if you want to create good things you will have a lot of fighting to do, and while under the umbrella of the modern world individuals can largely get away with not having to do any literal fighting, I find myself similarly frequently frustrated when people sneer at … the appropriate competitive zero-sum-contest-winning-actions that are necessary for good things to exist...”
Seems like a crux that I didn’t understand about your viewpoint. I’m a thoroughly modern dude who, while I wouldn’t sneer at competition engaged in in its appropriate places (like between companies, where the rules of how they can compete are pretty carefully circumscribed), strongly prefers fight-avoidance in general, and will try hard to find cooperative solutions to problems. One of the things I like most about the world I live in is that we’ve found ways to coordinate to put various methods of conflict off-limits, and only “fight” in nice, mostly harmless ways. “Have the ability to conquer, but don’t use it”, “speak softly and carry a big stick”, etc. carry a lot of appeal to me. Ideally, in future-utopia-according-to-me, we swear off weapons any more hurtful than big sticks, and anyone who decides to defect on that gets beaten with the sticks until they decide that maybe that was a bad plan. And “colonialism was worth it” carries strong vibes (for me) of “get the biggest weapons you can find for the side of good, and use them to conquer and defend your notion of the good”. I feel like that’s what the colonial empires were doing—trying to bring the light of Civilization as they understood it to the dark continents by force, replacing the inferior people with superior ones. EDIT: On further reflection, this last part is not something I actually think. Did they think of the natives as inferior? Yes. Did they think the natives should be replaced with people from the home country? No.
I like the umbrella of the modern world very much, but recognize it’s fragile and do not want to poke holes in it. :D I fundamentally don’t think fighting and conquering is how Good wins, whereas I think the colonialists did think that’s how Good wins (because back in the day, war between countries was expected and normal). In my view, Good wins by deterring fights (by having the capacity to fight if needed), and being appealing. I’m not sure if you’d actually endorse “Good should conquer”, but “if you want to create good things, you have to fight” might be something you’d say? If so, I’d be able to meet you at “if you want to create good things, you have to be willing and able to fight if it comes to it”.
The blogpost I had in mind to write someday is “The Moral Obligation to be Powerful”, which is making a somewhat different point, but has the same desiderata of “fight against kumbaya/innocence vibe”.
I think my reaction here is to the implication that what actually happened was on the Pareto frontier; that if we were able to counterfact by sending back to a small group of people some reasonable amount of foresight about the good and bad outcomes, the bad ones couldn’t have been averted without preventing the good ones. Like, I don’t think the natives had to be screwed over so badly to get the good outcomes you’re talking about! Explaining what the colonists meant by property and how their legal system worked would probably have done a lot of what I’m saying. The diseases would be harder to avoid, but there might be some short message you can imagine someone figuring out at the time that we could counterfact on.
Besides the obvious direct moral cost, which was enormous, a lot of why people complain about the effects on today is that the natives were already pretty good at governance: they weren’t governing for expansion, but they were pretty good at governing for stability, so if they’d had more of a vote in governing for expansion, there’s reason to expect it would have been slightly slower in exchange for much more stability. I doubt being nicer to the natives results in no Revolutionary War; the crown was still trying pretty hard to stay in control. If the natives had been involved in setting up the USA after throwing off the crown, then most of the counterfactual of “find a way to warn natives about what’s coming” seems likely to produce a civ closer to NZ, which doesn’t seem like a particularly bad outcome. So the hint I’m getting isn’t “I accept tradeoffs”; it’s “I accept subpar tradeoffs where the negative side is hugely more negative than it needed to be in order to achieve what I see as good”.
I am reassured moderately, but I’m still confused by this pattern, and in particular, “conquer” still is setting off alarm bells for me that the representation in your head might be voting yes on things I think the natural abstraction of the good thing you’re trying to defend does not need to vote yes about.
I think that argument is valid only under a normative value system which doesn’t pay the cost of consequence outsourcing. I would agree that most people would say the United States is a comparatively better place to live, but I would also argue that those numbers would look wildly different if the question were instead: “Would you prefer a world where the United States exists, or one where western colonialism never occurred throughout North America?” Under that question, I would place a reasonably high probability that your preference-sampling argument would no longer provide a moral justification for that system under the same global population base.
The point being that it is very easy to claim, from within a structure with outsourced consequences, that the structure is self-justified and coherently, globally good. No, you just aren’t paying the costs.
If you want to claim that the normative evaluation only applies to the in-group, then sure. But I’d argue that’s the exact kind of self-exemption I don’t morally agree with.
I would agree that most people would say the United States is a comparatively better place to live, but I would also argue that those numbers would look wildly different if the question were instead: “Would you prefer a world where the United States exists, or one where western colonialism never occurred throughout North America?” Under that question, I would place a reasonably high probability that your preference-sampling argument would no longer provide a moral justification for that system under the same global population base.
I’m not sure what you mean by “under the same global population base”, but I don’t think most currently existing people answering “the first” to your question would by itself indicate that the colonization of America was morally justified.
For example, assume AIs in the future have mostly diminished the number and influence of humanity. Humanity is now only a small footnote in the world without power. Then one AI starts a poll and asks “Would you prefer a world where our AI society exists, or one where the creation of AI never occurred?” Assume that the result of the poll (from trillions of AIs) is overwhelmingly “the former”.
Would this mean that mostly replacing humanity with AI would have been morally justified? Clearly not. If we don’t create those AIs, their non-existence isn’t bad for them, and their hypothetical preferences expressed in this poll are morally irrelevant, since those preferences are never instantiated. (This is the person-affecting view in population ethics.)
You’re mistaking Habryka’s argument to be “if people prefer modern America to pre-colonial America, then it was right to colonize America”. He’s just making the (more modest) point here that “if people prefer modern America to pre-colonial America, then probably modern America is a better place to live than pre-colonial America”, which you seemed to be saying one could not have any opinion on.
I think you are mistaking Habryka’s argument, not 0xA. Habryka wrote that “it was worth it”. The first “it” presumably refers to the colonization and the creation of the US. And “was worth it” presumably means “was right”. So we arrive at “the colonization was right” (despite all the listed downsides). That’s in line with 0xA’s interpretation.
Also note that (if it wasn’t obvious) “state of the world A is better than state of the world B” doesn’t imply that bringing about A is better than bringing about B. Maybe in state A everyone is happy only because we previously murdered everyone who was unhappy. That doesn’t mean murdering everyone who is unhappy is good.
Ben is understanding me correctly: that was the argument I was making in this comment (I think you can compare how good a place is to live, even across cultures and societies).
I agree that in the post I am making the argument that the overall tradeoff was worth it, and I could connect the two. I agree with you that there are circumstances in which “state of the world A is better than state of the world B” does not imply that bringing about A is better than bringing about B. I do think it’s a pretty strong argument in favor of bringing about A, though.
I assume, though, that if future state A contains a trillion super happy AIs but no humans, while future state B contains a few billion moderately happy humans and no AIs, then A would be a better state than B, and it would nonetheless be the case that we should bring about B rather than A. So there must be some disanalogy to the colonization case.
I am not a hedonic utilitarian, so would reject this analysis on those grounds.
The question is “would A be a better state than state B” holistically, by the assessment of something like the extrapolated volition of humanity. Importantly including everything that will happen into the distant future (which I think makes there being only a few billion moderately happy humans very unlikely, as we will eventually colonize the stars, and I would consider it an enormous atrocity to fail to do so).
The question is: extrapolated volition of whom? In the case of thinking about whether to create super happy AIs that replace us (A) or not (B), this would presumably be our current human extrapolated volition. So it wouldn’t take interests of non-existing AIs into account. And in the case of asking whether colonization of America was good or bad, we would have to consider the extrapolated volition of the humans alive at the time.
It’s a bit tricky. I don’t super feel like I owe the competitors to my distant ancestors in the primordial soup consideration in humanity’s CEV, though I am also not enormously confident that I definitely don’t.
Definitely agree that in this case you consider the values of the people from whom you took the opportunity to reproduce (though ultimately I will also at least somewhat bite the bullet that my values might diverge from theirs, and in as much as we are in a fully zero-sum competition I would like my values to win out, though overall principles of fairness and justice definitely compel me to give them a non-trivial chunk of the Lightcone).
I think the challenge here is that the comment was made as justification for the broader point of the article, which in context was (as an addendum to your quote) “an example of an argument against postmodernism”. I read that as a claim to its rightness, especially when framed in that context.
I am making the subtle point that the argument can’t be used to debunk a postmodernist philosophy, because the data point he elected to use was, for lack of a better term, consequentialist, not morally justifying. To me, that’s like saying (and forgive me for the blunt metaphor): “I can make a pretty good case that squatting in your grandparents’ mansion is morally justified, because everyone on the block would choose to live in this mansion if they could”.
I would agree with you if he had not added the prior qualifier that it was an argument against the philosophy he considers me to have (from my earlier comment), and if in the article he didn’t equate all of this with goodness itself.
I would agree that most people would say the united states is a comparatively better place to live, but I would also argue that those numbers would look wildly different if the question was instead: “Would you prefer a world where the united states exists or western colonialism never occurred throughout North America”. Under that question, I would place a reasonably high probability your preference sampling argument would no longer provide a moral justification for that system under the same global population base.
I would take that bet, and consider it somewhat of a crux[1]. Indeed, I am honestly surprised you think it would come out the other way. I would be happy to make a bet about a survey on Positly or something.
If you want to claim that the normative evaluation only applies to the in-group, then sure. But I’d argue that’s the exact kind of self-exemption I don’t morally agree with.
Yep, I totally agree. My current belief here (without total confidence) is that everyone involved would prefer a course of history in which the US was established across the North American continent (my guess is also that everyone would agree that you should make a lot of changes to how it was colonized).
I think the question of “from what moral reference frame should you evaluate whether something was worth it” is a pretty tricky one. You clearly can’t say “from the perspective of whoever was there first”, since I feel quite fine replacing insect populations and plankton in the oceans and using them for better stuff (I also think it’s obviously worth it to convert wild forests into arable land, but I might already be losing some people there).
You also clearly can’t say “just evaluate the consequences from the perspective from wherever you are now”, since that creates selection effects.
Again, I don’t actually think it would be a crux for this case (since I am pretty sure that the vast majority of people who lived on the North American continent would prefer a future in which the US exists), and that seems like a better crux to go into, but I could go into the game theory here and how I would currently resolve these issues.
It would be a very straightforward crux in as much as we could elicit people’s enlightened and endorsed opinions here. The big issue seems to me to be that people’s instinctual moral judgement often sucks and doesn’t correlate that much with what they would endorse after a lot of thinking, and the latter seems much harder to get data on.
My current belief here (without total confidence) is that everyone involved would prefer a course of history in which the US was established across the North American continent (my guess is also that everyone would agree that you should make a lot of changes to how it was colonized).
Hmmm… this is tricky. Like, how constrained are the courses of history you say that people would prefer?
Suppose the counterfactual world where people said no to Europeans genociding non-Christians on other continents, and so colonialism as I currently understand it doesn’t happen. What happens then? It sounds like you’re thinking there’s no US, and democracy worldwide is thus much weakened. I figure what would happen is: the New World still gets discovered by Europeans, and open land still gets populated by an agricultural society, one way or another. Maybe European powers take a more peaceful path in the New World, but they still populate it, there’s still a rebellion against colonial taxation, and the founding of something like the US still happens. Maybe European ideas around agriculture transfer over and are adopted by those living in the Americas as they watch Europe grow and industrialize, but either way we don’t have a vast empty continent. And if the ideas of the founders hadn’t taken root in America, assuming those people still existed, they might have taken root somewhere else. So to my mind, the counterfactual is that we still have a populated North America, and democracy; we just have one less really bad thing in our history, and the “shining city on a hill” is on a different hill. Does the country or countries that exist on the landmass the US occupies today, in that counterfactual, count as the US, though? Unclear.
Personally, I’m less attached to the United States than I am to the ideals that an ideal United States would attempt to strive for. As long as those ideals are instantiated somewhere, I’m OK with that counterfactual. And I don’t see a strong logical or conceptual link between the ideals of the United States that I think are good, and colonialism, which was driven by very different ideas.
There are, in other words, a whole lot of possible counterfactuals I could imagine that keep the good I associate with US culture while ditching colonialism. And I’m not super attached to the giant country to my south as a political entity; if it were a bunch of small countries, that might even be fun.
Yep, the exact counterfactual here is pretty tricky.
I think the trickiest moral part is how you relate in terms of interfacing with the existing legal system and existing property rights.
I think if you try to respect either of these, you are in for a really bad time, and my guess is the default outcome is that the Northern American continent roughly ends up similar to the Southern American continent. I think that would be quite bad! North America really is in a much better place than South America.
And then I also think there is a pretty decent chance that without North America, democracy never actually sweeps the world. Maybe you even get so unlucky that you reverse the industrial revolution (an outcome I don’t consider impossible as things were just brewing around that time), which would of course be maximally catastrophic, though I do think overall unlikely.
Like, the minimum thing that IMO needed to have happened to get good outcomes on the North American continent is for most of the land to be transferred away from native populations and towards the settling nations, and for the legal system of the continent to be replaced by something more like the American legal system (as opposed to whatever patchwork of tribal customs was governing things).
There are some ways this could have happened with very minimal violence. You can imagine buying all the land, but my strong guess is that you would have failed at that, and if you had treated the existing population as having property rights over the continent, you would have failed to establish the boundaries of an actually new nation. I think the next best choice would have been eminent domain with actually generous compensation, though unfortunately it wasn’t (to my knowledge) the case that early settlers, or colonizing nations, were in a good spot to generously compensate the people whose lands they were taking. Colonies generally barely broke even in those early years, so there wasn’t a lot of surplus to go around.
Ah, ok. My understanding is that the peoples of North America didn’t have a strong sense of land ownership the way Europeans did, it was more “we take care of the land for ourselves and future generations, and the land takes care of us”. I think the peaceful resolution there would have involved a discussion between cultures so they could map and understand each other’s ontologies and ways of thinking. I expect the amount of land the colonists would have wanted to own for their own use would have been trivial for the natives to relinquish at first. And I dunno, if people think charter cities or seasteads or whatnot can have an impact by being an example of better governance --> thriving, why not small colonies with better legal systems? Of course there’s having to, y’know, fight the British. But probably the Native Americans could have helped with that (did help with that, actually? Except mostly on the British side, because they were concerned about colonial expansionism. Imagine a counterfactual where the colonies and the pre-existing population were on good terms, during the American Revolution...)
I certainly think if it had been legally possible at the time to have city-states or charter cities run by the native Americans, that would have been an absolutely amazing outcome.
Unfortunately I think city states and charter cities require really stable government and political borders and this wasn’t feasible at the time. I might be wrong about this. I also don’t think the political theory or political will for this alternative history was there in any meaningful sense (again, I think the closest analog we have is how governance of South America ended up shaking out, though it’s of course not perfect).
Similarly for small states. The US controlling the continent coast-to-coast has been hugely useful for trade and prosperity and governance. I am pretty federalist and think states should have more power, but I don’t think that extends to thinking that multiple nation-states on the North American continent would have been better (I think South America, and Europe in the 20th century, both show different ways that would by default go wrong).
I think the question of “from what moral reference frame should you evaluate whether something was worth it” is a pretty tricky one. You clearly can’t say “from the perspective of whoever was there first” ... You also clearly can’t say “just evaluate the consequences from the perspective from wherever you are now”
I’d think you’d want to have a decision method about this that doesn’t give the more powerful party (with the bigger army or the better weapons, etc.) more votes. If you’re making a moral decision and you don’t think might makes right, that implies that power shouldn’t give you more influence in deciding what “right” is, after all. And it’s rather worrisome that “we decide based on how many past, present and future people vote in favour of this plan” has a strategy “so just kill your opponents so your side has many descendants and their side has none”.
I could see weighting it by the number of people affected, and how strongly they prefer or disprefer various outcomes and the methods of getting to those outcomes (potentially including counterfactual people and your best estimate of what they would say in each case). I could also see a simpler decision rule, that allows for vetoes and deontological prohibitions of certain actions (like, say, genocide) and then you have to navigate through possibility-space to an alternative that no present parties veto and doesn’t violate deontological restrictions, and then whichever alternatives pass the bar for “more benefit than harm to all those affected” are worth it. This method, as with many methods that involve vetoes, protects minority interests and doesn’t let 50%+1 of the population do whatever it feels like to 50%-1 of the population under consideration.
In any case, if you’ve got two or more parties in conflict, I think you’d want some method of deciding what’s “worth it” that is impartial.
Yep! Much has been written on things like this, here and elsewhere. We could go into a whole deconstruction of moral relativism, but I don’t think it’s the best use of either of our time. For now, I maintain the position that moral relativism poses some challenges to doing analysis like this, but IMO not too much of one, and you can overcome them, and I certainly disavow any analysis of the form “we just count the preferences of current humans, ignoring the fact that they are descendants of the victors”.
I mean, I would not switch to the U.S. society of 1776, or 1860, or even 1920. It is better today in 2026 than it was before vaccines, etc. It is very hard to decide whether I would prefer a counterfactual Native American country/empire that could have developed after western contact and exist in 2026, because the outcome is highly uncertain. Various levels of western colonization happened to non-western societies; several low-colonized countries are doing great in 2026. Mostly what makes countries great today is wide availability of technology, natural resources, education, human rights, and medicine.
That said, would I switch to super-America that conquered the world in the 1800s as a (somewhat unintuitively) democratic republic and invented antibiotics and vaccines in the same century and paused global warming by the mid-1800s or early 1900s, with no conveniently hidden externalities of a world government? Maybe?
I think it’s a mistake to call the above postmodernism and I’d be disappointed if your long form address of the above point were framed that way.
I agree this position is part of a bundle that’s associated with postmodernism (mostly by its detractors!), but the use here feels conflationary, adversarial, mind-killing.
I would find this future post much more readable, enjoyable, and easier to fit in my model of the world (and of Oli) if you didn’t use this piece of language.
I mean, I am not one to object to tabooing a word if it causes confusion in a conversation, but having taken a few classes at Berkeley by self-identified postmodernist teachers and having had much fun arguing with them, I am pretty sure this is an accurate description of a standard part of postmodernist thought, and I also don’t think they would consider that adversarial! Like, they said almost these exact words to me and I expect would straightforwardly endorse them.
I feel like “postmodernism” is almost famous for being a term that causes confusion about what it means, but I wasn’t expecting it to be a term to trigger defense-mechanisms.
I don’t dispute that some postmodernists would consider cultural relativism central to their worldview, and think instead of ’mostly by its detractors’ I should have said ’often by its detractors’.
I would be highly interested in reading an “against postmodernism” follow up.
I think the deeper challenge (which is not particularly relevant in the America example) is the idea that there are (a) some things which we think are good and we do them and they are good, and there are also (b) some things which we think are good that are actually bad, but not in a way where we can tell ‘from the inside’ right now. I think this point is more at home in a hypothetical post that is not about postmodernism, which (i) seems like an interesting post, and (ii) is not the one I am encouraging you to write.
(the third leg of this rule of three is (c) things that we think will lead to good ends but violate basic deontology, which we don’t do; https://www.lesswrong.com/s/AmFb5xWbPWWQyQ244/p/K9ZaZXDnL3SEmYZqB. What happens when multiple people have different deontological frameworks is left as an exercise for the reader).
(epistemic status: typed quickly, mostly encouragement of a future post, that I want to exist, where this encouragement is much stronger as a comment than as an upvote)
If I were reasonably confident I knew better than you how you should live, on what basis would I have the right to take away your agency and override your own preferences? Or the right to override the new set of preference-makers in any society? Even if I think I could do both better?
My answer is, if both: 1. I am reasonably confident I know better than you how you should live. 2. I am not sure that you lack the intelligence and capability to make your own evaluations of what is good and bad for you and act accordingly (i.e., you are not a baby or a cat or someone of extremely low mental capacity whom we would say, for example, can’t sign contracts on their own behalf because they don’t understand what’s happening well enough).
Then I should try to convince you I’m right, rather than imposing the outcomes on you that I think are best for you. If I am right, and you have decision-making capacity, you will be convinced. If I am wrong or you are someone who lacks decision-making capacity, I will find that out. I’m only in the clear to “take away your agency” if I have a good faith belief that you’re something like a baby or a cat or someone without the mental capacity to make decisions for themselves (for certain scopes of decisions—even a cat can decide whether it wants to eat food a or food b, and various other things). And I’d better be pretty sure of that, because if I turn out to have been wrong about it and treated you as someone with less agency than you deserve, that’s real bad. And honestly, even with my dog, who is not a smart dog, I try asking nicely and persuasion and positive reward for desired behaviour, before coercion, and coercion is rarely required. Even granting that the colonizers had thought of the people they were colonizing as moral patients rather than people, if they had treated them as well as I treat my none-too-bright dog, history would have been different.
You can love your life, your society, its norms, and the freedoms they afford you. But claims about the intrinsic goodness of your system, without a shared basis of evaluation with that to which you aim to compare it (under its own criteria), are epistemically as thin as a Viking screaming of his love for Valhalla. And while I think it is that Viking’s right to live and die for Valhalla if that way of life is threatened, that love does not bubble up to anything equivalent to an excuse for external imposition of the Viking way.
Ironically, this does more to justify colonialism than it does to defeat it. By your logic, if a modern military force were somehow transported back to the 1600s, they would have no moral basis to prevent European settlers from colonizing North America. For these settlers, the act of settlement was as much driven by morality, by a religious need to go forth and multiply and spread the light of God, as it was by rational calculation of material needs.
So I ask you, by what right would you stop these settlers from going forth and bringing the light of the Lord to the heathen darkness?
If I were reasonably confident I knew better than you how you should live, on what basis would I have the right to take away your agency and override your own preferences?
Under the norms of your society/culture. They obviously aren’t static, and can e.g. fade away, having lost to external competition (as the Vikings’ did), or change in response to internal critique (like how colonialism became discredited). If you would rather operate under some hypothetical perfect rules derived from first principles, then you will likely be disappointed, seeing how philosophy has for millennia utterly failed to discover those.
I used to hold the opinion that colonialism was justified, and over time have come to believe that exercising a kind of agency that violates other people’s sovereignty is not self-justified according to the values of the winner, by the winner.
If an SI came to America now, nuked it Truman-style, and replaced every human being with a non-sentient robotic mimic that was convinced it loved the new flag—we might get these kinds of articles too. The actions wouldn’t be justified, and we wouldn’t be wrong to say they are wrong simply because we can’t oppose them.
The essay blurs the line between defender and aggressor, and I think that’s something that can’t be done tacitly. I get the point you are making about values which encourage agency rather than hold it in contempt. And ways of life are absolutely worth defending. But I struggle immensely with the notion that we can derive any kind of normative claim about the goodness of imposing group values when those very group values are being applied as the retrospective rubric.
You can love your life, your society, its norms, and the freedoms they afford you. But claims about the intrinsic goodness of your system, without a shared basis of evaluation with that to which you aim to compare it (under its own criteria), are epistemically as thin as a Viking screaming of his love for Valhalla. And while I think it is that Viking’s right to live and die for Valhalla if that way of life is threatened, that love does not bubble up to anything equivalent to an excuse for external imposition of the Viking way.
I struggle specifically here because of the problem of sovereignty. If I were reasonably confident I knew better than you how you should live, on what basis would I have the right to take away your agency and override your own preferences? Or the right to override the new set of preference-makers in any society? Even if I think I could do both better?
This I do not know, and for me the answer underpins all such moral evaluations of colonialism, present and future, human and AI.
I might make a follow-up post that argues against postmodernism (which I feel like you are espousing here). I think there are a bunch of pretty solid ways you can compare value systems (e.g. you can just ask people which society they would like to switch to), and that this provides pretty strong arguments in favor of the colonization of North America.
I think there are deeper challenges here that could exist, but I don’t think this example provides such a challenge (I am not like 90%+ confident, but I am like reasonably confident).
I don’t understand how “if it looks like the highest magnitude feature in describing this behavior pattern is ‘conquer’, you’re probably doing a bad thing” is postmodernism? That seems pretty compatible with modernism to me. Like, I think we can hope for better than “let’s do the same mix of maybe some good but mostly motivated by bad”! It feels like you’ve already ideologically written your bottom line here, I have low P(habryka’s values after this conversation converge to being asymptotically aligned with mine more than epsilon) at the moment; but it might rapidly go up if it turns out that habryka actually does disvalue mass suffering and deletion, something I expect on order 30% of humanity simply is asymptotically aligned with me about, in asymptotically disvaluing this sort of behavior.
Like, come on, surely you can see how “goodness conquer” and “goodness achieve” are different referents? is that also postmodernism? I thought postmodernism was when you don’t treat words as having referents or something. I’m pretty sure these have referents! I don’t want there to be much if any conquering going on in utopia, conquering just seems bad, surely a goodness that contains conquering as a good is not good at all? Or maybe you meant something more complex by your example that will be obvious to me when my brain becomes less quantized; I feel like reading this post decreased my brain’s bit precision from the normal 8 down to 2, or something, due to emotional content. It’s a pretty emotionally activating post for people who are sufficiently near me, for an unknown value of “sufficiently”. Did it really need to be?
“Postmodernism” is a famously confusing term, but I am here using it to refer to the position of “you cannot compare goodness across different societal perspectives, you always have to evaluate a moral system from within that society and can’t make comparisons that aggregate across multiple moral perspectives”. This is of course only one of the 15 things that “postmodernism” means, but it’s the one I was referring to here.
I think you can! Though it’s of course tricky.
Huh, I am very confused. Of course those things are very bad. The whole reason why I chose American colonialism as an example is because it’s so bad, and so poses the greatest challenge to a position of “when you see bad things happening as part of your efforts to do good, nope out”, which I think was a reasonable interpretation of my first post.
So we are obviously on the same page here! I mention a lot of times that things were really bad, and they continue to be really bad! But I also find it extremely interesting that really a surprising fraction of modern western democratic institutions were birthed in that mess, and that even despite all the badness it seems more likely than not for it to have been the right call to do, and that it would have been a moral mistake to nope out.
The thing you are describing here is more typically called moral and cultural relativism. Cultural relativism in the social sciences largely originates with Franz Boas (pioneer of modern anthropology) in the 19th century; moral relativism in philosophy goes back to antiquity. It is in any event much older than the various movements in 20th-century anthropology, art, and other fields that attracted the “postmodernism” label.
Sure! I think cultural relativism is a major strand of postmodernism, and the “postmodernist” version of it is the one I am interested in responding to and engaging with. I certainly agree that aspect of postmodernism is much older!
This post is in a meaningful sense a defense of modernism, and so it seems natural to engage with postmodernist critiques of it, of which this is one of the standard big ones.
My understanding is that the way these words are used in sociology, anthropology, etc., cultural relativism is very much present in modernism. The thing you are calling “modernism” seems to be something else; something more connected to naïve realism, traditionalism, conservatism, reaction, etc.
I am confused what you mean by “modernism” here? I mean this thing that Wikipedia is talking about:
To be clear, I am maximally sympathetic to all of these words being super vague and abstract and hard to use, so I am very happy to use different words. But I do also find it helpful to have handles for this kind of stuff.
This comment does seem to be arguing against one thing gears is saying,
but I think gears is also saying (and I kind of agree, at least as an isolated point) that you had a choice of what to call the post, and “let goodness conquer all it can defend” is a phrasing that leans into the bad-parts-specifically of the American project.
(Choosing good titles is hard tho. I have different titles for somewhat different posts I might have written for both this post and the last but they would have been fairly different posts)
The choice of “conquering” in the title is important because it shields against the usual kumbaya tendencies of people thinking in this space.
Like, man, yes, if you want to create good things you will have a lot of fighting to do, and while under the umbrella of the modern world individuals can largely get away with not having to do any literal fighting, I find myself similarly frequently frustrated when people sneer at creating successful companies and taking the appropriate competitive zero-sum-contest-winning-actions that are necessary for good things to exist in that space.
The “conquering” part, or something of its kind, feels load-bearing to me. Though of course, title space is deep and wide, and it’s still putting emphasis on something, but I don’t regret the emphasis on this point (and of course as I said above, the whole point of choosing the American colonization is as to be the most far-out example of something to analyze).
I don’t mean to follow you around and pester you, but this:
Seems like a crux that I didn’t understand about your viewpoint. I’m a thoroughly modern dude who, while I wouldn’t sneer at competition engaged in, in its appropriate places (like between companies, where the rules of how they can compete are pretty carefully circumscribed), strongly prefers fight-avoidance in general, and will try hard to find cooperative solutions to problems. I think one of the things I like most about the world I live in is that we’ve found ways to coordinate to put various methods of conflict off-limits, and only “fight” in nice, mostly harmless ways. “Have the ability to conquer, but don’t use it”, “speak softly and carry a big stick”, etc. carry a lot of appeal to me. Ideally in future-utopia-according-to-me, we swear off weapons any more hurtful than big sticks, and anyone who decides to defect about that gets beaten with the sticks until they decide that maybe that was a bad plan. And “colonialism was worth it” carries strong vibes (for me) of “get the biggest weapons you can find for the side of good, and use them to conquer and defend your notion of the good”. I feel like that’s what the colonial empires were doing—trying to bring the light of Civilization as they understood it to the dark continents, by force
and replacing the inferior people with superior ones. (EDIT: On further reflection, this part is not something I actually think. Think of them as inferior: yes. Think they should be replaced with people from the home country: no.) I like the umbrella of the modern world very much, but recognize it’s fragile and do not want to poke holes in it. :D I fundamentally don’t think fighting and conquering is how Good wins, whereas I think the colonialists did think that’s how Good wins (because back in the day, war between countries was expected and normal). In my view, Good wins by deterring fights (by having the capacity to fight if needed) and by being appealing. I’m not sure if you’d actually endorse “Good should conquer”, but “if you want to create good things, you have to fight” might be something you’d say? If so, I’d be able to meet you at “if you want to create good things, you have to be willing and able to fight if it comes to it”.
The blogpost I had in mind to write someday is “The Moral Obligation to be Powerful”, which is making a somewhat different point, but has the same desiderata of “fight against kumbaya/innocence vibe”.
Yeah, OK, fair enough.
I think my reaction here is to the implication that what actually happened was on the Pareto frontier; that if we’re able to counterfact by sending back to a small group of people some reasonable amount of foresight about the good and bad outcomes, the bad ones can’t be averted without preventing the good ones. Like, I don’t think the natives had to be screwed over so badly to get the good outcomes you’re talking about! Explaining what the colonists meant by property and how their legal system worked would probably have done a lot of what I’m saying. The diseases would be harder to avoid, but there might be some short message you can imagine someone figuring out at the time that we can counterfact on.
Besides the obvious direct moral cost, which was enormous, a lot of why people complain about the effects on today is that the natives were already pretty good at governance: they weren’t governing for expansion, but they were pretty good at governing for stability, so if they’d had more of a vote in governing for expansion, there’s reason to expect it would have been slightly slower in exchange for much more stability. I doubt being nicer to natives results in no revolutionary war; the crown was still trying pretty hard to stay in control. If the natives had been involved in setting up the USA after throwing off the crown, then most of the counterfactual of “find a way to warn natives about what’s coming” seems likely to produce a civ closer to NZ, which doesn’t seem like a particularly bad outcome. So the hint I’m getting isn’t “I accept tradeoffs”, it’s “I accept subpar tradeoffs where the negative side is hugely more negative than it needed to be in order to achieve what I see as good”.
I am reassured moderately, but I’m still confused by this pattern, and in particular, “conquer” still is setting off alarm bells for me that the representation in your head might be voting yes on things I think the natural abstraction of the good thing you’re trying to defend does not need to vote yes about.
I think that argument is valid only under a normative value system which doesn’t pay the cost of consequence outsourcing. I would agree that most people would say the United States is a comparatively better place to live, but I would also argue that those numbers would look wildly different if the question was instead: “Would you prefer a world where the United States exists, or one where western colonialism never occurred throughout North America?” Under that question, I would place a reasonably high probability that your preference-sampling argument would no longer provide a moral justification for that system under the same global population base.
The point being that it is very easy to claim from within a structure with outsourced consequences that the structure is self-justified and coherently, globally good. No, you just aren’t paying the costs.
If you want to claim that the normative evaluation only applies to the in-group, then sure. But I’d argue that’s the exact kind of self-exemption I don’t morally agree with.
I’m not sure what you mean with “under the same global population base” but I don’t think most currently existing people answering “the first” to your question would by itself indicate that the colonization of America was morally justified.
For example, assume AIs in the future have mostly diminished the number and influence of humanity. Humanity is now only a small footnote in the world without power. Then one AI starts a poll and asks “Would you prefer a world where our AI society exists, or one where the creation of AI never occurred?” Assume that the result of the poll (from trillions of AIs) is overwhelmingly “the former”.
Would this mean that mostly replacing humanity with AI would have been morally justified? Clearly not. If we don’t create those AIs, their non-existence isn’t bad for them, and their hypothetical preferences expressed in this poll are morally irrelevant since those preferences are never instantiated. (This insight is called person-affecting utilitarianism.)
You’re mistaking Habryka’s argument to be “if people prefer modern America to pre-colonial America, then it was right to colonize America”. He’s just making the (more modest) point here that “if people prefer modern America to pre-colonial America, then probably modern America is a better place to live than pre-colonial America”, which you seemed to be saying one could not have any opinion on.
I think you are mistaking Habryka’s argument, not 0xA. Habryka wrote that “it was worth it”. The first “it” presumably refers to the colonization and the creation of the US. And “was worth it” presumably means “was right”. So we arrive at “the colonization was right” (despite all the listed downsides). That’s in line with 0xA’s interpretation.
Also note that (if it wasn’t obvious) “state of the world A is better than state of the world B” doesn’t imply that bringing about A is better than bringing about B. Maybe in state A everyone is happy only because we previously murdered everyone who was unhappy. That doesn’t mean murdering everyone who is unhappy is good.
Ben is understanding me correctly: that was the argument I was making in this comment (I think you can compare how good a place is to live, even across cultures and societies).
I agree that in the post I am making the argument that the overall tradeoff was worth it, and I could connect the two. I agree with you that there are circumstances in which “state of the world A is better than state of the world B” does not imply that bringing about A is better than bringing about B. I do think it’s a pretty strong argument in favor of bringing about A.
It seemed like you were making the additional argument “if you could stop A completely (and that was your only option) you should not.”
I assume though if future state A contains a trillion super happy AIs but no humans, while future state B contains a few billion moderately happy humans and no AIs: That then A would be a better state than B, and it would nonetheless be the case that we should bring about B rather than A. So there must be some disanalogy to the colonization case.
I am not a hedonic utilitarian, so would reject this analysis on those grounds.
The question is “would A be a better state than state B” holistically, by the assessment of something like the extrapolated volition of humanity. Importantly including everything that will happen into the distant future (which I think makes there being only a few billion moderately happy humans very unlikely, as we will eventually colonize the stars, and I would consider it an enormous atrocity to fail to do so).
The question is: extrapolated volition of whom? In the case of thinking about whether to create super happy AIs that replace us (A) or not (B), this would presumably be our current human extrapolated volition. So it wouldn’t take interests of non-existing AIs into account. And in the case of asking whether colonization of America was good or bad, we would have to consider the extrapolated volition of the humans alive at the time.
It’s a bit tricky. I don’t super feel like I owe the competitors to my distant ancestors in the primordial soup consideration in humanity’s CEV, though I am also not enormously confident that I definitely don’t.
Definitely agree that in this case you consider the values of the people from whom you took the opportunity to reproduce (though also ultimately I will at least somewhat bite the bullet that my values might diverge from theirs, and in as much as we are in a fully zero-sum competition I would like my values to win out, though overall principles of fairness and justice definitely compel me to give them a non-trivial chunk of the Lightcone).
I think the challenge here is that the comment is made as justification for the broader point of the article, which in context was (as an addendum to your quote) an example of an argument against postmodernism. Which I consider a claim to its rightness, especially when framed in that context.
I am making the subtle point that the argument can’t be used to debunk a postmodernist philosophy, because the data point he elected to use was, for lack of a better term, consequentialist, not morally justifying. To me, that’s like saying (and forgive me for the stark metaphor): “I can make a pretty good case that squatting in your grandparents’ mansion is morally justified, because everyone on the block would choose to live in this mansion if they could.”
I would agree with you if he had not included the prior qualifier that it is an argument against the philosophy he considers me (from my earlier comment) to hold, and if in the article he didn’t equate all of this with goodness itself.
I would take that bet, and consider it somewhat of a crux[1]. Indeed, I am honestly surprised you think it would come out the other way. I would be happy to make a bet about a survey on Positly or something.
Yep, I totally agree. My current beliefs here are (without total confidence) that everyone involved here would prefer a course of history where the US was established across the North American continent (my guess is also everyone would agree that you should make a lot of changes to how it was colonized).
I think the question of “from what moral reference frame should you evaluate whether something was worth it” is a pretty tricky one. You clearly can’t say “from the perspective of whoever was there first”, since, I do think I feel quite fine replacing insect populations and plankton from my oceans and using them for better stuff (I also think it’s obviously worth it to convert wild forests into arable land, but I might already be losing some people here).
You also clearly can’t say “just evaluate the consequences from the perspective from wherever you are now”, since that creates selection effects.
I again don’t actually think it would be a crux for this case (since I am pretty sure that the vast majority of people who lived on the North American continent would prefer a future in which the US exists), and that seems like a better crux to go into, but I could go into the game theory here and how I would currently resolve these issues.
It would be a very straightforward crux in as much as we could elicit people’s enlightened and endorsed opinions here. The big issue seems to me to be that people’s instinctual moral judgement often sucks and doesn’t correlate that much with what they would endorse after a lot of thinking, and the latter seems much harder to get data on.
Hmmm… this is tricky. Like, how constrained are the courses of history you say that people would prefer?
Suppose the counterfactual world where people said no to Europeans genociding non-Christians on other continents, and so colonialism as I currently understand it doesn’t happen. What happens then? It sounds like you’re thinking there’s no US, and democracy worldwide is thus much weakened. I figure what would happen is, the New World still gets discovered by Europeans, and open land still gets populated with an agricultural society, one way or another. Maybe European powers take a more peaceful path in the New World, but still populate it, and there’s still a rebellion against colonial taxation, and the founding of something like the US still happens, maybe European ideas around agriculture transfer over and are adopted by those living in the Americas, as they watch Europe grow and industrialize, but we don’t have a vast empty continent, one way or another. And if the ideas of the founders hadn’t taken root in America, if we assume those people still existed, they might have taken root somewhere else. So to my mind, the counterfactual is we still have a populated North America, and democracy, we just have one less really bad thing in our history, and the “shining city on a hill” is on a different hill. Does the country or countries that exists on the landmass the US occupies today, in that counterfactual, count as the US, though? Unclear.
Personally, I’m less attached to the United States than I am to the ideals that an ideal United States would attempt to strive for. As long as those ideals are instantiated somewhere, I’m OK with that counterfactual. And I don’t see a strong logical or conceptual link between the ideals of the United States that I think are good, and colonialism, which was driven by very different ideas.
There are, in other words, a whole lot of possible counterfactuals I could imagine that keep the good I associate with US culture while ditching colonialism. And I’m not super attached to the giant country to my south as a political entity; if it were a bunch of small countries, that might even be fun.
Yep, the exact counterfactual here is pretty tricky.
I think the trickiest moral part is how you relate in terms of interfacing with the existing legal system and existing property rights.
I think if you try to respect either of these, you are in for a really bad time, and my guess is the default outcome is that the Northern American continent roughly ends up similar to the Southern American continent. I think that would be quite bad! North America really is in a much better place than South America.
And then I also think there is a pretty decent chance that without North America, democracy never actually sweeps the world. Maybe you even get so unlucky that you reverse the industrial revolution (an outcome I don’t consider impossible as things were just brewing around that time), which would of course be maximally catastrophic, though I do think overall unlikely.
Could you elaborate a bit? This part is not clear to me, but seems quite important.
Like, the minimum thing that IMO needed to have happened to get good outcomes on the North American continent is for most of the land to be transferred away from native populations and towards the settling nations, and for the legal system of the continent to be replaced by something more like the American legal system (as opposed to whatever patchwork of tribal customs was governing things).
There are some ways this could have happened with very minimal violence. You can imagine buying all the land, but my strong guess is that you would have failed at that, and if you had treated the existing population as having property rights over the continent, you would have failed to establish the boundaries of an actually new nation. I think the next best choice would have been eminent domain with actually generous compensation, though unfortunately it wasn’t (to my knowledge) the case that early settlers, or colonizing nations, were in a good position to generously compensate the people whose lands they were taking. Colonies generally barely broke even in those early years, so there wasn’t a lot of surplus to go around.
Ah, ok. My understanding is that the peoples of North America didn’t have a strong sense of land ownership the way Europeans did, it was more “we take care of the land for ourselves and future generations, and the land takes care of us”. I think the peaceful resolution there would have involved a discussion between cultures so they could map and understand each other’s ontologies and ways of thinking. I expect the amount of land the colonists would have wanted to own for their own use would have been trivial for the natives to relinquish at first. And I dunno, if people think charter cities or seasteads or whatnot can have an impact by being an example of better governance --> thriving, why not small colonies with better legal systems? Of course there’s having to, y’know, fight the British. But probably the Native Americans could have helped with that (did help with that, actually? Except mostly on the British side, because they were concerned about colonial expansionism. Imagine a counterfactual where the colonies and the pre-existing population were on good terms, during the American Revolution...)
I certainly think if it had been legally possible at the time to have city-states or charter cities run by the native Americans, that would have been an absolutely amazing outcome.
Unfortunately I think city-states and charter cities require really stable governments and political borders, and this wasn’t feasible at the time. I might be wrong about this. I also don’t think the political theory or political will for this alternative history was there in any meaningful sense (again, I think the closest analog we have is how governance of South America ended up shaking out, though it’s of course not a perfect analog).
Similarly for small states. The US controlling the continent coast-to-coast has been hugely useful for trade and prosperity and governance. I am pretty federalist and think states should have more power, but I don’t think that extends to thinking that multiple nation-states on the North American continent would have been better (South America, and Europe in the 20th century, I think show two different ways that would by default go wrong).
I’d think you’d want a decision method about this that doesn’t give the more powerful party (the one with the bigger army or the better weapons, etc.) more votes. If you’re making a moral decision and you don’t think might makes right, that implies that power shouldn’t give you more influence in deciding what “right” is. And it’s rather worrisome that “we decide based on how many past, present, and future people vote in favour of this plan” invites the strategy “just kill your opponents, so your side has many descendants and theirs has none”.
I could see weighting it by the number of people affected, and how strongly they prefer or disprefer various outcomes and the methods of getting to those outcomes (potentially including counterfactual people and your best estimate of what they would say in each case). I could also see a simpler decision rule, that allows for vetoes and deontological prohibitions of certain actions (like, say, genocide) and then you have to navigate through possibility-space to an alternative that no present parties veto and doesn’t violate deontological restrictions, and then whichever alternatives pass the bar for “more benefit than harm to all those affected” are worth it. This method, as with many methods that involve vetoes, protects minority interests and doesn’t let 50%+1 of the population do whatever it feels like to 50%-1 of the population under consideration.
In any case, if you’ve got two or more parties in conflict, I think you’d want some method of deciding what’s “worth it” that is impartial.
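The veto-plus-deontology rule described above can be sketched as a small filter over candidate plans. To be clear, everything here is my own illustration: the `Party` class, the option names, and the payoff numbers are hypothetical, not anything proposed in the thread.

```python
from dataclasses import dataclass

# Sketch of the decision rule: drop any plan that violates a deontological
# prohibition or that any affected party vetoes, then keep only plans whose
# total benefit across all affected parties exceeds total harm.
# All names and numbers are hypothetical.

FORBIDDEN = {"genocide"}  # deontological prohibitions (example)

@dataclass
class Party:
    name: str
    vetoed: set    # option names this party vetoes
    payoffs: dict  # option name -> net benefit (+) or harm (-) to this party

def acceptable_options(options, parties):
    survivors = []
    for name, actions in options.items():
        if actions & FORBIDDEN:
            continue  # prohibited outright, regardless of benefit
        if any(name in p.vetoed for p in parties):
            continue  # any single affected party can block a plan
        if sum(p.payoffs.get(name, 0) for p in parties) > 0:
            survivors.append(name)  # passes the net-benefit bar
    return survivors

settlers = Party("settlers", vetoed=set(),
                 payoffs={"conquest": 10, "purchase": 5, "treaty": 4})
natives = Party("natives", vetoed={"purchase"},
                payoffs={"conquest": -100, "purchase": -2, "treaty": 6})
options = {"conquest": {"genocide"}, "purchase": set(), "treaty": set()}

print(acceptable_options(options, [settlers, natives]))  # -> ['treaty']
```

Note how the veto and the deontological filter do the minority-protection work: “conquest” is excluded before any benefit is counted, and “purchase” dies to a single veto even though its summed payoff is positive.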
Yep! Much has been written on things like this, here and elsewhere. We could go into a whole deconstruction of moral relativism, but I don’t think it’s the best use of either of our time. For now, I maintain the position that moral relativism poses some challenges to doing analysis like this, but IMO not too much of one, and you can overcome them, and I certainly disavow any analysis of the form “we just count the preferences of current humans, ignoring the fact that they are descendants of the victors”.
I mean, I would not switch to the U.S. society of 1776, or 1860, or even 1920. It is better today in 2026 than it was before vaccines, etc. It is very hard to decide whether I would prefer a counterfactual Native American country or empire that developed after western contact and still exists in 2026, because the outcome is highly uncertain. Western colonization happened to non-western societies at varying levels of intensity, and several lightly colonized countries are doing great in 2026. Mostly what makes countries great today is wide availability of technology, natural resources, education, human rights, and medicine.
That said, would I switch to a super-America that conquered the world in the 1800s as a (somewhat unintuitively) democratic republic, invented antibiotics and vaccines in that same century, and paused global warming by mid-century or the early 1900s because a world government has no conveniently hidden externalities? Maybe?
I think it’s a mistake to call the above postmodernism, and I’d be disappointed if your long-form address of the above point were framed that way.
I agree this position is part of a bundle that’s associated with postmodernism (mostly by its detractors!), but the use here feels conflationary, adversarial, mind-killing.
I would find this future post much more readable, enjoyable, and easier to fit in my model of the world (and of Oli) if you didn’t use this piece of language.
I mean, I am not one to object to tabooing a word if it causes confusion in a conversation, but having taken a few classes at Berkeley by self-identified postmodernist teachers and having had much fun arguing with them, I am pretty sure this is an accurate description of a standard part of postmodernist thought, and I also don’t think they would consider that adversarial! Like, they said almost these exact words to me and I expect would straightforwardly endorse them.
I feel like “postmodernism” is almost famous for being a term that causes confusion about what it means, but I wasn’t expecting it to be a term to trigger defense-mechanisms.
I don’t dispute that some postmodernists would consider cultural relativism central to their worldview, and think that instead of “mostly by its detractors” I should have said “often by its detractors”.
I’m glad you’re open to using different language.
I would be highly interested in reading an “against postmodernism” follow up.
I think the deeper challenge (which is not particularly relevant in the America example) is the idea that there are (a) some things which we think are good and we do them and they are good, and there are also (b) some things which we think are good that are actually bad, but not in a way where we can tell ‘from the inside’ right now. I think this point is more at home in a hypothetical post that is not about postmodernism, which (i) seems like an interesting post, and (ii) is not the one I am encouraging you to write.
(the third leg of this rule of three is (c) things that we think will lead to good ends but violate basic deontology, which we don’t do; https://www.lesswrong.com/s/AmFb5xWbPWWQyQ244/p/K9ZaZXDnL3SEmYZqB. What happens when multiple people have different deontological frameworks is left as an exercise for the reader).
(epistemic status: typed quickly, mostly encouragement of a future post, that I want to exist, where this encouragement is much stronger as a comment than as an upvote)
My answer is, if both:
1. I am reasonably confident I know better than you how you should live.
2. I am not sure that you lack the intelligence and capability to make your own evaluations of what is good and bad for you and to act accordingly (i.e., you are not a baby or a cat or someone of such low mental capacity that we would say, for example, that they can’t sign contracts on their own behalf because they don’t understand what’s happening well enough).
Then I should try to convince you I’m right, rather than imposing the outcomes on you that I think are best for you. If I am right, and you have decision-making capacity, you will be convinced. If I am wrong or you are someone who lacks decision-making capacity, I will find that out. I’m only in the clear to “take away your agency” if I have a good faith belief that you’re something like a baby or a cat or someone without the mental capacity to make decisions for themselves (for certain scopes of decisions—even a cat can decide whether it wants to eat food a or food b, and various other things). And I’d better be pretty sure of that, because if I turn out to have been wrong about it and treated you as someone with less agency than you deserve, that’s real bad. And honestly, even with my dog, who is not a smart dog, I try asking nicely and persuasion and positive reward for desired behaviour, before coercion, and coercion is rarely required. Even granting that the colonizers had thought of the people they were colonizing as moral patients rather than people, if they had treated them as well as I treat my none-too-bright dog, history would have been different.
Ironically, this does more to justify colonialism than it does to defeat it. By your logic, if a modern military force were somehow transported back to the 1600s, they would have no moral basis to prevent European settlers from colonizing North America. For these settlers, the act of settlement was as much driven by morality, by a religious need to go forth and multiply and spread the light of God, as it was by rational calculation of material needs.
So I ask you, by what right would you stop these settlers from going forth and bringing the light of the Lord to the heathen darkness?
Under the norms of your society/culture. Those norms obviously aren’t static: they can, e.g., fade away after losing to external competition (as the Vikings’ did), or change in response to internal critique (as when colonialism became discredited). If you would rather operate under some hypothetical perfect rules derived from first principles, you will likely be disappointed, seeing how philosophy has for millennia utterly failed to discover them.