I assume, though, that if future state A contains a trillion super happy AIs but no humans, while future state B contains a few billion moderately happy humans and no AIs, then A would be a better state than B, and it would nonetheless be the case that we should bring about B rather than A. So there must be some disanalogy to the colonization case.
Could you perhaps add subheadings? Otherwise it’s a bit of a wall of text.
> the price was very high, but the results justify the cost.
The question is: justify or not justify according to whom? I argue: according to the humans who existed at the time. The eventual results were plausibly bad according to the preferences of the Native Americans (because the results include their eventual partial replacement and the loss of much of their land) and good according to the preferences of the Western immigrants, and probably also good according to the preferences of much of the rest of the world population at the time (insofar as the US eventually had a positive impact on the future of the rest of the world). So whether the colonization of North America was good overall is a question of weighing these preferences.
> It’s possible that the future AI that takes over will result in a better state than the current one (the whole glorious transhumanist future and everything).
If the AIs exterminate us and proceed to be much happier than we would have been otherwise, then that future is a “better state” than the alternative. But positive end states don’t automatically justify the whole trajectory that got us there.
And even if the AIs don’t exterminate us, and the creation of those AIs strongly increases the total and average welfare of the world while strongly decreasing our welfare, creating them (the AIs) would still be bad. That’s because not creating super-happy AIs in the first place isn’t bad for them (in that case they wouldn’t exist and therefore couldn’t suffer from their missing happiness), while making us humans unhappy in the future is actually bad for us, since we already exist and don’t want to be unhappy. See Can’t Unbirth a Child.
Moreover, we currently existing humans usually care about the future of humanity and about having human descendants, but we mostly don’t care about having AI descendants. So having human descendants is good for us according to our preferences, and therefore according to preference utilitarianism. In contrast, possible future AIs don’t care about coming into existence, because they don’t exist yet, and entities which don’t exist don’t have preferences, so they don’t show up in the moral (preference-utilitarian) calculus.
> I would agree that most people would say the united states is a comparatively better place to live, but I would also argue that those numbers would look wildly different if the question was instead: “Would you prefer a world where the united states exists or western colonialism never occurred throughout North America”. Under that question, I would place a reasonably high probability your preference sampling argument would no longer provide a moral justification for that system under the same global population base.
I’m not sure what you mean by “under the same global population base”, but I don’t think most currently existing people answering “the first” to your question would by itself indicate that the colonization of America was morally justified.
For example, assume AIs in the future have greatly diminished the number and influence of humanity. Humanity is now only a powerless footnote in the world. Then one AI starts a poll and asks “Would you prefer a world where our AI society exists, or one where the creation of AI never occurred?” Assume that the result of the poll (from trillions of AIs) is overwhelmingly “the former”.
Would this mean that mostly replacing humanity with AI would have been morally justified? Clearly not. If we don’t create those AIs, their non-existence isn’t bad for them, and their hypothetical preferences expressed in this poll are morally irrelevant, since in that case those preferences are never instantiated. (This is the core idea of person-affecting utilitarianism.)
But then someone else (e.g. China) would have achieved the same outcome. And a good AI outcome might be less unlikely under an American ASI than under a Chinese ASI.
I think it is you who is mistaking Habryka’s argument, not 0xA. Habryka wrote that “it was worth it”. The first “it” presumably refers to the colonization and the creation of the US, and “was worth it” presumably means “was right”. So we arrive at “the colonization was right” (despite all the listed downsides). That’s in line with 0xA’s interpretation.
Also note that (if it wasn’t obvious) “state of the world A is better than state of the world B” doesn’t imply that bringing about A is better than bringing about B. Maybe in state A everyone is happy only because we previously murdered everyone who was unhappy. That doesn’t mean murdering everyone who is unhappy is good.
I’m not sure whether this is load-bearing for the main point of the post but I have to comment on this part:
> Ok, fine. I’ll say it directly. I am extremely glad the west colonized North America. The American experiment was one of the greatest successes in history, and god was it far from perfect. Despite it all, despite the Trail of Tears, despite smallpox ravaging the land, despite the conquistadors and the looting and the rapes — yes, all of that, and still it was worth it. America is worth it. Democracy is worth it.
>
> If you were faced with the horrors of the American colonization, would you have chosen to keep going? Or would you have wrung your hands, declared the American experiment a failure, concluded that maybe man was never supposed to wield this power, and retired to the countryside, in denial that other men and women were doing the dirty work for you?
I think those are the wrong questions. The right questions are: If you had been a native at the time, would you have opposed the colonization? And, as a native at the time, was the colonization ultimately in your best interest?
There is an obvious analogy with evolution. The colonization of America looks very much like a superior invasive species (Westerners) arriving in a new habitat (North America) and outcompeting the inferior native species (Native Americans) by eventually outbreeding them and taking most of their land.
And this also looks very much like a possible future: a superior invasive species (autonomous AI agents) arriving in a new habitat (by being invented and created by humans) and outcompeting the inferior native species (humans) by eventually outbreeding them and taking most of their land.
Now it seems likely that the correct answer to the question of whether the colonization was good for the Native Americans at the time is the same as the answer to the question of whether the possible future in the previous paragraph would be good for us (currently existing) humans.
Yes. There are many different “cold” viruses, and adults have some degree of immunity against most of them, while children are encountering many of them for the first time in their lives. That’s very different from the flu, which doesn’t have many concurrent strains but evolves more rapidly. (Not sure where COVID falls here.)
Yes. One example of an excellent writer who comes to mind is W.V.O. Quine. He has interesting arguments, but also an unusually engaging writing style with subtle rhetorical tricks which, in my opinion, often make his articles superficially far more convincing than they are under close scrutiny. The key word here is “rhetoric”. Good rhetoric doesn’t make a good argument, and bad rhetoric doesn’t make a bad argument.
However, essays can be well-written without significantly relying on rhetorical tricks. I think many classics from Yudkowsky and Scott Alexander are of this type.
Interesting: Until now I assumed that the “smart reason” was identical to 2, but clearly they are different.
There are several things one can do when having beliefs outside the Overton window:
1. Stating them outright and suffering the social penalty (e.g. no longer being taken seriously, or being ostracized)
2. Keeping your mouth shut
3. Lying (strategically stating a somewhat similar socially acceptable position rather than what you believe)
4. Stating things in an indirect and veiled way that hopefully only sufficiently reasonable or already sympathetic people will understand (“dog whistling”)
I agree that 3 tends to backfire. But it is usually unclear what to do instead.
Or “fewer people would have babies without day care and therefore things that enable that can’t be bad.”
In terms of logits (log odds), the probabilities 0.1 and 0.9 are actually quite close together: −2.2 and +2.2, for a logit range of $(-\infty, +\infty)$. The infinite range seems appropriate because the difference at the boundaries of probabilities (e.g. between 0.9 and 0.99) should get large, which can’t be captured with a finite range like that of the cubic Bézier.
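For concreteness, here is a minimal Python sketch of the arithmetic (the `logit` helper is my own illustration, not anything from the original post):

```python
import math

def logit(p: float) -> float:
    """Log odds of a probability p, defined for 0 < p < 1."""
    return math.log(p / (1 - p))

print(logit(0.1))   # -2.197...
print(logit(0.9))   # +2.197...
print(logit(0.99))  # +4.595...: the step from 0.9 to 0.99 (~2.4)
                    # is even larger than the step from 0.5 to 0.9 (~2.2)
```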
I always struggle to understand why people have no problem believing in objective facts about epistemic and instrumental rationality, but not in objective facts about morality. That seems semantically implausible, because statements of the form “x is rational” don’t seem fundamentally different from statements of the form “x is ethical”. The semantic difference between them seems to be about as small as the difference between “x is altruistic” and “x is egoistic”.
> The masses decide very little and their intelligence and knowledge of policy is entirely irrelevant to their usefulness as voters.
That seems unlikely. Plausibly, a less intelligent and knowledgeable electorate will tend to elect worse leaders.
Note that you can get addicted to sunlight.
That’s an interesting advanced perspective. I’m not as much of a tea nerd yet; I just sometimes buy tea bags in stores during the winter. Perhaps I should consider some of your tips for next winter. But I want to share my observations (i.e., unqualified opinions) as well.
- My current favorite is peppermint tea. It’s not from the proper tea plant (unlike black/green/white tea), so it doesn’t contain caffeine. The special thing about it is that it contains menthol, which feels cool when you breathe in after drinking; this contrasts with the tea being hot and produces a unique taste. I recommend brewing it for longer than the official instructions suggest, or using more tea (see also below).
- Black, green, and white tea all come from the same plant, the tea plant. The difference in taste comes (in my opinion) mostly from the fact that black tea tastes strongest, with green and white tea progressively milder. But a similar effect can be achieved by simply brewing the tea for a longer or shorter time, so black tea can be made to taste approximately like green tea, or the other way round.
- Proper tea (especially black tea) has a very strong taste compared to other teas, so it is important not to let it brew for long. Most other (non-proper) teas have a much weaker taste and should be brewed for longer. The instructions on the tea bags often don’t properly reflect this difference in strength. If your tea tastes like hot water, brew it for longer or use more tea.
- There is no such thing as “herbal tea”. It’s like speaking of “fruit juice”. What fruit!? Apple? Banana? Elderberry? The taste will be totally different depending on what the main ingredient (“herb”) is. Most herbs don’t taste similar to each other, so there is no reason to lump them together under the category “herbal”. You have to look at the list of ingredients to determine what type of tea it is.
- Many fancy-sounding but cheap tea types are actually just some basic tea with added flavoring. Again, the list of ingredients tells us what tea it actually is.
- Tea pots can be kept warm for quite some time with teapot warmers that burn tealights (small flat candles). But candles are known to be quite bad for air quality, so I plan to stop using them.
- Electric kettles are useful for quickly heating water, but if you just make one average-sized cup of tea, a microwave is sufficient. Then you also don’t need a teapot warmer.
- I don’t think water temperature makes a noticeable difference in taste, though it does affect the brewing time (cooler = longer).
Yes, especially their results on all kinds of visual understanding benchmarks are very impressive, sometimes significantly ahead of the competition. Unfortunately, the model is virtually unknown outside of China. For foreigners, the website (https://www.doubao.com/chat/) redirects to a different chatbot called “Dola”. I’m not sure whether this is essentially the same model behind the scenes, perhaps just with different censorship.
The question is: the extrapolated volition of whom? In the case of deciding whether to create super happy AIs that replace us (A) or not (B), this would presumably be our current human extrapolated volition, so it wouldn’t take the interests of non-existing AIs into account. And in the case of asking whether the colonization of America was good or bad, we would have to consider the extrapolated volition of the humans alive at the time.