Is there a known way to get Lumina in Europe? I can’t even access their website. I only get a Shopify error message (“Sorry, this store is currently unavailable”).
cubefox
I question the point of posts like this. There are countless philosophy of mathematics papers out there which have discussed these topics at a much higher level of sophistication, with many references to the prior literature. Someone who is unfamiliar with this literature is unlikely to add any original ideas at this point.
Of course, people are very unlikely, one way or the other, to dig up and read this dry prior literature, because that takes a lot of work. (Even if LLMs can help with the digging-up part.) So we have either a) uninformed but fun controversial blog posts to read and discuss, or b) nothing. Which is the better option? I’m not sure.
I thought “new notation” included new symbols. Almost all programming languages exclusively use ASCII characters for their keywords, which are pretty old.
Hard to find a symbol for that.
The question is: extrapolated volition of whom? In the case of thinking about whether to create super happy AIs that replace us (A) or not (B), this would presumably be our current human extrapolated volition. So it wouldn’t take interests of non-existing AIs into account. And in the case of asking whether colonization of America was good or bad, we would have to consider the extrapolated volition of the humans alive at the time.
I assume, though, that if future state A contains a trillion super happy AIs but no humans, while future state B contains a few billion moderately happy humans and no AIs, then A would be a better state than B, and yet it would nonetheless be the case that we should bring about B rather than A. So there must be some disanalogy to the colonization case.
Could you perhaps add subheadings? Otherwise it’s a bit of a wall of text.
> the price was very high, but the results justify the cost.
The question is: justified or not justified according to whom? I argue: according to the humans who existed at the time. The eventual results were plausibly bad according to the preferences of the Native Americans (because the results include their eventual partial replacement and the loss of much of their land) and good according to the preferences of the Western immigrants, and probably also good according to the preferences of much of the rest of the world population at the time (insofar as the US did eventually have a positive impact on the future of the rest of the world). So whether the colonization of North America was good overall is a question of weighing these preferences.
It’s possible that the future AI that takes over will result in a better state than the current one (the whole glorious trans-humanist future and everything).
If the AIs exterminate us and proceed to be much happier than we would have been otherwise, then that future is a “better state” than the alternative. But positive end states don’t automatically justify the whole trajectory that got us there.
And even if the AIs don’t exterminate us and the creation of those AIs strongly increases the total and average welfare of the world, while strongly decreasing our welfare, creating them (the AIs) would still be bad. Because not creating super-happy AIs in the first place isn’t bad for them (because in that case they wouldn’t exist and therefore would not suffer from their missing happiness), while making us humans unhappy in the future is actually bad for us, since we already exist and don’t want to be unhappy. See Can’t Unbirth a Child.
Moreover, we currently existing humans usually care about the future of humanity and about having human descendants, but we mostly don’t care about having AI descendants. So having human descendants is good for us according to our preferences, and therefore according to preference utilitarianism. In contrast, possible future AIs don’t care about coming into existence, because they don’t exist yet, and entities which don’t exist don’t have preferences, so they don’t show up in the moral (preference-utilitarian) calculus.
I would agree that most people would say the United States is a comparatively better place to live, but I would also argue that those numbers would look wildly different if the question was instead: “Would you prefer a world where the United States exists, or one where Western colonialism never occurred throughout North America?” Under that question, I would place a reasonably high probability that your preference-sampling argument would no longer provide a moral justification for that system under the same global population base.
I’m not sure what you mean by “under the same global population base”, but I don’t think most currently existing people answering “the first” to your question would by itself indicate that the colonization of America was morally justified.
For example, assume AIs in the future have greatly diminished the number and influence of humans, and humanity is now only a powerless footnote in the world. Then one AI starts a poll and asks “Would you prefer a world where our AI society exists, or one where the creation of AI never occurred?” Assume that the result of the poll (from trillions of AIs) is overwhelmingly “the former”.
Would this mean that mostly replacing humanity with AI would have been morally justified? Clearly not. If we don’t create those AIs, their non-existence isn’t bad for them, and their hypothetical preferences expressed in this poll are morally irrelevant since those preferences are never instantiated. (This insight is called person-affecting utilitarianism.)
But then someone else (e.g. China) would have achieved the same outcome. And a good AI outcome might be less unlikely under US American ASI than under Chinese ASI.
I think it is you, not 0xA, who is mistaking Habryka’s argument. Habryka wrote that “it was worth it”. The first “it” presumably refers to the colonization and the creation of the US. And “was worth it” presumably means “was right”. So we arrive at “the colonization was right” (despite all the listed downsides). That’s in line with 0xA’s interpretation.
Also note that (if it wasn’t obvious) “state of the world A is better than state of the world B” doesn’t imply that bringing about A is better than bringing about B. Maybe in state A everyone is happy only because we previously murdered everyone who was unhappy. That doesn’t mean murdering everyone who is unhappy is good.
I’m not sure whether this is load-bearing for the main point of the post but I have to comment on this part:
> Ok, fine. I’ll say it directly. I am extremely glad the west colonized North America. The American experiment was one of the greatest successes in history, and god was it far from perfect. Despite it all, despite the Trail of Tears, despite smallpox ravaging the land, despite the conquistadors and the looting and the rapes — yes, all of that, and still it was worth it. America is worth it. Democracy is worth it.

> If you were faced with the horrors of the American colonization, would you have chosen to keep going? Or would you have wrung your hands, declared the American experiment a failure, concluded that maybe man was never supposed to wield this power, and retired to the countryside, in denial that other men and women were doing the dirty work for you?
I think those are the wrong questions. The right questions are: If you had been a native at the time, would you have opposed the colonization? And, as a native at the time, was the colonization ultimately in your best interest?
There is an obvious analogy with evolution. The colonization of America looks very much like a superior invasive species (Westerners) arriving in a new habitat (North America) and outcompeting the inferior native species (Native Americans) by eventually outbreeding them and taking most of their land.
And this also looks very much like a possible future: A superior invasive species (autonomous AI agents) arriving in a new habitat (being invented and created by humans) and outcompeting the inferior native species (humans) by eventually outbreeding them and taking most of their land.
Now it seems likely that the correct answer to the question of whether the colonization was good for the Native Americans at the time is the same as the answer to the question of whether the possible future in the previous paragraph would be good for us (currently existing) humans.
Yes. There are many different “cold” viruses, and adults have some degree of immunity against most of them, while children are getting many for the first time in their life. That’s very different from the flu, which doesn’t have many concurrent versions but is evolving more rapidly. (Not sure where covid falls here.)
Yes. One example of an excellent writer who comes to mind is W.V.O. Quine. He has interesting arguments but also an unusually engaging writing style with subtle rhetorical tricks, which, in my opinion, often make his articles appear far more convincing than they turn out to be under close scrutiny. The key word here is “rhetoric”. Good rhetoric doesn’t make a good argument, and bad rhetoric doesn’t make a bad argument.
However, essays can be well-written without significantly relying on rhetorical tricks. I think many classics from Yudkowsky and Scott Alexander are of this type.
Interesting: Until now I assumed that the “smart reason” was identical to 2, but clearly they are different.
There are several things one can do when holding beliefs outside the Overton window:

1. Stating them outright and suffering the social penalty (e.g. no longer being taken seriously, or being ostracized)
2. Keeping your mouth shut
3. Lying (strategically stating a somewhat similar socially acceptable position rather than what you believe)
4. Stating things in an indirect and veiled way that hopefully only sufficiently reasonable or already sympathetic people will understand (“dog whistling”)
I agree that 3 tends to backfire. But it is usually unclear what to do instead.
Or “fewer people would have babies without day care and therefore things that enable that can’t be bad.”
Even if you believe the average quality in academic philosophy is lower than in LessWrong philosophy, that doesn’t mean that the best philosophy of mathematics papers, which build on each other, aren’t ahead of this post, which doesn’t build on much.
As I said already, because this would require a substantial time investment.
No, but it would have been a lot better if he had been. I’m not even sure how novel his idea was relative to the prior literature.
I don’t think that this is true, because we don’t usually see novel philosophical insights from people who are unfamiliar with the prior literature. People who just wing it are unlikely to compete with people who stand on the shoulders of giants.