There is no perfect match with Bostrom’s vulnerabilities, because the book assumed there was a relatively safe strategy: hide. If no one knows you are there, no one will attack you; the “nukes” may be cheap, but firing them blindly would destroy potentially useful resources.
Not relevant: if you succeed in hiding, you simply fall off the vulnerability landscape. We only need to consider what happens once you’ve been exposed. Also, whose resources? It’s a cosmic commons, so who cares if it gets destroyed.
The point of the Dark Forest hypothesis was precisely that in a world with such asymmetric weapons, coordination is not necessary. If you naively make yourself visible to a thousand potential enemies, it is statistically almost certain that someone will pull the trigger, for whatever reason.
That’s just a Type-1 vulnerable world. No need for the contrived argumentation the author gave.
There is a selfish reason to pull the trigger; any alien civilization is a potential extinction threat.
Not really: cleaning up extinction threats is a public good that generally tends to fall prey to the Tragedy of the Commons. Even if you made the numbers work out somehow (which is very difficult, and requires conditions that the author has explicitly refuted, like the impossibility of colonizing other stars or of sending out spam messages), it would still not be an example of Moloch. It would be an example of pan-galactic coordination, albeit a perverted one.
Very much disagree. My sense is that the book series is pretty meagre when it comes to “thoughtful hard science”, as well as game theory and human sociology.
To pick the most obvious example (the title of the trilogy*): the three-body problem was misrepresented in the books as “it’s hard to find the general analytic solution” instead of “the end state is extremely sensitive to changes in the initial conditions”, and the characters in the book (both humans and Trisolarians) spend eons trying to solve the problem mathematically.
But even if an exact solution were found (and such solutions do exist for some chaotic systems, like the logistic map), it would have been useless, since the initial conditions cannot be known perfectly. This isn’t a minor nitpick like the myriad other scientific problems with the Trisolarian system, which can more easily be forgiven as artistic license; this is missing what chaotic systems are about. Why even invoke the three-body problem other than as attire?
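To make the distinction concrete, here’s a minimal sketch (my own toy example, not anything from the books) using the logistic map mentioned above. Even though the r = 4 case has a known closed-form solution, that solution buys you nothing once the initial condition carries any error at all:

```python
def logistic(x, r=4.0):
    # One step of the logistic map; at r = 4 the closed-form solution is
    # x_n = sin^2(2^n * pi * theta) with x_0 = sin^2(pi * theta).
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12   # two practically indistinguishable initial conditions
for n in range(1, 101):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.1:
        print(f"trajectories diverge after {n} iterations")  # roughly 40 steps
        break
```

An error of 10^-12 in the starting point, far smaller than any physical measurement could achieve, destroys the prediction within a few dozen iterations.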
*not technically the title of the book series, but frequently referred to as such
Where exactly do you see Moloch in the books? It’s quite the opposite if anything; the mature civilizations of the universe have coordinated around cleaning up the cosmos of nascent civilizations, somehow, without a clear coordination mechanism. Or perhaps it’s a Type-1 vulnerable world, but it doesn’t fit well with the author’s argumentation. I’m not sure, and I’m not sure the author knows either.
I’m still a little puzzled by all the praise for the deep game-theoretic insights the book series supposedly contains, though. Maybe game theory as attire?
You were also exposed to all sorts of risks if you were “below some wealth threshold where they cannot act as they would reasonably like to because their alternative is homelessness, malnourishment, etc.” even before Corona came around. The situation hasn’t changed all that much.
But, as Elon Musk famously said: “If you don’t make stuff, there is no stuff”.
Spreading across the stars without number sounds more like a “KILL CONSUME MULTIPLY CONQUER” thing than an “Everything Else” thing. I may be missing the point here.
ETA: Is the point that over time Man evolved to be what he is today, we have a conception of right and wrong, and we’re the first link in the chain that actually cares about making sure our morals propagate forward as we evolve? So now the force of evolution has been co-opted into spreading human morality?
No. I recommend reading Meditations on Moloch first, then everything becomes clear.
That’s a pretty extreme over-dramatization. Corona isn’t even 1% as bad.
The Mote in God’s Eye is a pretty good example of social science fiction in addition to being a great science fiction novel in general.
If the Coronavirus had a 30% fatality rate people would care a lot about not getting infected in the real world, too.
You mean the Nash equilibrium strategy? Rock-Paper-Scissors is a zero-sum game, so Pareto optimality is a trivial notion here.
Regardless of what the new player does, there is no reason to ever play scissors. I don’t see any interesting “4-choice dynamic” here. Perhaps you should pick a different example with multiple Nash equilibria.
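For what it’s worth, here’s the mechanical version of that claim: check the payoff matrix for weakly dominated pure strategies. The fourth move (“dynamite”) and its payoffs below are purely hypothetical placeholders, since I’m not reproducing the original example here; the point is the method.

```python
import numpy as np

moves = ["rock", "paper", "scissors", "dynamite"]
# payoff[i][j] = row player's payoff when moves[i] meets moves[j]
# (hypothetical fourth move: loses to rock, beats paper and scissors)
payoff = np.array([
    [ 0, -1,  1,  1],   # rock
    [ 1,  0, -1, -1],   # paper
    [-1,  1,  0, -1],   # scissors
    [-1,  1,  1,  0],   # dynamite
])

for i in range(len(moves)):
    for j in range(len(moves)):
        if i != j and np.all(payoff[j] >= payoff[i]) and np.any(payoff[j] > payoff[i]):
            # prints: scissors is weakly dominated by dynamite
            print(f"{moves[i]} is weakly dominated by {moves[j]}")
```

Once a move is weakly dominated like this, there is never a reason to prefer it over the dominating move.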
Another advantage AI secrecy has over nuclear secrecy is that there’s a lot of noise and hype these days about ML both within and outside the community, making hiding in plain sight much easier.
In the midgame, it is unlikely for any given group to make it all the way to safe AGI by itself. Therefore, safe AGI is a broad collective effort and we should expect most results to be published. In the endgame, it might become likely for a given group to make it all the way to safe AGI. In this case, incentives for secrecy become stronger.
I’m not sure what “safe” means in this context, but it seems to me that publishing safe AGI research is not a threat, and it’s the unsafe but potentially very capable AGI research we should worry about?
And the statement “In the midgame, it is unlikely for any given group to make it all the way to safe AGI by itself” seems a lot more dubious given recent developments with GPT-3, at least according to most of the LessWrong community.
Industrializing space is desirable because industrializing Earth has had a number of negative side effects on the biosphere, so moving production outside the biosphere would be a positive development. My argument is that the option of staying home is clearly economically preferable for now, and will be unless we see major cost reductions in space technology.
I thought your argument was that we should industrialize space because it’s economically viable?
Putting that aside, environmentalism is just about the last reason for space activities. Space travel has had a negligible impact on the environment thus far only because there has been so little space travel. But on a per-kilogram-of-payload basis, even assuming the cleanest methalox/hydrolox fuel composition produced purely from solar power, the NO2 from hot exhaust plumes and the ozone-eating free radicals from reentry heating alone are enough to make any environmentalist screech in horror. You’d have to go to the far end of level 3 tech to begin making this argument, and even then it still isn’t an economic incentive. You can’t seriously dismiss space tourism as a driver for space travel and then propose environmentalism as an alternative.
Whether SpaceX and other launch vehicle organizations can reach the Level 2 threshold you describe remains to be seen, and LVs are only part of the price tag. Materials, equipment, and labor represent a large segment of space mission cost, and only if we can also drive those down by similar degrees do the economics of colonization start making sense.
Space is hard, sure, but how does that help your point exactly? Colonization doesn’t have to (and won’t) make economic sense. Industrialization does.
Note, too, that ΔV is non-trivial, even when we start getting to high specific-impulse technologies.
Not really. This isn’t relevant for the Moon vs Mars debate, but even for the outer planets I would argue:
Short travel time isn’t necessary for colonizing or industrializing outer planets
Nuclear fusion can realistically reach an Isp of around 500,000 s, dwarfing any reasonable requirement for travel inside the solar system
Also, all the analysis with hyperbolic orbits is kind of unnecessary, as the solar gravity well becomes trivial for short transfers. You could just as well assume the target planets to be fixed points and get the Δv requirement from distance divided by desired travel time (×2 for deceleration).
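A quick back-of-the-envelope sketch of both points; the specific numbers (a ~9 AU trip, roughly Earth to Saturn, done in one year) are my own illustrative choices:

```python
import math

AU = 1.496e11   # meters
G0 = 9.81       # m/s^2, standard gravity

def straight_line_dv(distance_au, travel_time_years):
    """Crude estimate: treat the target planet as a fixed point,
    burn up to speed, then decelerate: dv ~= 2 * distance / time."""
    d = distance_au * AU
    t = travel_time_years * 365.25 * 24 * 3600
    return 2.0 * d / t   # m/s

def propellant_fraction(dv, isp_seconds):
    """Tsiolkovsky rocket equation: propellant as a fraction of initial mass."""
    ve = isp_seconds * G0
    return 1.0 - math.exp(-dv / ve)

dv = straight_line_dv(9.0, 1.0)
print(f"delta-v ~ {dv / 1000:.0f} km/s")                            # ~85 km/s
print(f"propellant fraction ~ {propellant_fraction(dv, 5e5):.1%}")  # under 2%
```

Even a one-year trip to Saturn comes out to roughly 85 km/s of Δv, which a 500,000 s Isp drive covers with a propellant fraction of under two percent.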
Government: Ask Kennedy
Private sector: Ask Musk
I’m still confused about your critique, so let me ask you directly: In the scenario outlined by the OP, do you expect humans to eventually evolve to stop feeling pain from electrical shocks?
Evolution can’t dictate what’s harmful and what’s not; bigger peacock tails can be sexually selected for until they become too costly for survival, and an equilibrium sets in. In our scenario, since pain-inducing stimuli are generally bad for survival, there is no selection pressure to increase the pain threshold for electrical shocks past a certain equilibrium point. Because we start out with a nervous system that associates electrical shocks with pain, this pain becomes a pessimistic error beyond the equilibrium point and never gets fixed, i.e. humans still suffer under electrical shocks, just not so badly that they’d rather kill themselves.
Suffering is not rare in nature because actually harmful things are common and suffering is an adequate response to them.
Why then is it possible to suffer pain worse than death? Why do people and animals suffer just as intensely beyond their reproductive age?
Yes, that’s what pessimistic errors are about. I’m not sure what exactly you’re critiquing though?
The obvious reason that Moloch is the enemy is that it destroys everything we value in the name of competition and survival. But this is missing the bigger picture.
No, it isn’t. What do I care which values evolution originally intended to align us with? What do I care which direction dysgenic pressure will push our values in the future? Those aren’t my values, and that’s all I need to know.
After all, if you forget to shock yourself, or choose not to, then you are immediately killed. So the people in this country will slowly evolve reward and motivational systems such that, from the inside, it feels like they want to shock themselves, in the same way (though maybe not to the same degree) that they want to eat.
No, there is no selection pressure to shock yourself more than the required amount; anything beyond that is still detrimental to your reproductive fitness. Once we’ve evolved to barely tolerate the pain of electric shocks so as not to kill ourselves, the selection towards more pain tolerance stops, and people will still suffer a great deal, because there is no incentive for evolution to fix pessimistic errors. You could perhaps engineer scenarios where humans genuinely evolve to like a dystopia, but that certainly doesn’t apply to most cases; otherwise suffering would already be a rare occurrence in nature.
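If it helps, here’s a toy simulation of that argument; the setup and all numbers are my own invention, purely to illustrate the equilibrium. Selection only removes people whose pain exceeds the “would rather die than shock myself” threshold; below that threshold nothing pushes the pain toward zero:

```python
import numpy as np

rng = np.random.default_rng(0)

REFUSAL = 9.0   # pain above which a person refuses the daily shock and is killed
N = 1000        # population size
pain = rng.normal(9.5, 1.0, N)   # initially, many people find the shock unbearable

for generation in range(200):
    survivors = pain[pain < REFUSAL]           # only the refusers are selected out
    parents = rng.choice(survivors, size=N)    # survivors reproduce at random
    pain = parents + rng.normal(0.0, 0.02, N)  # offspring inherit pain plus a small mutation

# The mean settles somewhat below the refusal threshold (around 8 on this
# made-up scale) and stays there; it never evolves toward zero.
print(f"mean pain after 200 generations: {pain.mean():.1f}")
```

The population ends up barely tolerating the shocks: exactly as much suffering as the selection pressure forces, and not a bit less.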
Well this requirement doesn’t appear to be particularly stringent compared to the ability to suppress overpopulation and other dysgenic pressures that would be necessary for such a global social system. It would have to be totalitarian anyway (though not necessarily centralized).
It is also a useful question to ask whether there are alternative existential opportunities if super-intelligent AI doesn’t turn out to be a thing. For me that’s the most intriguing aspect of the FAI problem; there are plenty of existential risks to go around but FAI as an existential opportunity is unique.
Maybe the one-shot Prisoner’s Dilemma is rare and Moloch doesn’t turn out to be a big issue after all
On the other hand, perhaps the FAI solution is just sweeping all the hard problems under the AI-alignment rug and isn’t any more viable than engineering a global social system that is stable over millions of years (possibly using human genetic engineering)