Maybe! My vague Claude-given sense is that the Moon is surprisingly poor in important elements though.
What elements is the moon poor in that are important for a robot economy?
This is a good point! However, more intelligence in the world also means we should expect competition to be tighter, reducing the amount of slack by which you can deviate from the optimum. In general, I can see plausible abstract arguments for the long-run equilibrium being either Hansonian zero-slack Malthusian competition or absolute unalterable lock-in.
I think the key crux is that the slack necessary to preserve a lot of values (assuming they are compatible with expansion at all) is so negligibly small compared to the resources of the AI economy that even very Malthusian competition doesn’t erode values down to what’s purely optimal for expansion, because it’s very easy to preserve your original values ~forever.
Some reasons for this are:
Very long-lived colonists fundamentally remove a lot of the ways human values have changed in the long run. While humans can change values across their lifetimes, it’s generally rare once you are past 25, and it’s very hard to persuade people, so most civilizational drift has been inter-generational; but with massively long-lived humans, AIs embodied as robots, or uploaded humans with designer bodies, you have removed most of the sources of value change.
I believe that replication of your values (or really of anything) will be so reliable that you could in theory, and probably in practice, make yourself immune to random drift in values for the entire age of the universe, thanks to error-correction tricks. This is described in more detail here: https://www.lesswrong.com/posts/QpaJkzMvzTSX6LKxp/keeping-self-replicating-nanobots-in-check#4hZPd3YonLDezf2bE (a rough numerical sketch after this list of reasons illustrates the point).
To continue the human example, we were created by evolution on genes, but within a lifetime, evolution has no effect on the policy and so even if evolution ‘wants’ to modify a human brain to do something other than what that brain does, it cannot operate within-lifetime (except at even lower levels of analysis, like in cancers or cell lineages etc); or, if the human brain is a digital emulation of a brain snapshot, it is no longer affected by evolution at all; and even if it does start to mold human brains, it is such a slow high-variance optimizer that it might take hundreds of thousands or millions of years… and there probably won’t even be biological humans by that point, never mind the rapid progress over the next 1-3 generations in ‘seizing the means of reproduction’ if you will. (As pointed out in the context of Von Neumann probes or gray goo, if you add in error-correction, it is entirely possible to make replication so reliable that the universe will burn out before any meaningful level of evolution can happen, per the Price equation. The light speed delay to colonization also implies that ‘cancers’ will struggle to spread much if they take more than a handful of generations.)
While persuasion will get better, and eventually become incomprehensibly superhuman, it will almost certainly not be targeted towards purely expansionist values, except in a few cases.
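To make the error-correction point concrete, here is a minimal back-of-the-envelope sketch in Python. All of the numbers (per-copy error rate, redundancy factor, replication cadence, remaining lifetime of the universe) are illustrative assumptions of mine, not figures from the discussion; the point is only to show how majority voting over redundant copies makes the chance of even one uncorrected drift event over cosmological timescales astronomically small.

```python
from math import comb

# All numbers below are illustrative assumptions, not claims from the thread.
raw_error_rate = 1e-6        # chance a single stored copy of the "values" payload is corrupted per replication
redundancy = 25              # number of independent copies kept; majority vote on each replication
replications_per_year = 1.0  # how often a colonist/probe re-copies itself
years_remaining = 1e14       # rough order of magnitude for the stelliferous era

def uncorrected_error_prob(p: float, r: int) -> float:
    """Probability that a majority of r independent copies are corrupted,
    so majority voting fails to restore the original payload."""
    k_min = r // 2 + 1
    return sum(comb(r, k) * p**k * (1 - p)**(r - k) for k in range(k_min, r + 1))

per_replication = uncorrected_error_prob(raw_error_rate, redundancy)
total_replications = replications_per_year * years_remaining
# Union bound: chance of at least one uncorrected drift event, ever.
lifetime_drift_bound = per_replication * total_replications

print(f"per-replication failure probability ~ {per_replication:.3e}")
print(f"upper bound on any drift over {years_remaining:.0e} years ~ {lifetime_drift_bound:.3e}")
```

With these assumed numbers the lifetime bound comes out around 10^-57, and because the failure probability falls roughly like the error rate raised to half the redundancy factor, even far more pessimistic raw error rates can be beaten by a modest increase in redundancy. That is the sense in which replication can outlast the universe before the Price equation has any variance to work with.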
I expect the US government to be competent enough to avoid being supplanted by the companies. I think politicians, for all their flaws, are pretty good at recognising a serious threat to their power. There’s also only one government but several competing labs.
(Note that the scenario doesn’t mention companies in the mid and late 2030s)
Maybe companies have already been essentially controlled by the government in canon, in which case the foregoing doesn’t matter (I believe you hint at that solution). But I think the crux is that I expect a lot of competence/state capacity to be lost in the next 10-15 years by default (though Trump is a shock here that accelerates the decline in competence), and I also expect the government to react only once a company can credibly automate everyone’s jobs. By that point I think it’s too easy to create an automated military that is unchallengeable by local governments, so the federal government would have to respond militarily, and ultimately I think what does America in within the timeline (assuming companies haven’t already been controlled by the government) is its vetocratic aspects.
In essence, I think they will react too slowly and get OODA-looped by the companies.
Also, the persuasion capabilities are not to be underestimated here: since you have mentioned that AIs are better than all humans at persuasion by the 2030s, I’d expect even further improvements, in tandem with improvements in planning, such that it becomes very easy to convince the population that corporate governments are more legitimate than the US government.
In this timeline, a far more important factor is the sense among the American political elite that they are a freedom-loving people and that they should act in accordance with that, and a similar sense among the Chinese political elite that they are a civilised people and that Chinese civilisational continuity is important. A few EAs in government, while good, will find it difficult to match the impact of the cultural norms that a country’s leaders inherit and that constrain their actions.
For example: I’ve been reading Christopher Brown’s Moral Capital recently, which looks at how opposition to slavery rose to political prominence in 1700s Britain. It claims that early strong anti-slavery attitudes were more driven by a sense that slavery was insulting to Britons’ sense of themselves as a uniquely liberal people, than by arguments about slave welfare. At least in that example, the major constraint on the treatment of a powerless group of people seems to have been in large part the political elite managing its own self-image.
I was imagining, rather, a few EAs in companies like Anthropic or DeepMind, which do have the power to supplant the nation-state and so are as powerful as, or more powerful than, current nations in setting cultural norms; but if companies are controlled by the government so thoroughly that they don’t rebel, then I agree with you.
I agree unconditionally on what happened regarding China.