Given the 2040+ position, I’ll try to speculate a little more on what the world will look like after 2040, though I have a few comments to make first.
1. While I do think Mars will be exploited eventually, I expect the moon to be first for serious robotics effort, and more effort will be directed towards the moon than Mars, mostly because of its closeness, its more useful minerals for jump-starting a robot economy, and its plentiful power.
2. I expect the equation mentioned below to be severely underdetermined, such that there are infinitely many solutions. A big reason is that I think the relevant constraint is needing to replicate fast, not being the fastest amongst them all (because replicating a little better will usually only get a little advantage, not an utterly dominant one), combined with a lot of values being compatible with replicating fast, so value alignment/intent alignment matters more than you think:
But this alone does not let you control the future. A thousand people go to a thousand AIs and say: do like so. The AIs obey, and it is done, but then the world responds: doing this leads to this much power, and doing that leads to that much power. In the vast sea of interactions, there are some patterns that strengthen themselves over time, and others that wind themselves down. Repeat enough times, each time giving to each actor what they sowed last time, and what emerges is not the sum of human wills—even if it is bent by it—but the solution to the equation: what propagates fastest?
As far as the future goes, I expect the universe to be broadly divided between China, Anthropic, OpenAI, Google DeepMind, and perhaps a UK AISI/company and Taiwan, with the other powers being either irrelevant or having been exterminated.
Assuming no nationalization of the companies has happened and they still have large freedom of action, it’s likely that Google DeepMind, OpenAI, and Anthropic have essentially supplanted the US as the legitimate government, given their monopolies on violence via robots.
Anthropic will likely be the big pressure group that counters the intelligence curse, due to their leadership being mostly composed of EAs who care about others in ways that do not depend on those others being instrumentally valuable. In general, the fact that EA types got hired to some of the most critical positions on AI was probably fairly critical in this timeline for preventing the worst outcomes from the intelligence curse from occurring.
Eventually, assuming AI and robotics are solved by the 2040s-2050s, someone is going to develop very powerful biotech, neuralinks that can control your mind in almost arbitrary ways, and uploading within the 21st century. Once these technologies are developed, it becomes near trivial to preserve your culture for ~eternity, and the successor problem that causes cultures to diverge essentially stops being a problem. This largely obviates evolution’s role except in very limited settings, which means the alignment problem in full generality is likely very soluble by default in the timeline presented.
My broad prediction at this point is that the governance of the Universe/Earth looks set to be split between ASI/human-emulation dictatorships and states like North Sentinel Island, which no one is willing to attack, each for their own reasons.
In many ways, the story of the 21st century is the story of the end of evolution/dynamism as a major force in life, and to the extent that evolution matters, it’s in much more limited settings that are always constrained by the design of the system.
Thanks for these speculations on the longer-term future!
while I do think Mars will be exploited eventually, I expect the moon to be first for serious robotics effort
Maybe! My vague Claude-given sense is that the Moon is surprisingly poor in important elements though.
not being the fastest amongst them all (because replicating a little better will usually only get a little advantage, not an utterly dominant one), combined with a lot of values being compatible with replicating fast, so value alignment/intent alignment matters more than you think
This is a good point! However, more intelligence in the world also means we should expect competition to be tighter, reducing the amount of slack by which you can deviate from the optimal. In general, I can see plausible abstract arguments for the long-run equilibrium being either Hansonian zero-slack Malthusian competition or absolute unalterable lock-in.
Assuming no nationalization of the companies has happened and they still have large freedom of action, it’s likely that Google DeepMind, OpenAI, and Anthropic have essentially supplanted the US as the legitimate government, given their monopolies on violence via robots.
I expect the US government to be competent enough to avoid being supplanted by the companies. I think politicians, for all their flaws, are pretty good at recognising a serious threat to their power. There’s also only one government but several competing labs.
(Note that the scenario doesn’t mention companies in the mid and late 2030s)
the fact that EA types got hired to some of the most critical positions on AI was probably fairly critical in this timeline for preventing the worst outcomes from the intelligence curse from occurring.
In this timeline, a far more important thing is the sense among American political elite that they are freedom-loving people and that they should act in accordance with that, and a similar sense among Chinese political elite that they are a civilised people and that Chinese civilisational continuity is important. A few EAs in government, while good, will find it difficult to match the impact of the cultural norms that a country’s leaders inherit and that proscribe their actions.
For example: I’ve been reading Christopher Brown’s Moral Capital recently, which looks at how opposition to slavery rose to political prominence in 1700s Britain. It claims that early strong anti-slavery attitudes were more driven by a sense that slavery was insulting to Britons’ sense of themselves as a uniquely liberal people, than by arguments about slave welfare. At least in that example, the major constraint on the treatment of a powerless group of people seems to have been in large part the political elite managing its own self-image.
Maybe! My vague Claude-given sense is that the Moon is surprisingly poor in important elements though.
What elements is the moon poor in that are important for a robot economy?
This is a good point! However, more intelligence in the world also means we should expect competition to be tighter, reducing the amount of slack by which you can deviate from the optimal. In general, I can see plausible abstract arguments for the long-run equilibrium being either Hansonian zero-slack Malthusian competition or absolute unalterable lock-in.
I think the key crux is that the slack necessary to preserve a lot of values (assuming they are compatible with expansion at all) is so negligibly small compared to the resources of the AI economy that even very Malthusian competition doesn’t erode values down to what’s purely optimal for expansion, because it’s very easy to preserve your original values ~forever.
Some reasons for this are:
Very long-lived colonists fundamentally remove a lot of the ways human values have changed in the long run. While humans can change values across their lifetimes, it’s generally rare once you are past 25, and it’s very hard to persuade people, meaning most civilizational drift has been inter-generational; but with massively long-lived humans, AIs embodied as robots, or uploaded humans with designer bodies, you have basically removed most of the sources of value change.
I believe that replicating your values, or really anything else, will be so reliable that you could in theory, and probably in practice, make yourself immune to random drift in values for the entire age of the universe, due to error-correction tricks; a rough illustrative calculation is sketched below. This is described in more detail here: https://www.lesswrong.com/posts/QpaJkzMvzTSX6LKxp/keeping-self-replicating-nanobots-in-check#4hZPd3YonLDezf2bE
To continue the human example, we were created by evolution on genes, but within a lifetime, evolution has no effect on the policy and so even if evolution ‘wants’ to modify a human brain to do something other than what that brain does, it cannot operate within-lifetime (except at even lower levels of analysis, like in cancers or cell lineages etc); or, if the human brain is a digital emulation of a brain snapshot, it is no longer affected by evolution at all; and even if it does start to mold human brains, it is such a slow high-variance optimizer that it might take hundreds of thousands or millions of years… and there probably won’t even be biological humans by that point, never mind the rapid progress over the next 1-3 generations in ‘seizing the means of reproduction’ if you will. (As pointed out in the context of Von Neumann probes or gray goo, if you add in error-correction, it is entirely possible to make replication so reliable that the universe will burn out before any meaningful level of evolution can happen, per the Price equation. The light speed delay to colonization also implies that ‘cancers’ will struggle to spread much if they take more than a handful of generations.)
While persuasion will get better, and will eventually become incomprehensibly superhuman, it will almost certainly not be targeted towards purely expansionist values, except in a few cases.
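To make the error-correction point above a bit more concrete, here is a minimal back-of-the-envelope sketch. Every number in it (the raw per-copy corruption rate, the redundancy factor, and the count of replication events) is an illustrative assumption of mine, not a figure from the linked post or the scenario; the point is only to show the shape of the argument: modest redundancy with majority voting drives the expected number of uncorrected copying errors, even over astronomically many replications, toward zero, so the heritable variation that selection needs (per the Price equation) is essentially absent.

```python
# Back-of-the-envelope sketch with made-up numbers: how likely is any
# uncorrected value drift if replication uses redundant copies plus
# majority voting for error correction?

from math import comb

def uncorrected_error_prob(p_raw: float, k: int) -> float:
    """Chance that majority voting over k redundant copies still fails,
    given an independent per-copy corruption probability p_raw."""
    need = k // 2 + 1  # corrupted copies needed to outvote the correct ones
    return sum(
        comb(k, j) * p_raw**j * (1 - p_raw) ** (k - j)
        for j in range(need, k + 1)
    )

# Assumed (illustrative) parameters, not taken from the discussion:
p_raw = 1e-6          # raw corruption chance per copy per replication event
k = 21                # redundancy: 21 copies with majority vote
replications = 1e12   # replication events before the stars burn out

p_fail = uncorrected_error_prob(p_raw, k)
expected_drift = p_fail * replications

print(f"uncorrected error probability per replication: {p_fail:.2e}")
print(f"expected uncorrected errors over {replications:.0e} replications: {expected_drift:.2e}")
```

Under these made-up parameters the expected number of uncorrected errors stays dozens of orders of magnitude below one, which is the sense in which the universe burns out before any meaningful evolution can happen.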
I expect the US government to be competent enough to avoid being supplanted by the companies. I think politicians, for all their flaws, are pretty good at recognising a serious threat to their power. There’s also only one government but several competing labs.
(Note that the scenario doesn’t mention companies in the mid and late 2030s)
Maybe the companies have already been essentially brought under government control in canon, in which case the foregoing doesn’t matter (I believe you hint at that solution). But I think the crux is that I expect a lot of competence/state capacity to be lost in the next 10-15 years by default (though Trump is a shock here that accelerates the competence decline), and I expect the government to react only once a company can credibly automate everyone’s jobs. By that point I think it’s too easy to create an automated military that is unchallengeable by local governments, so the federal government would have to respond militarily, and I ultimately think that what does America in within this timeline (assuming the companies haven’t already been brought under government control) is its vetocratic aspects/vetocracy.
In essence, I think they will react too slowly and get OODA-looped by the companies.
Also, the persuasion capabilities are not to be underestimated here. Since you have mentioned that AIs have become better than all humans at persuasion by the 2030s, I’d expect even further improvements, in tandem with planning improvements, such that it’s very easy to convince the population that corporate governments are more legitimate than the US government.
In this timeline, a far more important thing is the sense among American political elite that they are freedom-loving people and that they should act in accordance with that, and a similar sense among Chinese political elite that they are a civilised people and that Chinese civilisational continuity is important. A few EAs in government, while good, will find it difficult to match the impact of the cultural norms that a country’s leaders inherit and that proscribe their actions.
For example: I’ve been reading Christopher Brown’s Moral Capital recently, which looks at how opposition to slavery rose to political prominence in 1700s Britain. It claims that early strong anti-slavery attitudes were more driven by a sense that slavery was insulting to Britons’ sense of themselves as a uniquely liberal people, than by arguments about slave welfare. At least in that example, the major constraint on the treatment of a powerless group of people seems to have been in large part the political elite managing its own self-image.
I was imagining more a few EAs in the companies like Anthropic or DeepMind, which do have the power to supplant the nation-state, making them as powerful as or more powerful than current nations in setting cultural norms; but if the companies are controlled by government so thoroughly that they don’t rebel, then I agree with you.
I agree unconditionally on what happened regarding China.