(I assume you’re asking “why isn’t it much less bad than AI takeover” as opposed to “isn’t it almost as bad as AI takeover, like 98% as bad”.)
I care most about the long-run utilization of cosmic resources, so this dominates my thinking about this sort of question. I think it’s very easy for humans to use cosmic resources poorly from my perspective, and I think this is more likely if resources are controlled by an autocratic regime, especially one where a single person holds most of the power (which seems reasonably likely for a post-AGI CCP). In other words, I think it’s pretty easy to lose half the value of the long-run future (or more) depending on which humans are in control and how this goes.
I’ll compare the CCP having full control to broadly democratic human control (e.g., most cosmic resources are controlled by some kind of democratic system or are auctioned off while democracy is retained).
We could break this down into the likelihood of careful reflection and then how much this reflection converges. I think control by an autocratic regime makes reflection less likely, and that selection effects around who controls the CCP are bad, making post-reflection convergence worse (and it’s unclear to me how much I expect reflection to converge or be reasonable in general).
Additionally, I think having a reasonably large number of people hold power substantially improves the situation, due to the potential for trade (e.g., many people might not care much at all about long-run resource use in faraway galaxies, but this is by far the dominant source of value from my perspective!) and the beneficial effects on epistemics/culture (though this is less clear).
Part of this is that I think pretty crazy considerations might be very important to having the future go close to as well as it could (e.g., acausal trade), and this requires certain combinations of values and epistemics which aren’t obviously going to happen.
This analysis assumes that the AI that takes over is a pure paperclipper which doesn’t care about anything else. Taking into account that the AI that takes over might have better values doesn’t make a big difference to the bottom line, but the fact that AIs that take over might be more likely than (e.g.) the CCP to do things like acausal trade makes “human autocracy” look relatively worse compared to AI takeover.
See also “Human takeover might be worse than AI takeover”, though I disagree with its values comparisons. (I think AIs that take over are much more likely than that post implies to have values that I care about very little.)
As far as outcomes for currently alive humans, I think full CCP control is maybe like 15% as bad as AI takeover, relative to broadly democratic human control. AI takeover maybe kills a bit over 50% of humans in expectation, while full CCP control maybe kills something like 5% of humans in expectation (including outcomes that are as bad as death, such as people being greatly modified), but full CCP control also has some chance of imposing terrible outcomes on the remaining people which are still better than death.
Partial CCP control probably looks much less bad?
None of this is to say we shouldn’t cooperate on AI with China and the CCP. I think cooperation on AI would be great, and I also think that if the US (or an AI company) ended up being extremely powerful, that actor shouldn’t violate Chinese sovereignty.