Hi Felix! I’ve been thinking about the same topics for a while, and came to pretty much the opposite conclusions.
most humans, who do have some nonzero preference for being altruistic along with their other goals
No nononono. So many people make this argument, and it’s so wrong to me.
The thing is: altruistic urges aren’t the only “nonzero urges” that people have. People also have an urge to power, an urge to lord it over others. And for a lot of people it’s much stronger than the altruistic urge. So a world where most people are at the whim of “nonzero urges” of a handful of superpowerful people will be a world of power abuse, with maybe a little altruism here and there. And if you think people will have exit rights from the whims of the powerful, unfortunately history shows that it won’t necessarily be so.
advanced AI can plausibly allow you to make cheap, ultra-destructive weapons… until we hit a point where a few people are empowered to destroy the world at the expense of everyone else
I think we’ll never be at a point where a handful of people can defeat the strongest entities. Bioweapons are slow; drone swarms can be stopped by other drone swarms. I can’t imagine any weapon at all that would allow a terrorist cell to defeat an army of equal tech level. Well, maybe if you have a nanotech-ASI in a test tube, but we’re dead before then.
It is, however, possible for a handful of people to harm the strongest entities. And that state of affairs is desirable. When the powerful could exploit the masses with impunity in the past, they did so. But when firearms got invented, and a peasant could learn to shoot a knight dead, the masses became politically relevant. That’s basically why we have democracy now: the political power of the masses comes from their threat-value. (Not economic value! The masses were always economically valuable to the powerful. Without threat-value, that just leads to exploitation. You can be mining for diamonds and still be a slave.) So the only way the masses can avoid a world of total subjugation to the powerful in the future is by keeping threat-value. And for that, cheap offense-dominant weapons are a good thing.
Even though the U.S has unbelievable conventional military superiority to North Korea, for instance, the fact that they have nuclear weapons means that we cannot arbitrarily impose our preferences about how North Korea should act onto them… Roughly speaking, you can swap out “U.S” and “North Korea” with “Optimizers” and “Altruists”.
Making an analogy with altruism here is strange. North Korea is a horrifying oppressive regime. The fact that they can use the nuke threat to protect themselves, and their citizens have no analogous “gun” to hold to the head of their own government, is a perfect example of the power abuse that I described above. A world with big actors holding all threat-power will be a world of NKs.
But I don’t believe that inequality is intrinsically problematic from a welfare perspective: it’s far more important that the people at the bottom meet the absolute threshold for comfort than it is for a society’s Gini coefficient to be lower.
There’s a standard response to this argument: namely, inequality of money always tries to convert itself into inequality of power, through lobbying and media ownership and the like. Those at the bottom may have comfort, but that comfort will be short lived if they don’t have the power to ensure it. The “Gini coefficient of power” is the most important variable.
So yeah, to me these all converge on a pretty clear answer to your question. Concentration of power, specifically of threat-power and offense-power, would be very bad. Spreading it out would be good. That’s how the world looks to me.
It sounds like a big crux for this might be whether weapons would be offense-dominant enough to threaten the whole of society.
I think we’ll never be at a point where a handful of people can defeat the strongest entities. Bioweapons are slow; drone swarms can be stopped by other drone swarms
I basically think that this is very unlikely to be the case. It’s just more cost-effective to whip up offensive weapons than it is to reactively defend against them. In the basic case of a bioweapon, someone can covertly develop several pathogens at once, each of which needs to be met with its own specific vaccine and containment measures. All the onus is on the defender to predict the attacker, while the attacker can take their time and pool their resources wherever defenses are currently weakest. The attacker also only needs to succeed once, while the defender needs to succeed continuously.
With some advanced technologies, like mirror life or an unaligned ASI, there might simply not be any realistic reactive response. Just as it’s functionally impossible to stop another country from nuking you today if they really want to, I strongly suspect that future offensive technologies will put you in a MAD-style equilibrium. It’s just that the existentially threatening technologies that underlie MAD will be much cheaper to access in a world full of superhuman AI assistance.
It seems pretty plausible to me that a handful of people or a terrorist organization really could end the world if they put their minds to it, given access to general superintelligence. The easiest way might just be developing a misaligned ASI intentionally and having it run around causing maximum difficulty for the aligned ASIs that are paying the alignment tax.
When the powerful could exploit the masses with impunity in the past, they did so. But when firearms got invented, and a peasant could learn to shoot a knight dead, the masses became politically relevant. That’s basically why we have democracy now: the political power of the masses comes from their threat-value. (Not economic value! The masses were always economically valuable to the powerful. Without threat-value, that just leads to exploitation. You can be mining for diamonds and still be a slave.)
I think that the masses have always had a substantial amount of threat value, and that democracy won out not because they became more threatening but because their economic value increased so greatly during the industrial revolution. Countries like China developed gunpowder all the way back in the 9th century, but they remained a monarchy for a thousand years afterward. In fact, the country saw multiple successful peasant revolutions throughout its history (including the establishment of the Ming Dynasty) but no establishment of a democratic government, because there were no economic incentives for the new leaders to overhaul the system. It seems to me that there might be ways to preserve democracy that are not conditioned on the distribution of literal strategic power.
Can we really say that the U.S. is democratic today primarily because firearms are so common? If that were the case, why have its cultural neighbors, like Canada and European countries, arguably been more democratically stable? The West is democratic because economics rewarded it, and that incentive then led to a culture that valued democracy more or less for its own sake.
Making an analogy with altruism here is strange. North Korea is a horrifying oppressive regime. The fact that they can use the nuke threat to protect themselves, and their citizens have no analogous “gun” to hold to the head of their own government, is a perfect example of the power abuse that I described above. A world with big actors holding all threat-power will be a world of NKs.
This is a good point, and one I underconsidered. If you have a world where a handful of people go around allocating resources according to their preferences, you will end up with pockets of altruism and oppression in proportion to how strongly the power players prefer each. I don’t think I put enough weight on some of the players wanting actively bad outcomes.
I will say that NK is itself an example of a group that would be far better off if it had never been allowed to obtain its offense-dominant weapons. In a counterfactual world where the U.S. was the only state with nukes and had succeeded at ironclad nonproliferation, you would probably see a proper pax atomica. Your take seems to be that the solution to the existence of potential states like NK is to diffuse strategic power further so that individuals are empowered, but I think it provides the opposite lesson: that we should learn from the mistakes we made with nukes and actually constrain the spread of existentially powerful technologies as much as possible this time.
Allowing everyone unfettered access to ASI is pretty close to giving each citizen in North Korea a nuke: sure, the tyrannical government wouldn’t be effective, but now everyone is at the mercy of people with evil or irrational values.
--
Other assorted points:
Thanks for taking the time to write this stuff up! I try to spend time thinking through what’s going to be valuable and what we can afford to trade off on, and it’s good to get reality-checked.
I don’t personally hope or want the scenario in the essay to happen: it’s more of a thought experiment about where truly extreme concentration of power would be maximally bad. I think the scenario I listed would essentially avoid the worst outcomes of gradual disempowerment (the part where almost everyone’s resources fall below subsistence and they starve), because it could be kept in check ~permanently by a commitment to MAD from a wealthy philanthropist, but this clearly isn’t an ideal way to distribute resources from a welfare perspective.
There’s probably something of a middle ground we both agree with here: something like Seb Krier’s Bargaining at Scale, where most people are empowered with individual AI representatives and the government intervenes only to enforce contracts and preserve basic rights. I think that this sort of solution would mostly eliminate the black ball problem I was focused on, while still preserving most of the upside from tech diffusion.