ETA: By “information asymmetry” did you mean something more like the fact that different copies or parts of an AGI can have access to different information and it can be costly (on a technical level) to propagate that information across the whole AGI? If so that seems like a much smaller cost than the kind of cost from “asymmetric information” that I’m talking about. Also it seems like it would be good to use a different phrase to talk about what you mean, so people don’t confuse it with the concept of “asymmetric information” in economics.
Yes, that’s right, though I can see how it’s confusing based on the economics literature. Any suggestions for an alternative phrase? I was considering “communication costs”, but there could also be costs from the fact that different parts have different competencies.
It’s not clear to me that principal-agent costs are more important than the ones I’m talking about here. My experience of working in large companies is not that I was misaligned with the company, it was that the company’s “plan” (to the extent that one existed) was extremely large and complex and not something I could easily understand. It could be that this is actually the most efficient way to work even with intent-aligned agents, since communicating the full plan could involve very large communication costs.
(I agree that the Moral Mazes arguments are primarily about principal-agent problems, but I don’t know how much to believe Moral Mazes.)
This seems right to me, so I think that, contra Drexler, this is another reason to expect a strong competitive pressure to move from CAIS to AGI.
Seems reasonable, though I don’t think it is arguing against the main arguments in favor of CAIS (which to me are that CAIS seems more technically feasible than AGI).
With AGI-operated companies, these problems become smaller because monopolies in different industries can merge without being limited by internal coordination costs, and the merged firms’ divisions can charge each other efficient internal prices.
I don’t see how this suggests that our existing institutions to prevent centralization of power will go away, since even now monopolies could merge, often want to merge, but are prevented by law from doing so. (Though I’m not very confident in this claim, I’m mostly parroting back things I’ve heard.)
In the limit of a single AGI controlling the whole economy, all such inefficiencies go away.
Right, but that requires government buy-in, which is exactly my model of risk in the opinion I wrote.
On second thought, part of the reason for such institutions to exist must also be domestic political pressure (from people who are afraid of too much concentration of power), so at least that pressure would persist in countries where such pressure exists or has much force in the first place.
Yeah, that’s my primary model here. I’d be surprised but not shocked if competition between countries explained most of the effect.
Yeah, that’s my primary model here. I’d be surprised but not shocked if competition between countries explained most of the effect.
It seems worth noting here that when it looked for a while like the planned economy of the Soviet Union might outperform western free market economies (and even before that, when many intellectuals just thought based on theory that central planning would perform better *), there were a lot of people in the west who supported switching to socialism / central planning. Direct military competition (which Carl’s paper focuses on more) would make this pressure even stronger. So if one country switches to the “one AGI controls everything” model (either deliberately or due to weak/absent existing institutions that work against centralization), it seems hard for other countries to hold out in the long run.
Does that seem right to you, or do you see things turn out a different way (in the long run)?
(* I realize this is also a cautionary tale about using theory to predict the future, like I’m trying to do now.)
Does that seem right to you, or do you see things turn out a different way (in the long run)?
I agree that direct military competition would create such a pressure.
I’m not sure that absent that there actually is competition between countries—what are they even competing on? You’re reasoning as though they compete on economic efficiency, but what causes countries with lower economic efficiency to vanish? Perhaps in countries with lower economic efficiency, voters tend to put in a new government—but in that case it seems like really the competition between countries is on “what pleases voters”, which may not be exactly what we want but it probably isn’t too risky if we have an AGI-fueled government that’s intent-aligned with “what pleases voters”.
(It’s possible that you get politicians who look like they’re trying to please voters but once they have enough power they then serve their own interests, but this looks like “the government gains power, and the people no longer have effective control over government”.)
I’m not sure that absent that there actually is competition between countries—what are they even competing on? You’re reasoning as though they compete on economic efficiency, but what causes countries with lower economic efficiency to vanish?
I guess ultimately they’re competing to colonize the universe, or be one of the world powers that have some say in the fate of the universe? Absent military conflict, the less efficient countries won’t disappear, but they’ll fall increasingly behind in control of resources and overall bargaining power, and their opinions just won’t be reflected much in how the universe turns out.
In that case this model would only hold if governments:
1. Actually think through the long-term implications of AI
2. Think about this particular argument
3. Have enough certainty in this argument to actually act upon it
Notably, there aren’t any feedback loops for the thing-being-competed-on, and so natural-selection style optimization doesn’t happen. This makes me much less likely to believe in arguments of the form “The thing-being-competed-on will have a high value, because there is competition”—the mechanism that usually makes that true is natural selection or some equivalent.
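To make the feedback-loop point concrete, here is a toy simulation (the dynamics, population size, and mutation rate are all invented for illustration). When the competed-on trait determines which agents survive and get copied, its average climbs; when survival is unrelated to the trait, the average just stays where it started:

```python
import random

def simulate(trait_affects_survival, generations=200, n=50, seed=0):
    """Toy natural-selection model: each agent carries a 'trait' in [0, 1].

    Every generation, half the population is replaced by mutated copies
    of the survivors. Only when the trait feeds back into survival does
    competition actually optimize it.
    """
    rng = random.Random(seed)
    traits = [rng.random() for _ in range(n)]
    for _ in range(generations):
        if trait_affects_survival:
            # Feedback loop: higher trait -> survival.
            survivors = sorted(traits)[n // 2:]
        else:
            # No feedback: survival is random, unrelated to the trait.
            survivors = rng.sample(traits, n // 2)
        # Survivors reproduce with small mutations, clamped to [0, 1].
        children = [min(1.0, max(0.0, t + rng.gauss(0, 0.05)))
                    for t in survivors]
        traits = survivors + children
    return sum(traits) / len(traits)

print("with feedback:   ", round(simulate(True), 2))   # climbs toward 1.0
print("without feedback:", round(simulate(False), 2))  # stays near 0.5
```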
I think I oversimplified my model there. Actually competing to colonize/influence the universe will be the last stage, when the long-term implications of AI and of this particular argument will already be clear. Before that, the dynamics would be driven more by things like internal political and economic processes (some countries already have authoritarian governments and would naturally gravitate towards more centralization of power through political means, and others do not have strong laws/institutions to prevent centralization of the economy through market forces), competition for power (such as diplomatic and military power) and prestige (both of which are desired by leaders and voters alike) on the world stage, and direct military conflicts.
All of these forces create pressure towards greater AGI-based centralization, while the only thing pushing against it appears to be political pressure in some countries against centralization of power. If those countries succeed in defending against centralization but fall significantly behind in economic growth as a result, they will end up not influencing the future of the universe much so we might as well ignore them and focus on the others.
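For a rough sense of how fast “falling behind” compounds, here is a toy calculation (the 5% and 3% growth rates are arbitrary placeholders; any persistent gap produces the same shape):

```python
# Two economies start equal; one grows slightly faster. The slower
# economy's share of total resources (a crude proxy for bargaining
# power) shrinks toward zero.
def resource_share(g_fast, g_slow, years):
    fast = (1 + g_fast) ** years
    slow = (1 + g_slow) ** years
    return slow / (fast + slow)

for years in (10, 50, 100, 200):
    print(f"{years:3} years: {resource_share(0.05, 0.03, years):.0%}")
# 10 years: 45%, 50 years: 28%, 100 years: 13%, 200 years: 2%
```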
Yes, that’s right, though I can see how it’s confusing based on the economics literature. Any suggestions for an alternative phrase? I was considering “communication costs”, but there could also be costs from the fact that different parts have different competencies.
This is longer, but maybe “coordination costs that are unrelated to value differences”?
It’s not clear to me that principal-agent costs are more important than the ones I’m talking about here. My experience of working in large companies is not that I was misaligned with the company, it was that the company’s “plan” (to the extent that one existed) was extremely large and complex and not something I could easily understand. It could be that this is actually the most efficient way to work even with intent-aligned agents, since communicating the full plan could involve very large communication costs.
If companies had fully aligned workers and managers, they could adopt what Robin Hanson calls the “divisions” model where each division works just like a separate company except that there is an overall CEO that “looks for rare chances to gain value by coordinating division activities” (such as, in my view, having divisions charge each other efficient internal prices instead of profit-maximizing prices), so you’d still gain efficiency as companies merge or get bigger through organic growth. In other words, coordination costs that are unrelated to value differences won’t stop a single AGI controlling all resources from being the most efficient way to organize an economy.
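The gain from efficient internal prices here is essentially the textbook “double marginalization” result. The sketch below uses a standard linear-demand toy model with made-up numbers (nothing in it comes from this exchange): two monopolies at successive stages each add their own markup, so merging them, or equivalently letting divisions trade at marginal cost, raises total profit and lowers the consumer price at the same time.

```python
# Toy double-marginalization example (standard setup, numbers assumed):
# linear demand P = a - Q, upstream marginal cost c.
a, c = 10.0, 2.0

# Separate monopolies: upstream picks the wholesale price w, then the
# downstream monopolist marks up again on top of w.
w = (a + c) / 2                 # upstream's optimal wholesale price = 6
q_sep = (a - w) / 2             # downstream's optimal quantity      = 2
p_sep = a - q_sep               # consumer price                     = 8
profit_sep = (w - c) * q_sep + (p_sep - w) * q_sep    # 8 + 4 = 12

# Merged firm, i.e. divisions trading at marginal cost c:
q_int = (a - c) / 2             # quantity       = 4
p_int = a - q_int               # consumer price = 6 (lower)
profit_int = (p_int - c) * q_int                      # 16 (higher)

print(f"separate: price={p_sep}, total profit={profit_sep}")
print(f"merged:   price={p_int}, total profit={profit_int}")
```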
While searching for that post, I also came across Firm Inefficiency, which, like Moral Mazes (but much more concisely), lists many inefficiencies that seem all or mostly related to value differences.
Seems reasonable, though I don’t think it is arguing against the main arguments in favor of CAIS (which to me are that CAIS seems more technically feasible than AGI).
I think it’s at least one of the main arguments that Eric Drexler makes, since he wrote this in his abstract:
Perhaps surprisingly, strongly self-modifying agents lose their instrumental value even as their implementation becomes more accessible, while the likely context for the emergence of such agents becomes a world already in possession of general superintelligent-level capabilities.
(My argument says that a strongly self-modifying agent will improve faster than a self-improving ecosystem of CAIS with access to the same resources, because the former won’t suffer from principal-agent costs while researching how to self-improve.)
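As a hedged way to turn that parenthetical into arithmetic (the exponential form and the overhead parameter are my assumptions, not anything stated above): if principal-agent frictions tax the CAIS ecosystem’s research effort at a constant rate, the unified agent’s lead compounds without bound.

```python
import math

# Assumed toy model: both systems self-improve exponentially at the
# same underlying research rate r, but the CAIS ecosystem loses a
# constant fraction tau of its effort to principal-agent frictions
# between its component services. Values of r and tau are placeholders.
r, tau = 0.5, 0.1

def capability(t, overhead):
    return math.exp(r * (1 - overhead) * t)

for t in (1, 5, 10, 20):
    lead = capability(t, 0.0) / capability(t, tau)    # = e^(r*tau*t)
    print(f"t={t:2}: unified agent ahead by {lead:.2f}x")
# A constant overhead, however small, compounds into an unbounded lead.
```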
I don’t see how this suggests that our existing institutions to prevent centralization of power will go away, since even now monopolies could merge, often want to merge, but are prevented by law from doing so. (Though I’m not very confident in this claim, I’m mostly parroting back things I’ve heard.)
Yeah, I’m not very familiar with this either, but my understanding is that such mergers are only illegal if the effect “may be substantially to lessen competition” or “tend to create a monopoly”, which technically (it seems to me) isn’t the case when existing monopolies in different industries merge.
If companies had fully aligned workers and managers, they could adopt what Robin Hanson calls the “divisions” model where each division works just like a separate company except that there is an overall CEO that “looks for rare chances to gain value by coordinating division activities”
Once you switch to the “divisions” model your divisions are no longer competing with other firms, and all the divisions live or die as a group. So you’re giving up the optimization that you could get via observing which companies succeed / fail at division-level tasks. I’m not sure how big this effect is, though I’d guess it’s small.
While searching for that post, I also came across Firm Inefficiency, which, like Moral Mazes (but much more concisely), lists many inefficiencies that seem all or mostly related to value differences.
Yeah, I’m more convinced now that principal-agent issues are significantly larger than other issues.
I think it’s at least one of the main arguments that Eric Drexler makes, since he wrote this in his abstract
Yeah, I agree it’s an argument against that argument from Eric. I forgot that Eric makes that point (mainly because I have never been very convinced by it).
Yeah, I’m not very familiar with this either, but my understanding is that such mergers are only illegal if the effect “may be substantially to lessen competition” or “tend to create a monopoly”, which technically (it seems to me) isn’t the case when existing monopolies in different industries merge.
My guess would be that the spirit of the law would apply, and that would be enough, but really I’d want to ask a social scientist or lawyer.
Once you switch to the “divisions” model your divisions are no longer competing with other firms, and all the divisions live or die as a group.
Why? Each division can still have separate profit-loss accounting, so you can decide to shut one down if it starts making losses and the benefits it provides to the rest of the company don’t outweigh those losses. The latter may be somewhat tricky to judge, though. Perhaps that’s what you meant?
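Stated as a one-line rule (the names and numbers are just illustrative): keep a loss-making division whenever its spillover value to the rest of the firm covers the loss.

```python
def should_shut_down(division_profit, spillover_to_rest_of_firm):
    # Shut down only if the division's own losses exceed the benefits
    # it provides to the rest of the company. Estimating the spillover
    # term is the tricky part noted above.
    return division_profit + spillover_to_rest_of_firm < 0

print(should_shut_down(-5.0, 8.0))   # False: loss-making but worth keeping
print(should_shut_down(-5.0, 2.0))   # True: spillover doesn't cover the loss
```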
Yeah, I’m more convinced now that principal-agent issues are significantly larger than other issues.
I should perhaps mention that I still have some uncertainty about this, mainly because Robin Hanson said “There are many other factors that influence coordination, after all; even perfect value matching is consistent with quite poor coordination.” But I haven’t been able to find any place where he wrote down what those other factors are, nor did he answer when I asked him about it.
Why? Each division can still have separate profit-loss accounting, so you can decide to shut one down if it starts making losses and the benefits it provides to the rest of the company don’t outweigh those losses. The latter may be somewhat tricky to judge, though. Perhaps that’s what you meant?
That’s a good point. I was imagining that each division ends up becoming a monopoly in its particular area due to the benefits of within-firm coordination, which means that even if the division is inefficient there isn’t an alternative that the firm can go with. But that was an assumption, and I’m not sure it would actually hold.