AGI will drastically increase economies of scale

Note to mods: I’m a bit uncertain whether posts like this one currently belong on the Alignment Forum. Please move this post if it doesn’t belong. Or if anyone would prefer not to have such posts on AF, please let me know.

In Strategic implications of AIs’ ability to coordinate at low cost, I talked about the possibility that different AGIs can coordinate with each other much more easily than humans can, by doing something like merging their utility functions together. It now occurs to me that another way for AGIs to greatly reduce coordination costs in an economy is by having each AGI, or copies of each AGI, profitably take over much larger chunks of the economy than companies currently own, and this can be done with AGIs that don’t even have explicit utility functions, such as copies of an AGI that are all corrigible/intent-aligned to a single person.

Today, many industries have large economies of scale, due to things like fixed costs, network effects, and reduced deadweight loss when monopolies in different industries merge (because they can internally charge each other prices equal to marginal costs). But coordination costs among humans increase super-linearly with the number of people involved (see Moral Mazes and Short Termism for a related recent discussion), creating diseconomies of scale that counterbalance the economies of scale, so companies tend to grow to a certain size and then stop. An AGI-operated company, however, where for example all the workers are AGIs intent-aligned to the CEO, would eliminate almost all of the internal coordination costs (i.e., all of the coordination costs caused by value differences, such as everything described in Moral Mazes, “market for lemons” problems or lost opportunities for trade due to asymmetric information, principal-agent problems, monitoring/auditing costs, costly signaling, and suboptimal Nash equilibria in general), allowing such companies to grow much bigger. In fact, purely from the perspective of maximizing the efficiency/output of an economy, I don’t see why it wouldn’t be best to have (copies of) one AGI control everything.
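To make the counterbalancing concrete, here is a toy model (the functional forms and coefficients are my own illustrative assumptions, not anything from the economics literature): gross output scales as n^1.1, human coordination costs grow super-linearly as c·n^1.5, and setting c to zero stands in for an AGI-run firm with no internal coordination costs.

```python
# Toy model: optimal firm size under super-linear coordination costs.
# All exponents and coefficients below are illustrative assumptions.

def profit(n, c, scale_exp=1.1, coord_exp=1.5):
    """Net output of a firm with n workers: economies of scale
    (n**scale_exp) minus coordination costs (c * n**coord_exp)."""
    return n**scale_exp - c * n**coord_exp

def optimal_size(c, sizes=range(1, 10_001)):
    """Firm size that maximizes net output over the search range."""
    return max(sizes, key=lambda n: profit(n, c))

human_firm = optimal_size(c=0.05)  # super-linear coordination costs
agi_firm = optimal_size(c=0.0)     # coordination costs eliminated

print(human_firm)  # interior optimum: growth stops at a finite size
print(agi_firm)    # hits the top of the search range: no internal limit
```

With any positive coordination coefficient the optimum is an interior firm size (here a few hundred workers); with the coefficient at zero, net output is monotonically increasing in n and the "optimal" firm is as large as the search range allows, matching the intuition that removing value-difference costs removes the internal brake on growth.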

If I’m right about this, it seems quite plausible that some countries will foresee it too, and as soon as it can feasibly be done, nationalize all of their productive resources and place them under the control of one AGI (perhaps intent-aligned to a supreme leader or to a small, highly coordinated group of humans). This would allow them to out-compete any other countries that are not willing to do the same (and don’t have some other competitive advantage to compensate for this disadvantage). This seems to be an important consideration that is missing from many people’s pictures of what will happen after (e.g., intent-aligned) AGI is developed in a slow-takeoff scenario.