Thinking more about the singleton risk / global stable totalitarian government risk from Bostrom’s Superintelligence, human factors, and theory of the firm.
Human factors represent human capacities or limits that are unlikely to change in the short term. For example, the number of people one can “know” (for some definition of that term), limits to long-term and working memory, etc.
Theory of the firm tries to answer “why are economies markets but businesses autocracies” and related questions. I’m interested in the subquestion of “what factors govern the upper bound on coordination for a single business”, related to “how big can a business be”.
I think this is related to “how big can an autocracy (robustly/stably) be”, which is how it relates to the singleton risk.
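The coordination bound above can be sketched with a toy model (my own illustration, not anything from theory-of-the-firm literature): assume human factors cap how many direct reports one person can effectively manage (the span of control), and cap how many hierarchy levels can exist before control degrades. Both numbers here are made up for illustration.

```python
# Toy model: hierarchical coordination bound.
# Assumptions (hypothetical, for illustration only):
#  - each person can effectively manage at most `span` direct reports
#    (a human-factors limit, Dunbar-style);
#  - the hierarchy degrades past `depth` levels of delegation.
def max_org_size(span: int, depth: int) -> int:
    """Max people coordinated: 1 + span + span^2 + ... + span^depth."""
    return sum(span ** level for level in range(depth + 1))

# Better communication/coordination tech effectively raises `span`
# (or sustainable `depth`), and the bound grows geometrically:
print(max_org_size(span=7, depth=4))    # pre-tech bound
print(max_org_size(span=10, depth=4))   # same depth, larger span
```

Under this sketch, a modest increase in span of control multiplies the maximum coordinated size, which is why coordination technology matters so much to both the firm-size and autocracy-size questions.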
Some thoughts this produces for me:
Communication and coordination technologies (telephones, email, etc.) that increase the upper bound on coordination for businesses ALSO increase the upper bound on coordination for autocracies/singletons
My belief is that the current max size (in people) of a singleton is much lower than current global population
This weakly suggests that a large global population is a good preventative against a singleton
I don’t think this means we can “war of the cradle” our way out of singleton risk, given how fast tech moves and how slow population moves
I think this does mean that any non-extinction event that dramatically reduces population also dramatically increases singleton risk
I think that it’s possible to get a long-term government aligned with the values of the governed, and “singleton risk” is the risk of an unaligned global government
So I think I’d be interested in tracking two “competing” technologies (for a hand-wavy definition of the term)
communication and coordination technologies—tools which increase the maximum effective size of coordination
soft/human alignment technologies—tools which increase alignment between government and governed
Did Bostrom ever call it singleton risk? My understanding is that it’s not clear that a singleton is more of an x-risk than its negative: a liberal multipolar situation in which many kinds of defecting/carcony factions can continuously arise.
I don’t know if he used that phrasing, but he’s definitely talked about the risks (and advantages) posed by singletons.