Singleton: the risks and benefits of one world governments

Many thanks to all those whose conversations have contributed to forming these ideas.

Will the singleton save us?

For most of the large existential risks that we deal with here, the situation would be improved with a single world government (a singleton), or at least greater global coordination. The risk of nuclear war would fade, and pandemics would be met with a comprehensive global strategy rather than a mess of national priorities. Workable regulations for technological risks, such as synthetic biology or AI, would become at least conceivable. All in all, a great improvement in safety...

...with one important exception. A stable tyrannical one-world government, empowered by future mass surveillance, is itself an existential risk (it might not destroy humanity, but it would “permanently and drastically curtail its potential”). So to decide whether to oppose or advocate for more global coordination, we need to estimate how likely such a despotic government would be.

This is the kind of research I would love to do if I had the time to develop the relevant domain skills. In the meantime, I’ll just take all my thoughts on the subject and form them into a “proto-research project plan”, in the hope that someone can make use of them in a real research project. Please contact me if you want to do research on this and would fancy a chat.

Defining “acceptable”

Before we can talk about the likelihood of a good outcome, we need to define what a good outcome actually is. For this analysis, I will take the definition that:

  • A singleton regime is acceptable if it is at least as good as any developed democratic government of today.

This definition can be criticised for its conservatism, or its cowardice. Shouldn’t we be aiming to do much better than what we’re doing now? What about the inefficiency of current governments, the huge opportunity costs? Is this not a disaster in itself?

As should be evident from some of my previous posts, I don’t see loss of efficiency as a huge problem, and I see the push towards efficiency (in certain circumstances) as a huge risk. Others disagree, however, which is why I chose the above minimalist definition. There are many different moral and ethical systems out there, each having their own preferences for ideal governments and their own impressions as to how our current governments fall short. But the vast majority would agree that a democratic singleton would be better than a despotic one. So by choosing that definition, we elide a lot of the conflict over values and can spend most of our efforts on the probability of different outcomes.

A related problem is the usual “be careful what you wish for”. Many people might passionately desire a libertarian or socialist paradise, only to be ultimately disappointed when it arrives. We thus have to worry about two probabilities: the probability of reaching X paradise, and the probability that X paradise will actually be paradisiacal. This is strictly harder. And it should be obvious that those pushing for X paradise are the least reliable at assessing whether it will work out; so the investigation of specific outcomes should preferably be performed by people who have no interest in that outcome, but such people would be hard to motivate! Not to mention that there is a high risk of the whole debate collapsing into well-worn, passionate, but unresolvable current political conflicts.

It should also be noted that a democratic singleton is likely to be no more than a transitional state before the development of AI or other radical technologies, so this need not imply a loss of efficiency over the long term. Since the long term is even less predictable, and since most ethical systems care about it, we have another reason to lay aside the short-term differences between them.

The last reason to avoid getting more specific is that the problem is hard enough as it is. And specific details about the future trigger a whole host of biases that make reasoning less accurate (while often making it feel more accurate). Once you’ve got a good grasp on the odds of an acceptable democratic singleton vs tyranny, then you can start on (X paradise) vs not-(X paradise).

The lessons of history

History is the most obvious and important starting point. We’ve had many tyrannies and democracies and transitions between them, so this is a rich vein to be mined for insights. But which ones?

We should immediately lay aside our experience with popular revolutions: a surveillance-empowered global tyranny should be stable against such changes. But there are many transitions that didn’t involve popular revolutions.

Very relevant are examples of democracies backsliding into tyranny (the Fascist/Nazi takeovers, for instance, and many examples in South America or after decolonisation), despotic regimes improving themselves from the inside (the USSR and China after the deaths of Stalin and Mao, the fall of Robespierre, Glasnost), and half-democracies evolving towards fuller democracy (the USA and especially the UK during the 19th century, many post-war European regimes). The causes of internal coups and their ultimate consequences may also be worth studying.

These are big themes, and they need to be analysed systematically, not with the intent of producing a scorecard (“5-6: democracies lose on points!”), but to try to isolate the features that cause regimes to behave in certain ways. From eyeballing the examples available to me, my initial conclusions would be that dictatorships are not very stable systems of government (they should not be seen as attractors), that the period after a dramatic or violent transition is when regimes are particularly prone to get worse, and that after the first few generations of leaders, severe turns for the worse become unlikely. The potential for continual improvement in semi-democracies also seems quite impressive, though cultural factors are important. But this is a very informal picture, in need of systematisation.

Another approach is to take current trends in modern regimes, and see if history predicts whether they would extend. Greater centralisation of power seems a universal modern feature (countries gaining more power over regions/states, supranational organisations over countries; cities seem to be the only small political unit that endures well), and it would be interesting to get historians’ perspectives on the reasons for this. We can also try to assess the consequences of such centralisation: does it push towards or away from a bad outcome? We have to tread carefully here, as this enters into mind-killing current political debates, but centralisation has not yet caused the rise of tyrannical regimes (slippery slope arguments are separate; see later). We can similarly look at many other trends that affect governmental behaviour (these would be useful inputs to the models described in a later section).

Another important study area is surveillance, contrasting high-surveillance states (like the UK) with low-surveillance ones (everywhere else ;-) and estimating the magnitude and direction of the effect of such surveillance on a country’s regime. There are a lot of value-loaded studies on surveillance, though, so one should always focus on the actual consequences of surveillance rather than the predicted ones. As I recall, surveillance so far seems to have had very little impact: neither the feared repression, nor the promised crime reduction.

The future is not like the past, as a wise man once tautologised, and some features of a singleton will be genuinely novel. The most important is that it is, indeed, single: there will be no outside opponents, no competing regimes to act as rivals or safety valves. Would we have seen the increased democratisation of Britain in the 19th century without the existence of revolutionary France? Maybe it would have gone faster—or maybe it wouldn’t have happened at all. Would the eastern bloc regimes have collapsed without the mass exodus of their populations?

Thus another strand of research is to look at how isolated societies change. There are a few examples: Tokugawa Japan is the most obvious, but some periods of Chinese history fit as well, along with many other examples beyond my limited experience. There were times when the heart of the Roman Empire was functionally isolated from the outside world: the empire was neither expanding nor threatened, and Roman citizens were convinced that nothing of value or worry lay outside the boundaries of their world. Isolated villages are other case studies, with the added advantage of approximating the “ubiquitous surveillance” of a future singleton.

But none of those examples fit all that well. We don’t have any good examples of large, isolated and recent democracies, which is what we would really want. So though we can gain some insights from studying isolated societies, we should hold those insights more weakly than those gained from the previous historical studies.

Wise and biased experts

In fact we should hold all these insights quite weakly. History and political science are, by necessity, very uncertain disciplines. Without access to strong versions of the scientific method, they cannot give us the certainty that we’d need. Many of the insights in the field derive from expert opinion, and so we can (and must) use all the insights that we have about the reliability of such opinion. Expert disagreement is an important feature of many of these debates; we will need to use methods that allow us to resolve such disagreements, without presuming we know more than the experts or injecting our own biases into the debate. We would certainly need to increase our uncertainty in these areas, hopefully without losing the ability to draw conclusions.

Tetlock has looked specifically into the reliability of expert political insight, and has concluded that it’s not very good (though still better than AI predictions). His results further decompose to show that hedgehogs (who follow a single idea, or a small collection of ideas, that they know intimately) perform worse than foxes (who follow many different ideas weakly). We should aim to be like foxes, eschewing grand historical narratives, being open to being wrong, and trying to uncover multiple independent lines of evidence for important predictions. We should also pay attention to which subdisciplines seem more reliable than others.

Part of that discipline is consciously suppressing the influence of fictional narratives that have seeped into general consciousness (such as 1984). If historians are not to be trusted with their predictions, writers are even less reliable. Political narratives are problematic as well. Life is generally better today than it was forty years ago. The state is also larger, and society is less equal. The narratives that a larger state/​more inequality must lead to worse outcomes are therefore not self-evidently true. They may be true, but require further analysis. That means that the convincing-sounding arguments for either position are not enough: one needs to look at the evidence (while taking into account that informed, smart people disagree with us, and that we cannot simply dismiss their views without strong reasons to do so).

In fact, that might be the most important part of the historical approach: not necessarily giving firm probabilities, but awakening new possibilities. The fact that surveillance dictatorships might not be attractors was a revelation to me, and completely expanded my view of the potential future. Anyone investigating this further will similarly uncover strong preconceptions they didn’t know they had.

The Future, modelled

What is often elided in Tetlock’s work is that while foxes outperformed hedgehogs, some algorithms surpassed both. In cases of poor expert performance, algorithms and simple models can reach surprisingly good results. Any future prediction that isn’t purely qualitative will probably involve models, so mastering the use and misuse of models is important for this research.

At one end we have simple models that explain a lot from a little. Some examples are the supply and demand curves in microeconomics, some of the simpler macroeconomic models, Moore’s law, and similar. These simple models have their place, but are critically dependent on the insights that go into them: if their assumptions are questioned, they fall apart. Standard economic models are an example of highly successful models with questionable assumptions, but they have a lot of empirical evidence behind them (and still people are overconfident in their use). Nothing similar exists for political science models, and it’s unlikely that a new researcher could produce such a model that has been overlooked until now.

Another weakness of simple models is that they are often “attractor models”. They point to a particular state being an equilibrium of certain factors, and predict that that state will be reached. But this is less useful: there is no timeline for how long it will take to reach that state, and we expect technology to radically change human social and political conditions, before any long-term equilibrium is reached. These models might still be useful for generating ideas, though.

In my opinion, a better approach is to decompose the problem (always a wise idea in areas of uncertainty) and construct smaller models for each component, which can be calibrated independently on the data and then combined. This approach has weaknesses as well, the main one being that, through choices of decomposition and overfitting, we can end up with a model that says anything we want it to say: it could end up following the model-maker’s prejudices rather than anything else.

We should certainly be alive to that risk, but I don’t think giving up in despair is the correct answer. Instead we should do it, but do it right: find ways of doing the decomposition honestly, calibrating the components independently, and catching errors without massaging the data. We should get honest people trying to do a good job, but more importantly, we should establish procedures before the whole project starts, to try to minimise bias and maximise accuracy. This also has the advantage that we can then say “this is why our model is unbiased”, rather than “trust us, we did it in an unbiased way, because we’re so good”.
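
To make the decompose-calibrate-combine idea concrete, here is a minimal sketch in Python. Everything in it is a hypothetical placeholder: the sub-models, the features and the coefficients are invented purely for illustration, and in a real project would have to be calibrated on historical data.

```python
# A minimal sketch (not a real model) of the "decompose, calibrate independently,
# combine" approach. All sub-models, features and coefficients are hypothetical.
import math


def logistic(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def p_repression(surveillance: float, judicial_independence: float) -> float:
    """Hypothetical sub-model: chance the regime turns more repressive in a decade."""
    return logistic(0.9 * surveillance - 1.3 * judicial_independence)


def p_self_correction(press_freedom: float, leader_turnover: float) -> float:
    """Hypothetical sub-model: chance a bad turn gets corrected from the inside."""
    return logistic(1.1 * press_freedom + 0.6 * leader_turnover - 0.5)


def p_acceptable_singleton(features: dict) -> float:
    """Combine the independently calibrated sub-models into one overall estimate."""
    repress = p_repression(features["surveillance"], features["judicial_independence"])
    correct = p_self_correction(features["press_freedom"], features["leader_turnover"])
    # Acceptable if the regime never turns repressive, or corrects itself when it does.
    return (1 - repress) + repress * correct


if __name__ == "__main__":
    example = {"surveillance": 0.7, "judicial_independence": 0.6,
               "press_freedom": 0.8, "leader_turnover": 0.4}
    print(f"P(acceptable) ≈ {p_acceptable_singleton(example):.2f}")
```

The point of the structure, not the numbers: each sub-model can be checked and calibrated against its own slice of the historical record before anything is combined, which makes it harder to quietly tune the overall answer to one's prejudices.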

Apart from prediction, we can use these models to identify important features (which components seem to have the greatest influence on the outcome), vulnerability points (which small changes can make things dramatically worse), and which assumptions are the shakiest. A key point is to identify the enabling or dampening factors around slippery slopes. Politics is full of slippery slope arguments (the Official Secrets Act is the first step towards tyranny! Corporate media consolidation destroys democracy!), and it is vital to know whether such slippery slopes go all the way to the end, or can be countered by other trends, or be derailed or reinforced by technological or social changes.

This method complements rather than contrasts with the historical approach. Models with a causal structure (“X causes Y for these reasons”) that fit the data acceptably are superior to non-causal models, even if the latter fit the data better (fitting the data isn’t hard, given the number of possible curves and methods available, and a causal model may be able to spot a change in the underlying dynamic). The historical approach can suggest the causal structures of the models, which are then fit to the historical data. The division into training and test sets should be maintained here; one useful method is to fit on certain countries, and then test on others. This gives an estimate of the inherent uncertainty in our models, and can suggest further factors of importance to investigate (though overfitting looms at this point).
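
As an illustration of the country-based train/test split, here is a toy leave-one-country-out loop in Python. The data points and the single-threshold “model” are fabricated purely to show the procedure, and do not represent any real regimes.

```python
# Toy leave-one-country-out validation: fit on some countries, test on the one held out.
# The records and the threshold "model" are invented for illustration only.
records = [
    # (country, surveillance, became_repressive)
    ("A", 0.1, 0), ("A", 0.3, 0),
    ("B", 0.7, 1), ("B", 0.8, 1),
    ("C", 0.6, 0), ("C", 0.9, 1),
]


def fit_threshold(train):
    """Toy 'model': pick the surveillance threshold that best separates the outcomes."""
    best_t, best_err = 0.0, len(train) + 1
    for _, s, _ in train:
        err = sum((s2 >= s) != bool(y) for _, s2, y in train)
        if err < best_err:
            best_t, best_err = s, err
    return best_t


countries = sorted({c for c, *_ in records})
for held_out in countries:
    train = [r for r in records if r[0] != held_out]
    test = [r for r in records if r[0] == held_out]
    t = fit_threshold(train)
    errors = sum((s >= t) != bool(y) for _, s, y in test)
    print(f"held out {held_out}: threshold={t:.1f}, test errors={errors}/{len(test)}")
```

Even in this toy case, the model that fits the training countries perfectly can fail badly on the held-out one, which is exactly the kind of inherent uncertainty the country split is meant to expose.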

This cross-country approach can often resolve slippery slope questions as well: there will generally be countries that have stopped sliding down any particular slope (and sometimes countries that have slid all the way down), and we can use these to figure out the dampening factors. The most important question is rarely “does X lead to Y?”, but “in what circumstances does X lead to Y (and are those circumstances likely in the future)?” As mentioned before, the UK has remained politically acceptable despite the explosion of CCTV cameras. It would be important to understand why, and whether this will continue in the future.

If the conditional models are well implemented, they would also allow us to estimate the effect of specific changes caused by technology or exogenous shocks. For instance, what would happen if most of the legal profession were replaced by functional expert systems? Hopefully, we’d be able to give a good guess.

This is all very abstract: what kind of submodels are we talking about here? Ideally we’d want models that track the flow of power in institutions and societies, the impact of new technologies and cultural trends, the resilience of certain systems to random shocks, and so on.

These are just a few of the ideas that sprang immediately to mind:

  • how income distribution interacts with political freedom;
  • what kinds of legal rights are respected or denied in practice;
  • how regimes go into, and come out of, periods of greater repression;
  • when regimes possess the planning abilities and incentives to act in their own long-term interests;
  • what types of pressure can change regimes when everyone knows actual revolution is impossible;
  • whether there is a quantification of relative political power that is predictive of government repression;
  • how to model the resiliency of a chaotic democracy, up until the point that resiliency breaks;
  • how control and corruption change under centralisation;
  • how cultural trends and habits affect political power;
  • the interplay between the legal profession and general freedom;
  • whether increased surveillance will cause more repression or more toleration of marginal lifestyles;
  • how much tradition adds to the stability of regimes;
  • how political systems persuade those tempted to depose them to work within them instead;
  • and much more.

Most of these will need to be further broken into subproblems. And this should of course start with a thorough literature review; there’s no point reinventing the wheel, even if it is the wrong colour.

Scenario analysis is sometimes mentioned in a predictive context, allowing deep thinking about the consequences of certain specific changes. It’s ludicrously inept at forging a general overall picture, but it can be used to suggest new ideas and to break out of some preconceptions. In general, though, it seems simply to focus attention on a small number of factors, and to obscure what’s really going on.

The danger of (talking about) obvious improvements

There are many features that could—obviously or contentiously—improve the behaviour of any singleton world government. Complete transparency (not simply the government spying on its people, but the other way round), American-style freedom of speech, a more federated rather than centralised system, or the private ownership of guns: all of these have been suggested as important features to reduce the chance of governments growing tyrannical.

Who could object to the idea of (almost) complete transparency, for instance? How can you trust a government that operates in secrecy? Nevertheless, the research project should probably avoid tackling any of these issues. This is partially because of the political mind-killing aspect of it; the debate would soon degenerate.

But the main reason to avoid these problems is that arguing about them is easy (and tempting), while it is hard to show that the result of the argument is relevant at all. We are wondering whether pushing for a one-world singleton is worth it, comparing the existential risk reduction we’d get to the tyranny risk we’d incur. In that case, we should only talk about complete transparency if we’re willing to solve a question that’s twice as hard!

How so? Well, complete transparency is only relevant if we can show both of the following:

  1. The risk of a completely transparent singleton becoming tyrannical is lower than the xrisk reduction benefit we get.

  2. The risk of a non-completely transparent singleton becoming tyrannical is higher than the xrisk reduction benefit we get.

In other words, we can easily show that A is better than B; but we’re trying to compare both with X, and A > B tells us nothing about this. Worse, the distinction between A and B is only relevant if A > X > B (if not, then complete transparency isn’t relevant to whether we push for a singleton). Do not confuse better and worse with good and bad...
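
To illustrate with invented numbers: suppose the singleton buys an existential-risk reduction r, and the tyranny risks of the transparent and non-transparent versions are t_A and t_B. The transparency debate only bears on the “push for a singleton or not” decision when t_A < r < t_B, i.e. when it flips the verdict. The sketch below uses made-up values of r, t_A and t_B purely to show that logic.

```python
# Toy illustration of why "A is better than B" does not settle the question.
# r, t_A and t_B are invented numbers, not estimates.
def push_for_singleton(tyranny_risk: float, xrisk_reduction: float) -> bool:
    # Push only if the tyranny risk is smaller than the existential risk it removes.
    return tyranny_risk < xrisk_reduction


r = 0.10  # hypothetical existential-risk reduction from a singleton
cases = [(0.02, 0.05), (0.02, 0.20), (0.15, 0.30)]  # (t_A, t_B): tyranny risk with / without transparency
for t_a, t_b in cases:
    a, b = push_for_singleton(t_a, r), push_for_singleton(t_b, r)
    # Transparency is decision-relevant only when it flips the verdict, i.e. A > X > B.
    print(f"t_A={t_a}, t_B={t_b}: push if transparent={a}, push if not={b}, "
          f"transparency matters for the decision={a != b}")
```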

And to those who might be tempted to argue that a singleton government without a certain feature is intrinsically worthless, and must be opposed at all costs: there are probably democratic countries across the world that lack that feature. Is life in all those countries so intolerable that you would gladly see humanity extinguished rather than live in such a regime? And if the answer is no, then we’re back to discussing probabilities and relative tradeoffs.

A usual conclusion

As usual, more research is needed (since this is a proto-research project, the opposite would have been somewhat surprising). I’ve laid out what I think is a reasonable plan for tackling the singleton problem: defining a minimalist standard for an acceptable outcome, starting with historical analysis and moving on to model-building, trying to deal with all the relevant biases along the way, and avoiding exciting, contentious issues that are not relevant to the main question. The xrisk reduction from the singleton should be assessed separately.

But I am not a social scientist, so this approach can and should be much improved. This is an area where those of a social science bent can help! We need you in the FHI/Less Wrong community. If you or a friend of yours are in that category, I encourage you to take my preliminary plan, ruthlessly trim, expand or discard parts of it, and make it into something workable. And then, maybe, work on it!

The future is unwritten to our eyes; we need to lift the veil just enough to choose the course to follow.