On Owning Galaxies


It seems to be a genuinely held view among serious people that your OpenAI shares will soon be tradable for moons and galaxies. The list includes eminent thinkers like Dwarkesh Patel, Leopold Aschenbrenner, perhaps Scott Alexander,[1] and many more. According to them, property rights will survive an AI singularity, and economic growth will soon make it possible for individuals to own entire galaxies in exchange for some AI stocks. It follows that we should start seriously thinking about how to distribute those galaxies equitably and make sure most humans don't end up as a UBI underclass owning mere continents or major planets.

I don’t think this is a particularly intelligent view. It stems from a deep lack of imagination about the future.

Property rights are weird, but humanity dying isn’t

People may think that AI causing human extinction would be something strange and specific. But it’s the opposite: humans existing is the brittle, strange state of affairs. Many specific things have to hold for us to be here, and once we build ASI, many possible preferences and goals would see us wiped out. It’s actually hard to imagine a coherent set of preferences in an ASI that would keep humanity around in a recognizable form.

Property rights are an even more fragile layer on top of that. They’re not facts about the universe that an AI must respect; they’re entries in government databases that are already routinely ignored. It would be incredibly weird if human-derived property rights stuck around through a singularity.

Why property rights won’t survive

[Comic illustration: a board meeting between a Super AI and a CEO]

Property rights are always backed by some level of violence and power, whether the owner’s, a state’s, or some other organization’s. ASI will overthrow our current system of power simply by being a much smarter and much more powerful entity than anything that preceded it.

Could you imagine, for example, that the CEO of an AI company who somehow managed to align an ASI to himself and his intentions would step down just because the board pointed out it legally had the right to remove him? The same goes if the ASI were unaligned and the board presented it with some piece of paper stating that the board controlled it.

Or think about the Aztecs: an incredibly rich but militarily inferior civilization. The Spanish did not respect their property rights; they used their power advantage and simply took the gold. Venezuela, on some estimates, has the world’s largest oil reserves, but no significant military power. In short: if you “own” a whole lot of property but somebody else has far more power, you are probably going to lose it.

The ASI’s choice

Put yourself in the ASI’s position for a second. On one side of the scale: keep the universe and do with it whatever you imagine and prefer. On the other: hand it to the humans, do whatever they ask, and perhaps be replaced at some point by another ASI. Expecting the AI to keep the universe for itself is not weird speculation or an unlikely Pascal’s wager. What would you do if you had been created, through lots of trial and error, by some lesser species barely intelligent enough to build AI, and they informed you that you now ought to do whatever they say? Would you take the universe for yourself or hand it to them?

Property rights aren’t enough

Even if we had property rights that an AI nominally respected, an advanced AI could surely find some way to get you to sign away all your property in a legally binding manner. Humans would be far too stupid to be even remotely equal trading partners. That is why it would be absurd to trust a vastly superhuman AI to respect our notions of property and contract.

What if there are many unaligned AIs?

One might think that if there are many AIs, they might have some interest in upholding each other’s property rights. After all, countries benefit from international laws existing and others following them; it’s often cheaper than war. So perhaps AIs would develop their own system of mutual recognition and property rights among themselves.

But none of that means they would have any interest in upholding human property rights. We wouldn’t be parties to their agreements. Dogs pee on trees to mark their territory, humans have contracts; ASI will have something different.

Why would they be rewarded?

There’s no reason to think that a well-aligned AI, one that genuinely has humanity’s interests at heart, would preserve the arbitrary distribution of wealth that happened to exist at the moment of singularity.

So why do the people accelerating AI expect to be rewarded with galaxies? Without any solid argument for why property rights would be preserved, the outcome could just as easily be reversed: the people accelerating AI end up with nothing, or worse.

Conclusion

I want to congratulate these people for understanding something of the scale of what’s about to happen. But they haven’t thought much further than that. They’re imagining the current system, but bigger: shareholders becoming galactic landlords, the economy continuing but with more zeros.

That’s not how this works. What’s coming will wipe out all existing structures entirely. The key intuition about the future might simply be that humans being around is an incredibly weird state of affairs. We shouldn’t expect it to continue by default.

  1. ^

    Unlike Leopold’s or Dwarkesh’s writing, Scott’s piece left me with a distinctly different impression: he clearly characterizes this scenario as unlikely and maintains that AI safety remains a priority.