Dath Ilani Rule of Law

Minor spoilers for mad investor chaos and the woman of asmodeus (planecrash Book 1).

Also, be warned: citation links in this post link to a NSFW subthread in the story.

Criminal Law and Dath Ilan

When Keltham was very young indeed, it was explained to him that if somebody old enough to know better were to deliberately kill somebody, Civilization would send them to the Last Resort (an island landmass that another world might call ‘Japan’), and that if Keltham deliberately killed somebody and destroyed their brain, Civilization would just put him into cryonic suspension immediately.

It was carefully and rigorously emphasized to Keltham, in a distinction whose tremendous importance he would not understand until a few years later, that this was not a threat. It was not a promise of conditional punishment. Civilization was not trying to extort him into not killing people, into doing what Civilization wanted instead of what Keltham wanted, based on a prediction that Keltham would obey if placed into a counterfactual payoff matrix where Civilization would send him to the Last Resort if and only if he killed. It was just that, if Keltham demonstrated a tendency to kill people, the other people in Civilization would have a natural incentive to transport Keltham to the Last Resort, so he wouldn’t kill any others of their number; Civilization would have that incentive to exile him regardless of whether Keltham responded to that prospective payoff structure. If Keltham deliberately killed somebody and let their brain-soul perish, Keltham would be immediately put into cryonic suspension, not to further escalate the threat against the more undesired behavior, but because he’d demonstrated a level of danger to which Civilization didn’t want to expose the other exiles in the Last Resort.

Because, of course, if you try to make a threat against somebody, the only reason why you’d do that, is if you believed they’d respond to the threat; that, intuitively, is what the definition of a threat is.

It’s why Iomedae can’t just alter herself to be a kind of god who’ll release Rovagug unless Hell gets shut down, and threaten Pharasma with that; Pharasma, and indeed all the other gods, are the kinds of entity who will predictably just ignore that, even if that means the multiverse actually gets destroyed. And then, given that, Iomedae doesn’t have an incentive to release Rovagug, or to self-modify into the kind of god who will visibly inevitably do that unless placated.

Gods and dath ilani both know this, and have math for defining it precisely.

Politically mainstream dath ilani are not libertarians, minarchists, or any other political species that the splintered peoples of Golarion would recognize as having been invented by some luminary or another. Their politics is built around math that Golarion doesn’t know, and can’t be predicted in detail without that math. To a Golarion mortal resisting government on emotional grounds, “Don’t kill people or we’ll send you to the continent of exile” and “Pay your taxes or we’ll nail you to a cross” sound like threats just the same—maybe one sounds better-intentioned than the other, but they both sound like threats. It’s only a dath ilani, or perhaps a summoned outsider forbidden to convey their alien knowledge to mortals, who’ll notice the part where Civilization’s incentive for following the exile conditional doesn’t depend on whether you respond to exile conditionals by refraining from murder, while the crucifixion conditional is there because of how the government expects Golarionites to respond to crucifixion conditionals by paying taxes. There is a crystalline logic to it that is not like yielding to your impulsive angry defiant feelings of not wanting to be told what to do.

The dath ilani built Governance in a way more thoroughly voluntarist than Golarion could even understand without math, not (only) because those dath ilani thought threats were morally icky, but because they knew that a certain kind of technically defined threat wouldn’t be an equilibrium of ideal agents; and it seemed foolish and dangerous to build a Civilization that would stop working if people started behaving more rationally.

--Eliezer Yudkowsky, planecrash
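One way to operationalize the distinction the excerpt is drawing (this framing, the function name, and all payoff numbers are mine, purely illustrative): a conditional ‘if you do X, I’ll do Y’ is a threat in the technical sense only if carrying out Y is worse for the conditioning party than refraining, holding the target’s behavior fixed; otherwise the conditional just reports a natural incentive.

```python
def is_threat(payoff_if_executed: float, payoff_if_refrained: float) -> bool:
    """A conditional 'if you do X, I will do Y' counts as a threat (in
    the technical sense above) when executing Y is worse for the
    conditioning party than refraining, holding the target's behavior
    fixed -- so the only point of announcing it is to change that
    behavior. Payoffs are the conditioning party's own; the numbers
    below are invented for illustration."""
    return payoff_if_executed < payoff_if_refrained

# Exiling a demonstrated killer: Civilization prefers the exile for its
# own sake (it protects everyone else), deterrence or no deterrence.
print(is_threat(payoff_if_executed=1.0, payoff_if_refrained=-1.0))   # False: incentive

# 'Pay your taxes or we'll nail you to a cross': crucifying a non-payer
# is costly and collects nothing by itself; it only pays through its
# predicted effect on taxpayers' behavior.
print(is_threat(payoff_if_executed=-1.0, payoff_if_refrained=0.0))   # True: threat
```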

“The United States Does Not Negotiate With Terrorists”

I think the idea Eliezer is getting at here is that responding to threats incentivizes threats. Good decision theories, then, precommit to never caving in to threats made to influence you, even when caving would be the locally better option, so as to eliminate the incentive to make those threats in the first place. Agents that have made that precommitment will be left alone, while agents that haven’t can be bullied by threateners. So the second kind of agent will want to patch their decision theory accordingly, thereby self-modifying into the first kind of agent.
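As a toy model of why that precommitment pays (all function names, gains, and costs here are invented): suppose a threatener would gain something if the target caves, must pay a cost to carry out the threat otherwise, and only issues threats with positive expected value. Then an agent whose probability of caving is known to be zero is simply never worth threatening:

```python
def threatener_ev(p_cave: float, gain_if_cave: float, cost_to_execute: float) -> float:
    """Expected value to a threatener of issuing a threat, assuming the
    target caves with probability p_cave and the threat must otherwise
    be carried out at cost cost_to_execute."""
    return p_cave * gain_if_cave - (1.0 - p_cave) * cost_to_execute

def gets_threatened(p_cave: float) -> bool:
    """A rational threatener only threatens when doing so has positive
    expected value. Gain and cost figures are invented."""
    return threatener_ev(p_cave, gain_if_cave=10.0, cost_to_execute=2.0) > 0

print(gets_threatened(p_cave=0.9))  # True: exploitable agents attract threats
print(gets_threatened(p_cave=0.0))  # False: the precommitted agent is left alone
```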

Commitment Races and Good Decision Theory

Commitment races are a hypothesized problem in which agents might do better by precommitting, as soon as the thought occurs to them, to punish everyone who doesn’t kowtow to their utility function, and then promulgating this threat. Once such a precommitted threat has been knowingly made, the locally best move for everyone else is to cave and kowtow: they were slower on the trigger, but that’s a sunk cost now, and they should just give in quietly.
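Here is that sunk-cost step as a toy payoff comparison (all payoffs invented for illustration): conditioning on the threat already existing, caving looks best; it is only from the earlier, global vantage point, where your policy determines whether you get threatened at all, that refusal comes out ahead:

```python
# Illustrative payoffs (invented) for the agent who was slower on the
# trigger, once a credible "kowtow or be punished" threat exists:
payoff_cave = -4       # kowtow to the fast committer's utility function
payoff_resist = -10    # eat the precommitted punishment
payoff_untouched = 0   # status quo, if no threat is ever made at all

# Locally -- taking the threat as given -- caving dominates:
assert payoff_cave > payoff_resist

# Globally, an agent known in advance to resist is not worth
# threatening (see the previous sketch), so it keeps the status quo:
assert payoff_untouched > payoff_cave
```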

I think the moral of the above dath ilani excerpt is that your globally best option[1] is to not reward threateners. A dath ilani, when so threatened, would be precommitted to making sure that the threatener ends up with less benefit in expectation than they would have gotten by playing fair (so as to disincentivize threats, and thereby make it less likely that they find themselves threatened at all):

That’s not even getting into the math underlying the dath ilani concepts of ‘fairness’! If Alis and Bohob both do an equal amount of labor to gain a previously unclaimed resource worth 10 value-units, and Alis has to propose a division of the resource, and Bohob can either accept that division or say they both get nothing, and Alis proposes that Alis get 6 units and Bohob get 4 units, Bohob should accept this proposal with probability < 5/6 so Alis’s expected gain from this unfair policy is less than her gain from proposing the fair division of 5 units apiece. Conversely, if Bohob makes a habit of rejecting proposals less than ‘6 value-units for Bohob’ with probability proportional to how much less Bohob gets than 6, like Bohob thinks the ‘fair’ division is 6, Alis should ignore this and propose 5, so as not to give Bohob an incentive to go around demanding more than 5 value-units.

A good negotiation algorithm degrades smoothly in the presence of small differences of conclusion about what’s ‘fair’, in negotiating the division of gains-from-trade, but doesn’t give either party an incentive to move away from what that party actually thinks is ‘fair’. This, indeed, is what makes the numbers the parties are thinking about be about the subject matter of ‘fairness’, that they’re about a division of gains from trade intended to be symmetrical, as a target of surrounding structures of counterfactual actions that stabilize the ‘fair’ way of looking at things without blowing up completely in the presence of small divergences from it, such that the problem of arriving at negotiated prices is locally incentivized to become the problem of finding a symmetrical Schelling point.

(You wouldn’t think you’d be able to build a civilization without having invented the basic math for things like that—the way that coordination actually works at all in real-world interactions as complicated as figuring out how many apples to trade for an orange. And in fact, having been tossed into Golarion or similar places, one sooner or later observes that people do not in fact successfully build civilizations that are remotely sane or good if they haven’t grasped the Law governing basic multiagent structures like that.)

--Eliezer Yudkowsky, planecrash
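A minimal check of the arithmetic in the Alis/Bohob passage (the code and names below are mine, not from the story): if Bohob accepts the unfair 6/4 proposal with probability p, Alis expects 6p units, which falls below the 5 she gets from the fair split exactly when p < 5/6:

```python
FAIR_SHARE = 5.0  # 10 value-units, equal labor, split evenly

def alis_expected_gain(alis_share: float, p_accept: float) -> float:
    """Alis's expected units when Bohob accepts with probability
    p_accept and both parties get nothing on rejection."""
    return alis_share * p_accept

# Bohob's policy: accept the 6/4 proposal just rarely enough that Alis
# nets less than the 5 units the fair split would have given her.
p_accept = 5.0 / 6.0 - 1e-9  # just under 5/6
print(alis_expected_gain(6.0, p_accept) < FAIR_SHARE)  # True: unfairness doesn't pay

# Conversely, if Bohob rejects anything under 6 with probability
# proportional to the shortfall, Alis proposing the genuinely fair 5
# (rather than conceding 6) is what denies Bohob any incentive to
# inflate his notion of 'fair'.
```

The probabilistic rejection, rather than flat refusal, is what gives the smooth degradation the second quoted paragraph asks for: small disagreements about what is ‘fair’ cost a little expected value instead of blowing up the negotiation entirely.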

  1. ^

    I am not clear on what the decision-theoretic local/global distinction I’m blindly gesturing at here amounts to. If I knew, I think I would fully understand the relevant updateless(?) decision theory.