Building trusted third parties

How do two mutually distrusting entities (AGIs, space-faring civilizations, etc.) enact the results of a negotiated agreement? In cryptography, this role is played by trusted third parties. This is a proposal for a protocol that allows them to be built.

This is done by having a replicator supplied by party A (Alice) self-replicate. Party B (Bob) chooses one offspring from each generation to inspect destructively and another to continue the process. Each round halves a cheater's chance of slipping through, so after n generations Bob's confidence that the remaining replicator is trustable rises to about 1 - 2^-n. This is similar to a cut-and-choose zero-knowledge proof.
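
A toy simulation of that bound, assuming Bob picks the offspring to destroy uniformly at random (the function and variable names are mine, for illustration):

```python
import random

def survives_inspection(n_rounds: int) -> bool:
    """One run of the protocol against a cheating replicator.

    Each generation yields two offspring. The cheater must hide its
    payload in one of them and hope Bob destructively inspects the
    other; a wrong guess exposes the payload and aborts the protocol.
    """
    for _ in range(n_rounds):
        payload_in = random.randint(0, 1)   # offspring carrying the payload
        inspected = random.randint(0, 1)    # Bob's random choice
        if inspected == payload_in:
            return False                    # payload found
    return True

# Empirical check that a cheater survives with probability ~2**-n.
n, trials = 10, 100_000
rate = sum(survives_inspection(n) for _ in range(trials)) / trials
print(f"cheater survival: {rate:.4f}  (theory: {2**-n:.4f})")
```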

This protocol is carried out by smaller agents that Alice and Bob both trust. These agents must meet a few requirements (a toy sketch follows the list):

  • They have to detect tampering

  • They have to hold secrets used for secure communication

  • They have to destroy the secrets if tampered with
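
A minimal software sketch of the second and third requirements, using a hypothetical `TamperRespondingStore`; real devices would do this in hardware, e.g. by zeroizing key memory when an enclosure sensor trips:

```python
import hashlib
import hmac

class TamperRespondingStore:
    """Toy model of requirements 2 and 3: hold a secret, destroy it
    the moment tampering is detected."""

    def __init__(self, secret: bytes):
        self._secret: bytes | None = secret

    def on_tamper_detected(self) -> None:
        # Requirement 3: the secret dies before an attacker can read it.
        self._secret = None

    def authenticate(self, message: bytes) -> bytes:
        # Requirement 2: the secret never leaves the device; it is only
        # used to authenticate traffic to Alice's and Bob's agents.
        if self._secret is None:
            raise RuntimeError("secret destroyed; device no longer trusted")
        return hmac.new(self._secret, message, hashlib.sha256).digest()
```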

How can these conditions fail?

  • New physics is discovered that allows remote scanning or manipulation of matter

  • A very large nanotechnology gap between the two parties (i.e., one is effectively incompetent)

  • One party knows femtotech the other doesn't (same as previous)

If these conditions cannot be met (the universe doesn't allow for them), then this isn't going to work, and more generally it isn't possible to build trustable infrastructure at all. That's it. The secrets are used to secure communication with other agents that are part of, or created by, Alice or Bob. The physics we know about seems to allow this.

If one party has enough of a technological advantage, it would simply subsume the other rather than negotiate. Also, if precognition or time travel is possible, the coin-flip step of the protocol fails.
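
For concreteness, the coin flip between the parties' agents can be run as a standard commit-then-reveal exchange (Blum-style coin flipping), which is exactly the step that breaks if one side can predict the other's bit:

```python
import hashlib
import os

# Alice commits to a random bit without revealing it.
alice_bit = os.urandom(1)[0] & 1
blind = os.urandom(32)  # random nonce so Bob can't brute-force the bit
commitment = hashlib.sha256(blind + bytes([alice_bit])).hexdigest()

# Bob sees only the commitment, then announces his bit in the clear.
bob_bit = os.urandom(1)[0] & 1

# Alice opens the commitment; Bob checks it before accepting the flip.
assert hashlib.sha256(blind + bytes([alice_bit])).hexdigest() == commitment
print("coin:", alice_bit ^ bob_bit)  # fair if either party's bit is random
```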

Implementation concerns

Alice would likely send more than just the replicator. Additional monitoring of the process might be necessary, depending on the quality of tamper detection that is achievable. Carrying out the process in space, far from large masses Bob could use to hide nefarious things, seems like a sensible precaution.

The replicator might be designed to do mostly inspection and assembly rather than fabricating components from scratch. This keeps the replicator smaller and the replication process faster. A preferred embodiment uses thin, sheet-form components that can be fed into the replicator and inspected by nano-scale probing.

Bob can attack the process by tampering with the replicator or with the materials he supplies to it. Against an inspecting replicator he could plant nano-scale defects in components. The replicator should either have inspection equipment good enough to catch such defects, have a way of fixing them or rendering them harmless (e.g., heating components to high temperature to destroy fine structure), or be immune to them by design (e.g., voting logic and error detection to catch incorrect component behavior; see the sketch below).
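
The voting-logic option is ordinary majority redundancy. A minimal sketch, where the `replicas` callables stand in for redundant physical components (my framing, not part of the proposal):

```python
from collections import Counter
from typing import Callable, Sequence

def majority_vote(replicas: Sequence[Callable[[bytes], bytes]], x: bytes) -> bytes:
    """Run one input through redundant copies of a component and take
    the majority answer. A single sabotaged copy is outvoted; no
    majority at all is treated as detected tampering."""
    counts = Counter(replica(x) for replica in replicas)
    answer, n = counts.most_common(1)[0]
    if n <= len(replicas) // 2:
        raise RuntimeError("no majority: possible component tampering")
    return answer
```

With three replicas, one defective component changes nothing; Bob would need a majority of colluding defects to pass a wrong answer, which compounds with the 2^-n bound from inspection.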

Alice has a greater “attack surface” available to her since she supplies the initial replicator and can make any design alterations she likes. An attack from her end would be a heritable malicious change to the replicator that disappears when inspection begins. This should be prevented by design. One embodiment would separate the computing and assembly/​inspection halves of the replicator. The computer would be built from read-only logic and programming, so it cannot alter itself without help from the assembly parts. The assembly and inspection half is designed with as little logic as possible, so that if it is altered the process fails rather than producing working malicious offspring. During the coin flip, inert material separates the two halves. During inspection the replicator is de-powered and the halves are separated. There should not be enough time or energy available for a malicious change in the computer to remove itself.
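
One way to picture the inspection of the computing half, assuming Alice publishes the read-only design in advance; the names here are illustrative, not part of the proposal:

```python
import hashlib

def inspect_computer_half(rom_image: bytes, published_rom_hash: str) -> bool:
    """Destructive inspection of one offspring's computer half.

    Because the halves are separated while de-powered, a heritable
    malicious change has no time or energy to erase itself before
    the read-only logic is compared against Alice's published design.
    """
    return hashlib.sha256(rom_image).hexdigest() == published_rom_hash
```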

If strong Drexler-style nanotechnology is possible, this would be an easy problem to solve. Otherwise it's harder, but I think still feasible.

Game theory

My assumption here is that the payoff matrix looks something like this:

Alice      Bob        Payoff (Alice, Bob)   P(works)
cooperate  cooperate  +1, +1                ~1
defect     cooperate  +5, -1                2^-n
either     defect     -1, +5                ~0

The payoff values are relative to the base case where nothing happens. The last row is included for completeness' sake: a good implementation makes it nearly impossible for Bob to subvert the protocol, hence approximately zero chance of success. Likewise, if both are honest, there is approximately unity chance of success.

Each party wins a lot by screwing the other over. If Bob just surrendered to Alice and gave her control of his assets, she would defect, because that gets her +4 over the cooperate case. What this protocol does is add the probability column, which makes Alice's chance of a successful defection negligible. The assured +1 becomes the better choice (+5*2^-n << +1*1), so she cooperates. Bob obviously has an incentive to cooperate since he gets the +1; the +5 isn't worth it, since the attempt is overwhelmingly likely to fail. Both cooperate, and everyone ends up better off.
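
Spelled out numerically, with n the number of replication/inspection rounds:

```python
def alice_expected_payoffs(n: int) -> tuple[float, float]:
    """Alice's expected payoff for (cooperate, defect) after n rounds.

    Uses the comparison from the text, +5 * 2**-n vs. +1 * 1; any
    extra penalty for a caught defection only makes defecting worse.
    """
    return 1.0, 5 * 2 ** -n

for n in (1, 10, 40):
    coop, defect = alice_expected_payoffs(n)
    print(f"n={n:>2}: cooperate={coop:+.2f}  defect={defect:+.2e}")
```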

This is a bit of a simplification. The payoff matrix represents the overall benefit to both parties if they carry out the agreement completely. The agreed-upon contract should reflect this, with logic in place for one party not carrying through on their end. Alice might refuse to hand over her assets after Bob's are taken over; in that case the contract logic should leave Bob in control of his assets until Alice makes good on her end. Alternatively, Alice might judge it worth sending a dishonest replicator on the off chance of scoring a solar system's worth of resources. The +1,+1 outcome is better for everyone, though, and rational agents should end up there. The payoff matrix would have to be really skewed for them not to.

As someone who cares about efficiency, I like the idea that future entities, whatever they may be, might be able to cooperate.
