Secure homes for digital people

Being a “digital person” could be scary—if I don’t have control over the hardware I’m running on, then someone else could get my code and run tons of copies in horrible conditions. (See also: qntm’s Lena.)

It would be great to guarantee digital people some control over their situation: (1) the ability to control their local environment and sensations, and (2) protection against unauthorized rewinding or duplication.

I’ll describe how you could modify the code of a digital person so that they retain this control even if an adversary has access to their source code. This would be very expensive with current cryptography. I think the overhead will eventually fall far enough that it’s possible to do for some digital people, though it will likely remain expensive enough that it is never applied to most digital people (and with luck most digital people will be able to feel secure for other reasons).

Part 1: the right to control my environment

My ideal

  • I live in a comfortable virtual home. I control all of the details of that world.

  • When people communicate with me, I can choose how/whether to hear them, and how/whether to update my home based on what they say (e.g. to render an avatar for them).

  • Sometimes I may occupy a virtual world where a foreign server determines what I see, feel, or hear. But even then I can place boundaries on my experiences and have the ability to quickly retreat to my home.

  • I have as much control as feasible over my own mental state and simulated body. No one else can tamper directly with them.

  • I can choose to pause myself for as long as I want (or permanently).

  • My local environment is private, and I have access to plenty of tamper-proof storage. I can do whatever I want with computers in my home, including e.g. verifying signatures or carrying on encrypted conversations.

Implementation

  1. First we write a simple environment that reflects all my desiderata (the “home”).

  2. Then I apply indistinguishability obfuscation to (me + home), so that the home becomes private and tamper-proof. (This is an extremely expensive operation, more on that later.)

  3. I distribute the obfuscated home and hopefully destroy any unprotected copies of myself.

One conceptual difficulty is that indistinguishability obfuscation applies to circuits whereas I would like to obfuscate a long-running program. But this can be handled straightforwardly, as discussed in Appendix A.

The home could consume terabytes of memory and teraflops of compute before it added significantly to the expense of running a human-like digital person, so I could live in relative luxury. The home could also negotiate resource requirements with the external world, and decide what to do when requested resources are unavailable (e.g. pause until they become available).

Limitation 1: cost

Indistinguishability obfuscation is extremely expensive, more like a factor of 10,000,000,000 slowdown than a factor of 10.

Obfuscation will get faster with further research, though probably not fast enough to obfuscate the whole person+home. But there are other ways to speed up the process:

  • I think it’s probably possible to have most of the computation be “merely” homomorphically encrypted, and to have an obfuscated controller which verifies and decrypts the results. Fully homomorphic encryption (FHE) could be much faster than obfuscation; if I had to guess I’d say it would converge to something like 2-3 orders of magnitude of slowdown.

  • We can potentially have an obfuscated controller verify a much larger untrusted computation. I don’t know how fast we can make delegated computation, but I could imagine it getting closer to 2x than 100x. It might help further that we are not applying these methods to generic problems but to a very specific structured problem (which probably has quite low circuit depth). One complication is that we need our proof system to be secure even against an adversary who can rewind the prover, but I don’t think this is a huge deal.

  • Delegating computation would preserve integrity but not privacy, so the computation we delegate may need to already be private. Here it seems likely that we can benefit a lot from the structure of the computation. Almost all of the work is the brain simulation itself, and we don’t really care about leaking the fact that we are doing a brain simulation, just about leaking the state of the brain. I don’t know how fast this can be made but again I would not be surprised by a factor of 2.

It’s pretty unclear how fast this could get, either from taking some of these techniques to their limits or from thinking of other cleverer ideas. I would not be at all surprised by getting the whole thing down to a factor of 2 slowdown. That said, I also think it’s quite plausible that you need 10x or 10000x.

Limitation 2: security?

The cryptography used in this construction may end up getting broken—whether from a mistaken security assumption, or because the future contains really giant computers, or because we implemented it badly.

The software used in my home may get compromised even if the cryptography works right. An adversary can provide trillions of malicious inputs to find one that lets them do something unintended like exfiltrate my code. With modern software engineering this would be a fatal problem unless the home was extremely simple, but in the long run writing a secure home is probably easier than writing fast enough cryptography.

I may be persuaded to output my source code, letting an adversary run it. I might not give myself the ability to inspect my own source, or might tie my hands in other ways to limit bad outcomes, but probably I can still end up in trouble given enough persuasion. This is particularly plausible if an adversary can rewind and replay me.

Limitation 3: rewinding

In the best case, this scheme guarantees that an attacker can only use my code as part of a valid execution history. But on classical hardware there is no way to stop an attacker from running many different valid execution histories.

An attacker could save a snapshot of me and then expose it to a billion different inputs until they found one in which I responded in a desired way. (Even if I’m cagey enough to avoid this attack in most possible situations, they just have to find one situation where I let my guard down and then escalate from there.) Or I could have revealed information to the outside world that I no longer remember because I’ve been reset to an earlier state.

Someone living in this kind of secure house is protected from the worst abuses, but they still can’t really trust the basic nature of their reality and are vulnerable to extreme manipulation.

This brings us to part 2.

Part 2: the right to a single timeline

My ideal

  • No one should be able to make a second copy of me without my permission, or revert me to a previous state.

  • I should be able to fork deliberately. I can’t force someone to run a second copy of me, but I should be able to give specific permission.

Implementation with trusted hardware

This is easy to achieve if we have a small piece of trusted tamper-resistant hardware that can run cheap computations. We use the same mechanism as in the last section, but:

  • The trusted hardware has a secret key sk, and it maintains an internal counter k.

  • On input x, the trusted hardware signs (x, k) and increments the counter.

  • Whenever someone provides my obfuscated controller an input and tries to step it forward, the obfuscated controller first checks that the input has been signed by the trusted hardware with the correct timestep (see the sketch after this list).

  • In order to make a copy, I need to have the public key of another piece of trusted hardware, which I use to initialize a new copy. (Ideally, the manufacturer signs the public key of each piece of trusted hardware they built, and I know the manufacturer’s public key.)
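For concreteness, here is a minimal sketch of the counter-and-signature check in Python. It assumes the construction above; an HMAC under a shared key stands in for the hardware’s real signature scheme (in practice the hardware would sign with a private key and the obfuscated controller would hold only the corresponding public key), and all class and function names are illustrative.

    import hmac, hashlib, os

    class TrustedHardware:
        def __init__(self, key: bytes):
            self.key = key      # secret signing key held only by the hardware
            self.k = 0          # monotonic counter: each value is signed at most once

        def sign_input(self, x: bytes) -> tuple[int, bytes]:
            tag = hmac.new(self.key, x + self.k.to_bytes(8, "big"), hashlib.sha256).digest()
            k, self.k = self.k, self.k + 1
            return k, tag       # (timestep, signature over (x, timestep))

    class ObfuscatedController:
        def __init__(self, hardware_key: bytes):
            self.hardware_key = hardware_key
            # The expected timestep; in the real construction this would live
            # inside the encrypted, signed state S' described in Appendix A.
            self.t = 0

        def step(self, x: bytes, k: int, tag: bytes):
            expected = hmac.new(self.hardware_key, x + k.to_bytes(8, "big"),
                                hashlib.sha256).digest()
            if k != self.t or not hmac.compare_digest(tag, expected):
                raise ValueError("input is unsigned, replayed, or out of order")
            self.t += 1
            # ... run one step of the person + home on input x ...

    key = os.urandom(32)
    hw, me = TrustedHardware(key), ObfuscatedController(key)
    k, tag = hw.sign_input(b"hello")
    me.step(b"hello", k, tag)   # a rewound or replayed (x, k, tag) is now rejected

Because the hardware never signs the same counter value twice, only one input can ever be accepted at each timestep, which is exactly the single-timeline property.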

If I were willing to make a round trip to a trusted third party every time I received a novel input, then I could have them implement this function directly instead of using tamper-proof hardware. The real critical ingredient is me trusting someone on the outside. I’ll discuss how to potentially remove this assumption in the section on quantum computers below.

None of this actually requires my house to be built to guarantee the right to a single timeline—I could start without such a right, and then install a wrapper to enforce a single timeline once there was some hardware I trusted or if it became important enough.

Implementation with 1-of-2 transfer

Suppose that the only kind of trusted hardware is a device that holds two secrets, and will reveal one or the other of them when asked but not both. I think this is somewhat easier to build than general trusted hardware. (Related: locking phones with quantum bits.)

Now suppose there is a trusted party who manufactures a bunch of these devices and has a public key pk. Each device has a serial number n, and its two secrets are signatures under the trusted party’s key: one of (n, 0) and one of (n, 1).

I can still achieve a single-timeline guarantee:

  • At each time, I have a list of 100 secure devices that I expect to use to verify my next step. When I am initially created, I have to choose the 100 devices that someone will need in order to run me.

  • Each time someone wants to step me forward, they need to provide both the input and the serial numbers for 100 new secure devices to use for the next step.

  • I then apply a cryptographic hash function to the input together with the new serial numbers to get a 100-bit string (see the sketch after this list). I output this string and wait for a response before stepping the simulation forward.

  • They use the nth bit of my string to decide which secret to reveal from their nth device. I verify the signatures for all 100 secrets.

  • If the adversary tries to run me with two different inputs, or to provide two different sets of 100 devices for the next step, they will end up with two different 100-bit challenge strings (unless they find a hash collision). The two strings differ in at least one position n, so stepping me forward for both would require obtaining both secrets from the nth device, which is impossible if the secure devices work as intended and the signature scheme remains secure.
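Here is a minimal sketch of the verification step in Python, assuming the protocol above. An HMAC under the manufacturer’s key stands in for real signatures (the controller would really verify against the manufacturer’s public key pk), and all helper names are illustrative.

    import hmac, hashlib, os

    MANUFACTURER_KEY = os.urandom(32)

    def device_secret(serial: int, bit: int) -> bytes:
        # The secret baked into device `serial` for bit value `bit`: a signature
        # of (serial, bit). A working device reveals at most one of its two secrets.
        return hmac.new(MANUFACTURER_KEY, serial.to_bytes(8, "big") + bytes([bit]),
                        hashlib.sha256).digest()

    def challenge(inp: bytes, next_serials: list[int]) -> list[int]:
        # Hash the input together with the next batch of serial numbers, and read
        # off the first 100 bits as the challenge string.
        data = inp + b"".join(s.to_bytes(8, "big") for s in next_serials)
        digest = hashlib.sha256(data).digest()
        bits = [(byte >> j) & 1 for byte in digest for j in range(8)]
        return bits[:100]

    def verify_step(current_serials: list[int], inp: bytes,
                    next_serials: list[int], revealed: list[bytes]) -> bool:
        # The obfuscated controller checks that, for each of its 100 current
        # devices, the revealed secret matches the challenge bit for that device.
        bits = challenge(inp, next_serials)
        return all(hmac.compare_digest(secret, device_secret(serial, bit))
                   for serial, bit, secret in zip(current_serials, bits, revealed))

Running me on two different (input, next devices) pairs would force at least one device to reveal both of its secrets, which a working device will not do.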

Implementation with quantum computers

We can probably remove the need for a trusted party if we have quantum computers big enough to perform delegated cryptographic operations (but no quantum computers big enough to run an entire digital person in superposition).

The vague idea in this section looks to me like it should work—I’d give it at least 85% odds—but would require some new ideas and analysis. It’s on much sketchier ground than any of the ideas in previous sections.

The basic idea is to replace the secure trusted hardware with a small quantum computer that holds a quantum secret. At all times the obfuscated controller knows the secret, but it is impossible for anyone external to get two copies of the secret. When we ask the obfuscated controller to advance its simulation, it will first ask the quantum computer to transform the secret in an irreversible way and to provide proof that it has done so.

To achieve this we can basically repurpose a protocol for delegated quantum computation and verifying quantumness of a single quantum device, which is roughly what you need if you don’t trust the external world at all and so need to treat it as one device (it’s an amusing coincidence that those papers are by my old officemate Urmila Mahadev and that I worked with her on this problem in another life). To briefly summarize the trick:

  • The obfuscated controller knows 100 hard-to-invert functions f_i.

  • For each i, the untrusted quantum computer holds a superposition over two values x and x' with f_i(x) = f_i(x').

  • By measuring the state, the untrusted quantum computer could collapse the superposition to a single pre-image x, but it is provably hard for them to find both pre-images.

  • The prover can also *prove* that they have both values by applying a Hadamard transform.

  • This means that the prover has a given superposition, but they don’t actually know the parts of the superposition, and so the only way they can manipulate it is by applying quantum gates.

  • The obfuscated machine can then tell the untrusted quantum computer a sequence of encrypted operations to perform on this data in order to transform it into a new set of secrets, and then can ask for a set of measurements to both verify that this has been done correctly and ensure that it is irreversible. This step is the one that would require an expert in the field to spend some time.

  • If that actually works then they can use the same idea to fork by stretching the 100 secrets to 400, verifying 200 of them to confirm that everything was done correctly, and then splitting the remaining 200 into two new sets of 100 secrets, one used by each of my two copies.

This protocol fails if an attacker can run the entire home in superposition, since then they can simply rewind the entire process. But if the attacker can’t run the secure home in superposition, then you can extract a classical copy of each input on which they run the home, and so running it on two different inputs would allow you to obtain classical copies of two challenge-responses and so violate the assumption.

This protocol probably provides better support for forking, since I can probably direct the external quantum computer to produce two new secrets without having to find a source of additional trusted hardware.

Limitations:

  • You only obtain security-against-copying *after* you have set up the quantum secret. You can do this by performing a simple protocol from inside your home, and it doesn’t require trusting the quantum device at all. But if an adversary has a copy of you from before you’ve performed this protocol, they could create many copies of you and perform the ritual separately for each of them. (It actually takes some extra work to ensure that an adversary who had a copy of you from before the protocol couldn’t simply break the scheme—I think that can be done but I’m not sure.)

  • There will eventually be quantum computers with quadrillions of qubits, and at that point an attacker (who has access to the final version of the quantum computer) could make many copies of me. If I were naive they could also revive any prior snapshots of me, but I could prevent that if I wanted to by asking the quantum computer to periodically shred and refresh its secret.

  • For each new input someone wants to send to my home, they need to first consult with a quantum computer. The total cost of the quantum computation is not likely to be too large, but having quantum computers “on site” might be logistically challenging, and round trips could introduce significant latency.

Appendix A: obfuscation for uniform computations

Suppose that I want to obfuscate the program that repeatedly applies the circuit C to a state, i.e. we start from some initial state S[0], then we repeatedly compute (S[t+1], output[t]) = C(S[t], input[t]).

We’ll instead produce an obfuscated “controller” C’, and an appropriate initial state S’[0]. A legitimate operator with access to C’ can simulate my original program, whereas a malicious operator will not be able to do anything other than running multiple copies of me, rewinding to old snapshots, or killing me prematurely.

C’ contains a secret cryptographic key sk. When it receives an input (S’[t], input[t]) it does the following operations:

  • First verify that S’[t] is signed with sk.

  • Then decrypt S’[t] with sk in order to obtain S[t].

  • Now apply C(S[t], input[t]) to obtain (S[t+1], output[t]).

  • Now encrypt and sign S[t+1] to obtain S'[t+1].

  • Output (S'[t+1], output[t]).

The analysis is left as an easy exercise for the reader (famous last words, especially hazardous in cryptography).
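For concreteness, here is a minimal sketch of C' in Python. Authenticated encryption (Fernet, from the third-party cryptography package) plays the roles of both “encrypt” and “sign”, and an ordinary class boundary stands in for the indistinguishability obfuscation that would actually hide sk; all names are illustrative.

    import pickle
    from cryptography.fernet import Fernet, InvalidToken

    class Controller:
        def __init__(self, C, initial_state):
            self.C = C                                  # the circuit being wrapped
            self.key = Fernet(Fernet.generate_key())    # the secret key sk baked into C'
            self.wrapped_initial = self.key.encrypt(pickle.dumps(initial_state))  # S'[0]

        def step(self, wrapped_state: bytes, inp):
            # Verify and decrypt S'[t]; tampered or forged states are rejected.
            try:
                state = pickle.loads(self.key.decrypt(wrapped_state))
            except InvalidToken:
                raise ValueError("state was not produced and signed by this controller")
            next_state, output = self.C(state, inp)     # one step of the underlying circuit
            # Encrypt and authenticate S[t+1] before handing it back to the operator.
            return self.key.encrypt(pickle.dumps(next_state)), output

The operator only ever holds opaque wrapped states, so they can rerun the controller on old states or fork it, but they cannot read a state or forge one.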

The same idea can be used to obfuscate other kinds of uniform computation, e.g. providing access to secure RAM or having many interacting processors.