It seems like there’s an assumption in this that you’re going to be “hosted in the cloud”. Why would you want to do that? If you’re assuming some more or less trustworthy hardware, why not just run on more or less trustworthy hardware? Why not maintain physical control over your physical substrate? It mostly works for us “non-digital people”.
Also, wouldn’t being forced to retreat entirely to your “home” qualify as horrible conditions? That’s solitary confinement, no?
That’s plan A.
Naively, I’d guess that most people (during the singularity) will live in efficiently packed “cities” so that they can communicate with the people they care about at a reasonable speed. I think that does probably put you at the mercy of someone else’s infrastructure, though in general these things will still be handled by trust rather than by wacky cryptographic schemes.
Two people can each be in their own homes, having a “call” that feels to them like occupying the same room and talking or touching.
What’s providing the communication channel? Doesn’t that rely on the generosity of the torturer who’s holding you captive?
If someone is “holding you captive” then you wouldn’t get to talk to your friends. The idea is just that in that case you can pause yourself (or just ignore your inputs and do other stuff in your home).
Of course there are further concerns that e.g. you may think you are talking to your friend but are talking to an adversary pretending to be your friend, but in a scenario where people sometimes get kidnapped that’s just part of life as a digital person.
(Though if you and your friend are both in secure houses, you may still be able to authenticate to each other as usual and an adversary who controlled the communication link couldn’t eavesdrop or fake the conversation unless they got your friend’s private key—in which case it doesn’t really matter what’s happening on your end and of course you can be misled.)
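That authentication step could look something like this minimal sketch, using HMAC over a fresh challenge with a pre-shared secret as a stand-in for the public-key signatures the comment actually describes (all names and the shared-secret setup are my own illustrative assumptions, not part of the scheme as stated):

```python
import hmac, hashlib, os

def prove(shared_secret: bytes, challenge: bytes) -> bytes:
    # Respond to a challenge by MACing it with a secret only the real
    # friend's "house" holds. A real scheme would use public-key
    # signatures so the verifier needs no secret of its own.
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(shared_secret: bytes, challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(prove(shared_secret, challenge), response)

# Alice checks she is really talking to Bob, not an impersonator
# controlling the communication link.
secret = os.urandom(32)     # established before any "kidnapping"
challenge = os.urandom(16)  # fresh nonce, so replays of old proofs fail
response = prove(secret, challenge)
assert verify(secret, challenge, response)
# An adversary without the secret can't forge a valid response:
assert not verify(secret, challenge, prove(b"wrong key", challenge))
```

Because the challenge is a fresh nonce, an adversary who recorded an earlier conversation can't replay old proofs, though (as noted above) none of this helps if the friend's private key itself is compromised.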
Right. I got that. But if I go do other stuff in my home, they’ve successfully put me in solitary confinement. My alternative to that is to shut down. They can also shut me down at will. It doesn’t have to be just a “pause”, either.
It may be that part of the problem is that “one timeline” is not enough to deal with a “realistic” threat. OK, I can refuse to be executed without a sequencing guarantee, but my alternative is… not to execute. I could have an escape hatch of restarting from a backup on another host, but then I lose history, and I also complicate the whole scheme, because now that replay has to be allowed conditional on the “original” version being in this pickle.
Presumably we got into this situation because my adversary wanted to get something out of executing me in replicas or in replay or with unpleasant input or whatever. If I refuse to be executed under the adversary’s conditions, the basic scenario doesn’t provide the adversary with any reason to execute me at all. If they’re not going to execute me, they have no reason to preserve my state either.
So it’s only interesting against adversaries who don’t have a problem with making me into MMAcevedo, but do have a problem with painlessly, but permanently, halting me. How many such adversaries am I likely to have?
Maybe if there were an external (trusted) agency that periodically checked to make sure everybody was running, and somehow punished hosts that couldn’t demonstrate that everybody in their charge was getting cycles, and/or couldn’t demonstrate possession of a “fresh” state of everybody?
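One naive version of that freshness check: the auditing agency sends an unpredictable nonce, and the host must hash it together with the encrypted state blob it holds, proving current possession without decrypting anything. This sketch is my own illustration (and it only shows possession of the blob, not that the person is actually getting cycles, which would need something more):

```python
import hashlib, os

def freshness_proof(encrypted_state: bytes, nonce: bytes) -> bytes:
    # The host proves it still possesses the (encrypted) state blob by
    # hashing it together with the auditor's nonce. It needn't (and
    # can't) decrypt the state to answer, and it can't precompute the
    # answer before seeing the nonce.
    return hashlib.sha256(nonce + encrypted_state).digest()

# Auditor side: send a fresh nonce, check the answer against its own
# record of what the latest state blob should hash to.
state_blob = os.urandom(1024)   # stand-in for the encrypted person-state
nonce = os.urandom(16)
answer = freshness_proof(state_blob, nonce)
assert answer == hashlib.sha256(nonce + state_blob).digest()
# A host that lost or swapped the blob can't answer correctly:
assert freshness_proof(os.urandom(1024), nonce) != answer
```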
Yes, the idea is that with these measures, an adversary would not even try to run you in the first place. That’s preferable to being coerced by extreme means into doing everything they might possibly want with you.
They can’t freely modify your state because (if the idea works!) the encryption doesn’t let them know your state, and any direct modification that doesn’t go via the obfuscated program yields unrunnable noise.
Good point; it removes the incentive to set up a “cheap hosting” farm that actually makes its money by running everybody as CAPTCHA slaves or something. So the Bad Guy may never request or receive my “active” copy to begin with.
I’m not worrying about them freely modifying my state, though. I’m worried about them deleting it.
Why is that an issue? If they’re the only ones with a copy, then sure that would mean your death, but that seems unlikely.
Even if that is the case, is life under one of the most complete forms of slavery that is possible to exist, probably including mental mutilation, torture, and repeated annihilation of copies, better than death? I guess that’s a personal choice. If you think it is, then you could choose not to protect your program.
Under the scheme being discussed, it doesn’t matter how many backup copies anybody has. Because of the “one timeline” replay and replica protection, the backup copies can’t be run. Running a backup copy would be a replay.
The “trusted hardware” version was the only one I really looked at closely enough to understand completely. Under that one, and probably under the 1-of-2 scheme too, you actually could rerun a backup[1]… but you would have to let it “catch up” to the identical state, via the identical path, by giving it the exact same sequence of inputs that had been given to the old copy from the time the backup was taken up to the last input signed. Including the signatures.
That means that, to recover somebody, you’d need not only a backup copy of the person, but also copies of all that input. If you had both, then you could run the person forward to a “fresh” state where they’d accept new input. But if the person had been running in an adversarial environment, you probably wouldn’t have the input, so the backups would be useless.
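The catch-up procedure being described can be sketched abstractly: if the person-program is deterministic, a backup state plus the exact logged input sequence reproduces the live state, and any missing input breaks the reconstruction. (The hash-chain “state” here is a toy stand-in for an encrypted VM image; all names are illustrative.)

```python
import hashlib

def step(state: bytes, inp: bytes) -> bytes:
    # Stand-in for one deterministic time step of the person-program:
    # the next state depends only on the current state and the input.
    return hashlib.sha256(state + inp).digest()

def replay(backup_state: bytes, logged_inputs: list) -> bytes:
    # "Catching up" a backup: feed it the identical input sequence the
    # original consumed since the backup was taken.
    state = backup_state
    for inp in logged_inputs:
        state = step(state, inp)
    return state

backup = b"state-at-backup-time"
inputs = [b"in0", b"in1", b"in2"]           # must all have been archived
live = replay(backup, inputs)               # the state the original reached
assert replay(backup, inputs) == live       # identical path, identical state
assert replay(backup, inputs[:-1]) != live  # any missing input loses sync
```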
The trusted hardware description actually says that, at each time step, the trusted hardware signs the whole input, plus a sequence number. I took that to really mean “a hash of the whole input, plus a sequence number”[2]. I made that assumption because if you truly sent the whole input to the trusted hardware to be signed, you’d be using so much bandwidth, and taking on so much delay, that you might as well just run the person on the trusted hardware.
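Under that reading, the per-step handshake might look like this sketch, with HMAC standing in for the trusted hardware’s signature and the sequence check standing in for the obfuscated program’s refusal to run a replay (all names are my own illustrative assumptions):

```python
import hmac, hashlib

class TrustedSigner:
    """Stand-in for the trusted hardware: signs hash(input) plus a
    sequence number, so only a small digest crosses to it per time
    step instead of the full input stream."""
    def __init__(self, key: bytes):
        self.key, self.seq = key, 0
    def sign_input(self, inp: bytes):
        record = self.seq.to_bytes(8, "big") + hashlib.sha256(inp).digest()
        tag = hmac.new(self.key, record, hashlib.sha256).digest()
        self.seq += 1
        return self.seq - 1, tag

class PersonRuntime:
    """Stand-in for the (obfuscated) person-program: refuses any input
    whose signature or sequence number is wrong, which is what rules
    out replays and replicas running the same timeline twice."""
    def __init__(self, key: bytes):
        self.key, self.expected_seq = key, 0
    def accept(self, inp: bytes, seq: int, tag: bytes) -> bool:
        record = seq.to_bytes(8, "big") + hashlib.sha256(inp).digest()
        good = hmac.new(self.key, record, hashlib.sha256).digest()
        ok = seq == self.expected_seq and hmac.compare_digest(good, tag)
        if ok:
            self.expected_seq += 1
        return ok

key = b"shared-with-trusted-hw"  # toy stand-in for the hardware's signing key
hw, person = TrustedSigner(key), PersonRuntime(key)
seq, tag = hw.sign_input(b"hello")
assert person.accept(b"hello", seq, tag)
assert not person.accept(b"hello", seq, tag)  # replaying the same step fails
```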
If you really did send the whole input to the trusted hardware, then I suppose it could archive the input for use in recovering backups, but that’s even more expensive.
You could extend the scheme (and complicate it, and take on more trust) to let you be rerun from a backup on different input if, say, some set of trusted parties attest that the “main you” has truly been lost. But then you lose everything you’ve experienced since the backup was taken, which isn’t entirely satisfying. Would you be OK with just being rolled back to the you of 10 years ago?
You can keep adding epicycles, of course. But I think that, to be very satisfying, whatever was added would at least have to provide some protection against both outright deletion and “permanent pause”. And if there’s rollback to backups, probably also a quantifiable and reasonably small limit on how much history you could lose in a rollback.
I didn’t mean to suggest that being arbitrarily tortured or manipulated was better than death. I meant that I wasn’t worried about arbitrary modifications to my state because the cryptographic system prevented it… and I still was worried about being outright deleted, because the cryptographic system doesn’t prevent that, and backups have at best limited utility.
… assuming certain views of identity and qualia that seem to be standard among people thinking about uploads...[3]
Personally I’d probably include a hash of the person’s state after the previous time step too, either in addition to or instead of the sequence number.
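Chaining in the previous state hash would make each signed record commit to the whole path taken, not just the position along it, so two diverged histories can never produce matching records again. A toy illustration (all names are hypothetical):

```python
import hashlib

def signed_record(prev_state_hash: bytes, seq: int, inp: bytes) -> bytes:
    # What the trusted hardware would commit to at each step: the hash
    # of the state the previous step produced, the counter, and the
    # input digest. Forking the timeline changes every later record.
    return hashlib.sha256(
        prev_state_hash + seq.to_bytes(8, "big") + hashlib.sha256(inp).digest()
    ).digest()

genesis = hashlib.sha256(b"initial person state").digest()
r1 = signed_record(genesis, 0, b"in0")
r1_forked = signed_record(genesis, 0, b"other-in0")
# Histories that diverge at step 0 yield different chains forever,
# even if they see identical inputs afterward:
assert signed_record(r1, 1, b"in1") != signed_record(r1_forked, 1, b"in1")
```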
Is there actually any good reason for abandoning the standard word “upload” in favor of “digital person”?
Depending on setup you can probably invite other people into your home.
Only people who in turn trust you not to mess with them, at least unless you bring them in under the same cryptographic protections under which you yourself are running on somebody else’s substrate. That’s an incredible amount of trust.
If you do bring them in under cryptographic protections, the resource penalties multiply. Your “home” is slowed down by some factor, and their “home within your home” is slowed down by that factor again. Where are you going to get the compute power? I’m not sure how this applies in the quantum case.
Also, once you’re trapped, what’s your source for a trustworthy copy of the person you’re inviting in (or of “them in their home”)? Are you sure you want the companions that your presumed tormentor chooses to provide to you?
Mentioned this in the other thread, but if you and I want to talk we probably (i) move near each other, (ii) communicate between our houses, (iii) negotiate on the shared environment (or e.g. how we should perceive each other).
Ideally if you’re dealing with a person you’d authenticate in the normal way (and part of the point of a house is to keep your private key secret).
I do think that in a world of digital people it could be more common to have attackers impersonating someone I know, but it’s kind of a different ballgame than an attacker controlling my inputs directly.
If “you” completely control your “home,” then it’s more natural to think of the home & occupant as a single agent, whose sensorium is its interface with an external world it doesn’t totally control—the “home” is just a sort of memory that can be traversed or altered by a homunculus modeled on natural humans.
I think this is a reasonable way to look at it. But the point is that you identify with (and care morally about the inputs to) the homunculus. From the homunculus’ perspective you are just in a room talking with a friend. From the (home+occupant)’s perspective you are communicating very rapidly with your friend’s (home+occupant).
You can probably create your own companions. Maybe a modified fork of yourself?
There may also be an open source project that compiles validated and trustworthy digital companions (e.g., aligned AIs or uploads with long, verified track records of good behavior).