On the subject of creating a function/predicate able to identify a person: it seems to be another non-localisable function. My reasoning goes something like this.
1) We want the predicate to be able to identify paused humans (in cryostasis), so that the FAI doesn’t destroy them accidentally.
2) With sufficient scanning technology we could make a digital scan of a human that has the same value as a frozen head, and encrypt it with a one-time pad, making it indistinguishable from the output of /dev/random.
From 1 and 2 it follows that the AI will have to look at the environment (to see whether people are encrypting people with one-time pads) before deciding what is or is not a human. How much of the environment the AI needs to take into account before making that decision seems a non-trivial question.
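Point 2 leans on a standard property of one-time pads: XORing data with a uniformly random pad of the same length yields ciphertext that is information-theoretically indistinguishable from random bytes. A minimal sketch in Python (the `scan` value here is just a stand-in for the real, enormous scan data):

```python
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """XOR each byte of data with the corresponding pad byte."""
    assert len(pad) == len(data), "pad must be as long as the data"
    return bytes(a ^ b for a, b in zip(data, pad))

scan = b"digitised frozen-head scan"        # stand-in for the real scan
pad = secrets.token_bytes(len(scan))        # uniformly random one-time pad
ciphertext = otp_xor(scan, pad)

# XOR is its own inverse, so decryption is the same operation.
assert otp_xor(ciphertext, pad) == scan
# Without the pad, the ciphertext is uniformly random: every possible
# plaintext of the same length is equally consistent with it.
```

So an AI inspecting only the ciphertext has no way, even in principle, to tell a stored person from noise; it must find the pad, or the people doing the encrypting, in the environment.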
It depends on when the singularity occurs. It also suggests there might be other problems. Suppose an AI might be able to recreate some (famous) people from their work, their dwellings, and memories of them in other people, combined with a thorough understanding of human biology.
If an AI can do that, it should preserve as much of the human environment as possible (no turning it into computronium) until it gains that ability. However, it doesn’t know whether a given bit of the world (hardened footprints in mud, say) will be useful for that purpose until it has lots of computronium.
This problem just looks like the usual question of how much of our resources should be conserved and how much should be used. There is some optimal combination of physical conservation and virtual conservation that leaves enough memory and computronium for other things. We’re always deciding between immediate economic growth and long-term access to resources (fossil fuels, clean air, biodiversity, fish); in this case the resource is famous-person memorabilia and the human environment. But this isn’t a tricky conceptual issue, just a utility calculation, and the AI will get better at making that calculation the more information it has. The only programming question is how much we value recreating famous people relative to other goods.
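To make “just a utility calculation” concrete, here is a deliberately toy sketch (every function and number below is invented for illustration, not anything from the thread): choose what fraction of a resource to physically conserve by maximising a utility with diminishing returns on both conservation and computronium.

```python
import math

def utility(conserved: float,
            v_conserve: float = 1.0,       # value weight on conservation (assumed)
            v_computronium: float = 1.0):  # value weight on computronium (assumed)
    """Toy utility with diminishing returns on each use of the resource."""
    used = 1.0 - conserved
    return (v_conserve * math.log1p(conserved)
            + v_computronium * math.log1p(used))

# Grid-search the conserved fraction; with equal weights the optimum
# is an even split.
best = max((f / 100 for f in range(101)), key=utility)
print(best)  # → 0.5
```

Changing the weights shifts the optimum toward whichever use is valued more; that ratio is the “how much we value recreating famous people relative to other goods” knob that would have to be programmed in.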
I also don’t see how this issue follows from the ‘functional definition of a person’ issue.
Besides what gwern said, it could just scan and save at appropriate resolution everything that gets turned into computronium. This seems desirable even before you get into possibly reconstructing people.
Every qubit might be precious, so you would need more matter than the Earth to do it (if you wanted to accurately simulate things like when and how volcanoes and typhoons happened, so that the reconstructed memories would be correct).
Possibly the rest of the solar system would be useful as well, so you can rewind the clock on solar flares etc.
I wonder what a non-disruptive biosphere scan would look like.
However, it doesn’t know whether a given bit of the world (hardened footprints in mud, say) will be useful for that purpose until it has lots of computronium.
If it’s that concerned, it could just blast off into space, couldn’t it? That might slow down development, but the hypothetical mud footprints ought to be fine… no harm done by computronium in the sun.
The question is: should we program it to be that concerned? The human predicate is necessary for CEV, if I remember correctly; you would want to extrapolate the volition of everyone currently informationally smeared across the planet, as well as the more concentrated humans. I can’t find the citation at the moment; I’ll hunt for it tomorrow.
The human predicate is necessary for CEV, if I remember correctly; you would want to extrapolate the volition of everyone currently informationally smeared across the planet, as well as the more concentrated humans.
I think the (non)person predicate is necessary for CEV only to avoid stomping on persons while running it. It may not be essential to try to make the initial dynamic as expansive as possible, since a less-expansive one can always output “learn enough to do a broader CEV, and do so superseding this”.
On the subject of creating a function/predicate able to identify a person: it seems to be another non-localisable function. My reasoning goes something like this.
1) We want the predicate to be able to identify paused humans (in cryostasis), so that the FAI doesn’t destroy them accidentally.
2) With sufficient scanning technology we could make a digital scan of a human that has the same value as a frozen head, and encrypt it with a one-time pad, making it indistinguishable from the output of /dev/random.
From 1 and 2 it follows that the AI will have to look at the environment (to see whether people are encrypting people with one-time pads) before deciding what is or is not a human. How much of the environment the AI needs to take into account before making that decision seems a non-trivial question.
Poorly labeled encrypted persons may well be destroyed. I’m not sure this matters too much.
I think the (non)person predicate is necessary for CEV only to avoid stomping on persons while running it. It may not be essential to try to make the initial dynamic as expansive as possible, since a less-expansive one can always output “learn enough to do a broader CEV, and do so superseding this”.
Hmm, I think you are right.
We still need to have some estimate of what it will do, though, so that we can predict its speed somewhat.