AGI will change our world in many ways, one of which concerns our views on personal identity.
I agree, but evidently we disagree about how our views on personal identity will change if and when AGI (and, which I think is what actually matters here, large-scale virtualization) comes along.
Copy implies a version that is somehow lesser
That’s not how I was intending to use the word.
The amount of information we have to nail down is just that required for a human mind sim, which is exactly the amount of compressed information encoded in the synapses.
You’ve been arguing that we need substantially less information than “exactly the amount of compressed information encoded in the synapses”.
identity is not binary
I promise, I do understand this, and I don’t see that anything I wrote requires that identity be binary. (In particular, at no point have I been intending to claim that what’s required is the exact same neurons, or anything like that.)
[...] What matters most [...] this isn’t nearly as important [...] far less important [...] What actually matters [...]
These are value judgements, or something like them. My values are apparently different from yours, which is fair enough. But the question actually at issue wasn’t one about our values (where we could just agree to disagree) but about, in effect, the likely values of our superintelligent AI successors (or perhaps our roughly-normally-intelligent successors making use of superintelligent AI). So far you’ve offered no grounds for thinking that they will feel the same way about this as you do; you’ve just stated your own position as if it’s a matter of objective fact (albeit about matters of not-objective-fact).
We are in the same situation today
Only if you don’t distinguish between what’s possible and what’s likely. Sure, I could have been created ten seconds ago with completely made-up memories. Or I could be in the hands of a malevolent demon determined to deceive me about everything. Or I could be suffering from some disastrous mental illness. But unless I adopt a position of radical skepticism (which I could; it would be completely irrefutable and completely useless) it seems reasonable not to worry about such possibilities until actual reason for thinking them likely comes along.
I will (of course!) agree that our situation has a thing or two in common with that one, because our perception and memory and inference are so limited and error-prone, and because even without simulation people change over time in ways that make identity a complicated and fuzzy affair. But for me—again, this involves value judgements and yours may differ from mine, and the real question is what our successors will think—the truer this is, the less attractive ancestor-simulation becomes. If you tell me you can simulate my great-great-great-great-great-aunt Olga, about whom I know nothing at all, then I have absolutely no way of telling how closely the simulation resembles Olga-as-she-was, and that means the simulation has little extra value for me compared with simulating some random person not claimed to be my great^5-aunt. As for whether I should be glad of it for Olga’s sake: if you mean new-Olga’s sake, then an ancestor-sim is no better in this respect than a non-ancestor-sim; and if you mean old-Olga’s sake, then the best I can do is ask how much it would please me to learn that 200 years from now someone will make a simulation that calls itself by my name and has a slightly similar personality and set of memories, but no more than that; and the answer is that I couldn’t care less whether anyone does.
(It feels like I’m repeating myself, for which I apologize. But I’m doing so largely because it seems like you’re completely ignoring the main points I’m making. Perhaps you feel similarly, in which case I’m sorry; for what it’s worth, I’m not aware that I’m ignoring any strong or important point you’re making.)
You’ve been arguing that we need substantially less information than “exactly the amount of compressed information encoded in the synapses”.
That was misworded: I meant the amount of information actually encoded in the synapses after advanced compression. As I said before, synaptic weights in neural networks are enormously redundant, so even trivial compression dramatically reduces the storage requirements. For the memory/storage needed to represent a human-mind-level sim, we get an estimated range of 10^10 to 10^14 bits, as discussed earlier. However, a great deal of this will be redundant across minds, so the amount required to specify the differences of one individual will be even less.
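For concreteness, here is a rough back-of-the-envelope sketch of where a range like that can come from. The synapse count, bits-per-synapse, and compression factors below are illustrative assumptions for the arithmetic, not measured values:

```python
# Back-of-the-envelope storage estimate for a human-mind-level sim.
# All constants here are illustrative assumptions, not measurements.

SYNAPSES = 10**14        # rough human synapse count (order of magnitude)
BITS_PER_SYNAPSE = 5     # assume a few bits of usable precision per synapse

raw_bits = SYNAPSES * BITS_PER_SYNAPSE  # naive upper bound: 5e14 bits

# If the weights are highly redundant, compression factors spanning
# roughly 1x to 50,000x collapse the estimate into the 10^10..10^14 band.
for compression in (1, 10, 1000, 50000):
    compressed = raw_bits // compression
    exponent = len(str(compressed)) - 1  # order of magnitude in bits
    print(f"compression {compression:>6}x -> ~1e{exponent} bits")
```

The point of the exercise is just that the upper end of the range is the uncompressed synaptic bound, and each order of magnitude of redundancy removed shifts the estimate down toward the 10^10 floor.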
But the question actually at issue wasn’t one about our values (where we could just agree to disagree) but about, in effect, the likely values of our superintelligent AI successors (or perhaps our roughly-normally-intelligent successors making use of superintelligent AI).
Right. Well, I have these values, and I am not alone. Most people’s values will also change in the era of AGI, since most people haven’t yet thought about this clearly. And finally, for a variety of reasons, I expect that people like me will have above-average influence and wealth.
Your side discussion about your distant relatives suggests you don’t foresee how this is likely to come about in practice (which really is my fault as I haven’t explained it in this thread, although I have discussed bits of it previously).
It isn’t about distant ancestors. It starts with regular uploading. All these preserved brains will have damage of various kinds—some arising from the process itself, some from normal aging or disease. AI then steps in to fill in the gaps, using large scale inference. This demand just continues to grow, and it ties into the pervasive virtual world heaven tech that uploads want for other reasons.
In short order everyone in the world has proof that virtual heaven is real, and that uploading works. The world changes, and uploading becomes the norm. We become an em society.
Someone creates a real Harry Potter sim, and when Harry enters the ‘real’ world above he then wants to bring back his fictional parents. So it goes.
Then the next step is insurance for the living. Accidents can destroy or damage your brain—why risk that? So the AIs can create a simulated copy of the earth, kept up to date in real time through the ridiculous pervasive sensor monitoring of the future.
Eventually everyone realizes that they are already sims created by the AI.
It sucks to be an original—because there is no heaven if you die. It is awesome to be a sim, because we get a guaranteed afterlife.