Let me try to see where I am on the ladder by critiquing each. Most of them I start to agree with and then you add a
conclusion that I don’t think follows from the lead-in. I think you’ve got an unstated and likely incorrect assumption
that “me” in the past, “me” as experiencing something in the moment and “me” in the future are all singular and
necessarily linked. I think they’ve been singular in my experience, but that’s not the only way it could be.
If you insist on mapping to QM, my answers match MWI somewhat. Each branch’s past has an amplitude of 1, the future
diverges into all possible states which sum to 1. I’m not actually certain that “possible” is a sensible concept, though, so
I’ll answer these as if we were talking about copies in a classical universe, so they can sum to more than 1 of “me”.
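Spelled out (my notation, not anything from the original ladder), that’s just Born-rule bookkeeping:

$$|\psi\rangle = \sum_i a_i \,|\text{me}_i\rangle, \qquad \sum_i |a_i|^2 = 1,$$

so under MWI anticipation divides across branches, while $N$ classical copies simply count: $1 + 1 + \dots + 1 = N \ge 1$ of “me”.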
1: Without defining the mechanism of copying, I don’t know how it differs from cloning. I think from further questions,
you’re positing some copy of existing brain configuration, potentials, and inputs, which is very different from a clone
(a copy of DNA and some amount of similarity of early environment).
2: The first and last parts of this one are separate claims. Why not “both copies have true memories; there are two distinct entities (as soon as state diverges), both of which have equal claim to being me”?
3: The first half makes sense, but I anticipate waking up in both. Each of the two mes will begin to experience just one of the two lines.
4: First half fine, but you’re weirdly assuming that dying matters in this form. I think there’s no experiential
difference, except only one of me wakes up rather than two. I would not hesitate to undertake this unless the chance
that BOTH would die is greater than the chance that I’d die if I didn’t participate.
5: I don’t think it’s a probability of waking up as one or the other. I think it’s both, and each will truly be me in
their pasts, and different mes in the future. If only one wakes, then it’s more similar to today’s experience as
there’s only one entity who experiences apparent-continuity. If one wakes then dies, that one is me and experiences
death, the other one is me and doesn’t.
6: Incoherent. Likelihood is about exclusive options, and this framing is not exclusive: both happen in the same universe. I predict that branches of me will experience both.
7: Good, then a weird “transfer” concept. A perfect duplicate at any point has the same conscious experience, and diverges when the inputs diverge. It’s not transferred; it’s just in both places/times.
8: I’m uncertain what level of fidelity is required to be “me”. My intuition is that a sufficiently-true simulation is
effectively me, just like a sufficiently-exact copy.
9: Sure. I don’t think it necessarily follows that the ONLY way the superintelligence can “think about what I would do”
is to execute a full-fidelity model, it could very easily use much simpler models (like humans do for each other) for many questions.
> Most of them I start to agree with and then you add a conclusion that I don’t think follows from the lead-in.
This is by design. I tried to make the levels mutually exclusive. The way I did this was by having each level add a significant insight to the truth (as I see it) and then say something wrong (as I see it) to constrain any further insight.
> I think you’ve got an unstated and likely incorrect assumption that “me” in the past, “me” as experiencing something in the moment and “me” in the future are all singular and necessarily linked. I think they’ve been singular in my experience, but that’s not the only way it could be.
> If you insist on mapping to QM, my answers match MWI somewhat. Each branch’s past has an amplitude of 1, the future diverges into all possible states which sum to 1. I’m not actually certain that “possible” is a sensible concept, though, so I’ll answer these as if we were talking about copies in a classical universe, so they can sum to more than 1 of “me”.
My intention is not to ignore QM/MWI or anything, but I did intend to provide levels where someone who doesn’t understand (or even know about) QM would find themselves. The language I used was (hopefully) the language someone at that level would use to describe what they think, so all levels that can’t be true under QM should sound ignorant of any QM insights. That was my intention. Intuition about QM should automatically push you at least to the first level where it sounds like I stopped describing a classical universe.
Further, this is mostly about our anticipation of subjective experiences. I didn’t really mention amplitudes; I just alluded to them by mentioning the squares we’d use to calculate our anticipation. When I say “me,” I mean “some unmangled amplitude of me”.
Unfortunately, I’d have had trouble even if I had tried to use precise language, and I wasn’t really trying to, since this is supposed to be a resource anyone could use to place themselves.
I would address each of your entries, but most of them would probably be rephrasings of what I said above. Each level is supposed to contain something objectionable that pushes you to the next level.
> 6: Incoherent. Likelihood is about exclusive options, and this framing is not exclusive: both happen in the same universe. I predict that branches of me will experience both.
As far as this goes, I was trying to use the intuitive but inaccurate language from here. If you prefer, pretend I said “squared amplitude”. Alternatively, if you have suggestions for better language someone at this level would still intuitively use, I’d be happy to hear it.
> 7: Good, then a weird “transfer” concept. A perfect duplicate at any point has the same conscious experience, and diverges when the inputs diverge. It’s not transferred; it’s just in both places/times.
It’s really hard to describe anticipation of subjective experience in these scenarios. If you have a suggestion of language I can use that is still wrong in a way that precludes the insights from successive levels, and also speaks as someone who is wrong in that way would speak about their expectation, I am very open to suggestions.
> 8: I’m uncertain what level of fidelity is required to be “me”. My intuition is that a sufficiently-true simulation is effectively me, just like a sufficiently-exact copy.
That is the intentional problem with 8, yes.
> 9: Sure. I don’t think it necessarily follows that the ONLY way the superintelligence can “think about what I would do” is to execute a full-fidelity model, it could very easily use much simpler models (like humans do for each other) for many questions.
I agree that a perfect model isn’t the only possible model, but I have a strong hypothesis that “perfect model” and “implementation” are synonymous.
Edit: I see your objection with 9 and I (hopefully) fixed it.
> It’s really hard to describe anticipation of subjective experience in these scenarios.
True! These scenarios don’t actually happen (yet) to humans, and you’re trying to extrapolate from a fairly poorly defined base case (individual experiential continuity). However, I think most of them dissolve if you believe (as I do) that consciousness is a purely physical reductive phenomenon.
Take the analogy of a light bulb, where you duplicate everything including the current (pun intended) state of electrical potential in the wires and element, but then after duplication allow that the future electrical inputs may vary. You can easily answer all of these questions about anticipated light output levels. It’s identical at time of duplication, and diverges afterward.
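If it helps, the analogy is easy to make concrete. A toy sketch (hypothetical code of mine, not anything from the ladder): duplicate the bulb’s full state, and identical inputs give identical outputs, while divergent inputs diverge from that point on.

```python
import copy

class Bulb:
    """Toy bulb: light output is a pure function of its electrical state."""
    def __init__(self, potential: float):
        self.potential = potential  # full state of the wires and element

    def step(self, input_current: float) -> float:
        """Apply an input, update the state, return the light output."""
        self.potential = 0.5 * self.potential + input_current
        return self.potential  # brightness tracks the state exactly

original = Bulb(potential=1.0)
duplicate = copy.deepcopy(original)  # copy everything, including current state

# Same inputs -> indistinguishable outputs: both are "the bulb"
assert original.step(0.3) == duplicate.step(0.3)

# Inputs diverge -> outputs diverge from that point on
print(original.step(0.9), duplicate.step(0.1))  # 1.3 vs 0.5
```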
I think my main point in most of this is:
Every level but the last one is supposed to be wrong.
The point is they’re supposed to be wrong in a specifically crafted way.