You know, I do have this nagging doubt: why am I me, and not someone else? I do see a problem with subjective experience. On the one hand, it doesn’t make intuitive sense in a universe that runs on math, but on the other, what could there be beyond the causal stuff? I sense something fishy.
I too view reductionistic materialism as mainly an empirical claim. What I do view as necessary is the mere existence of something. I think, therefore “something” is. Maybe that “something” is limited to my personal experience, but whatever it is, it works somehow, and what I think won’t change it (unless magical thinking works, but then that is how the world runs).
I am not confident mind uploading would work. But I have empirical reasons to believe it may. First, we have cut&paste transportation. I'm confident it works because current physics says so. The universe doesn't care whether I landed on Vulcan by shuttle or by energy beam; it's the same configuration. Current laws of physics could be mistaken (they're not even complete, so they are mistaken somewhere), but this "no identity" stuff looks like something that won't go away.
Second, I imagined this thought experiment: suspend you, restart you in a green room, suspend you again, then restart you in the laboratory. Result: you have the memory of having been in a green room. The other possibility is: suspend you, scan your brain, run the emulation in a simulated green room identical to the real one, pause the em, rewire your brain to match the em's end state, restart you in the laboratory. Result: you have the memory of having been in a green room. It's the same configuration in both cases, so no memory is less real than the other. Conclusion: you have been in a green room. It doesn't matter whether it happened physically or in uploaded form.
Note that I become much less confident when I think about giving up my physical brain (edit: I mean, my original protoplasm wetware) for good.
If uploading doesn't work, it can still be valuable: if I have goals beyond my own existence, a ghost may be better at achieving them than nothing at all. It also prevents total oblivion.
Maybe my basic point is that there is more to the “stuff” than just “being causal”. This is why I talk about abstracted causal models as ontologically deficient. Describing yourself or the world as a state machine just says that reality is a merry-go-round of “states” which follow each other according to a certain pattern. It says nothing about the nature of those states, except that they follow the pattern. This is why functionalist theories of mind lead to patternist theories of identity.
But it’s clear that what we can see of reality is made of more than just causality. Causal relations are very important constitutive relations, but then we can ask about the relata themselves, the things connected by causality, and we can also look for connecting relations that aren’t causal relations. Being shaped like a square isn’t a causal relation. It’s a fact that can play a causal role, but it is not itself made of causality.
These are ontological questions, and the fact that we can ask them and even come up with the tentative ontologies that we do, itself must have ontological implications, and then you can attempt an ontological analysis of these implication relations… If you could go down that path, using beyond-Einsteinian intellectual superpowers, you should figure out the true ontology, or as much of it as is accessible to our sort of minds. I consider Husserl to be the person who got the furthest here.
One then wants to correlate this ontology derived from a phenomenological-epistemological circle of reflection, with the world-models produced in physics and biology, but since the latter models just reduce to state-machine models, they cannot in themselves move you beyond ontological hollowness. Eventually you must use an ontology derived from the analysis of conscious experience itself, to interpret the formal ontology employed by natural science. This doesn’t have to imply panpsychism; you may be able to say that some objects really are “things without an inside”, and other objects do “have a subjectivity”, and be able to specify exactly what it is that makes a difference.
This is a little removed from the indexical problem of
why am I me, and not someone else?
That’s a question which probably has no answer, beyond enumerating the causes of what you are. The deep reasons are reserved for why there is something rather than nothing, and why it is the sort of universe it is. But in a universe with many minds, you were always going to be one among many.
If you were to find that the nature of your personal existence looked rather improbable, that would revive the question a little. For example, if we thought electrons were conscious, then being a conscious being at the Avogadro’s-number-of-electrons level of organization, rather than at the single-electron level of organization, might look suspiciously improbable, given the much larger numbers of electrons in the universe. But then the question would be “why am I human, and not just an electron?” which isn’t quite what you asked.
I think, therefore “something” is
I agree with this part.
The universe doesn't care whether I landed on Vulcan by shuttle or by energy beam; it's the same configuration.
I think what you’re saying is that in the present, there’s no difference between your current configuration having resulted from a life lived for 20+ years, and your current configuration having materialized five seconds ago. Well, if by hypothesis the configuration is exactly the same in the two scenarios under consideration, then the configuration is exactly the same. That much is true tautologically or by assumption.
Now if we ask whether it’s “still you” in both cases—one where you live out your life with physical continuity, and one in which you are briefly eradicated and then replaced by a physical duplicate—you do have some freedom of self-definition, so the answer may depend a little on the definition. (For now I will not consider the Yudkowskian possibility that there is a unique correct definition of personal identity to be found by superintelligent extrapolation of human cognitive dispositions, analogous to the CEV theory of how to arrive at a human-correct morality.)
But there are obvious and not-so-obvious problems with just saying “the configuration’s the same, therefore there’s no difference”. An obvious problem: suppose we make more than one copy of you—are they both “you”? Less obvious: what if the history of how the configuration was created does matter, in deciding whether you are the same person as before?
Does “having the memory of being in a green room” really imply “you have been in a green room”? We don’t normally trust memory that absolutely, and here we are talking about “memories” that were copied into the brain from a blueprint, rather than being caused in the usual fashion, by endogenous processing of sensory input. It is reasonable to imagine that you could be that person, whose brain was rewired in that way, and that after reflecting for long enough on the situation and on how the process worked, you concluded that it wasn’t you who was in that room, or even that nobody was in that room.
I’m not even convinced that the unlimited capacity to recreate a whole conscious mind “in midstream”, implied by so many thought-experiments, is necessarily possible. There are dynamical systems where you just can’t get to places deep in the state-space without crossing intermediate territory. If all that matters for identity is having the right ensemble of mesoscopic computational states (i.e. described at a level of coarseness, relative to the exact microphysical description, which would reduce a whole neuron to just a few bits), then it should be possible to create a person in mid-stream. But if the substrate of consciousness is a single quantum Hilbert space, for some coherent physical subsystem of the brain, then it’s much less obvious that you can do that. You might be able to bang together a classical simulation of what goes on in that Hilbert space, in mid-stream, but that’s the whole point of my version of quantum-mind thinking—that substrates matter, and just implementing a state machine doesn’t guarantee consciousness, let alone persistence of identity.
But it’s clear that what we can see of reality is made of more than just causality.
Not to me. For instance, while consciousness is still mysterious to me, it sure has causal power, if only the power to make me think of it, and the causal power to make Chalmers write papers about it.
I think what you’re saying is that in the present, there’s no difference between your current configuration having resulted from a life lived for 20+ years, and your current configuration having materialized five seconds ago. Well, if by hypothesis the configuration is exactly the same in the two scenarios under consideration, then the configuration is exactly the same. That much is true tautologically or by assumption.
I think I mean something stronger than that. You may want to re-read the relevant part of the Quantum Physics sequence. The universe actually doesn't even encode the notion of different particles, so talking about putting this carbon atom there and that carbon atom here doesn't even make sense. When you swap two atoms, you're back to square one in a stronger sense than when you swap two numbered (but otherwise indistinguishable) billiard balls. Configuration space is folded on itself, so it really is the same configuration, not a different one that happens to be indistinguishable from the inside.
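That "folding" can be caricatured in code. In this deliberately crude model (not real physics), a configuration is represented as a multiset of occupied positions rather than a labeled tuple, so swapping two identical atoms yields the very same object, unlike swapping two numbered billiard balls:

```python
from collections import Counter

# Labeled billiard balls: swapping produces a distinct (if
# indistinguishable-looking) mathematical object.
labeled_a = ("ball1@here", "ball2@there")
labeled_b = ("ball1@there", "ball2@here")
assert labeled_a != labeled_b  # two different states

# Quantum-style configuration: a multiset of occupied positions.
# There is no fact of the matter about *which* atom is where,
# so a "swap" changes nothing at all.
config_a = Counter(["here", "there"])
config_b = Counter(["there", "here"])
assert config_a == config_b  # one and the same configuration
```

The multiset equality is the caricature of configuration space being "folded": the swapped description was never a different point in the space to begin with.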
substrates matter, and just implementing a state machine doesn’t guarantee consciousness, let alone persistence of identity.
Err… Let my brain be replaced by a silicon chip. Let's leave aside the question of personal identity. Is that thing conscious? It will behave the same as me, and write about consciousness the same way I do. If you believe that, and believe it still isn't conscious, I guess you believe in PZombies. I don't. Maybe changing my substrate would kill me, but I strongly believe the result is still conscious, and human in the dimensions I care about.
For instance, while consciousness is still mysterious to me, it sure has causal power
I agree that consciousness has causal power. I’m saying consciousness is not just causal power. It’s “something” that has causal power. The ontological deficiencies of materialist and computational theories of consciousness all lie in what they say about the nature of this “something”. They say it’s a collection of atoms and/or a computational state machine. The “collection of atoms” theory explains neither the brute features of consciousness like color, nor the subtle features like its “unity”. The state machine theory has the same problems and also requires that you reify a particular abstracted description of the physical reality. In both cases, if one were to insist that that really is the ontological basis of everything, property dualism would be necessary, just to accommodate phenomenological (experiential) reality. But since we now have a physics based on Hilbert spaces and exotic algebras, rather than on particles arranged in space, I would hope to find a physical ontology that can explain consciousness without property dualism, and in which the physical description of the brain contained “entities” which really could be identified with the “entities” constituting conscious experience, and not just correlated with them.
The universe actually doesn’t even encode the notion of different particles, so that talking about putting this carbon atom there and that carbon atom here doesn’t even make sense.
The basis for that statement is that when you calculate the transition probability from “particle at x0, particle at y0” to “particle at x1, particle at y1”, you sum over histories where x0 goes to x1 and y0 goes to y1, as well as over histories where x0 goes to y1 and y0 goes to x1. But note that in any individual history, there is persistence of identity.
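The sum over both pairings can be sketched numerically. In this toy calculation the single-particle amplitudes are invented purely for illustration; the point is only that the two-particle amplitude includes the exchanged history, added for bosons and subtracted for fermions:

```python
# Toy single-particle amplitudes a -> b (made-up complex numbers,
# chosen only for illustration).
A = {
    ("x0", "x1"): 0.6 + 0.2j,
    ("x0", "y1"): 0.1 - 0.3j,
    ("y0", "x1"): 0.2 + 0.1j,
    ("y0", "y1"): 0.5 - 0.1j,
}

def two_particle_amplitude(sign):
    """Total amplitude for identical particles: sign = +1 (bosons), -1 (fermions)."""
    direct    = A[("x0", "x1")] * A[("y0", "y1")]  # x0 -> x1, y0 -> y1
    exchanged = A[("x0", "y1")] * A[("y0", "x1")]  # x0 -> y1, y0 -> x1
    return direct + sign * exchanged

# The two statistics interfere differently, so the probabilities differ.
boson_prob   = abs(two_particle_amplitude(+1)) ** 2
fermion_prob = abs(two_particle_amplitude(-1)) ** 2
print(boson_prob, fermion_prob)
```

There is no extra sum over "which particle ended up where": the exchanged pairing is not a second outcome but a second contribution to the amplitude of the single outcome.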
I suppose the real logic here is something like “I am a particular configuration, and contributions to my amplitude came from histories in which my constituent particles had different origins.” So you ground your identity in the present moment, and deny that you even had a unique previous state.
Pardon me for being skeptical about that claim—that my present moment is either to be regarded as existing timelessly and not actually as one stage in a connected flow of time, or alternatively that it is to be regarded as a confluence of multiple intersecting histories that immediately then diverges into multiple futures rather than a unique one.
The ontological implications of quantum mechanics are far from self-evident. If I truly felt driven to believe in the many worlds interpretation, I would definitely want to start with an ontology of many histories that are self-contained but which are interacting neighbors. In a reality like that, there’s no splitting and joining, there are just inter-world “forces”. For some reason, no-one has even really tried to develop such a model, despite the conservation of probability density flow which allows a formalism like Bohmian mechanics to work.
Returning to the question of identity for particles, another option, which is more in line with my own ideas, is to think of the ontological state as a tensor product of antisymmetrized n-particle states where the size of n is variable both between the tensor factors and during the history of an individual factor. The ontology here is one in which the world isn’t really made of “particles” at all, it’s made of “entities” with a varying number of degrees of freedom, and a “particle” is just an entity with the minimum number of degrees of freedom. The fungibility of “particles” here would only apply to degrees of freedom within a single entity; the appearance of fungibility between different entities would have a dynamical origin. I have no idea whether you can do that in a plausible, uncontrived way; it’s yet another possibility that hasn’t been explored. And there are still more possibilities.
If you believe that, and believe it still isn’t conscious, I guess you believe in PZombies.
Yes, definitely. Especially if we’re going to talk about imperfect simulations, as has been discussed on one or two recent threads. A spambot, or a smiley face on a stick, is a type of “simulated human being”. We definitely agree that there’s no-one home in either of those situations, right? The intuition that an upload would be conscious arises from a chain of beliefs: that a human brain is conscious; that a human brain consists of numerous discrete processors in decentralized communication with each other, so that being conscious must somehow arise from being a particular sort of computational network; and that, although we don’t know the precise condition, the universality of computation implies that some sufficiently accurate simulation would reproduce that network of computation in a new medium, in a way that meets the unknown criterion of consciousness, and therefore conscious uploads must be possible.
I have argued in a recent comment that functionalism, and also ordinary atomistic materialism, implies property dualism. The constituent properties of consciousness, especially the basic sensory properties, do not exist in standard physical ontology, which historically was constructed explicitly to exclude those sensory properties. So if you want to extend physical ontology to account for consciousness as well, you have to add some new ingredients. Personally I hope for a new physical ontology which doesn’t have to be dualistic, and I even just mentioned a possible mathematical ingredient, namely a division of the world into “multi-particle” tensor factors rather than into single particles. If a single whole conscious experience could be identified with a single tensor factor, that would at least begin to explain the unity of consciousness; you would have elementary degrees of freedom canonically and objectively clustered together into complex unities, whereas in the current ontology, you just have mobs of particles whose edges are a bit fuzzy and arbitrary, something which provides a poor ontological foundation for a theory of objectively existing persons.
Returning to the issue of zombies, suppose for the purposes of argument that people really are sharply defined tensor factors of the wavefunction of the universe, and that conscious states, in our current formalism, would correspond to some of these antisymmetrized n-fermion wavefunctions that I’ve mentioned. The point is that, in this scenario, consciousness is always a property of a single tensor factor, but that you could simulate one of those very-high-dimensional tensor factors by using a large number of low-dimensional tensor factors. This implies that you could simulate consciousness without the simulation being conscious.
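None of this settles the speculative physics, but the formal distinction being drawn here, a single high-dimensional tensor factor versus a product of low-dimensional ones, is standard quantum information. As a toy numpy illustration, the Schmidt rank of a two-qubit state tells you whether it factorizes into two separate one-qubit pieces:

```python
import numpy as np

def schmidt_rank(state, dims=(2, 2), tol=1e-12):
    """Number of nonzero Schmidt coefficients of a bipartite pure state."""
    singular_values = np.linalg.svd(state.reshape(dims), compute_uv=False)
    return int(np.sum(singular_values > tol))

# Product state |0> (x) |+> : factorizes into two one-qubit factors.
product = np.kron(np.array([1.0, 0.0]), np.array([1.0, 1.0]) / np.sqrt(2))

# Bell state (|00> + |11>)/sqrt(2): no such factorization exists;
# it can only be described as one four-dimensional entity.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

print(schmidt_rank(product))  # 1: decomposes into separate factors
print(schmidt_rank(bell))     # 2: an irreducibly joint state
```

On the zombie scenario above, the simulation would live in states like `product` (many small factors), while the conscious system would be a single irreducible factor like `bell`, which is what allows the two to differ even when the coarse-grained behaviour matches.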
I don’t at all insist that this is how things work. The business with the tensor factors would be one of my better ideas, but it’s just a beginning—it’s a long conceptual trek from an n-fermion wavefunction to an intricate state of consciousness such as we experience—and the way things actually work may be very, very different. What I do insist is that none of the orthodox materialist theories of mind work. An explicit property dualism, such as David Chalmers has proposed, at least has room in its ontology for consciousness, but it seems contrived to me. So I think the answer is something that we haven’t thought of yet, involving quantum biology, new physical ontology, and revived respect for the ontology of mind.
Your writing is difficult for me to read. I'm tired right now, so I plan to answer properly later, in a few days. Hopefully my brain will process it better then.
I assume by “physical brain” here you mean one made of protoplasm. What does contemplating the possibility that you aren’t running on such a brain now do to your confidence?
If I knew that I am currently running on a silicon chip (Gunm-style), then I would be highly confident that replacing that chip by another, identical one, preserves my identity, because it’s the same configuration. Moreover, replacing my old chip by a newer one, before the physical deterioration significantly affects the actual software processing, probably would work as well.
But if we're talking about running my software on a different chip through, say, a virtual machine that emulates my original chip, then I would be less confident that it would still be me. Only about as confident as I am that an em of my current wetware would still be me. Which is, currently, not confident enough to make the leap.
Ah, and if I did learn that I run on a chip, I wouldn't go crazy. I might be worried if I knew my wetware self were still running around, and I might not tell my mother, but besides that I don't really care. If I knew that my wetware self was "dead", then I would wonder whether I should feel sorry for him, or whether I'm actually him. Because I value my life, I know that my wetware self did too. But I'd probably get over it with the knowledge that the rest of the world (including my family) didn't lose anything (or at least they wouldn't suspect a thing).
Presumably the reason you have such confidence about the interchangeability of identical chips is because your experience encompasses lots of examples of such chips behaving interchangeably to support a given application. More generally, you’ve learned the lesson through experience that while two instances of the same product coming off similar assembly lines may not be 100% identical, they are reliably close enough along the dimensions we care about to be interchangeable.
And, lacking such experience about hardware/wetware interchangeability, you are properly less certain about the corresponding conclusion.
Presumably, if that sort of experience became commonplace, your confidence would increase.
As I often say: you are not your meat. You are the unique pattern of information-flow that occurs within your meat. The meat is not necessary to the information, but the information does require a substrate.
Consider the following set of statements:

1) "I am my meat."
2) "I am the unique pattern of information-flow that occurs within my meat."
3) "I am the class of patterns of information-flow that can occur within meat, of which this unique pattern is one example."
4) "I am the class of patterns of information-flow that can occur within any substrate, of which this unique pattern is one example."
5) "I am all the matter and energy in the universe."
What sorts of experiences would constitute evidence for one of them over the others?
The class of patterns of information-flow that can occur within meat includes the pattern of information-flow that occurs within your meat. 3 therefore asserts that I am you, in addition to being me. 2 does not assert this. They seem like different claims to me, insofar as any of these claims are different from the others.
I’m not really sure what non-local phenomena are, or what they have to do with psychic powers, or what they have to do with the proper referent for “I”.
Good point. This is precisely the source of my doubt, and the reason why I’m not sure that changing substrate preserves identity.
The thing is, quantum mechanics makes me confident that if I go from configuration X to configuration Y, through a path that preserves identity, then any path from X to Y preserves my identity. But I am less confident about intermediate states (like the temporary emulation in the simulated green room).
I'm not sure that's a meaningful question. I undoubtedly change from year to year, so… But there is some kind of continuity, which I'm afraid could be broken by a change of substrate. (But then again, we could change my substrate bit by bit…)
If it weren’t, I would not care, because it wouldn’t break anything I value. If preservation of identity doesn’t even happen currently in our mundane world, I would be stupid to value it. And I’ll happily upload, then (modulo the mundane risk of being badly emulated of course).
But first, I must be convinced that either identity wasn’t preserved in the first place, or that uploading preserves identity, or that I was just confused because the world actually works like… who knows.
A change of substrate occurs daily for you. It's just a change to a substrate of a similar class. What beyond simple "yuck factor" gives you cause to believe that a transition from cells to silicon would impact your identity? That it would look different?
No, it doesn't. You could argue that there's a renewal of atoms (most notably in water), but swapping water molecules doesn't have physical meaning, so… no. Heck, even cut&paste transportation doesn't change substrate.
The "yuck factor" I feel causes me to doubt this: if an em of me were created during my sleep, what probability would I assign to waking up as silicon, versus as wetware? I'm totally not sure I can say 1/2.
Actually it's more complicated than that. Not just water; over time your genetic pattern changes: the ratio of cancerous to non-cancerous cells, the ratio of senescent to non-senescent cells, and the physical structures of the brain itself all change.
Neurogenesis does occur in adults—so not even on a cellular level is your brain the same today as it was yesterday.
Furthermore: what makes you confident you are not already in a Matrix? I have no such belief, myself. It's too implausible to believe we are in the parent of all universes, given that physics simulations work.
Yes, they do. And that’s the end of this dialogue.
(EDIT: By end of this dialogue I meant that he and I were at an impasse and unable to adjust our underlying assumptions to a coherent agreement in this discussion. They are too fundamentally divergent for “Aumanning.”)
You know, I do have this nagging doubt: why am I me, and not someone else? I do see a problem with subjective experience. On the one hand, it doesn’t make intuitive sense in a universe that runs on math, but on the other, what could there be beyond the causal stuff? I sense something fishy.
I too view reductionistic materialism as mainly an empirical claim. What I do view as necessary is the mere existence of something. I think, therefore “something” is. Maybe that “something” is limited to my personal experience, but whatever it is, it works somehow, and what I think won’t change it (unless magical thinking works, but then that is how the world runs).
I am not confident mind uploading should work. But I have empirical reasons to believe it may. First, we have cut&paste transportation. I’m confident it works because current physics says so. The universe doesn’t care whether I landed in vulcan by shuttle or by energy beam, it’s the same configuration. Current laws of physics could be mistaken (they’re not even complete, so they are mistaken somewhere), but this “no identity” stuff looks like something that won’t go away.
Second, I imagined this thought experiment: suspend you, restart you in a green room, suspend you again, then restart you in the laboratory. Result: you have the memory of having been in a green room. The other possibility is, suspend you, scan your brain, run the emulation in a simulated green room identical to the real one, pause the em, rewire your brain to match the em end state, restart you in the laboratory. Result: you have the memory of having been in a green room. It’s the same configuration in both case, so no memory is less real than the other. Conclusion: you have been in a green room. It doesn’t matter if it was physically or in uploaded form.
Note that I become much less confident when I think about leaving up my physical brain (edit: I mean, my original protoplasm wetware) for good.
If uploading doesn’t work, it still can be valuable: If I have goals beyond my own existence, a ghost may be better at achieving them rather than nothing at all. It also prevents total oblivion.
Maybe my basic point is that there is more to the “stuff” than just “being causal”. This is why I talk about abstracted causal models as ontologically deficient. Describing yourself or the world as a state machine just says that reality is a merry-go-round of “states” which follow each other according to a certain pattern. It says nothing about the nature of those states, except that they follow the pattern. This is why functionalist theories of mind lead to patternist theories of identity.
But it’s clear that what we can see of reality is made of more than just causality. Causal relations are very important constitutive relations, but then we can ask about the relata themselves, the things connected by causality, and we can also look for connecting relations that aren’t causal relations. Being shaped like a square isn’t a causal relation. It’s a fact that can play a causal role, but it is not itself made of causality.
These are ontological questions, and the fact that we can ask them and even come up with the tentative ontologies that we do, itself must have ontological implications, and then you can attempt an ontological analysis of these implication relations… If you could go down that path, using beyond-Einsteinian intellectual superpowers, you should figure out the true ontology, or as much of it as is accessible to our sort of minds. I consider Husserl to be the person who got the furthest here.
One then wants to correlate this ontology derived from a phenomenological-epistemological circle of reflection, with the world-models produced in physics and biology, but since the latter models just reduce to state-machine models, they cannot in themselves move you beyond ontological hollowness. Eventually you must use an ontology derived from the analysis of conscious experience itself, to interpret the formal ontology employed by natural science. This doesn’t have to imply panpsychism; you may be able to say that some objects really are “things without an inside”, and other objects do “have a subjectivity”, and be able to specify exactly what it is that makes a difference.
This is a little removed from the indexical problem of
That’s a question which probably has no answer, beyond enumerating the causes of what you are. The deep reasons are reserved for why there is something rather than nothing, and why it is the sort of universe it is. But in a universe with many minds, you were always going to be one among many.
If you were to find that the nature of your personal existence looked rather improbable, that would revive the question a little. For example, if we thought electrons were conscious, then being a conscious being at the Avogadro’s-number-of-electrons level of organization, rather than at the single-electron level of organization, might look suspiciously improbable, given the much larger numbers of electrons in the universe. But then the question would be “why am I human, and not just an electron?” which isn’t quite what you asked.
I agree with this part.
I think what you’re saying is that in the present, there’s no difference between your current configuration having resulted from a life lived for 20+ years, and your current configuration having materialized five seconds ago. Well, if by hypothesis the configuration is exactly the same in the two scenarios under consideration, then the configuration is exactly the same. That much is true tautologically or by assumption.
Now if we ask whether it’s “still you” in both cases—one where you live out your life with physical continuity, and one in which you are briefly eradicated and then replaced by a physical duplicate—you do have some freedom of self-definition, so the answer may depend a little on the definition. (For now I will not consider the Yudkowskian possibility that there is a unique correct definition of personal identity to be found by superintelligent extrapolation of human cognitive dispositions, analogous to the CEV theory of how to arrive at a human-correct morality.)
But there are obvious and not-so-obvious problems with just saying “the configuration’s the same, therefore there’s no difference”. An obvious problem: suppose we make more than one copy of you—are they both “you”? Less obvious: what if the history of how the configuration was created does matter, in deciding whether you are the same person as before?
Does “having the memory of being in a green room” really imply “you have been in a green room”? We don’t normally trust memory that absolutely, and here we are talking about “memories” that were copied into the brain from a blueprint, rather than being caused in the usual fashion, by endogenous processing of sensory input. It is reasonable to imagine that you could be that person, whose brain was rewired in that way, and that after reflecting for long enough on the situation and on how the process worked, you concluded that it wasn’t you who was in that room, or even that nobody was in that room.
I’m not even convinced that the unlimited capacity to recreate a whole conscious mind “in midstream”, implied by so many thought-experiments, is necessarily possible. There are dynamical systems where you just can’t get to places deep in the state-space without crossing intermediate territory. If all that matters for identity is having the right ensemble of mesoscopic computational states (i.e. described at a level of coarseness, relative to the exact microphysical description, which would reduce a whole neuron to just a few bits), then it should be possible to create a person in mid-stream. But if the substrate of consciousness is a single quantum Hilbert space, for some coherent physical subsystem of the brain, then it’s much less obvious that you can do that. You might be able to bang together a classical simulation of what goes on in that Hilbert space, in mid-stream, but that’s the whole point of my version of quantum-mind thinking—that substrates matter, and just implementing a state machine doesn’t guarantee consciousness, let alone persistence of identity.
Not to me. For instance, while consciousness is still mysterious to me, it sure has causal power, if only the power to make me think about it, and the causal power to make Chalmers write papers about it.
I think I mean something stronger than that. You may want to re-read that part of the Quantum Physics sequence. The universe doesn’t even encode the notion of distinct particles, so talking about putting this carbon atom there and that carbon atom here doesn’t even make sense. When you swap two atoms, you’re back to square one in a stronger sense than when you swap two numbered (but otherwise indistinguishable) billiard balls. Configuration space is folded onto itself, so it really is the same configuration, not a different one that happens to be indistinguishable from the inside.
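For readers who want the textbook statement behind this (my addition, not part of the original comment): exchanging the labels of two identical particles returns the same physical state, up to a sign,

```latex
% "+" for bosons, "-" for fermions: no observable can distinguish
% "this atom here, that atom there" from the swapped labeling.
\psi(x_2, x_1) = \pm\,\psi(x_1, x_2)
\quad\Longrightarrow\quad
|\psi(x_2, x_1)|^2 = |\psi(x_1, x_2)|^2
```

This is why the billiard-ball analogy fails: numbered balls have distinct states that merely look alike, whereas the swapped atoms occupy literally the same point of configuration space.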
Err… Let my brain be replaced by a silicon chip, and let’s leave aside the question of personal identity. Is that thing conscious? It will behave the same as me, and write about consciousness the same way I do. If you believe that, and still believe it isn’t conscious, I guess you believe in p-zombies. I don’t. Maybe changing my substrate would kill me, but I strongly believe the result is still conscious, and human in the dimensions I care about.
I agree that consciousness has causal power. I’m saying consciousness is not just causal power. It’s “something” that has causal power. The ontological deficiencies of materialist and computational theories of consciousness all lie in what they say about the nature of this “something”. They say it’s a collection of atoms and/or a computational state machine. The “collection of atoms” theory explains neither the brute features of consciousness like color, nor the subtle features like its “unity”. The state machine theory has the same problems and also requires that you reify a particular abstracted description of the physical reality. In both cases, if one were to insist that that really is the ontological basis of everything, property dualism would be necessary, just to accommodate phenomenological (experiential) reality. But since we now have a physics based on Hilbert spaces and exotic algebras, rather than on particles arranged in space, I would hope to find a physical ontology that can explain consciousness without property dualism, and in which the physical description of the brain contained “entities” which really could be identified with the “entities” constituting conscious experience, and not just correlated with them.
The basis for that statement is that when you calculate the transition probability from “particle at x0, particle at y0” to “particle at x1, particle at y1”, you sum over histories where x0 goes to x1 and y0 goes to y1, as well as over histories where x0 goes to y1 and y0 goes to x1. But note that in any individual history, there is persistence of identity.
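To make the summation explicit (a standard identical-particle formula, written here for illustration, not quoted from the comment): the total amplitude is the sum of a direct term and an exchange term,

```latex
% "+" for bosons, "-" for fermions.
A\big((x_0, y_0) \to (x_1, y_1)\big)
  = A(x_0 \to x_1)\,A(y_0 \to y_1)
  \;\pm\; A(x_0 \to y_1)\,A(y_0 \to x_1)
```

Each summand is a history in which particle identity is preserved; only the total amplitude forgets which particle ended up where.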
I suppose the real logic here is something like “I am a particular configuration, and contributions to my amplitude came from histories in which my constituent particles had different origins.” So you ground your identity in the present moment, and deny that you even had a unique previous state.
Pardon me for being skeptical about that claim—that my present moment is either to be regarded as existing timelessly and not actually as one stage in a connected flow of time, or alternatively that it is to be regarded as a confluence of multiple intersecting histories that immediately then diverges into multiple futures rather than a unique one.
The ontological implications of quantum mechanics are far from self-evident. If I truly felt driven to believe in the many worlds interpretation, I would definitely want to start with an ontology of many histories that are self-contained but which are interacting neighbors. In a reality like that, there’s no splitting and joining, there are just inter-world “forces”. For some reason, no-one has even really tried to develop such a model, despite the conservation of probability density flow which allows a formalism like Bohmian mechanics to work.
Returning to the question of identity for particles, another option, which is more in line with my own ideas, is to think of the ontological state as a tensor product of antisymmetrized n-particle states where the size of n is variable both between the tensor factors and during the history of an individual factor. The ontology here is one in which the world isn’t really made of “particles” at all, it’s made of “entities” with a varying number of degrees of freedom, and a “particle” is just an entity with the minimum number of degrees of freedom. The fungibility of “particles” here would only apply to degrees of freedom within a single entity; the appearance of fungibility between different entities would have a dynamical origin. I have no idea whether you can do that in a plausible, uncontrived way; it’s yet another possibility that hasn’t been explored. And there are still more possibilities.
Yes, definitely. Especially if we’re going to talk about imperfect simulations, as has been discussed on one or two recent threads. A spambot, or a smiley face on a stick, is a type of “simulated human being”. We definitely agree, there’s no-one home in either of those situations, right? The intuition that an upload would be conscious arises from the belief that a human brain is conscious, a human brain consists of numerous discrete processors in decentralized communication with each other, and so to be conscious must somehow arise from being a particular sort of computational network. But although we don’t know the precise condition, the universality of computation implies that some sufficiently accurate simulation would be capable of reproducing that network of computation in a new medium, in a way that meets the unknown criterion of consciousness, and so therefore conscious uploads must be possible.
I have argued in a recent comment that functionalism, and also ordinary atomistic materialism, implies property dualism. The constituent properties of consciousness, especially the basic sensory properties, do not exist in standard physical ontology, which historically was constructed explicitly to exclude those sensory properties. So if you want to extend physical ontology to account for consciousness as well, you have to add some new ingredients. Personally I hope for a new physical ontology which doesn’t have to be dualistic, and I even just mentioned a possible mathematical ingredient, namely a division of the world into “multi-particle” tensor factors rather than into single particles. If a single whole conscious experience could be identified with a single tensor factor, that would at least begin to explain the unity of consciousness; you would have elementary degrees of freedom canonically and objectively clustered together into complex unities, whereas in the current ontology, you just have mobs of particles whose edges are a bit fuzzy and arbitrary, something which provides a poor ontological foundation for a theory of objectively existing persons.
Returning to the issue of zombies, suppose for the purposes of argument that people really are sharply defined tensor factors of the wavefunction of the universe, and that conscious states, in our current formalism, would correspond to some of these antisymmetrized n-fermion wavefunctions that I’ve mentioned. The point is that, in this scenario, consciousness is always a property of a single tensor factor, but that you could simulate one of those very-high-dimensional tensor factors by using a large number of low-dimensional tensor factors. This implies that you could simulate consciousness without the simulation being conscious.
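The “big factor vs. many small factors” point can be sketched formally (this formalization is mine, offered as one reading of the scenario, not the commenter’s own notation):

```latex
% A single high-dimensional factor vs. a product of many low-dimensional ones:
\mathcal{H}_{\text{mind}} \cong \mathbb{C}^{2^n}
\qquad\text{vs.}\qquad
\mathcal{H}_{\text{sim}} = \underbrace{\mathbb{C}^{2} \otimes \cdots \otimes \mathbb{C}^{2}}_{n\ \text{factors}}
```

The two spaces are isomorphic, so every state and every unitary evolution of the single factor can be encoded across the many small ones; but the simulation distributes the state over n separate tensor factors, while the original is, by hypothesis, a single factor. If consciousness tracks factor structure rather than computational behavior, the simulation reproduces the dynamics without being one conscious unit.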
I don’t at all insist that this is how things work. The business with the tensor factors would be one of my better ideas, but it’s just a beginning—it’s a long conceptual trek from an n-fermion wavefunction to an intricate state of consciousness such as we experience—and the way things actually work may be very very different. What I do insist is that none of the orthodox materialist theories of mind work. An explicit property dualism, such as David Chalmers has proposed, at least has room in its ontology for consciousness, but it seems contrived to me. So I think the answer is some thing that we haven’t thought of yet, that involves quantum biology, new physical ontology, and revived respect for the ontology of mind.
Your writing is difficult for me to read. I’m tired right now, so I plan to answer properly in a few days. Hopefully my brain will process it better by then.
I assume by “physical brain” here you mean one made of protoplasm.
What does contemplating the possibility that you aren’t running on such a brain now do to your confidence?
Yes, I meant protoplasm.
If I knew that I am currently running on a silicon chip (Gunm-style), then I would be highly confident that replacing that chip by another, identical one, preserves my identity, because it’s the same configuration. Moreover, replacing my old chip by a newer one, before the physical deterioration significantly affects the actual software processing, probably would work as well.
But if we’re talking about running my software on a different chip through, say, a virtual machine that emulates my original chip, then I would be less confident that it would still be me. Only as confident as I am that an EM of my current wetware would still be me. Which is, currently, not confident enough to make the leap.
Ah, and if I do learn that I run on a chip, I won’t go crazy. I may be worried if I knew my wetware self were still running around, and I may not tell my mother, but besides that I don’t really care. If I knew that my wetware self was “dead”, then I would wonder if I should feel sorry for him, or if I’m actually him. Because I value my life, I know that my wetware self did too. But I’d probably get over it with the knowledge that the rest of the world (including my family) didn’t lose anything (or at least they wouldn’t suspect a thing).
I’m confident an EM would not be a p-zombie.
(nods) Makes sense.
Presumably the reason you have such confidence about the interchangeability of identical chips is because your experience encompasses lots of examples of such chips behaving interchangeably to support a given application. More generally, you’ve learned the lesson through experience that while two instances of the same product coming off similar assembly lines may not be 100% identical, they are reliably close enough along the dimensions we care about to be interchangeable.
And, lacking such experience about hardware/wetware interchangeability, you are properly less certain about the corresponding conclusion.
Presumably, if that sort of experience became commonplace, your confidence would increase.
As I often say: you are not your meat. You are the unique pattern of information-flow that occurs within your meat. The meat is not necessary to the information, but the information does require a substrate.
Consider the following set of statements:
1) “I am my meat.”
2) “I am the unique pattern of information-flow that occurs within my meat.”
3) “I am the class of patterns of information-flow that can occur within meat, of which this unique pattern is one example.”
4) “I am the class of patterns of information-flow that can occur within any substrate, of which this unique pattern is one example.”
5) “I am all the matter and energy in the universe.”
What sorts of experiences would constitute evidence for one of them over the others?
1 v 2 -- is your “meat” persistent over time? (It is not).
2 v 3 are not differentiable -- 2 is 3.
4 is implied by 2/3. It is affirmed by physics simulations that have atomic-level precision, and by research like the Blue Brain project.
5 is excluded by the absence of non-local phenomena (‘psychic powers’).
I agree that my meat does not persist over time.
The class of patterns of information-flow that can occur within meat includes the pattern of information-flow that occurs within your meat. 3 therefore asserts that I am you, in addition to being me. 2 does not assert this. They seem like different claims to me, insofar as any of these claims are different from the others.
I’m not really sure what non-local phenomena are, or what they have to do with psychic powers, or what they have to do with the proper referent for “I”.
Missed that about the class. Makes a difference, definitely.
Two options: trust the assertions of those who are sure, or learn of them for yourself. :)
Good point. This is precisely the source of my doubt, and the reason why I’m not sure that changing substrate preserves identity.
The thing is, quantum mechanics makes me confident that if I go from configuration X to configuration Y, through a path that preserves identity, then any path from X to Y preserves my identity. But I am less confident about intermediate states (like the temporary emulation in the simulated green room).
Given your understanding of quantum mechanics, is your identity in this sense preserved from year to year today?
If it weren’t, would you care?
I’m not sure that’s a meaningful question. I undoubtedly change from year to year, so… But there is some kind of continuity, which I’m afraid could be broken by a change of substrate. (But then again, we could change my substrate bit by bit…)
If it weren’t, I would not care, because it wouldn’t break anything I value. If preservation of identity doesn’t even happen currently in our mundane world, I would be stupid to value it. And I’ll happily upload, then (modulo the mundane risk of being badly emulated of course).
But first, I must be convinced that either identity wasn’t preserved in the first place, or that uploading preserves identity, or that I was just confused because the world actually works like… who knows.
A change of substrate occurs daily for you; it’s just a change to a similar class of substrate. What beyond simple “yuck factor” gives you cause to believe that a transition from cells to silicon would impact your identity? That it would look different?
No, it doesn’t. You could argue that there’s a renewal of molecules (most notably water), but swapping water molecules doesn’t have physical meaning, so… No. Heck, even cut&paste transportation doesn’t change substrate.
The “yuck factor” I feel causes me to doubt this: if an EM of me were created during my sleep, what probability would I assign to waking up as silicon, or as wetware? I’m totally not sure I can say 1/2.
Actually it’s more complicated than that. Not just water molecules; over time your genetic pattern changes: the ratio of cancerous to non-cancerous cells, the ratio of senescent to non-senescent cells, and the physical structures of the brain itself all change.
Neurogenesis does occur in adults—so not even on a cellular level is your brain the same today as it was yesterday.
Furthermore, what makes you confident you are not already in a Matrix? I have no such confidence myself. It’s too implausible that we are in the parent of all universes, given that physics simulations work.
Note that neither of these developments is generally considered good.
Indeed. But they do demonstrate the principle in question.
The principle you’re trying to demonstrate is that one shouldn’t fear changing one’s substrate since it’s already happening. So no, they don’t.
Yes, they do. And that’s the end of this dialogue.
(EDIT: By end of this dialogue I meant that he and I were at an impasse and unable to adjust our underlying assumptions to a coherent agreement in this discussion. They are too fundamentally divergent for “Aumanning.”)
It would just be an argument over the definition of “I”. Here, tabooing “I” is probably a useful exercise.
OK… what would you replace “I” with, then?