No.
scientism
I wouldn’t say that a brain transplant is nothing at all like a heart transplant. I don’t take the brain to have any special properties. However, this is one of those situations where identity can become vague. These things lie on a continuum. The brain is tied up with everything we do, all the ways in which we express our identity, so it’s more closely related to identity than the heart. People with severe brain damage can suffer a loss of identity (e.g., severe memory loss, severe personality change, a permanent vegetative state). You can be rough and ready when replacing the heart in a way you can’t be when replacing the brain.
Let me put it this way: The reason we talk of “brain death” is not because the brain is the seat of our identity but because it’s tied up with our identity in ways other organs are not. If the brain is beyond repair, typically the human being is beyond saving, even if the rest of the body is viable. So I don’t think the brain houses identity. In a sense, it’s just another organ, and, to the degree that that is true, a brain transplant wouldn’t be more problematic (logically) than a heart transplant, provided the dynamics underlying our behaviour could be somehow preserved. This is an extremely borderline case though.
So I’m not saying that you need to preserve your brain in order to preserve your identity. However, in the situation being discussed, nothing survives. It’s a clear case of death (we have a corpse) and then a new being is created from a description. This is quite different from organ replacement! What I’m objecting to is the idea that I am information or can be “transformed” or “converted” into information.
What you’re saying, as far as I can tell, is that you care more about “preserving” a hypothetical future description of yourself (hypothetical because presumably nobody has scanned you yet) than you do about your own life. These are very strange values to have—but I wish you luck!
I don’t see how using more detailed measurements makes it any less a cultural practice. There isn’t a limit you can pass where doing something according to a standard suddenly becomes a physical relationship. Regardless, consider that you could create as many copies to that standard as you wished, so you now have a one-to-many relationship of “identity” according to your scenario. Such a type-token relationship is typical of norm-based standards (such as mediums of representation) because they are norm-based standards (that is, because you can make as many according to the standard as you wish).
That’s essentially correct. Preservation of your brain is preservation of your brain, whereas preservation of a representation of your brain (X) is not preservation of your brain or any aspect of you. The existence of a representation of you (regardless of detail) has no relationship to your survival whatsoever. Some people want to be remembered after they’re dead, so I suppose having a likeness of yourself created could be a way to achieve that (albeit an ethically questionable one if it involved creating a living being).
The brain constructed in your likeness is only normatively related to your brain. That’s the point I’m making. The step where you make a description of the brain is done according to a practice of representation. There is no causal relationship between the initial brain and the created brain. (Or, rather, any causal relationship is massively dispersed through human society and history.) It’s a human being, or perhaps a computer programmed by human beings, in a cultural context with certain practices of representation, that creates the brain according to a set of rules.
This is obvious when you consider how the procedure might be developed. We would have to have a great many trial runs and would decide when we had got it right. That decision would be based on a set of normative criteria, a set of measurements. So it would only be “successful” according to a set of human norms. The procedure would be a cultural practice rather than a physical process. But there is just no such thing as something physical being “converted” or “transformed” into a description (or information or a pattern or representation), because these are all normative concepts, so such a step cannot possibly conserve identity.
As I said, the only way the person in cryonic suspension can continue to live is through a standard process of revival—that is, one that doesn’t involve the step of being described and then having a likeness created—and if such a revival doesn’t occur, the person is dead. This is because the process of being described and then having a likeness created isn’t any sort of revival at all and couldn’t possibly be. It’s a logical impossibility.
In the example being discussed we have a body. I can’t think of a clearer example of death than one where you can point to the corpse or remains. You couldn’t assert that you died 25 minutes ago, since death is the termination of your existence and so logically precludes asserting anything (nothing could count as evidence for you doing anything after death, although your corpse might do things). But if somebody else asserted that you died 25 minutes ago, they could presumably point to your remains or explain what happened to them. If you continued to post on the Internet, that would be evidence that you hadn’t died, although the explanation that someone just like you was continuing to post on the Internet would be consistent with your having died.
I take it that my death and the being’s ab initio creation are both facts. These aren’t theoretical claims. The claim that I am “really” a description of my brain (that I am information, pattern, etc) is as nonsensical as the claim that I am really my own portrait, and so couldn’t amount to a theory. In fact, the situation is analogous to someone taking a photo of my corpse and creating a being based on its likeness. The accuracy of the resulting being’s behaviour, its ability to fool others, and its own confused state doesn’t make any difference to the argument. It’s possible to dream up scenarios where identity breaks down, but surely not ones where we have a clear example of death.
I would also point out that there are people who are quite content with severe mental illness. You might have delusions of being Napoleon and be quite happy about it. Perhaps such a person would argue that “I feel like Napoleon and that’s good enough for me!”
In the animation, the woman commits suicide and the woman created by the teleportation device is quite right that she isn’t responsible for anything the other woman did, despite resembling her.
It would have false memories, etc, and having my false memories, it would presumably know that these are false memories and that it has no right to assume my identity, contact my friends and family, court my spouse, etc, simply because it (falsely) thinks itself to have some connection with me (to have had my past experiences). It might still contact them anyway, given that I imagine its emotional state would be fragile; it would surely be a very difficult situation to be in. A situation that would probably horrify everybody involved.
I suppose, to put myself in that situation, I would, willpower permitting, have the false memories removed (if possible), adopt a different name and perhaps change my appearance (or at least move far away). But I see the situation as unimaginably cruel. You’re creating a being—presumably a thinking, feeling being—and tricking it into thinking it did certain things in the past, etc, that it did not do. Even if it knows that it was created, that still seems like a terrible situation to be in, since it’s essentially a form of (inflicted) mental illness.
I was referring to cryonics scenarios where the brain is being scanned because you cannot be revived and a new entity is being created based on the scan, so I was assuming that your brain is no longer viable rather than that the scan is destructive.
The resulting being, if possible, would be a being that is confused about its identity. It would be a cruel joke played on those who know me and, possibly, on the being itself (depending on the type of being it is). I am not my likeness.
Consider that, if you had this technology, you could presumably create a being that thinks it is a fictional person. You could fool it into thinking all kinds of nonsensical things. Convincing it that it has the same identity as a dead person is just one among many strange tricks you could play on it.
The problem with the computationalist view is that it confuses the representation with what is represented. No account of the structure of the brain is the brain. A detailed map of the neurons isn’t any better than a child’s crude drawing of a brain in this respect. The problem isn’t the level of detail, it’s that it makes no sense to claim a representation is the thing represented. Of course, the source of this confusion is the equally confused idea that the brain itself is a sort of computer and contains representations, information, etc. The confusions form a strange network that leads to a variety of absurd conclusions about representation, information, computation and brains (and even the universe).
Information about a brain might allow you to create something that functions like that brain or might allow you to alter another brain in some way that would make it more like the brain you collected information about (“like” is here relative), but it wouldn’t then be the brain. The only way cryonics could lead to survival is if it led to revival. Any account that involves a step where somebody has to create a description of the structure of your brain and then create a new brain (or simulation or device) from that, is death. The specifics of your biology do not enter into it.
Cyan’s post below demonstrates this confusion perfectly. A book does contain information in the relevant sense because somebody has written it there. The text is a representation. The book contains information only because we have a practice of representing language using letters. None of this applies to brains or could logically apply to brains. But two books can be said to be “the same” only for this reason and it’s a reason that cannot possibly apply to brains.
I’m not quite sure what you’re saying. I don’t think there’s a way to identify whether a goal is meaningless at a more fundamental level of description. Obviously Bob would be prone to say things like “today I did x in pursuit of my goal of time travel” but there’s no way of telling that it’s meaningless at any other level than that of meaning, i.e., with respect to language. Other than that, it seems to me that he’d be doing pretty much the same things, physically speaking, as someone pursuing a meaningful goal. He might even do useful things, like make breakthroughs in theoretical physics, despite being wholly confused about what he’s doing.
You’re right that a meaningless goal cannot be pursued, but nor can you be said to even attempt to pursue it—i.e., the pursuit of a meaningless goal is itself a meaningless activity. Bob can’t put any effort into his goal of time travel, he can only confusedly do things he mistakenly thinks of as “pursuing the goal of time travel”, because pursuing the goal of time travel isn’t a possible activity. What Bob has learned is that he wasn’t pursuing the goal of time travel to begin with. He was altogether wrong about having a terminal value of travelling back in time and riding a dinosaur because there’s no such thing.
I’d be willing to give this a shot, but his thesis, as stated, seems very slippery (I haven’t read the book):
“Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe.”
This needs to be reworded but appears to be straightforwardly true and uncontroversial: morality is connected to well-being and suffering.
“Conscious minds and their states are natural phenomena, fully constrained by the laws of Nature (whatever these turn out to be in the end).”
True and uncontroversial on a loose enough interpretation of “constrained”.
“Therefore, there must be right and wrong answers to questions of morality and values that potentially fall within the purview of science.”
This is the central claim in the thesis—and the most (only?) controversial one—but he’s already qualifying it with “potentially.” I’m guessing any response of his will turn on (a) the fact that he’s only saying it might be the case and (b) arbitrarily broadening the definition of science. Nevertheless, moral questions aren’t (even potentially) empirical, since they’re obviously seeking normative and not factual answers. But given that this is obvious, it’s hard to imagine that one could change his mind. It’s rather like being invited to challenge the thesis of someone who claims scientific theories are works of fiction. You’ve got your work cut out when somebody has found themselves that far off the beaten path. I suspect the argument of the book runs: this philosophical thesis is misguided, that philosophical thesis is misguided, etc., science is good, we can get something that sort of looks like morality from science, so science—i.e., he takes himself to be explaining morality when he’s actually offering a replacement. That’s very hard to argue against. I think, at best, you’re looking at $2000 for saying something he finds interesting and new, but that’s very subjective.
“On this view, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.”
Assuming “what they deem important in life” is supposed to be parsed as “morality” then this appears to follow from his thesis.
I think the told/meant distinction is confused. You’re conflating different uses of “meant.” When somebody misunderstands us, we say “I meant...”, but it doesn’t follow that when they do understand us we didn’t mean what we told them! The “I meant...” is because they didn’t get the meaning the first time. I can’t do what I’m told without knowing what you meant; in fact, doing what I’m told always implies knowing what you meant. If I tried to follow your command, but didn’t know what you meant by your command, I wouldn’t be doing what I was told. Doing what I’m told is a success term. Somebody who says “I was just doing what you told me!” is expressing a misunderstanding or an accusation that we didn’t make ourselves clear (or perhaps is being mischievous or insubordinate).
There is no following commands without knowing the meaning. The only thing we can do in language without knowing what is meant is to misunderstand, but to misunderstand one must first be able to understand, just as to misperceive one must first be able to perceive. There’s no such thing as misunderstanding all the time or misunderstanding everything. The notion of a wish granting genie that always misunderstands you is an entertaining piece of fiction (or comedy), but not a real possibility.
There are only two options here. Either the universe is made of atoms and void and a non-material Cartesian subject who experiences the appearance of something else, or the universe is filled with trees, cars, stars, colours, meaningful expressions and signs, shapes, spatial arrangements, morally good and bad people and actions, smiles, pained expressions, etc, all of which, under the appropriate conditions, are directly perceived without mediation. Naturalism and skeptical reductionism are wholly incompatible: if it were just atoms and void there would be nothing to be fooled into thinking otherwise.
I think it helps to look at statements of personal narrative and whether they’re meaningful and hence whether they can be true or false. So, for example, change is part of our personal narrative; we mature, we grow old, we suffer injuries, undergo illness, etc. Any philosophical conception of personal identity that leads to conclusions that make change problematic should be taken as a reductio ad absurdum of that conception and not a demonstration of the falsity of our common sense concepts (that is, it shows that the philosopher went wrong in attempting to explicate personal identity, not that we are wrong). Statements of personal narrative are inclusive of our conception, birth, events of our life, etc. Most cultures give meaning to post-death statements but it’s a clearly differentiated meaning. But I can’t meaningfully speak of being in two places at once, of being destroyed and recreated, of not existing for periods of time, etc, so a large range of philosophical and science fiction scenarios are ruled out. (Again, if a philosopher’s attempt to explicate personal identity makes these things possible then it is the philosopher who erred, since the common sense concept clearly precludes them; or he/she is now using a novel concept and hence no pertinent inferences follow). If we create a new person and give him the memories of a dead man, we have only played a cruel trick on him, for a statement of personal narrative that includes being destroyed and recreated has no sense (“I didn’t exist between 1992 and 1998” isn’t like “I was unconscious/asleep between 1992 and 1998” because non-existence is not a state one can occupy).
Note that the meaningfulness of novel statements like “I teleported from Earth to Mars” or “I uploaded to Konishi Polis in 2975” depends entirely on unpacking the meaning of the novel terms. Are “teleported” and “uploaded” more like “travelled” or more like “destroyed and recreated”? Is the Konishi Polis computer a place that I can go to? The relevant issue here isn’t personal identity but the nature of the novel term, which determines whether these statements are meaningful. If you start from the assumption that “I teleported from Earth to Mars” has a clear meaning, you are obviously going to come to a conclusion where it has a clear meaning. Whether “teleported” means “travelled” or “destroyed and recreated” does not turn on the nature of personal identity but on the relationship of teleportation to space—i.e., whether it’s a form of movement through space (and hence travel). If it involves “conversion from matter to information” we have to ask what this odd use of “conversion” means and whether it is a species of change or more like making a description of an object and then destroying it. The same is true of uploading. With cryonics the pertinent issue is whether it will involve so much damage that it will require that you are recreated rather than merely recovered.
I think they’re all examples of compliance—i.e., in each example he gets them to go along with something that isn’t true. The creepy clown is the most obvious. He has put her in a confusing situation and then makes her confusion look like agreement. He also appears to be mirroring and then provoking her body language. He manages to get her not to walk away and to say he’s right, but most of the time she appears to be completely baffled. With the pet name, I suspect the main part of the trick is making the man wait an extremely long time and making him sympathise with the woman, so that he’ll agree with whatever she says. In his book he explicitly claims never to use camera tricks; he says it’s always a mix of traditional magic and psychological techniques, with one sometimes posing as the other.
Wittgenstein advanced philosophy to the point where it could have become an applied discipline, having solved many philosophical problems once and for all, but philosophers balked at the idea of an ultimate resolution to philosophical problems.
I think the view that automation is now destroying jobs, the view that the economy always re-allocates the workforce appropriately, and the views defended in this anti-FAQ all rest on a faulty generalisation. The industrial revolution and the early phases of computerisation produced jobs for specific reasons. Factories required workers and computers required data entry. It wasn’t a consequence of a general law of economics; it was a fortuitous consequence of the technology. We are now seeing the end of those specific reasons, not because of a general trend towards automation but because our new technologies do not have the same fortuitous consequences. Namely, modern robotics does not create factory jobs and the end-to-end ubiquity of the Internet means data entry is done by the end-user. General intelligence doesn’t come into it; there has never been mass employment of general intelligence.
It’s the loss of faculties that constitutes the loss of identity, but faculties aren’t transferable. For example, a ball might lose its bounciness if it is deflated and regain it if it is reinflated, but there’s no such thing as transferring bounciness from one ball to another or one ball having the bounciness of another. The various faculties that constitute my identity can be lost and sometimes regained but cannot be transferred or stored. They have no separate existence.