A few things.
I completely agree that if, on examination, the black box claiming to be a person doesn’t behave like a person, then I ought to dismiss that claim.
You seem to be suggesting that even if it does behave like a person (as with the Chinese Room), I should still dismiss the claim, based on some pre-existing theory about a “human spark” and what kinds of things such a spark can reside in. That suggestion seems unjustified to me. But then, I give the Systems Reply to Searle: if the room can carry on a conversation in Chinese, then the room knows Chinese, whether the person inside the room knows Chinese or not.
If Y happens more often when I do X than when I don’t, then doing X when I want Y to happen is perfectly sensible, and I ought to increase my estimate of the probability of Y when I observe X. This is just as true when X is “emulate cargo” and Y is “get real cargo” as for any other pair. If the correlation is high (that is, Y happens much more often when I do X than when I don’t), the increase in my estimate of the probability of Y given X ought to be correspondingly high.
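To make the arithmetic behind that concrete, here is a minimal numeric sketch (the probabilities are hypothetical, picked purely for illustration): because P(Y) is a weighted average of P(Y|X) and P(Y|not-X), whenever Y is more likely given X than given not-X, observing X must raise the estimate of Y, and the bigger the gap, the bigger the jump.

```python
# Hypothetical numbers, for illustration only.
p_x = 0.3              # how often I do X
p_y_given_x = 0.8      # Y happens often when I do X
p_y_given_not_x = 0.1  # Y is rare when I don't

# Law of total probability: P(Y) is a weighted average of the two
# conditionals, so it always lies between them.
p_y = p_y_given_x * p_x + p_y_given_not_x * (1 - p_x)

print(f"P(Y) before observing X: {p_y:.2f}")          # 0.31
print(f"P(Y) after observing X:  {p_y_given_x:.2f}")  # 0.80
# Since P(Y|X) > P(Y|not-X), P(Y|X) > P(Y): observing X raises the
# estimate, and a stronger correlation makes the increase larger.
```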
Until shown otherwise, I believe there are two possible kinds of systems that behave like a consciousness:
1) Conscious systems
2) Systems built to look like conscious systems that are not conscious systems.
A strong-AI-like hypothesis for artificial consciousness would essentially be the statement that systems of the second type are not possible, that a system built to behave enough like a conscious system must itself be conscious. This is a hypothesis, an assumption, possibly untestable, certainly untested. Until Strong AI-Consciousness is proved, it seems counterproductive to assume it.
Let me ask you a question about the Chinese Room. Suppose someone implemented an emulation of you as a Chinese Room. A Chinese Room can certainly implement a Turing machine; suppose you believe that you (including your interaction with the world around you) are nothing more than a Turing machine. Once this Chinese Room was up and running, call it TheChineseDave, what moral status would it hold? Would you feel it was immoral to open the doors to this Chinese Room and yell to all the people there, “Hey, the experiment is over, you can all go home”? Would you think that you had committed murder, or even killed something?
I agree that type-1 and type-2 systems are both possible.
A number of hypotheses that I would label “strong AI” don’t claim that type-2 systems are impossible, they merely claim that type-1 systems can be constructed by means other than gestating a zygote. (That said, I don’t think it matters much what we attach the label “strong AI” to.)
Re: your question… it’s probably worth saying that I don’t believe that it’s possible in practice to construct a classic Chinese Room (that is, one in which the rules for how to respond to inputs are entirely captured as instructions that a human, or a group of humans, can successfully execute) that can emulate a person. Or simulate one, for that matter.
But OK, for the sake of answering your question, I will hypothetically assume that I’m wrong, that someone has done so and built TheChineseDave, and that I am convinced that TheChineseDave is a person.
Yes, in that hypothetical scenario, I would feel that sending all the people inside home and ending the experiment would constitute killing a person. (Or perhaps suspending that person’s life, if it is possible to pick up TheChineseDave where it left off, which it seems like it ought to be.)
Also, as specified I would probably consider it an ethical killing, since TheChineseDave’s continued existence depends on, but does not justify, the involuntary labor of the people inside the room. But I suspect that’s beside your point.
I think the moral component of emulation, and how we get there, needs to be explored. I may think killing an 18-year-old genius is ethical if he dies incidentally to my saving 700 lives on a crashing airliner in some easy-to-construct hypothetical. But the point is that his death is a large moral value being weighed in the moral economy.
If disbanding the Chinese Room destroys a consciousness (or “suspends” it, likely forever), it carries essentially the same moral value as killing a meat-instantiated person.
To develop emulations, I hypothesize (and will happily bet in a prediction market) that MANY, MANY partial successes and partial failures will be achieved before the technology comes online and works well. This means there will be a gigantic moral/ethical question around doing the research and development to get things working. Will the broken-but-conscious versions be left alive for thousands of subjective years of suffering? Will they be “euthanized”?
Morality should be difficult for rationalists: it is not fundamentally a rational thing. Morality starts with an assumption, whether it is that suffering defined as X should be minimized or that some particular set of features (coherent extrapolated volition) should be maximized. For me, various scales and versions of mechanical consciousness challenge conventional morality, which of course evolved when there was only one copy of each person, it was in meat, and it lasted for a finite time. It makes sense that conventional morality should be challenged as we contemplate consciousnesses that violate all of these conditions, just as classical mechanics is challenged by very fast objects and very small objects in physics.
I agree with basically everything you say here except “morality is not fundamentally a rational thing.”
You see morality as something beyond just a choice? Yes, once you set down some principles, you rationally derive the morality of various situations from them, but the laying down of the original principle, where does that come from? It is just declared, is it not? “I hold this truth to be self-evident, that X is a moral principle.”
If I have that wrong, I would LOVE to know it!
If I’m understanding what you’re saying here correctly (which is far from certain), I agree with you that what moral systems I endorse depend on what I assign value to. If that’s all you meant by morality not being fundamentally rational, then I don’t think we disagree on anything here.
Cool! I assume what I assign value to is largely determined by evolution: that would-be ancestors who had very different inborn value systems didn’t make it (and so are not really my ancestors), and the values I have are the ones that produced a coherent and effective organized cadre of humans who could then, as a group, outcompete other humans for control over resources.
To me it seems irrational to assign much weight to these inborn values. I can’t say I know what my choice is, what my alternative is. But an example of the kind of irrationality that I see is cryonics, and the deification, it seems to me, of the individual life. I suppose evolution built into each of us a fear of death and a drive to survive. Our potential ancestors that didn’t have that as strongly lost out to our actual ancestors; that at least makes sense. But why would I just “internalize” that value? I can see it came about as a direction, not a true end. All my ancestors, no matter how strong their drive to survive, have died. Indeed, without their dying, evolution would have stopped, or slowed down gigantically. So one might also say my candidate-ancestors that didn’t die as readily are also not my true ancestors; my true ancestors are the ones who wanted to live but didn’t, and so evolved a bit faster and beat the slower-evolving, longer-lived groups.
The world “wants” brilliant and active minds. It is not at all clear that it benefits more, or even as much, from an old frozen mind as it does from a new mind getting filled from the start with new stuff and having the peculiar plasticities that newer minds have.
It is clear to me that the reason I am afraid of death is because I was bred to be afraid of death.
It is THIS sense in which I say our values are irrational. They are the result of evolution. Further, a lot of those values evolved when our neocortexes were a lot less effective: I believe what we have are mammalian and primate values, that the values part of our brain has been evolving for the millions of years that we were social, not just the hundreds of thousands of years that our neocortex was so bitchin’.
So to me just using rationality to advance my inbred values would be like using modern materials science to build a beautiful temple to Neptune to improve our ocean-faring commerce.
That values, regardless of their source, may be the only motivational game in town is not evidence that it makes sense to exalt them more than they already assert themselves. Rather the opposite, I would imagine: it makes it likely to be valuable to question them, to reverse-engineer nature’s purposes in giving them to us.
Agreed that our inborn values are the result of our evolutionary heritage.
Of course, so is the system that we use to decide whether to optimize for those values or some other set.
If I reject what I model as my evolution-dictated value system (hereafter EDV) in favor of some other value set, I don’t thereby somehow separate myself from my evolutionary heritage. It’s not clear to me that there’s any way to do that, or that there’s any particular reason to do so if there were.
I happen to value consistency, and it’s easier to get at least a superficial consistency if I reject certain subsets of EDV, but if I go too far in that direction I end up with a moral system that’s inconsistent with my actual behaviors. So I try to straddle that line of maximum consistency. But that’s me.
Agreed that it’s not clear that my continued existence provides more value to the world than various theoretically possible alternatives. Then again, it’s also not clear that my continued existence is actually in competition with those alternatives. And if world A has everything else I want + me remaining alive, and world B has everything else I want to the same extent + me not alive, I see no reason to choose B over A.
Agreed that it’s valuable to understand the mechanisms (both evolutionary and cognitive) whereby we come to hold the values we hold.
I thought the value of the strong AI hypothesis was that you didn’t have to wonder if you had created true consciousness or just a simulation of consciousness. That the essence of the consciousness was somehow built into the patterns of the consciousness no matter how they were instantiated, so that once you saw those patterns working you knew you had a consciousness.
Your weaker version doesn’t have that advantage. If all I know is something that does everything a consciousness does MIGHT be a consciousness, then I am still left with the burden of figuring out how to distinguish real consciousnesses from simulations of consciousnesses.
An underappreciated aspect of these issues is the red herring thrown in by some students of the philosophy of science. Science rightly says about ALMOST everything: “if it looks like a duck and it sounds like a duck and it feels like a duck and it tastes like a duck, and it nourishes me when I eat it, then it is a duck.” But consciousness is different. From the point of view of a dictatorial leader, it is not different: if a dictator can build a clone army and/or a clone workforce, why would he possibly care whether they are truly conscious or only simulations of consciousness? It is only somebody who believes we should treat consciousnesses differently than we treat unconscious objects who has to care about the distinction.
I continue to think it doesn’t much matter what we attach the label “strong AI” to. It’s fine with me if you’d prefer to attach that label only to theories that, if true, mean we are spared the burden of figuring out how to distinguish real consciousness from non-conscious simulations of consciousness.
Regardless of labels: yes, if it’s important to me to treat those two things differently, then it’s also important to me to be able to tell the difference.
Explain to me how a bullet isn’t yelling to my various components “Hey, the experiment is over, you can all go home.”
If it turns out that my various components, upon being disassembled by a bullet, cannot function independently as distinct people, then the implicit analogy fails.
I don’t think the analogy fails, but I do see a couple narrow objections.
1) If you believe that in being tasked with simulating you, the other people are made unable to live out their lives as they wish, then we may be saving them through your death. In this case, I would say that it is still death, and still bad in that respect, but possibly not on balance. If those people are there by their own free will, and feel that simulating TheChineseDave is their calling in life, then the morality plays out differently.
2) If you pause the experiment but do not destroy the data, such that people (the same or others) may pick up where they left off, then it is closer to cryonic suspension than death. To make it truly analogous, we might require destruction of their notes (or at least enough of them to prohibit resumption).
Aside from those, I cannot see any relevant difference between the components making up TheChineseDave and those making up TheOtherDave—in both cases, it’s the system that is conscious. Is there anything you think I have overlooked?
Nope, that’s pretty much it. #1, in particular, seems important.
Important to whether it is murder, but not to whether something was killed, unless I am missing something.
I suppose “bullet” may have been a poor choice, as there may be too many cached associations with murder, but bullets can certainly take life in situations we view as morally justified—in this case, defense of others.
Agreed that bullets can take life in morally justifiable ways.
I’m not sure what you’re responding to, but I’m pretty sure it’s something I didn’t say.
Your original comment implied an analogy between disassembling the team implementing TheChineseDave on the one hand, and a bullet in your brain. I replied that the implicit analogy fails, because disassembling the team implementing TheChineseDave frees up a bunch of people to live their lives, whereas a bullet in your brain does not do so.
What you seem to be saying is that yes, that’s true, but the analogy doesn’t fail because that difference between the two systems isn’t relevant to whatever point it is you were trying to make in the original comment.
Which may well be true… I’m not quite sure what point you were making, since you left that implicit as well.
My point was, “I don’t see any way in which it is not ‘killing’, and I think turning the question around makes this clearer.” A bullet in my brain doesn’t destroy most of my constituent parts (all of my constituent atoms are preserved, nearly all of my constituent molecules and cells are preserved), but destroys the organization. Destroying that organization is taking a life, whether the organization is made up of cells or people or bits, and I expect it to be morally relevant outside extremely unusual circumstances (making a backup and then immediately destroying it without any intervening experience is something I would have a hard time seeing as relevant, but I could perhaps be convinced).
The fact of the act of killing is what I was saying was preserved. I was not trying to make any claim about the total moral picture, which necessarily includes details not specified in the original framework. If the people were prisoners, then I agree that it’s not murder. If the people were employees, that’s an entirely different matter. If enthusiasts (maybe a group meets every other Tuesday to simulate TheChineseDave for a couple hours), it’s something else again. Any of these could reasonably be matched by a parallel construction in the bullet case; in particular, the prisoner case seems intuitive—we will kill someone who is holding others prisoner (if there is no other option) and not call it murder.
Ah, gotcha. Thanks for clarifying.
I’m glad it did, in fact, clarify!