This is a fair answer. I disagree with it, but it is fair in the sense that it admits ignorance. The two distinct points of view are (mine) that there is something about human consciousness that cannot be explained within the language of Turing machines, and (yours) that there is something about human consciousness that we are not currently able to explain in terms of Turing machines. Both positions at least admit that consciousness currently has no explanation, and absent future discoveries I don’t think there is a sure way to tell which one is right.
I find it hard to fully develop a theory of morality consistent with your point of view. For example, given a computer simulation of a human mind, would it be wrong to run that simulation through a given painful experience over and over again? Let us assume that the painful experience has happened once...I just ask whether it would be wrong to rerun that experience. After all, it is just repeating the same deterministic actions on the computer, so nothing seems to be wrong about this. Or, for example, if I make a backup copy of such a program, and then allow that backup to run for a short period of time under slightly different stimuli, at what point does that copy acquire an existence of its own, such that it would be wrong to delete the copy in favor of the original? I could give many other similar questions, and my point is not that your point of view denies a morality, but rather that I find it hard to develop a full theory of morality that is internally consistent and that matches your assumptions (not that developing a full theory of morality under my assumptions is that much easier).
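The determinism behind these thought experiments can be made concrete with a toy sketch (all names and the transition rule here are invented for illustration, not a claim about how minds actually work): a deterministic "mind" reproduces the exact same state sequence on every rerun, and a backup copy diverges as soon as it receives different stimuli.

```python
import copy

class Mind:
    """A toy deterministic state machine standing in for a simulated mind."""
    def __init__(self, state=0):
        self.state = state

    def step(self, stimulus):
        # Purely deterministic transition: the same state plus the same
        # stimulus always yields the same next state.
        self.state = self.state * 31 + stimulus

def run(mind, stimuli):
    for s in stimuli:
        mind.step(s)
    return mind.state

# Rerunning the same experience yields bit-identical states.
a, b = Mind(), Mind()
assert run(a, [1, 2, 3]) == run(b, [1, 2, 3])

# A backup copy given slightly different stimuli diverges immediately.
original = Mind()
run(original, [1, 2, 3])
backup = copy.deepcopy(original)   # the "backup copy"
original.step(4)
backup.step(5)                     # slightly different stimulus
assert original.state != backup.state
```

The sketch does not settle the moral question; it only shows that, at the level of the program, a rerun adds no new computation and a diverged copy is a distinct state trajectory from the moment its inputs differ.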
Among professional scientists and mathematicians, I have encountered both viewpoints: those who hold it obvious to anyone with even the simplest knowledge that Turing machines cannot be conscious, and those who hold that the opposite is true. Mathematicians seem to lean a little more toward the first viewpoint than those in other disciplines, but it is a mistake to think that a professional, world-class research-level knowledge of physics, neuroscience, mathematics, or computer science necessarily inclines one toward the soulless viewpoint.
Of course, my original comment had nothing to do with God. It had to do with “souls”, for lack of a better term, as that was the term used in the original discussion (I suggest reading the original post if you want to know more—basically, as I understand the intent, it simply referred to some hypothetical quality associated with consciousness that lies outside the realm of what is simulable on a Turing machine). If you think that humans are nothing but Turing machines, why is it morally wrong to kill a person but not morally wrong to turn off a computer? Please give a real answer: either provide an answer that admits that humans cannot be simulated by Turing machines, or else give your answer using only concepts relevant to Turing machines (don’t talk about consciousness, qualia, hopes, whatever, unless you can precisely quantify those concepts in the language of Turing machines). And in the second case, your answer should allow me to determine where the moral balance between humans and computers lies...would it be morally bad to turn off a primitive AI, for example, with intelligence at the level of a mouse?