“It doesn’t make sense to ask questions like, Does a computer program of a mind really instantiate consciousness?”
This is a misunderstanding of how language works. Once we discover what the ontological nature of conscious states is (physical, biological, functional, computational-functional, etc.) and what their content has to be (for example, if conscious states are functional states, not every functional state is a conscious state), we have discovered the thing we had been referring to all along, and there is an objective fact of the matter as to whether that thing is or is not instantiated somewhere.
For example, imagine you tell me that there are qualia involved in smelling coffee, such that the qualia make no functional difference to your behaviour but do make a difference to your subjective experience. I say this is debunkable: if the qualia make no functional difference, then they don’t influence what you say, including what you say about the supposed qualia themselves.
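To make the causal structure of this argument concrete, here is a minimal sketch of my own (the function and variable names are illustrative, not from the exchange): an internal state with no causal path to any output cannot influence the system’s reports, including reports about that very state.

```python
def agent_report(stimulus: str) -> str:
    """Return what the agent says after receiving a stimulus."""
    # A hypothetical epiphenomenal "quale": by stipulation it feeds
    # into no computation that affects the output below.
    epiphenomenal_quale = f"what-it-is-like-to-smell-{stimulus}"
    del epiphenomenal_quale  # no causal path from the quale to any output

    # The report is computed from the stimulus alone, so it would be
    # byte-for-byte identical whether or not the quale ever existed.
    return f"I am experiencing vivid qualia while smelling {stimulus}!"


print(agent_report("coffee"))
```

Deleting the `epiphenomenal_quale` line changes nothing about what the agent says, which is exactly the point: a state that makes no functional difference cannot be what the agent’s qualia-reports are about.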
You have gotten at something extremely important here, namely that once software passes the Turing test, it is unjustified to demand that it implement some specific computation before it can be called conscious, because the presence of that particular computation (as opposed to the same information processing being implemented differently) makes no functional difference.
Yes; no (no outside the host, though yes inside it); yes. Given what we mean by life, a simulation of life is life.
This question doesn’t make sense, because taste is relative to the consumer: what is really tasty for one person might not be really tasty for another. Consciousness isn’t like that. What counts as consciousness for one person counts as consciousness for everyone; people just don’t know what definition of consciousness their implicit beliefs fix.
Right.
I don’t know. On my understanding of computational functionalism (a view I don’t subscribe to), you can have different computations implementing the same behavior.
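To illustrate that last point, here is a minimal sketch (my own example, not from the exchange): two Python functions that realize different computations, one iterative and one recursive with memoization, yet are indistinguishable at the input-output level.

```python
from functools import lru_cache


def fib_iterative(n: int) -> int:
    """Bottom-up loop: constant-space iteration."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


@lru_cache(maxsize=None)
def fib_recursive(n: int) -> int:
    """Top-down memoized recursion: a different computation."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)


# Behaviorally identical: no input-output test can tell them apart.
assert all(fib_iterative(n) == fib_recursive(n) for n in range(30))
```

If being conscious required running one of these computations rather than the other, no behavioral test, the Turing test included, could ever detect the difference.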