The word “it” here is referring to the superintelligence correct?
Yes
As I wrote: “The entity can be certain that my qualia exist and are identical to his-simulation-of-me’s qualia only if he’s antecedently certain that qualia supervene on the physical facts that are the subject of his computations.” (It would be helpful for me if you gave me a simple yes-or-no to this principle.)
I disagree with this
Even if we suppose ourselves to be certain of the supervenience (and therefore certain that the entity undergoes identical experiences to mine in the process of simulating me), what matters here is the superintelligence’s certainty about it. So in this scenario, there is no “regardless of whether the superintelligence knows qualia supervene upon brain states.”
The superintelligence doesn’t need to know for certain the abstract fact that qualia supervene upon brain states. But in each case of a brain that does experience qualia, it too experiences qualia when it runs their computations. Since it knows that the computations are exactly the same, it knows or learns that in each specific case the brain in question is as a matter of fact producing qualia.
What it doesn’t learn (for certain) is whether the fully general condition always holds that human brains with similar-looking computations all have qualia – unless it were to entirely exhaust the space of possible minds, which I suppose it does not. But that is unnecessary. We are only asking (to vanquish “extra-physicality”) whether it knows for certain that the specific brains in its sphere of understanding have qualia. And since it is running their computations, which it is certain are theirs – i.e. it has incorporated their brains – it does.
I suppose you might be objecting that one part of the mind might have imperfect knowledge about what the other part is doing, so it doesn’t “know” that it is actually experiencing qualia. But you might equally say that about the mind’s internal communication of physical knowledge. So you see there is symmetry between physical knowledge and knowledge of qualia, whether or not you want to postulate that the superintelligence also has perfect intra-brain communication.
O.K., you’re correct that full-fledged supervenience isn’t necessary. What the superintelligence instead needs is certain knowledge of the following weaker claim:
(1) Any two identical computational processes yield the same qualia if at some point the process is performed inside of the specific region R of the universe that the superintelligence is looking at.
But since the superintelligence can’t be certain of (1), either, it doesn’t really make a difference. If you disagree, how can the superintelligence deduce (1) from its complete description of the physical events in R? It seems to me that all it can deduce are A. the state of the matter in R at any particular time, and B. that its own performance of some of the processes in R yields qualia. But (1) is clearly not a logical consequence of A. and B.