Firstly, thanks for the first positive feedback I’ve received!
It’s theoretically possible that its simulation of my brain produces qualia, but my simulation of my brain (which is, of course, just my brain) doesn’t
The thought experiment postulates that the omniscient being possesses perfect information (being omniscient!) about a certain volume of the Universe. As such, the computation it performs is a perfect likeness of the computation that occurs in your brain. Therefore, from our perspective as the people conducting the thought experiment, we believe with extremely high probability that the outcome of the thought experiment is that the omniscient being experiences the same qualia that the human in its sphere of understanding does.
To restate, our belief in this outcome of the thought experiment is merely extremely confident. But given this most probable outcome, since the computation is exactly the same, the qualia experienced by the omniscient being are certainly exactly the same (as it sees things). This is quite a subtle distinction!
What may also be confusing you is that the existence of “perfect” knowledge, i.e. omniscience, is unphysical—this is after all a thought experiment. But as I suggested in my article, I think the same principle applies if the being is not omniscient but merely possesses a detailed physical understanding of a volume of the Universe. All that changes is that the discussion becomes more long-winded. It is still the case that there is no uncertainty on the part of the superintelligence concerning qualia that is not directly related to uncertainty about physical configurations.
To restate, our belief in this outcome of the thought experiment is merely extremely confident. But given this most probable outcome, since the computation is exactly the same, the qualia experienced by the omniscient being are certainly exactly the same. This is quite a subtle distinction!
I’m not sure you took my point correctly. I am arguing that the omniscient entity, and not just us, can only be extremely confident that other people are having conscious experiences.
The entity can be certain that my qualia exist and are identical to his-simulation-of-me’s qualia only if he’s antecedently certain that qualia supervene on the physical facts that are the subject of his computations. If they don’t supervene in this way, two identical computations may differ in the qualia they produce. Furthermore, certain knowledge of this supervenience is not built into the entity’s omniscience. So he lacks certain knowledge of my experiences as a result of his simulation, even while obtaining certain knowledge of his own. So even though his computations have led him to perfect knowledge of the configuration of all quarks and the like, he still lacks perfect knowledge regarding my qualia. This is the conclusion Chalmers is trying to arrive at.
Recall that we are taking as a given that qualia do in fact supervene upon brain states (regardless of whether the superintelligence knows this).
Now, the superintelligence is certain about your physical make-up, despite the fact that you are separate from it. If it performs a computation, which it is certain is the same one occurring in your brain, then when it experiences qualia it knows for certain that these qualia are caused by the computation. When it doesn’t run the computation, it doesn’t get the qualia. When it runs it, it does. Since it is certain that the computation is the same as yours, it is certain that you experience the same qualia. You see, this does not depend on an abstract belief that certain computations bring about qualia: it gets to actually run the computation (which simply is the computation in your brain) and see for itself that qualia are produced.
I think that you are having trouble grasping this because there is no such thing as perfect certainty, and you are applying your realistic intuitions of fallibility to the idea—either that or I’m wrong!
Phlebas, like antigonus, I really enjoyed your essay (without agreeing with all of it). But the same objection that antigonus raises occurred to me. I’m not sure that you understood antigonus’s objection, so I will try to rephrase it in my words.
I follow you this far:
Now, the superintelligence is certain about your physical make-up, despite the fact that you are separate from it. … [I]t performs a computation, which it is certain is the same one occurring in your brain, …
And I agree that the superintelligence then experiences qualia. But what I don’t see is why
… when it experiences qualia it knows for certain that these qualia are caused by the computation.
Since you want to leave open the possibility that qualia are irreducible, you can’t assume that the superintelligence (SI) sees how the computation logically necessitates the generation of the qualia. The only alternative is that the SI reaches its conclusion through empirical observation. Indeed, this is how you describe the SI’s inference when you say,
When it doesn’t run the computation, it doesn’t get the qualia. When it runs it, it does.
But how can this kind of empirical observation provide the SI with absolute certainty that the computation, and the computation alone, causes the qualia?
For example, how can the SI rule out the possibility that some nonphysical fact F applies to itself, but not to you (or the infant or whatever), and that [the computation + F] suffices to generate qualia, while [the computation − F] does not?
It seems that the SI has to leave open some small chance that, when it runs the computation, the computation generates qualia, but when you run the computation, you do not experience qualia because some additional nonphysical ingredient is missing. To deny this in a debate with Chalmers would seem to beg the question.
OK, a multi-paragraph summary first (skip it if you like; I feel it’s helpful to avoid any further arguing at cross-purposes) – since my position in the argument slowly morphed and became disorganised:
People claim to have “qualia”, which is different to mere Dennettian “consciousness” but seemingly can’t be defined. On examination of the brain, we will inevitably find some physical reason for why people discuss consciousness. It is highly improbable that this physical reason is unrelated to “qualia” OR “consciousness”. However, it is misleading to bundle these two together when discussing the subject – and I do not accept the rather absurd claim that it’s OK to do so, because consciousness is a “confusion” – since arguments valid under certain assumptions about this “confusion” are not valid under other assumptions. In other words, Eliezer’s failure to attempt to distinguish reducible “qualia”, irreducible “qualia”, and qualia-eliminative “consciousness” as a preliminary step in his essay renders the essay liable to beg the question (question-begging seems to be the crux of the matter in general in this discussion) – unless he considers the irreducible qualia idea to be a priori nonsense.
If he does consider the irreducible qualia concept to be a priori nonsense:
a) Why not say so?
b) Why was such a misleadingly long essay necessary?
c) Why assume such a thing? OK, it doesn’t seem very Bayesian. But Bayes’s Theorem, Bayesian rationality and reductionism are just rules that apply perfectly to everything we’ve ever tried to apply them to – but in my ontology like most others’, there’s everything else and there’s qualia. There is no other concept, apart from “qualia”, that a supermajority of people affirm to be real in the absolute strongest terms – including people such as myself who are otherwise Bayesian reductionists – but which appears to be irreducible.
But anyhow, there is only an actual flaw in Eliezer’s refutation of Chalmers if we assume that “qualia” are real and irreducible. If qualia are real but irreducible, there must however be a reductionist causal explanation for physical humans discussing consciousness. I find the idea of our discovering a reductionist explanation of qualia, rather than mere consciousness, improbable. Therefore let us suppose that on examining the brain we discover a Dennettian causal explanation of our talking about consciousness, and then people are left to decide whether they accept this as a refutation of “qualia” or decide that such a belief is crazy and that qualia must be irreducible, existing apart from the physical cause of talk about consciousness.
Then, if we are not Dennettians we have very good reason still to believe that qualia supervene upon brain computations – presumably including the computations that constitute the physical reason for our discussing consciousness. Whatever happens to our brains physically, we experience our qualia changing synchronously and in qualitative relation. They may be “causally isolated” in the sense that we understand causality necessarily to involve reducible phenomena, but they “supervene” – when brain states change, qualia change likewise.
This distinction reveals the essentially question-begging nature of Eliezer’s talk about “causally closed outer Chalmers” being deranged – if we believe as seems to be the case that he dismisses the concept of irreducible qualia out of hand. That is to say, the causal chain leading back from Chalmers’s hands typing on the keyboard about “qualia” leads precisely to the brain computations (viewed by other parts of the brain – per Dennett’s description) that are generating qualia – outer Chalmers is not deranged – but if we take a (somewhat) detailed look at a brain from outside, all we see is the brain examining itself in action; we might (naively?) assume that such a thing as “qualia” has been explained away. It’s only when a given brain apprehends another brain in sufficient detail (however much detail that may be) such that it is running approximately the selfsame computations, that it actually notices the qualia (like you said, in an empirical manner).
So, let us assume for the sake of argument that this belief is the accurate one regarding consciousness/qualia. I suspect that in believing in real, irreducible qualia I am somewhere between Eliezer’s and Chalmers’s stances, because it seems to me that Eliezer is not favourable towards such an idea, but pace Chalmers I do not consider there to be anything “extra-physical” about qualia – they are irreducible, but they supervene upon physical brain states, and therefore they are fully determined by mundane physical configurations of the Universe.
So, having tussled with Eliezer I still need to tussle with Chalmers. Perhaps Eliezer has done the job for me? Apparently not, because if we grant that there is a real, irreducible phenomenon “qualia”, Eliezer’s argument (if it applies at all) is simply that it’s improbable that humans would talk about having qualia, if they didn’t have qualia. This doesn’t prove that qualia are fully determined by physical configurations: seemingly a superintelligence that knows everything about some physical volume of the Universe is merely confident that beings inside (which do, in fact, experience qualia) have qualia.
Tyrrell, you are right in saying that I argue that the superintelligence concludes through empirical observation that these beings experience qualia.
You ask:
For example, how can the SI rule out the possibility that some nonphysical fact F applies to itself, but not to you (or the infant or whatever), and that [the computation + F] suffices to generate qualia, while [the computation − F] does not?
It seems that the SI has to leave open some small chance that, when it runs the computation, the computation generates qualia, but when you run the computation, you do not experience qualia because some additional nonphysical ingredient is missing. To deny this in a debate with Chalmers would seem to beg the question.
The superintelligence cannot rule that out. I agree, and I now understand Antigonus’s objection better too.
However, can it rule any “non-physical fact F” out? What about the non-physical fact F that its supposedly perfect knowledge about a certain volume of the Universe is bunkum? Is there any limitation to the purely physical knowledge that “non-physical facts” can potentially undermine – even in the eyes of a (physically) omniscient being?
If not, is it not unfair to apply this standard – having to rule out the possibility of some “non-physical fact” disrupting expectations – to the superintelligence’s knowledge of qualia, but not to its knowledge of everything else?
You may argue that this is question-begging. However, our objective is to prove that an omniscient superintelligence knows about qualia just as much as it knows about physical brains – assuming that (from our perspective, with extremely high probability) qualia do supervene on brain states (and also assuming that qualia are real and irreducible, to make the discussion meaningful). And we have proven that: what the superintelligence, as an omniscient mind, does is effectively to take physical brains, inhabit them and see if they experience qualia.
If this wasn’t the case – if we were stumped: “Um yeah, I don’t see how this superintelligence knows if I have qualia” – then we might have to concede the point to Chalmers. It would appear that qualia were not fully determined by physical configurations, and therefore that they must be “extra-physical” rather than supervening on brain states and being merely irreducible.
The difference is that the “non-physical fact” that you speak of is equally capable of undermining anything. It is fully general. If we were arguing with Chalmers about whether there are “non-physical facts” in general then I would be begging the question – that seems an a priori irresolvable argument. But what we are actually arguing about is whether we are forced to admit a specific, apparent gap in physics where a real phenomenon is seen to lack a physical underpinning. This would prove that there is at least one “non-physical fact”. In other words, we are not trying to prove the non-existence of non-physical facts in general (heaven forbid!) but merely to disprove the idea that there is any particular reason why we should believe that there are any non-physical facts.
I’m worried we’re talking past each other, since I would give largely the same reply as before.
Since it is certain that the computation is the same as yours, it is certain that you experience the same qualia.
The word “it” here is referring to the superintelligence, correct? Because if so, this is the specific inference that I’m disputing the superintelligence can legitimately make. As I wrote: “The entity can be certain that my qualia exist and are identical to his-simulation-of-me’s qualia only if he’s antecedently certain that qualia supervene on the physical facts that are the subject of his computations.” (It would be helpful for me if you gave me a simple yes-or-no to this principle.) Even if we suppose ourselves to be certain of the supervenience (and therefore certain that the entity undergoes identical experiences to mine in the process of simulating me), what matters here is the superintelligence’s certainty around it. So in this scenario, there is no “regardless of whether the superintelligence knows qualia supervene upon brain states.”
The word “it” here is referring to the superintelligence, correct?
Yes
As I wrote: “The entity can be certain that my qualia exist and are identical to his-simulation-of-me’s qualia only if he’s antecedently certain that qualia supervene on the physical facts that are the subject of his computations.” (It would be helpful for me if you gave me a simple yes-or-no to this principle.)
I disagree with this
Even if we suppose ourselves to be certain of the supervenience (and therefore certain that the entity undergoes identical experiences to mine in the process of simulating me), what matters here is the superintelligence’s certainty around it. So in this scenario, there is no “regardless of whether the superintelligence knows qualia supervene upon brain states.”
The superintelligence doesn’t need to know for certain the abstract fact that qualia supervene upon brain states. But in each case of a brain that does experience qualia, it too experiences qualia when it runs their computations. Since it knows that the computations are exactly the same, it knows or learns that in each specific case the brain in question is as a matter of fact producing qualia.
What it doesn’t learn (for certain) is whether the fully general condition always holds that human brains with similar-looking computations all have qualia – unless it were to entirely exhaust the space of possible minds, which I suppose it does not. But that is unnecessary. We are only asking (to vanquish “extra-physicality”) whether it knows for certain that the specific brains in its sphere of understanding have qualia. And since it is running their computations, which it is certain are theirs – i.e. it has incorporated their brains – it does so.
I suppose you might be objecting that one part of the mind might have imperfect knowledge about what the other part is doing, so it doesn’t “know” that it is actually experiencing qualia. But you might equally say that regarding communication across the mind about physical knowledge. So you see there is symmetry there between physical knowledge and knowledge of qualia, whether or not you want to postulate that the superintelligence also has perfect intra-brain communication.
O.K., you’re correct that full-fledged supervenience isn’t necessary. What the superintelligence needs instead is certain knowledge of the following weaker claim:
(1) Any two identical computational processes yield the same qualia if at some point the process is performed inside of the specific region R of the universe that the superintelligence is looking at.
But since the superintelligence can’t be certain of (1), either, it doesn’t really make a difference. If you disagree, how can the superintelligence deduce (1) from its complete description of the physical events in R? It seems to me that all it can deduce are A. the state of the matter in R at any particular time, and B. that its own performance of some of the processes in R yields qualia. But (1) is clearly not a logical consequence of A. and B.