O.K., you’re correct that full-fledged supervenience isn’t necessary. What the superintelligence instead needs is certain knowledge of the following weaker claim:
(1) Any two identical computational processes yield the same qualia if at some point the process is performed inside of the specific region R of the universe that the superintelligence is looking at.
But since the superintelligence can’t be certain of (1) either, it doesn’t really make a difference. If you disagree, how can the superintelligence deduce (1) from its complete description of the physical events in R? It seems to me that all it can deduce are (A) the state of the matter in R at any particular time, and (B) that its own performance of some of the processes in R yields qualia. But (1) is clearly not a logical consequence of (A) and (B).