Step 2 codifies the objective existence of subjective states. But suppose a homomorphic computation can be decrypted in two ways: one is the computation we encoded, whose output is something like "it feels real"; the other, obtained by decoding with a different key, is a minimally conscious state that happened to exist there, whose output is a noisy grunt expressing dissatisfaction with its noisy environment. Should the second one be included in M?
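(To make the two-decryptions premise concrete, here's a toy sketch. It uses a one-time pad rather than genuine homomorphic encryption, and the messages are hypothetical stand-ins, but it shows how one and the same ciphertext can honestly decrypt to either reading depending on which key you bring:)

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext_1 = b"it feels real"   # the reading we encoded
key_1 = os.urandom(len(plaintext_1))
ciphertext = xor_bytes(plaintext_1, key_1)

# Choose a second key so that the SAME ciphertext decodes
# to the other reading (the "noisy grunt").
plaintext_2 = b"ugh... noise."   # same length as plaintext_1
key_2 = xor_bytes(ciphertext, plaintext_2)

assert xor_bytes(ciphertext, key_1) == plaintext_1
assert xor_bytes(ciphertext, key_2) == plaintext_2
```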
It seems that g and h cannot be efficiently computable if we decide to include the second state in M. On second thought, if we don't have a list of minds in R, we need to analyze all the (spatially localized?) subsets of R to decide which of them are conscious. Could that be done efficiently?
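(For what it's worth, here's what the brute-force version of that search looks like; `region` is a hypothetical discretization of a patch of R, and the consciousness test itself is the part nobody knows how to write:)

```python
from itertools import chain, combinations

def candidate_minds(region):
    """All nonempty subsets of a discretized region: 2**n - 1 candidates."""
    return chain.from_iterable(
        combinations(region, k) for k in range(1, len(region) + 1)
    )

# Even a toy 20-element region yields ~10**6 candidates to test for
# consciousness, and the count doubles with every added element; anything
# brain-sized is hopeless without extra structure (restricting to spatially
# contiguous subsets still leaves exponentially many in general).
print(sum(1 for _ in candidate_minds(range(20))))  # 1048575 == 2**20 - 1
```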
ETA: Also, how would one codify the subjective existence of subjective states?
When I said "subjective existence" I meant some model where we don't need a list of minds, or an exhaustive search for minds, to make them real. After all, the brain has its own computing power, and requiring additional compute or data to make the subjective experiences associated with its computations real seems extraneous. A mind's interactions with our world, on the other hand, seem crucial to our ability to determine its existence.
BTW, thank you for laying all of this out in such detail. It makes the reasoning much more focused.