2) The most obvious obstacle for a human is that I don’t have the power to precisely observe and remember everything I do, and I certainly don’t have the ability to reason about which specific source code would cause me to do exactly those things. Even the information that is there in theory, I can’t process. I guess this is logical uncertainty. It’s like being unable to calculate the millionth digit of pi, especially if I couldn’t even count to ten correctly.
But even if I had the super-ability to correctly determine which kinds of source code could produce my behavior and which couldn’t, there would still be multiple solutions. I could narrow the set of possible source codes down to a subset, but I couldn’t narrow it down to exactly one source code. Not even to a group of behaviorally identical source codes, because there are always realistic situations that I have never experienced, and some of the remaining source codes could do different things there (see the sketch below). So within the remaining set, this seems like indexical uncertainty. I could be any of them, meaning that different copies of “me” in different possible worlds could have different algorithms from this set, and yet, so far, the same experience.
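To make that concrete, here is a toy sketch (the situations and actions are my own made-up stand-ins, not anything from a real model of behavior): two candidate “source codes” that agree on every situation I have experienced so far, yet diverge in one I have never encountered, so no amount of past observation can separate them.

```python
# Two hypothetical candidate "source codes" for me. They agree on my entire
# observed history, so observation alone cannot rule either one out.

def candidate_a(situation: str) -> str:
    return "cooperate"

def candidate_b(situation: str) -> str:
    # Identical to candidate_a except in one never-yet-experienced situation.
    return "defect" if situation == "unprecedented_dilemma" else "cooperate"

observed_history = ["breakfast", "job_interview", "traffic_jam"]

# Both candidates reproduce everything I have ever done...
assert all(candidate_a(s) == candidate_b(s) for s in observed_history)

# ...yet they are not behaviorally identical: a realistic situation
# outside my experience would tell them apart.
assert candidate_a("unprecedented_dilemma") != candidate_b("unprecedented_dilemma")
```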
There is a problem with the second part: if I have information about the maximum possible size of my source code, it means there are only finitely many options, so I could hypothetically reduce them to exactly one, step by step, which would remove the indexical uncertainty. On the other hand, this would work for “normal” scenarios, but not for the “brain in the jar” scenarios: if I am in a Matrix, my assumption that my human source code is limited by the size of my body could be wrong.
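Here is a minimal sketch of that elimination process (the representation of a “source code” as a small lookup table, and the specific situations, are simplifying assumptions of mine): a size bound makes the candidate set finite, and each observation of my own behavior can only shrink it.

```python
from itertools import product

# A size bound on my "source code" makes the hypothesis space finite.
# Here a code is modeled as a lookup table over three situations,
# giving 2**3 = 8 candidates in total.
SITUATIONS = ["s0", "s1", "s2"]
candidates = [dict(zip(SITUATIONS, actions))
              for actions in product(["cooperate", "defect"], repeat=3)]

def update(candidates, situation, my_actual_action):
    """Keep only the codes that reproduce what I actually did."""
    return [c for c in candidates if c[situation] == my_actual_action]

# Gradually observing my own behavior eliminates candidates...
candidates = update(candidates, "s0", "cooperate")  # 8 -> 4
candidates = update(candidates, "s1", "cooperate")  # 4 -> 2
candidates = update(candidates, "s2", "defect")     # 2 -> 1
assert len(candidates) == 1
# ...but only if the size bound is right in the first place: in the
# "brain in the jar" case, the true code lies outside this enumeration.
```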