If that take on things is correct, then it may be possible to emulate a human by training a skeleton AI, over a 10-20 year period (about how long neurons last before replacement), on constant video streaming and similar data, optimizing it to better predict the behaviour of the human being modelled. Such training would eventually arrive at an AI with almost exactly the same beliefs and behaviours as the human being emulated.
That’s the premise of Greg Egan’s “Jewel” stories. I think it’s wrong. A person who has never seen a spider will still get scared on seeing one for the first time, because humans are hardwired for that. A person who has a specific memory and doesn’t mention it to anyone for many years probably doesn’t give enough information through their behavior to infer the memory in detail. And for the extreme example of why input/output is not enough to infer everything about inner life: imagine a human in a box, who has no input/output at all, but plenty of inner life. I think we all have lots of inner degrees of freedom that can’t be fully determined even from a full record of our behavior over a long time.