I think a major crux is that the things you couldn’t impart to Mary through language (assuming that such things do exist) would be wishy-washy stuff like qualia, whose existence, for a nonhuman system modelling humans, essentially doesn’t matter for predictive accuracy. In other words, a universe where Mary does learn something new and a universe where she doesn’t are essentially indistinguishable from the outside, so whether it shows up in world models is irrelevant.
Well, if Mary does learn something new (how it feels “from the inside” to see red, or whatever), she would notice, and her brain state would reflect that, plus whatever information she learned. Otherwise it doesn’t make sense to say she learned anything.
And just the fact that she learned something, and might have thought something like “neat, so that’s what red looks like,” would be relevant to predictions of her behavior, even ignoring the possible information content of qualia.
So it seems distinguishable to me.