You are getting the statement of the Chinese room wrong. The claim isn’t that the human inside the room will learn Chinese. Indeed, it’s a key feature of the argument that the person never counts as knowing Chinese. It is only the system consisting of the person plus all the rules written down in the room, etc., that knows Chinese. This is what’s supposed to be (though not convincingly, IMO) an unpalatable conclusion.
Secondly, no one is suggesting that there isn’t an algorithm that can be followed which makes it appear as if the room understands Chinese. The question is whether there is some conscious entity, corresponding to the system of the guy plus all the rules, which has the qualitative experience of understanding the Chinese words submitted, etc. As such, the points you raise don’t really address the main issue.
It doesn’t really make sense to invoke the agent idealization while simultaneously talking about effective precommitment (i.e., deterministic or probabilistic determination of actions).
The notion of an agent is an idealization of actual actors in terms of free choices, e.g., idealizing individuals in terms of choices of functions on game-theoretic trees. That idealization is incompatible with thinking of such actors as deterministically or probabilistically committed to actions for those same ‘choices.’
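To make the incompatibility concrete, here is a minimal sketch in standard extensive-form notation (the symbols $H$, $A$, $\Sigma$, $\sigma$ are my own illustration, not anything you wrote): take $H$ to be the set of decision nodes, $A$ the available actions, and a strategy to be a function $\sigma : H \to A$.

$$
\text{agent idealization:}\quad \sigma\ \text{freely chosen from}\ \Sigma = \{\,\sigma : H \to A\,\}
$$

$$
\text{effective precommitment:}\quad \sigma = \sigma^{*}\ \text{(or}\ \sigma \sim p\ \text{for a fixed distribution}\ p\ \text{over}\ \Sigma\text{), determined in advance}
$$

The tension is that the first picture treats every element of $\Sigma$ as a live option at those nodes, while the second says the action at those very same nodes is already settled by $\sigma^{*}$ (or by $p$).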
Of course, ultimately, actual actors (e.g., people) are only approximated by talk of agents. But if you try to use the agent idealization while simultaneously regarding those *same* choices as effectively precommitted, you risk contradiction and an absurd model. (You can, of course, shrink the set of actions you regard as free choices in the agent idealization, but that doesn’t seem to be how you are talking about things here.)