What I read Qiaochu as saying is that the IRL model doesn’t have an ontology of its own; the world it lives in is created by the ontology the programmer implicitly constructs for it through choices about training data. Thus this problem doesn’t come up, because the IRL model isn’t interacting with the whole world — only the parts of it the programmer thought relevant to solving the problem — and the model’s success depends in part on how good a job the programmer did in picking what’s relevant.