Good point. In this post I was trying to solve something like the easy goal inference problem by factoring it into several subproblems, including ontology identification, but it’s not clear whether this is the right factoring.
It seems like your intuition is something like: “any way of correctly modelling human mistakes when the human and AI share an ontology will also correctly handle mistakes arising from the human and AI having a different ontology”. I think I mostly agree with this intuition. My motivation for working on ontology identification despite this intuition is some combination of (a) “easy” versions of ontology identification seem useful outside the domain of value learning (e.g. making a genie for concrete physical tasks), and (b) I don’t see many promising approaches for directly attacking the easy goal inference problem.
But after writing this, I think I have updated towards looking for approaches to the easy goal inference problem that avoid ontology identification. The most promising thing I can think of right now seems to be some variation on planning algorithm 2, but with an adjustment so that the planning can take into account the AI’s different predictions (but not the AI’s internal representation). It does seem plausible that something in this space would work without directly solving the ontology identification problem.
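To make that second idea more concrete, here is a minimal, purely illustrative sketch of goal inference in which the human is modelled as a noisily rational planner whose choices are scored against the AI’s predicted outcomes, with candidate values defined over observable outcomes only, so the AI’s internal representation never enters. This is not the “planning algorithm 2” referenced above; the Boltzmann-rationality model, the temperature BETA, and the toy quantities ai_predictions and candidate_rewards are all assumptions made for illustration.

```python
# Purely illustrative sketch, not the algorithm referenced in this thread.
# The human is modelled as a Boltzmann-rational planner whose choices are
# scored against the AI's predicted outcome distribution; candidate values
# are defined over observable outcomes only, so the AI's internal
# representation is never consulted. All names and numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS, N_OUTCOMES = 3, 4

# The AI's predictive model: P(observable outcome | action).
ai_predictions = rng.dirichlet(np.ones(N_OUTCOMES), size=N_ACTIONS)

# Hypothetical candidate reward functions over observable outcomes.
candidate_rewards = [
    np.array([1.0, 0.0, 0.0, 0.0]),
    np.array([0.0, 1.0, 0.5, 0.0]),
    np.array([0.0, 0.0, 0.0, 1.0]),
]

BETA = 5.0  # assumed "noisy rationality" temperature of the human model


def human_policy(reward, outcome_model):
    """Boltzmann-rational action distribution, scoring each action by its
    expected reward under `outcome_model` (here, the AI's predictions)."""
    expected = outcome_model @ reward              # E[reward | action]
    logits = BETA * expected
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()


def reward_posterior(observed_actions, outcome_model):
    """Posterior over candidate rewards, assuming the human planned
    against `outcome_model` (uniform prior over the candidates)."""
    log_post = np.array([
        np.log(human_policy(r, outcome_model)[observed_actions]).sum()
        for r in candidate_rewards
    ])
    post = np.exp(log_post - log_post.max())
    return post / post.sum()


# Simulate demonstrations from a human who wants candidate reward #1,
# then inspect which candidate the posterior favours.
demos = rng.choice(
    N_ACTIONS, size=20, p=human_policy(candidate_rewards[1], ai_predictions)
)
print(reward_posterior(demos, ai_predictions))
```

The only design point being illustrated is that both the human model and the candidate values are expressed in the shared, observable vocabulary, so swapping in the AI’s (possibly better) predictions changes what behaviour gets explained without requiring any mapping into the AI’s latent state.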
“I don’t see many promising approaches for directly attacking the easy goal inference problem.”
I would agree with that. But I don’t see how this situation will change, no matter what you learn about ontology identification. It looks to me like the easy goal inference problem is probably just ill-posed/incoherent, and we should avoid any approach that rests on us solving it. The kind of insight that would be required to change this view looks very unlike the kind required to solve ontology identification, and also unlike the kind required to make conventional progress in AI; so to the extent that there are workable approaches to the easy goal inference problem, it seems like we could work towards them now. And if we can’t see how to attack the problem now, then by the same token I am pessimistic.
On that perspective, we might ask: how are we avoiding the problem? The dodges I know of would also dodge ontology identification, by cashing everything out in terms of human behavior. It’s harder for me to know what the situation is like for solutions to the goal inference problem (since I don’t yet see any plausible solution strategies), but I would guess that the situation will turn out to be similar.