I’m not sure, but I’d guess it wouldn’t produce much. If the agent is just making random decisions, for example, there’s nothing coherent to learn from.
The inverse reinforcement learning (IRL) research so far has used training data provided by humans, and it can infer human goal-shaped utility functions in at least the fairly simple problem domains tested so far. Most of this work was done almost a decade ago and hasn’t been as active recently. If you scaled it up with modern techniques, for example, I’d bet IRL could learn the score function of an Atari game just from watching humans play.
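A minimal sketch of the core idea, assuming a maximum-entropy-style formulation on a toy problem (the chain MDP, the one-hot state features, the horizon, and the learning rate are all illustrative choices here, not from any particular cited system): given expert demonstrations, ascend on reward weights until the expected state-visitation frequencies under the induced soft-optimal policy match the expert's.

```python
import numpy as np

# Tiny deterministic chain MDP: states 0..4, actions 0=left, 1=right.
N_STATES, N_ACTIONS, HORIZON = 5, 2, 6

def step(s, a):
    return max(s - 1, 0) if a == 0 else min(s + 1, N_STATES - 1)

# Deterministic transition table: P[s, a] = next state.
P = np.array([[step(s, a) for a in range(N_ACTIONS)] for s in range(N_STATES)])

# "Expert" demonstrations: always move right from state 0 toward state 4.
expert_trajs = [[0, 1, 2, 3, 4, 4] for _ in range(10)]

# Expert state-visitation frequencies (features are one-hot states,
# so feature expectations reduce to visitation counts).
expert_svf = np.zeros(N_STATES)
for traj in expert_trajs:
    for s in traj:
        expert_svf[s] += 1
expert_svf /= len(expert_trajs)

def soft_policy(theta):
    """Soft (maximum-entropy) value iteration under reward weights theta."""
    V = np.zeros(N_STATES)
    for _ in range(HORIZON):
        Q = theta[:, None] + V[P]           # Q[s, a] = r(s) + V(s')
        V = np.logaddexp(Q[:, 0], Q[:, 1])  # soft max over actions
    return np.exp(Q - V[:, None])           # stochastic policy pi[s, a]

def expected_svf(pi):
    """Expected state visitations over the horizon, starting at state 0."""
    d = np.zeros(N_STATES)
    d[0] = 1.0
    svf = d.copy()
    for _ in range(HORIZON - 1):
        d_next = np.zeros(N_STATES)
        for s in range(N_STATES):
            for a in range(N_ACTIONS):
                d_next[P[s, a]] += d[s] * pi[s, a]
        d = d_next
        svf += d
    return svf

# Gradient ascent on reward weights: the maximum-entropy IRL gradient is
# (expert visitations) - (expected visitations under current reward).
theta = np.zeros(N_STATES)
for _ in range(200):
    grad = expert_svf - expected_svf(soft_policy(theta))
    theta += 0.1 * grad
```

After training, the inferred reward is highest at state 4, the state the demonstrations keep seeking out, which is exactly the "recover the score function from watching play" behavior, scaled down to a problem small enough to solve exactly.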