[Question] What is the relationship between Preference Learning and Value Learning?

It appears that over the last few years the AI Alignment community has dedicated considerable attention to the Value Learning Problem [1]. In particular, the work of Stuart Armstrong stands out to me.

Con­cur­rently, dur­ing the last decade, re­searcher such as Eyke Hüller­meier Jo­hannes Fürnkranz pro­duced a sig­nifi­cant amount of work on the top­ics of prefer­ence learn­ing [2] and prefer­ence-based re­in­force­ment learn­ing [3].

While I am not highly familiar with the Value Learning literature, I consider the two fields closely related, if not overlapping. Yet I have rarely seen the Value Learning literature reference the Preference Learning work, and vice versa.

Is this because the two fields are less related than I think? More specifically, how do the two fields relate to each other?


References

[1] - Soares, Nate. "The value learning problem." Machine Intelligence Research Institute, Berkeley (2015).

[2] - Fürnkranz, Johannes, and Eyke Hüllermeier. Preference Learning. Springer US, 2010.

[3] - Fürnkranz, Johannes, et al. "Preference-based reinforcement learning: a formal framework and a policy iteration algorithm." Machine Learning 89.1-2 (2012): 123-156.
