It appears that in the last few years the AI Alignment community has dedicated great attention to the Value Learning Problem. In particular, the work of Stuart Armstrong stands out to me.
Concurrently, during the last decade, researchers such as Eyke Hüllermeier and Johannes Fürnkranz have produced a significant body of work on the topics of preference learning and preference-based reinforcement learning.
While I am not highly familiar with the Value Learning literature, I consider the two fields closely related, if not overlapping. Yet I have rarely seen the Value Learning literature reference the Preference Learning work, or vice versa.

Is this because the two fields are less related than I think? More specifically, how do the two fields relate to each other?
 - Soares, Nate. "The value learning problem." Machine Intelligence Research Institute, Berkeley (2015).
 - Fürnkranz, Johannes, and Eyke Hüllermeier. Preference learning. Springer US, 2010.
 - Fürnkranz, Johannes, et al. "Preference-based reinforcement learning: a formal framework and a policy iteration algorithm." Machine Learning 89.1-2 (2012): 123-156.