ML-based recommender systems seem capable of magically learning preferences I can't even describe; that's part of why they are so addictive. But the same capability could be beneficial when it helps us discover interests that we ourselves are unaware of.
I'm not that familiar with the latest research on recommender systems, but I've noticed a few works that try to mitigate the tension between addictive instant gratification and long-term well-being: https://arxiv.org/abs/2207.10192 https://arxiv.org/abs/2406.01611 Combining LLMs with these systems is also very much a trend.
I think a combination of stated preferences parsed by an LLM and a recommender system trained on revealed preferences will likely be a more balanced approach.
Telling the system my preferences probably works well when I'm quite certain, like for something I absolutely love or hate. But in many cases I'm uncertain, or I don't even know what I want until I see something and reveal it through my behavior. That was the problem motivating research like this: https://www.lesswrong.com/posts/k8SbrC8EMq2RpCmNg/post-mortem-ing-my-earliest-ml-research-paper-7-years-later The gap between revealed and stated preferences is also a classic topic in behavioral economics and psychology: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=437620 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=992869
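As a rough illustration of what such a combination might look like (this is my own toy sketch, not an implementation from any of the papers linked above; the scores and the certainty weight are made-up quantities):

```python
# Toy sketch: blend an LLM-parsed stated-preference score with a
# revealed-preference score, trusting the stated side more when the user
# expressed it with high certainty. All inputs here are hypothetical.

def blended_score(stated: float, revealed: float, certainty: float) -> float:
    """Return a combined preference score in [0, 1].

    stated:    score in [0, 1], e.g. parsed by an LLM from "I hate clickbait".
    revealed:  score in [0, 1], e.g. from a model trained on click history.
    certainty: weight in [0, 1] for how confident the stated preference is;
               near 1 for "absolutely love/hate", near 0 for "not sure".
    """
    return certainty * stated + (1 - certainty) * revealed

# A confidently stated dislike (stated=0.1, certainty=0.9) outweighs a
# high revealed score driven by compulsive clicking (revealed=0.9):
print(round(blended_score(stated=0.1, revealed=0.9, certainty=0.9), 3))
```

The point of the certainty weight is exactly the asymmetry described above: when I'm sure, my stated preference should dominate; when I'm uncertain, the system falls back on what my behavior reveals.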