While I’m probably much more of a lib than you guys (at least in ordinary human contexts), I also think that people in AI alignment circles mostly have really silly conceptions of human valuing and the historical development of values.[1] I touch on this a bit here. Also, if you haven’t encountered it already, you might be interested in Hegel’s work on this stuff — in particular, The Phenomenology of Spirit.
[1] This isn’t to say that people in other circles have better conceptions…
Yes, agreed that the concept of value is very often confused, mixing economic utility and decision theory with human preferences, constraints, and goals. Harry Law also discussed the collapse of different conceptions into a single idea of “values” here: https://www.learningfromexamples.com/p/weighed-measured-and-found-wanting