Philosophers have discussed these under the term “desires”. I think there has been a lot of progress since the time of the pre-Socratics: Aristotle’s practical syllogism, Buridan’s donkey, Hume’s emphasis on the independence of beliefs and desires, Kant’s distinction between theoretical reason and practical reason, direction of fit, Richard Jeffrey’s utility theory (where utilities are degrees of desire), the analysis of akrasia by various analytic philosophers, Nozick’s experience machine, and various others.
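(For concreteness on the “utilities are degrees of desire” point: the core of Jeffrey’s theory is his desirability axiom, sketched here from memory rather than quoted, with V standing for desirability and P for subjective probability. For incompatible propositions A and B with P(A ∨ B) > 0,

$$V(A \lor B) = \frac{P(A)\,V(A) + P(B)\,V(B)}{P(A) + P(B)},$$

i.e. the desirability of a proposition is the probability-weighted average of the desirabilities of the ways it could be true, so “utility” is a graded attitude toward propositions, a degree of desire, rather than a primitive assigned to outcomes.)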
“A lot of progress”… well, reality doesn’t grade on a curve. Surely someone has said something about something, yes, but have we said enough about what matters? Not even close. If you don’t know how inadequate our understanding of values is, I can’t convince you in a comment, but one way to find out would be to try to solve alignment. E.g. see https://tsvibt.blogspot.com/2023/03/the-fraught-voyage-of-aligned-novelty.html
There is quite the difference between “our understanding is still pre-Socratic” and “we haven’t said enough”. In general I think very few people here (not sure whether this applies to you) are familiar with the philosophical literature on topics in this area. For example, there is very little interest on LessWrong in normative ethics and the associated philosophical research, even though it is directly relevant to alignment: if you have an intent-aligned ASI (which is probably easier to achieve than shooting straight for value alignment), you probably need to know what ethics it should implement when asking it to create a fully value-aligned ASI.
Interestingly, the situation is quite different on the EA Forum, where there are regular high-quality posts addressing issues in normative ethics with reference to the academic literature, such as the repugnant conclusion, the procreation asymmetry, and the status of person-affecting theories. Any satisfactory normative ethical theory needs to solve these problems, similar to how any satisfactory normative theory of epistemic rationality needs to solve the various epistemic paradoxes and related issues.
Again, I don’t know whether this applies to you, but most cases of “philosophy has made basically no progress on topic X” seem to come from people who have very little knowledge of the philosophical literature on topic X.
I’m not sure. I did put in some effort to survey various strands of philosophy related to axiology, but not much effort. E.g. I looked at some writings in the vein of Anscombe’s study of intention; tried to read D+G because maybe “machines” is the sort of thing I’m asking about (was not useful to me lol); have read some Heidegger; some Nietzsche; some more obscure things like “Care Crosses the River” by Blumenberg; the basics of the “analytical” stuff LWers know (including doing some of my own research on decision theory); etc. But in short, no, none of it even addresses the question. And the failure is the sort of failure that was supposed to have its coarsest outlines brought to light by genuinely Socratic questioning, which is why I call it “pre-Socratic”, not to say that “no one since Socrates has billed themselves as talking about something related to values or something”.
No, that’s part of the problem. There’s some pretheoretic material as a starting point here:
https://www.lesswrong.com/posts/YLRPhvgN4uZ6LCLxw/human-wanting
Whatever those things are, you’d want to understand the context that makes them what they are:
https://www.lesswrong.com/posts/HJ4EHPG5qPbbbk5nK/gemini-modeling
And refactor the big blob into lots of better concepts, which would probably require a larger investigation and conceptual refactoring:
https://www.lesswrong.com/posts/TNQKFoWhAkLCB4Kt7/a-hermeneutic-net-for-agency
In particular so that we understand how “values” can be stable (https://www.lesswrong.com/posts/Ht4JZtxngKwuQ7cDC/tsvibt-s-shortform?commentId=koeti9ygXB9wPLnnF) and can incorporate novel concepts / deal with novel domains (https://www.lesswrong.com/posts/CBHpzpzJy98idiSGs/do-humans-derive-values-from-fictitious-imputed-coherence) and eventually address the stuff here: https://www.lesswrong.com/posts/ASZco85chGouu2LKk/the-fraught-voyage-of-aligned-novelty
I think even communicating the question would take a lot of work, which as I said is part of the problem. A couple hints:
https://www.lesswrong.com/posts/NqsNYsyoA2YSbb3py/fundamental-question-what-determines-a-mind-s-effects (I think if you read this it will seem incredibly boringly obvious and trivial, and yet, literally no one addresses it! Some people sort of try, but fail so badly that it can’t count as progress. Closest would be some bits of theology, maybe? Not sure.)
https://www.lesswrong.com/posts/p7mMJvwDbuvo4K7NE/telopheme-telophore-and-telotect (I think this distinction is mostly a failed attempt to carve things, but the question that it fails to answer is related to the important question of values.)
You should think of the question of values as being more like “what is the driving engine” rather than “what are the rules” or “what are the outcomes” or “how to make decisions” etc.