Sorry, I have only read selections of the sequences, and not many of the posts on metaethics. Though as far as I’ve gotten, I’m not convinced that the sequences really solve, or make obsolete, many of the deeper problems of moral philosophy.
The original post, and this one, seem to be running into the “is-ought” gap and moral relativism. The inability to separate terminal values from biases comes from there being no truly objective terminal values. Despite Eliezer’s objections, this is a fundamental problem for determining what terminal values or utility function we should use, a task you and I are both interested in undertaking.
I hadn’t come across the von Neumann-Morgenstern utility theorem before reading this post; thanks for drawing it to my attention.
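For anyone else who hadn’t seen it, here is a rough statement (my paraphrase from a first reading, so treat it as a sketch rather than the canonical formulation): if an agent’s preferences over lotteries satisfy completeness, transitivity, continuity, and independence, then there exists a utility function $u$ such that the agent prefers whichever lottery has the higher expected utility:

$$ L \succeq M \iff \sum_i p_i \, u(x_i) \;\ge\; \sum_j q_j \, u(y_j), $$

where $L$ assigns probability $p_i$ to outcome $x_i$, $M$ assigns $q_j$ to outcome $y_j$, and $u$ is unique up to a positive affine transformation.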
Looking at moral philosophy through the lens of agents working with utility/value functions is an interesting exercise; it’s something I’m still working on. In the long run, I think some deep thinking needs to be done about what we end up selecting as terminal values, and how we incorporate them into a utility function. (I hope that isn’t stating something that is blindingly obvious.)
I guess where you might be headed is into meta-ethics. As I understand it, meta-ethics includes debates on moral relativism that are closely related to the existence of terminal/intrinsic values. Moral relativism asserts that all values are subjective (i.e., only the beliefs of individuals) rather than objective (i.e., universally true). So no practice or activity is inherently right or wrong; it is just the perception of people that makes it so. As you might imagine, this can be used as a defense of violent cultural practices (it could even be used in defense of baby-eating).
I tend to agree with the position of moral relativism; unfortunate though it may be, I’m not convinced there are things that are objectively valuable. I believe that if there are no agents to value something, then that something effectively has no value. That holds for people and their values too. That said, we do exist, and I think subjective values count for something.
Humanity has come to some degree of consensus over what should be valued, probably largely as a result of evolution and social conditioning. So from here, I think it mightn’t be wasted effort to explore the selection of different intrinsic values.
Luke Muehlhauser has called morality an engineering problem, while Sam Harris has described it as a landscape: the surface is the terminal value we are trying to maximize (Harris picked the well-being of conscious creatures), and societal practices are the variables. Though I don’t know that well-being is the best terminal value, I like the idea of treating morality as an optimization problem; I think this is a reasonable way to view ethics. Without objective values, it might just be a matter of testing different sets of subjective terminal values until we find the optimum (and hopefully don’t get trapped in a local maximum).
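To make the optimization framing concrete, here is a toy sketch in Python. The one-dimensional “practice” parameter and the well_being function are entirely made up for illustration; the point is just that naive hill-climbing on a landscape like the one Harris describes can get stuck at a local maximum, which is exactly the trap mentioned above.

```python
import math
import random

# Toy "moral landscape": one societal-practice parameter, and a
# completely made-up well-being function as the terminal value.
# (Choosing a real well-being measure is, of course, the hard part.)
def well_being(x):
    # Two humps: a local maximum near x = 2 and a global one near x = 8.
    return 3 * math.exp(-(x - 2.0) ** 2) + 5 * math.exp(-((x - 8.0) ** 2) / 4)

def hill_climb(x, step=0.1, iterations=10_000):
    """Accept a random nearby point whenever it improves well-being."""
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        if well_being(candidate) > well_being(x):
            x = candidate
    return x

# Starting near the small hump, the climber settles at the local
# maximum (~2) and never discovers the better peak at ~8.
print(round(hill_climb(1.0), 2))  # ~2.0
print(round(hill_climb(6.5), 2))  # ~8.0
```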
Nevertheless, I think it’s interesting to suppose that something is objectively valuable. It doesn’t seem like a stretch to me to say that knowledge of what is objectively valuable would itself be objectively valuable, and that the search for that knowledge would probably also be objectively valuable. After all that, it would be somewhat ironic if it turned out that the universal objective values didn’t include the survival of life on Earth.