Sorry, I have only read selections of the sequences, and not many of the posts on metaethics. But as far as I’ve gotten, I’m not convinced that the sequences really solve, or render obsolete, many of the deeper problems of moral philosophy.
The original post, and this one, seem to be running into the “is-ought” gap and moral relativism. The inability to separate terminal values from biases stems from there being no truly objective terminal values. Despite Eliezer’s objections, this is a fundamental problem for determining what terminal values or utility function we should use, a task you and I are both interested in undertaking.
I think this community vastly overestimates its grip on meta-ethical concepts like moral realism and moral anti-realism (e.g., the hopelessly confused discussion in this thread). I don’t think the meta-ethics sequence resolves these sorts of basic issues.
I’m still coming to terms with the philosophical definitions of the different positions and their implications, and the Stanford Encyclopedia of Philosophy seems like a more rounded account of the different viewpoints than the meta-ethics sequence. I think I might be better off continuing to read the SEP and forming my own views first, and then reading the meta-ethics sequence with that philosophical background in place.
By the way, I can see your point that objections to moral anti-realism in this community may be partly motivated by the worry that, if anti-realism holds, friendly AI becomes unprovable. As I understand it, any action can be “rational” if the value/utility function is arbitrary.