I think this community vastly overestimates its grip on meta-ethical concepts like moral realism and moral anti-realism (e.g. the hopelessly confused discussion in this thread). I don’t think the meta-ethics sequence resolves these sorts of basic issues.
I’m still coming to terms with the philosophical definitions of the different positions and their implications, and the Stanford Encyclopedia of Philosophy seems to give a more rounded account of the various viewpoints than the meta-ethics sequence does. I suspect I’d be better off first continuing to read the SEP and forming my own views, and then reading the meta-ethics sequence with that philosophical background in place.
By the way, I can see your point that objections to moral anti-realism in this community may be partly motivated by the worry that, under anti-realism, friendly AI becomes unprovable. As I understand it, any action can be “rational” if the value/utility function is arbitrary.