The idea of complexity of value explains why “happiness” or “selfishness” can’t be expected to capture the whole thing: when you talk about “good”, you mean “good” and not some other concept. To unpack “good”, you have no option but to list all the things you value, and such a list uttered by a human can’t reflect the whole thing accurately anyway.
The Metaethics sequence deals with the error of confusing moral reasons with historical explanations: evolution’s goals are not your own and have no normative power over your own goals, even if there is a surface similarity and hence some explanatory power.
I agree that those are important things to learn, just not for the topic Tesseract is writing about.
What do you mean? Tesseract makes these exact errors in the post, and those posts explain how not to err there, which makes the posts directly relevant.
Tesseract’s conclusion is hindered by not having read about the interplay between decision theory and values (i.e. how to define a “selfish action”, which consequences to take into consideration, etc.), not by the complexity of value as such. Tesseract would be making the same errors on decision theory even if human values were not so complex, and decision theory is the focus of the post.
It might not be relevant to “Tesseract’s conclusion”, but it is relevant to other smaller conclusions made in the post along the way, even if those are all independent and don’t undermine each other.