My position is a combination of:
1. Eliezer was too confident in his own metaethics, and to a lesser degree in his decision theory (unlike metaethics, he never considered decision theory a solved problem, but he was still willing to draw stronger practical conclusions from it than I think was justified), and probably in other philosophical positions that are less salient in my mind (e.g., altruism and identity).
2. Trying to solve philosophical problems like these on a deadline, with the intent to deploy the solutions into AI, is not a good plan, especially if you’re planning to deploy them even while they remain highly controversial (i.e., a majority of professional philosophers think you are wrong). This applies to Eliezer’s effort as well as everyone else’s.
A couple of posts arguing for 1 above:
https://www.lesswrong.com/posts/QvYKSFmsBX3QhgQvF/morality-isn-t-logical
https://www.lesswrong.com/posts/orhEa4wuRJHPmHFsR/six-plausible-meta-ethical-alternatives
Did the above help you figure it out? If not, can you be more specific about what’s confusing you in that thread?
If the majority of professional philosophers do endorse your metaethics, how seriously should you take that?
And conversely, do you think it’s implausible that you could have correctly reasoned your way to the correct metaethics, as validated by a narrower community of philosophers, without yet having convinced everyone in the field?
The Sequences often emphasize that most people in the world believe in God, so if you’re interested in figuring out the truth, you have to be comfortable confidently disclaiming widely held beliefs. What do you say to the person who assesses that academic philosophy is a field so broken, with incentives so warped against intellectual progress, that its collective opinion should be discarded?
Do you just claim that they’re wrong about that, on the object level, and that this hypothetical person should have more respect for the views of philosophers?
(That said, I’ll observe that there’s an important asymmetry in practice between “almost everyone is wrong in their belief of X, and I’m confident about that” and “I’ve independently reasoned my way to Y, and I’m very confident of it.” Other people being wrong != me being right.)