I’m assuming that the LessWrongers interested in ‘should I be a vegan?’ are at least somewhat inclined toward effective altruism, utilitarianism, compassion, or what-have-you. I’m not claiming a purely selfish agent should be a vegan. I’m also not saying that the case is purely intellectual (in the sense of having nothing to do with our preferences or emotions); I’m just saying that the intellectual component is correctly reasoned. You can evaluate it as a hypothetical imperative without asking whether the antecedent holds.
the LessWrongers interested in ‘should I be a vegan?’
I am sorry, where is this coming from?
I’m just saying that the intellectual component is correctly reasoned
At this level of argument there isn’t much intellectual component to speak of. If your value system already says “hurting creatures X is bad”, the jump to “don’t eat creatures X” doesn’t require great intellectual acumen. It’s just a direct, first-order consequence.
I didn’t say it requires great intellectual acumen. In the blog post we’re talking about, I called the argument “air-tight”, “very simple”, and “almost too clear-cut”. I wouldn’t have felt the need to explicitly state it at all, were it not for the fact that Eliezer and several other LessWrong people have been having arguments about whether veganism is rational (for a person worried about suffering), and about how confident we can be that non-humans are capable of suffering. Some people were getting the false impression from this that this state of uncertainty about animal cognition was sufficient to justify meat-eating. I’m spelling out the argument only to make it clear that the central points of divergence are normative and/or motivational, not factual.