All the work is done in the premises—which is a bad sign rhetorically, but at least a good sign deductively. If I thought cows were close enough to us that there was a 20% chance that hurting a cow was just as bad as hurting a human, I would definitely not want to eat cows.
Unfortunately for cows, I think there is an approximately 0% chance that hurting cows is (according to my values) just as bad as hurting humans. It’s still bad, but its badness is a much smaller number that is a function of my upbringing, cows’ cognitive differences from me, and the lack of overriding game-theoretic concerns as far as I can tell. I don’t think of cows as “mysterious beings with some chance of being Sacred”; I think of them as non-mysterious cows with some small amount of sacredness.
I don’t even know what 20% means in this context. That 5 cows = 1 person? Not even a rabid vegan would probably claim that.
Pretty sure the unpacking goes like “I think it is 20% likely that a moral theory is ‘true’ (I’m interpreting ‘true’ as “what I would agree on after perfect information and time to grow and reflect”) in which hurting cows is as morally bad as hurting humans.”
Right, sure. But does it not follow that, if you average over all possible worlds, 5 cows have the same moral worth as 1 human?
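The averaging move here can be made explicit with a toy expected-value calculation. The numbers are the thread's hypotheticals, and treating all non-parity theories as assigning cows zero weight is a simplifying assumption the thread doesn't actually commit to:

```python
# Toy expected-value reading of "20% chance hurting a cow is as bad as
# hurting a human". Assumption: in the other 80% of theories, a cow's
# moral weight is exactly zero (a deliberate simplification).
p_parity = 0.2          # credence in the parity theory (cow == human)
weight_if_parity = 1.0  # cow's weight, in humans, under that theory
weight_otherwise = 0.0  # cow's weight under every other theory

expected_cow_weight = (p_parity * weight_if_parity
                       + (1 - p_parity) * weight_otherwise)
cows_per_human = 1 / expected_cow_weight

print(expected_cow_weight)  # 0.2 humans per cow
print(cows_per_human)       # 5.0 cows per human
```

So under these (very spherical-cow) assumptions, the 20% credence really does cash out as 5 cows = 1 person; the objection in the parent comments is effectively that nobody holds the zero-weight assumption for the other 80%.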
I personally know at least one rabid vegan for whom 1 cow > 1 person.
Why “>” and not “=”? Is this true for other animals too or are cows special?
Tentative guess: Humans are considered to have negative value because (among other things) they kill cows (carbon footprint, etc.).
Also they might just not be rational.
Kill them all.
I’ve seen it argued.
Notably by Agent Smith from the Matrix.
People who argue this can start with themselves.
I think there’s a pretty solid case for that being a non-optimal solution, even if you’ve bought all their other premises. (There aren’t enough of them for a single suicide, or even mass suicides, to inspire other people to follow, and then they’d just lose the long-term memetic war.)
I am quite confident of this result, anyway. Actually, I don’t see any chance of a memetic war at all, never mind a long-term one X-)
Well… the example went like this: “If there was a fire and I was given the option of saving just the cow or just the person, I would save the cow.” Presumably it would be the same with a pig or a dog.
This is a transposed version of the trolley problem: ‘I would not actively kill any human, but given the choice, I consider a cow to be more valuable’.
The motivating reason was something along the lines of “humans are inherently evil, while animals are incapable of evil”.
Well, how comparable are they, in your view?
Like, if you’d kill a cow for 10,000 dollars (which could save a number of human lives), but not fifty million cows for 10,000 dollars, you evidently see some cost associated with cow-termination. If, when choosing methods, you could pick between methods that induced lots of pain and methods that instantly terminated the cow-brain, and you had a strong preference for the less painful methods (assuming they’re just as effective), then you clearly value cow-suffering to some degree.
The reason I went basically vegan is I realized I didn’t have enough knowledge to run that calculation, but I was fairly confident that I was ethically okay with eating plants, sludges, and manufactured powders, and most probably the incidental suffering they create, while I learned about those topics.
I am basically with you on the notion that hurting a cow is better than hurting a person, and I think horse is the most delicious meat. I just don’t eat it anymore. (I’d also personally kill some cows, even in relatively painful ways, in order to save a few people I don’t know.)
This triggered a question to bubble up in my brain.
How much time of pure wireheading bliss do you need to give to a cow brain in order to feel not guilty about eating steak?
Given my attitude towards wireheading generally, that would probably make me feel more guilty.
I REALLY like this question, because I don’t know how to approach it, and that’s where learning happens.
So it’s definitely less bad to grow cows with good life experiences than with bad life experiences, even if their ultimate destiny is being killed for food. It’s kind of like asking whether you’d prefer a punch in the face and a sandwich, or just a sandwich. A really easy decision.
I think it’d be pretty suspicious if my moral calculus worked out in such a way that no version of a maximally hedonistic existence for a cow could count as a damned awesome life, and that for every version we should feel like monsters for allowing the cow to have existed at all.
That having been said, if you give me a choice between cows that have been re-engineered such that their meat is delicious even after they die of natural causes, and humans don’t artificially shorten their lives, and they stand around having cowgasms all day - and a world where cows grow without brains - and a world where you grow steaks on bushes -
I think I’ll pick the bush-world, or the brainless cow world, over the cowgasm one, but I’d almost certainly eat cow meat in all of them. My preference there doesn’t have to do with cow-suffering. I suspect it has something to do with my incomplete evolution from one moral philosophy to another.
I’m kind of curious how others approach that question.