You seem to be arguing “your theory of moral worth is incomplete, so I don’t have to believe it”. Which is true. But without presenting a better or even different theory of moral worth, it seems like you’re mostly just doing that because you don’t want to believe it.
To the extent you are presenting a different theory, your conclusions seem inconsistent with that theory. To summarize: I agree that you can’t make objective decisions about moral worth. But you can make objective decisions about the self-consistency of theories. And the “bees are 7-15% as worthwhile as humans” position is more self-consistent than any alternative I know of, let alone any you’ve presented.
Having said that, I don’t like the conclusions either, and I agree that they’re not based on a thorough theory of consciousness or any other objective basis of moral worth. I’ll even admit that I’m going to do essentially the same thing you’re doing, and continue to enjoy honey here and there, based on my ability to not believe a conclusion just because it’s inconvenient, plus my suspicion that there’s something wrong with it. But unlike you, I’m going to admit that I’m being inconsistent by ignoring the best theory on the topic I know of.
Now in a little more detail on why I think you’re being inconsistent:
I don’t see why you’d say hair color is obviously a pretty bad criterion while also saying that judgments about relative worth are pretty much totally arbitrary and aesthetic. I agree that judgments about moral worth are essentially arbitrary and aesthetic, but surely some claims about relative worth are more self-consistent than others (and probably by a lot), just like with hair color.
The argument for the worth of bees (which I happened to read) seems like it could be taken as exactly an appeal to consistency. Sure, you could say “well, I don’t care about them because they’re bees”, but that sounds exactly like the hair color criterion unless it’s accompanied by deeper arguments for the disanalogy between bees and humans (assuming you care about other humans; it’s perfectly consistent to just not care about humans, it’s just a bit harder to make real friends if that’s your position).
So I think there are less-wrong answers out there; we just don’t have them yet. But the best answer we have thus far is 7-15%, and dismissing that without addressing the arguments for why it’s the most consistent position seems to contradict your own stated position that there are more and less consistent arguments.
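(To give a sense of where a number like that could come from: as I understand it, estimates of this kind are typically built as an expected value, multiplying a probability of sentience by a welfare range conditional on sentience. With made-up illustrative inputs, not RP’s actual figures:

$$W_{\text{bee}} = P(\text{sentient}) \times \mathbb{E}[\text{welfare range} \mid \text{sentient}] = 0.25 \times 0.4 = 0.10,$$

which would land squarely in the 7-15% band.)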
Separately, on your accusation of bad faith arguments from that side of the aisle:
Your dismissal of people wanting you to read a long blog post as “in bad faith” seems both quite wrong and quite unhelpful (tending to create arguments rather than discussion), but I’ll assume you wrote it in good faith.
I won’t go into details, but I think it is, in short, bad to assume ill intent when incompetence will do. Tracking what counts as good and bad epistemics is complicated, so I sincerely doubt that most of the authors asking you to read that blog post are thinking anything like “haha, that will stop them from arguing with me regardless of whether I’m right!”. Okay, maybe a little of that thought sometimes, but usually I’d assume it’s mostly in good faith, with the thought being “I’m pretty sure I’m right, because I’ve written a much more careful analysis than anyone is bringing to bear against my conclusion. They’d agree if they’d just go read it.” Which might not be a good move, but I do take it to be mostly honest.
Just like I take your rather brusque and shallow dismissal of those arguments to be in good faith.
I don’t have to present an alternative theory in order to disagree with one I believe to be flawed or based on false premises. If someone gives me a mathematical proof and I identify a mistake, I don’t need to present an alternative proof before I’m allowed to ignore it.
But it would be better if you did. And more productive. And admirable.
You just have to clearly draw the distinction between a “not X” claim and a “Y” claim in your writing.
You get his point though, right? It’s basically this Scott article:
https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/
Like, all of us need to have a position about what we value, because that’s what we use to guide our decisions. But all theories of ethics are “flawed”: basically they’re formulated in natural language, and none of the terms are on very firm mathematical footing.
But you should be very careful about using this as an argument against any specific ethical theory, because that line of reasoning lets you discount any theory you don’t want to believe, even if that theory actually has stronger arguments for it, by your own standards, than what you currently believe.
I think your proof example isn’t right; a better example is this:
I’m a mathematician and do tons of mathematical work. You show me your proof of the Riemann Hypothesis. I can’t find any real flaws in it, but I tell you it’s based on ZFC, and ZFC is subject to Gödel’s incompleteness theorems, so we can’t be sure the system you’re using to prove RH is even consistent; therefore I ignore your proof. You ask me what to use instead of ZFC, and I tell you “I don’t have to present an alternative theory in order to disagree with one I believe to be flawed or based on false premises....”. Then I leave and continue doing my work in william type theory, which is also subject to Gödel’s incompleteness, which I choose not to think about.
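(For concreteness, the theorem doing the work in that story is Gödel’s second incompleteness theorem. One standard statement: for any consistent, effectively axiomatized theory $T$ that can formalize basic arithmetic,

$$T \nvdash \mathrm{Con}(T),$$

i.e. $T$ cannot prove its own consistency. This applies equally to ZFC and to the type theory in the story, which is the point of the analogy.)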
Sure, in the case of severely flawed theories. And you’ll have to judge how flawed a theory must be before you stop believing it (or severely downgrade its likelihood, if you’re thinking in Bayesian terms). I agree that you don’t need an alternative theory, and stand corrected.
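(A minimal sketch of that downgrade, with made-up numbers: write $T$ for “the theory is right” and $F$ for “this flaw was found”, and suppose a prior $P(T) = 0.5$, with the flaw eight times likelier under a wrong theory, $P(F \mid T) = 0.1$ and $P(F \mid \neg T) = 0.8$. Then

$$P(T \mid F) = \frac{P(F \mid T)\,P(T)}{P(F \mid T)\,P(T) + P(F \mid \neg T)\,P(\neg T)} = \frac{0.05}{0.05 + 0.40} \approx 0.11.$$

The theory gets severely downgraded, not eliminated.)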
But rejecting a theory without a better alternative can be suspicious, which is what I was trying to get at.
If you accept some theories with a flaw (like “I believe humans have moral worth even though we don’t have a good theory of consciousness”) while rejecting others because they have that same flaw, you might expect to be accused of inconsistency, or even motivated reasoning if your choices let you do something rewarding (like continuing to eat delicious honey).
But rejecting a theory without a better alternative can be suspicious

Nah, I still disagree. The set of theories is vast; one being promoted to my attention is not strong evidence that it is more true than all of those that haven’t been. People can separately be hypocritical or inconsistent, but that’s something that should be argued for directly.
You seem to be arguing “your theory of moral worth is incomplete, so I don’t have to believe it”. Which is true. But without presenting a better or even different theory of moral worth, it seems like you’re mostly just doing that because you don’t want to believe it.

I would overall summarize my views on the numbers in the RP report as “these provide zero information; you should update back to where you were before you read them”. Of course you can still update on the fact that different animals have complex behaviour, but then you’ll have to make the case for “you should consider bees to be morally important because they can count and show social awareness”. This is a valid argument! It trades the faux-objectivity of the RP report for the much more useful property of being something that can actually be attacked and defended.
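(Formally, “zero information” just means the report is no likelier under one hypothesis than under its negation, so by Bayes’ rule the posterior equals the prior:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} = P(H) \quad \text{whenever } P(E \mid H) = P(E \mid \neg H).)$$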
I don’t see why you’d say hair color is obviously a pretty bad criterion while also saying that judgments about relative worth are pretty much totally arbitrary and aesthetic.

I addressed this in another comment, but if you want me to give more thoughts I can.
But the best answer we have thus far is 7-15%, and dismissing that without addressing the arguments for why it’s the most consistent position seems to contradict your own stated position that there are more and less consistent arguments.

The thing I take issue with is using the RP report as a Schelling point/anchor point that we have to argue away from. When evidence and theory are both scarce, choosing the Schelling point is most of the argument, and I think the RP report gives zero information.
All good points.
I agree that you need an argument for “you should consider bees to be morally important because they can count and show social awareness”; I was filling that argument in. To me it seems intuitive and a reasonable baseline assumption, but it’s totally reasonable that it doesn’t seem that way to you.
(It’s the same argument I make in a comment justifying neuron count as a very rough proxy for moral consideration, in response to Kaj Sotala’s related shortform. I do suspect that in this case many of bees’ cognitive abilities do not correlate with whatever-you-want-to-call-consciousness/sentience in the same way they would in mammals, which is one of the reasons I’ll continue eating honey occasionally.)
Agreed that trying to insist on a Schelling or anchor point is bad argumentation without a full justification. How much justification it needs is in the eye of the beholder. It seems reasonable to me for reasons too complex to go into, and reasonable that it doesn’t to you, since you don’t share those background assumptions/reasoning.
For your second part, whoops! I meant to include a disclaimer that I don’t actually think BB is arguing in bad faith, just that his tactics cash out to being pretty similar to those of lots of people who are, and I don’t blame people for being turned off by it.
Thanks, that makes sense.
Perhaps I’m being a bit naive; I’ve avoided the worst parts of the internet :)
I guess I think of arguing in bad faith as being on a continuum, and mostly resulting from motivated reasoning and not having good theories about what clear/fair argumentation is. I think it’s pretty rare for someone’s faith to be so bad that they’re thinking “I’ll lie/cheat to win this argument”—although I’m sure this does happen occasionally. I think most things that look like really bad faith are a product of it being really easy to fool yourself into thinking you’re making a good valid argument, particularly if you’re moving fast or irritated.