Who goes around downvoting this stuff without saying what they object to? It’s a free world and you can downvote what you want, but to me it looks suspiciously like shouting SHUT UP SHUT UP SHUT UP when someone is making arguments that are reasonable, but you find uncomfortable to hear.
Where’s the counterargument or community norm violation? This looks well-written, rational, and important-if-true, prime stuff for this community. It’s not a new argument, but that doesn’t usually lead to such an enthusiastic downvoting.
My general experience is that anything touted as important-if-true is neither. I find this to be as reliable a rule as Betteridge’s Law of Headlines (if the headline is a question, the answer is “no”). Indeed, the two rules are closely related: the headlines “Scientists may have found…” and “Have scientists found…?” are interchangeable.
I agree. I pass up pieces with those titles. But I don’t think it applies to this piece. The author didn’t pitch it that way; I did. And in fact I think it is true and important, and that there is a lot of emotional hang-up and resistance to accepting that we are all falling far, far short of the level of ethics we’d like to imagine we have. I have not taken the Giving What We Can pledge, so I don’t exactly agree with the author’s conclusions, and I don’t think the logic is nearly tight enough. But I think the question of whether we should all take that pledge or similar things is very much an open one; the claim that we’d be happier if we did seems quite plausible to me and very much worthy of debate on LessWrong.

I’m disturbed to see it get downvotes instead of debate; I think if a less sensitive and equally important topic were written about this poorly, it would be treated much more kindly. The claim that this has been debated to death already, so we need not engage with repetitive and bad versions, seems simply false to me. I have lived on LessWrong for the last three years and seen no serious debate of this topic in that time, nor a hint of an accepted consensus.
I happen to think that solving alignment trumps all other ethical concerns, so I’m not going to be the one to do a better treatment here. But I am disappointed in the community for being so hostile to this person’s attempts.
See this discussion for why downvotes-without-explanations are common these days.
I have previously critiqued Bentham’s Bulldog’s writing on here. This also seems relatively low-quality, along lines similar to those previous critiques. You should not expect an explanation for each individual instance (and more broadly, demanding explanations is quite costly, especially given the reasonable expectation of follow-up and the associated stress).
I care about many EA principles, and my reaction is definitely not “SHUT UP SHUT UP SHUT UP” about arguments I am uncomfortable hearing; it’s just that this author makes a lot of really low-quality arguments that mostly consist of applause lights. There are many great authors writing about similar ideas whom I do respect a lot.
Can you give an example? I addressed your previous (in my view, quite unpersuasive) objections at some length: https://benthams.substack.com/p/you-cant-tell-how-conscious-animals
Sure
(Edit: To clear up confusion, I responded before Bentham’s Bulldog edited their comment, which originally just said “Can you give an example?”)
My post is a response to the arguments you make in that comment! I find the notion somewhat absurd that my thinking is disreputably poor because I go by the results of the most detailed report on animal consciousness done to date and those results don’t accord with your unreflective intuitions (which are largely influenced by a host of obviously irrelevant factors, like size!). It wouldn’t seem so unintuitive that giant prehistoric arthropods rolling around in pain were conscious!
I will not repeat the critique I have already made over there here again. You asked for an example, which I gave you.
I also don’t appreciate you calling my intuitions “unreflective”, or asserting invariants about those intuitions which I have not expressed (like the idea that “size” inherently matters, which, I mean, it clearly does due to at least Landauer limits, though I agree that is not particularly strong evidence, and on which I have not taken any previous stance).
(I am not intending to respond further)
Your example of me being obviously wrong is that you have an intuition that the numbers I rely on, from the most detailed report to date, are wrong.
Size likely correlates slightly with mental complexity, but not to the extent that it affects our intuitions. The gulf between bees and fish mentally is pretty small, while the gulf between bees and fish in terms of our intuitions about consciousness is very large. I was making a general claim about people’s sentience intuitions, not yours.
Probably “unreflective” was the wrong word—“direct” would have been better. What I meant was that you weren’t relying on any single model, or average of models, or anything of the sort. Instead, you were just directly relying on how conscious animals seemed to you, which, for the reasons I gave, strikes me as an absolutely terrible method.
(I also find it a bit rich that you are acting like my comment is somehow beyond the pale, when I’m responding to a comment of yours that basically amounts to saying my arguments are consistently so idiotic my posts should be downvoted even when they don’t say anything crazy).
To think insects’ expected sentience is very low, you have to be very confident their sentience is low. Such great confidence would require some very compelling argument for why even dramatic behavior isn’t indicative of much sentience. Suffice it to say, I don’t see an argument like that, and I think there are plenty of reasons to think it’s reasonably likely insects feel intense pain.
I will again reiterate that a report being “detailed” really doesn’t mean approximately anything. You are using it as some kind of weird argument from authority. The most “detailed” reports on the moral value of animals are probably some random religious texts in the Vatican about whether animals have souls. Do I care? No.
Instead, you were just directly relying on how conscious animals seemed to you, which, for the reasons I gave, strikes me as an absolutely terrible method.

No, I am not doing that. I am using different arguments, which seem more robust to me and which are grounded in my own best guesses of how morality works. I absolutely do not think that I can intuitively and directly assess, without the need for sophisticated study, the internal experience of any kind of cognitive system. Indeed, I suggested neuron count myself as one such method! Neuron count is vaguely associated with size, and IMO is just a strictly better proxy than size (indeed, my guess is that, accounting for neuron count, the remaining size correlation becomes negative, because of the greater need to have enough brain to control the bigger body, which reduces the compute available for more morally relevant computation).
To think insects’ expected sentience is very low, you have to be very confident their sentience is low. Such great confidence would require some very compelling argument for why even dramatic behavior isn’t indicative of much sentience.

No, that is not how epistemology works. The prior on any hypothesis this specific is extremely low. Strong evidence is common. You do not need some “very compelling argument”, as there is no obvious prior to use for this statement space. I have no trouble coming up with 10^50 statements to which I assign a probability as low as I assign to “7 bees are more morally relevant than a human”.

There are just a lot of ways to cut up morality, and just because you raised to consideration one specific hypothesis that might have really big implications doesn’t mean someone now needs to provide overwhelming evidence. Bits add up really quickly. It’s just really not that hard to build up confidence that can overcome a factor of a billion (and indeed it is very rare for a hypothesis to end up anywhere other than approximately 0% or approximately 100%).
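To make the “bits add up quickly” point concrete, here is a minimal numeric sketch; the 4:1 likelihood ratio and the independence of the observations are purely illustrative assumptions, not claims about any particular evidence:

```python
import math

# A prior of one in a billion corresponds to roughly 30 bits of required evidence.
prior_odds = 1e-9
bits_needed = -math.log2(prior_odds)        # ~29.9 bits

# Suppose (illustratively) each independent observation carries a 4:1
# likelihood ratio, i.e. contributes 2 bits of evidence.
bits_per_observation = math.log2(4)
observations_needed = math.ceil(bits_needed / bits_per_observation)

posterior_odds = prior_odds * 4 ** observations_needed
print(bits_needed, observations_needed, posterior_odds)
# ~29.9 bits, 15 observations, posterior odds just above 1:1
```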
I thought you weren’t planning on responding!
If you’re going to rely on neuron counts, you should engage with the arguments RP gives against neuron counts, which are, to my mind, very decisive. It’s particularly unreasonable to rely on neuron counts in a domain like this where there’s lots of uncertainty. If a model tells you A matters less than B by a factor of 100,000 or something, most of the expected value of A relative to B is in possible worlds where the model is wrong. https://www.lesswrong.com/posts/GrtbTAPfkJa4D6jjH/confidence-levels-inside-and-outside-an-argument
To use neuron counts as a proxy for the sentience of even simple creatures, you have to be extremely confident—north of 99% confident—that the correct proxy assigns only very minimal consciousness to those animals. But it’s not clear what justifies this overwhelming confidence.
Analogy: if you have a model that predicts aliens are only one millimeter in size, then even if you’re pretty sure the model is right, you shouldn’t use its prediction as your expected alien size, because the overwhelming majority of the expected size is in worlds where the model is wrong.
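Here is a minimal sketch of that expected-value point with made-up numbers: 90% credence in a model that says A has 1/100,000 of B’s capacity, and rough parity in the worlds where the model is wrong.

```python
# Illustrative numbers only.
p_model_right = 0.9
ratio_if_right = 1 / 100_000   # the model's claim: A has 1/100,000 of B's capacity
ratio_if_wrong = 1.0           # assume rough parity if the model is wrong

expected_ratio = (p_model_right * ratio_if_right
                  + (1 - p_model_right) * ratio_if_wrong)
print(expected_ratio)
# ~0.1: nearly all of the expected value comes from the 10% of worlds
# where the model is wrong, not from the model's own estimate.
```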
Why is the hypothesis that bees are more than insignificantly conscious a highly specific one with insignificant prior odds? We know that humans are capable of intense pain. There is some neural architecture that produces intense pain. What gives us immense confidence that this isn’t present in animals? Being confident a priori, at a billion to one odds, that insects don’t feel intense pain is silly—it’s like being confident a priori that insects don’t have a visual cortex. It’s not like there’s some natural parameterization of possible physical states giving rise to consciousness on which only a tiny portion entails insect consciousness.
As an aside, I think people take the wrong lesson away from the Mark Xu essay. Specific evidence gives a very high Bayes factor. The reason someone saying their name is Mark Xu gets such a high Bayes factor is that Mark Xu is a very specific name—as all specific names are. But a person merely asserting some proposition doesn’t carry any comparable Bayes factor. For more see http://www.wall.org/~aron/blog/just-how-certain-can-we-be/
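As a toy illustration of that contrast (all numbers invented): the Bayes factor of a statement is how likely it is to be made if true versus if false, which is enormous for a specific claim like a full name but modest for a bare assertion of a contested proposition.

```python
# Toy numbers, purely illustrative.

# "My name is Mark Xu": people almost always state their real name, and almost
# nobody falsely claims this particular name, so the likelihood ratio is huge.
p_name_claim_if_true = 0.95
p_name_claim_if_false = 1e-6
bayes_factor_name = p_name_claim_if_true / p_name_claim_if_false   # ~1e6

# A bare assertion of a contested proposition: people on both sides assert
# their views, so the assertion itself barely moves the odds.
p_assert_if_true = 0.6
p_assert_if_false = 0.4
bayes_factor_assertion = p_assert_if_true / p_assert_if_false       # 1.5

print(bayes_factor_name, bayes_factor_assertion)
```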
Also, as I have said several times, it’s not about aggregate considerations of moral worth but about intensity of valenced experience. It’s about how intensely they feel pleasure and pain. I think a human’s life matters more than seven bees’. Now, once again, it seems insane to me to start with a prior on the order of one in a billion of bees feeling pain at least 1/7 as intensely as people. What licenses such great confidence?
My question, if we are going to continue this, is as follows:
Are you astronomically certain that insects aren’t conscious at all, or just not intensely conscious?
What licenses very high confidence in this? If it’s the alleged specificity of the hypothesis, what is the parameterization on which this takes up a tiny slice of probability space?
Also happy to have you on the podcast!
Are you astronomically certain that insects aren’t conscious at all, or just not intensely conscious?

I refuse to demand astronomical certainty here, because that amounts to Pascal’s Mugging.
I don’t even have astronomical certainty that electrons aren’t conscious. Yet I wouldn’t say “there’s a 10^-20 chance that electrons are conscious, so based on the huge number of electrons that still adds up to a lot of suffering and batteries are murder.”
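To spell out the arithmetic behind that reductio (the probability is picked purely for illustration; ~10^80 is the standard order-of-magnitude estimate for electrons in the observable universe):

```python
# Illustrative only: why a tiny probability times an astronomical count
# produces a Pascal's-Mugging-style conclusion.
p_electron_conscious = 1e-20                # an arbitrarily tiny credence
electrons_in_observable_universe = 1e80     # standard order-of-magnitude figure

expected_conscious_electrons = p_electron_conscious * electrons_in_observable_universe
print(f"{expected_conscious_electrons:.0e}")   # 1e+60
```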
Right, so you can discount extremely low probabilities. But presumably the odds of insects being conscious—a view believed by a large number of experts—aren’t low enough to fully discount.
“Astronomical certainty” is another way of saying “even if the odds are really really low”. If you don’t actually think the odds are really really low, the astronomical certainty is irrelevant.