There are lots of critiques spread across many forum comments, but no single report I could link to. But you can see the relevant methodology section of the RP report yourself here:
https://rethinkpriorities.org/research-area/the-welfare-range-table/
You can see they rely almost exclusively on behavioral proxies. I remember there was a section somewhere in the sequence arguing explicitly for this methodology, with reasoning along the lines of “we want a minimum-assumption analysis, and anything that looks at brain internals or neuron counts would introduce more assumptions,” which I have always considered very weak.
I currently think neuron count is a much better basis for welfare estimates than the RP welfare ranges (though it’s still not great, and to be clear, I also consider hedonic utilitarianism in general not a great foundation for ethics of any kind).
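As a rough sketch of what a neuron-count-based weighting could look like (the linear functional form and the rounded species figures below are illustrative assumptions, not something this comment or RP commits to):

```python
# Illustrative sketch: welfare weights proportional to neuron count,
# normalized to humans. The linear functional form and the rounded
# neuron counts are assumptions made purely for illustration.
APPROX_NEURON_COUNTS = {
    "human": 86e9,     # ~86 billion neurons, whole brain (rough figure)
    "chicken": 0.2e9,  # rough order-of-magnitude placeholder
    "salmon": 0.01e9,  # rough order-of-magnitude placeholder
}

def neuron_count_weight(species: str) -> float:
    """Welfare weight relative to a human, assuming linear scaling in neuron count."""
    return APPROX_NEURON_COUNTS[species] / APPROX_NEURON_COUNTS["human"]

for species in APPROX_NEURON_COUNTS:
    print(f"{species}: {neuron_count_weight(species):.4f}")
```

Whether linear scaling (as opposed to, say, logarithmic) is the right functional form is itself an assumption that would need separate justification.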
I currently think neuron count is a much better basis for welfare estimates than the RP welfare ranges (though it’s still not great […]
I agree that neuron count carries some information as a proxy for consciousness or welfare, but it seems like a really bad and noisy one that we shouldn’t place much weight on. For example, in humans the cerebellum is the brain region with the largest neuron count but it has nothing to do with consciousness.
It’s not clear to me that a species which showed strong behavioural evidence of consciousness and valenced experience should have their welfare strongly discounted using neuron count.
(To be clear, I get that your main problem with RP is the hedonic utilitarianism assumption which is a fair challenge. I’m mainly challenging the appeal to neuron count.)
EDIT: Adding some citations since the comment got a reaction asking for cites.
This paper describes a living patient born without a cerebellum. Being born without a cerebellum leads to impaired motor function, but it has no impact on the patient’s ability to sustain a conscious state.
Neuron counts in this paper put the cerebellum at ~70 billion neurons and the cortex (the region associated with consciousness) at ~15 billion neurons.
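To make the implication of those figures explicit, here is a back-of-the-envelope calculation using only the rounded counts quoted above, under the assumption that welfare weight scales linearly with neuron count:

```python
# Share of a strictly neuron-count-proportional weighting contributed by
# each region, using the rounded figures quoted above and ignoring the
# comparatively small remainder of the brain.
cerebellum = 70e9  # ~70 billion neurons (quoted above)
cortex = 15e9      # ~15 billion neurons (quoted above)

total = cerebellum + cortex
print(f"cerebellum share: {cerebellum / total:.0%}")  # ~82%
print(f"cortex share: {cortex / total:.0%}")          # ~18%
```

On those figures, most of a pure neuron-count weighting would come from the one region the patient above was missing.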
It’s not clear to me that a species which showed strong behavioural evidence of consciousness and valenced experience should have their welfare strongly discounted using neuron count.
You can’t have “strong behavioral evidence of consciousness”. At least in my theory of mind it is clear that you need to understand what is going on inside of a mind to get strong evidence.
Like, modern video game characters (without any use of AI) would also check a huge number of these “behavioral evidence” checkboxes, and really very obviously aren’t conscious or moral patients of non-negligible weight.
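To make that concrete, here is a minimal sketch of the kind of scripted, non-AI behavior meant here; the specific “checkbox” behaviors are invented stand-ins, not items from RP’s actual table:

```python
# Minimal sketch of a scripted (non-AI) game character exhibiting several
# behaviors of the sort a behavioral-checkbox methodology looks for.
# The behaviors are invented stand-ins for illustration only.
class ScriptedNPC:
    def __init__(self):
        self.health = 100

    def on_damage(self, amount, source_position):
        self.health -= amount
        self.flee_from(source_position)      # "avoids noxious stimuli"
        self.play_animation("clutch_wound")  # "attends to site of injury"
        self.call_for_allies()               # "communicates distress"

    def flee_from(self, position):
        print(f"moving away from {position}")

    def play_animation(self, name):
        print(f"playing animation: {name}")

    def call_for_allies(self):
        print("calling for help")


npc = ScriptedNPC()
npc.on_damage(30, source_position=(4, 2))
```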
You also have subdivision issues. Like, by this logic you end up thinking that a swarm of fish is less morally relevant than the individual fish that compose it.
Behavioral evidence is just very weak, and the specific checkbox approach that RP took also doesn’t seem to me like it makes much sense even if you want to go down the behavioral route (in particular, in my opinion you really want to gather behavioral evidence to evaluate how much stuff is going on in the brain of whatever you are looking at, like whether it has complicated social models and long-term goals and other things).
At least in my theory of mind it is clear that you need to understand what is going on inside of a mind to get strong evidence.
in particular, in my opinion you really want to gather behavioral evidence to evaluate how much stuff is going on in the brain of whatever you are looking at, like whether it has complicated social models and long-term goals and other things)
I agree strongly with both of the above points: we should be supplementing the behavioural picture by examining which functional brain regions are involved and whether those regions bear similarities to regions we know to be associated with consciousness in humans (e.g. the pallium in birds bears functional similarity to the human cortex).
Your original comment calls out that neuron counts are “not great” as a proxy, but I think a more suitable proxy would be something like functional similarity + behavioural evidence.
(Also edited the original comment with citations.)
we know to be associated with consciousness in humans
To be clear, my opinion is that we have no idea what “areas of the brain are associated with consciousness” and the whole area of research that claims otherwise is bunk.