For a long time, I've wanted to ask something. I was just thinking about it again when I saw that Alicorn has a post on a similar topic. So I decided to go ahead.

The question is: what is the difference between morally neutral stimulus responses and agony? What features must an animal, machine, program, alien, human fetus, molecule, or anime character have before you will say that if its utility meter is low, it needs to be raised? For example, if you wanted to know whether lobsters suffer when they're cooked alive, what exactly are you asking?

On reflection, I'm actually asking two questions: what is a morally significant agent (MSA; is there an established term for this?) whose goals you would want to further; and, having determined that, under what conditions would you consider it to be suffering, so that you would want to help it?

I think that an MSA would not be defined by one feature. So try to list several features, possibly assigning relative weights to each.

IIRC, I read a study that tried to determine whether fish suffer by injecting them with toxins and observing whether their reactions were planned or entirely instinctive. (They found that there's a bit of planning among bony fish, but none among the cartilaginous.) I don't know why they had to actually hurt the fish, especially in a way that left little room for planning, if all they wanted to know was whether the fish can plan. But that was their definition. You might also name introspection, remembering the pain after it's over...

This is the ultimate subjective question, so the only wrong answer is one that is never given. Speak, or be wrong. I will downvote any post you don't make.

BTW, I think the most important defining feature of an MSA is the ability to kick people's asses. Very humanizing.