Agents of No Moral Value: Constrained Cognition?

Thought experiments involving multiple agents usually postulate that the agents have no moral value, so that the explicitly specified payoff from the choice of actions can be considered in isolation, as both the sole reason and the sole evaluation criterion for the agents' decisions. But is it really possible to require that an opposing agent have no moral value without constraining what it's allowed to think about?

If agent B is not a person, how do we know it can't decide to become a person for the sole purpose of gaming the problem by manipulating agent A (since B doesn't care about personhood, becoming a person costs B nothing, while A does care)? If B's non-personhood is stipulated as part of the problem statement, then B's cognition is restricted: the most rational course of action is prohibited from being considered, for no reason accessible to B from within the thought experiment.

It's not enough to require that the other agent is inhuman in the sense of not being a person and not holding human values; our agent must also not care about the other agent. And once both agents don't care about each other's cognition, the requirement that they not be persons, or otherwise valuable, becomes extraneous.

Thus, instead of requiring that the other agent is not a person, the correct way of setting up the problem is to require that our agent is indifferent to whether the other agent is a person (and conversely).
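One rough way to sketch this indifference condition (the notation here is mine, an illustration rather than anything from the original thought experiments): let $U_A$ be our agent's preferences over an explicitly specified outcome $o$ together with any facts $c_B$ about the other agent's cognition, such as whether B is a person. Indifference then amounts to

$$U_A(o, c_B) = U_A(o, c_B') \quad \text{for all outcomes } o \text{ and all } c_B, c_B',$$

and symmetrically for $U_B$ with respect to facts about A's cognition, so that only the explicitly specified payoffs can move either agent's decision.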

(This is not a very substantive observation; I would have posted it with less polish in an open thread if not for the discussion section.)