Agents of No Moral Value: Constrained Cognition?

Thought experiments involving multiple agents usually postulate that the agents have no moral value, so that the explicitly specified payoff from the choice of actions can be considered in isolation, as both the sole reason for and evaluation criterion of the agents' decisions. But is it really possible to require that an opposing agent have no moral value, without constraining what it's allowed to think about?

If agent B is not a person, how do we know it can't decide to become a person for the sole purpose of gaming the problem and manipulating agent A (since B doesn't care about personhood, becoming a person costs B nothing, while A does care)? If B's inability to do this is stipulated as part of the problem statement, then B's cognition is restricted, and a potentially most rational course of action is excluded from consideration for no reason accessible to B within the thought experiment.

It's not enough to require that the other agent be inhuman, in the sense of not being a person and not holding human values; our agent must also not care about the other agent. And once both agents are indifferent to each other's cognition, the requirement that they not be persons, or not be valuable, becomes extraneous.

Thus, instead of requiring that the other agent not be a person, the correct way to set up the problem is to require that our agent be indifferent to whether the other agent is a person (and conversely).

(This is not a very substantive observation; I would've posted it with less polish in an open thread if not for the discussion section.)