I share your uncertainty about whether a lobster, let alone a carrot, feels anything like I do, and I distrust one-number ethics.
What puzzles me is the double standard. We cheerfully use words like blue, harm, or value even though I can’t know whether our private images line up; yet when the word is qualia, we demand lab-grade inter-subjective proof before letting it into the taxonomy.
Why the extra burden? Physics kept “heat” on the books long before kinetic theory, and the placeholder helped without ever hurting. Likewise, qualia is a rough pointer that stops us from editing felt experience out of the ontology just because we can’t yet measure it.
A future optimiser that tracks disk-thrashing but not suffering will tune for the former and erase the latter. Better an imperfect pointer to the phenomenon of felt valence than a brittle catalogue of “beings that can hurt.” Qualia names the capacity for hurt or joy: identity-independent, like heat, and present wherever the right physical pattern appears.
If you had to draft a first-pass rule today, which observable features would you check to decide whether an AI system, a lobster, or a human fetus belongs in the “moral-patient” set? And what language would you use for those features?
These are great questions!
I’m not deeply familiar with, nor a supporter of, Ayn Rand’s broader philosophical framework, but it does sound as though she employs transcendental argumentation. Consciousness and existence are classic examples: the very act of denying them, broadly, presupposes a witness to some event, which in general terms implies both existence and subjectivity.
I think your point on anthropic reasoning is apt, and there is absolutely an intuition there. The broader point may be that none of the “many-worlds” scenarios or counterfactual multiverses we consider when playing out decision strategies can violate the requirement that each scenario be consistent with the world state we occupy now.
To argue that there could be a world where, for example, you decide to “two-box” on Newcomb’s problem without being the type of person who would make that very argument is meaningless. The possibility of such a world is eliminated by the justification itself. Any frame of reasoning already includes a model of the reasoner; if a theory requires that the reasoner not exist, or reason differently, the theory can’t coherently describe that world.
Identifying cases of transcendentally null argumentation has probably been one of the strongest reasoning techniques I’ve used to dialectically synthesize toward a first-principles model aligned with contemporary understanding. So, if it’s not already part of the toolkit under a different name, I wanted to share it for consideration; if it has been deliberately left out of the toolbox, I’d be fascinated to learn why.
As for transcendentally null argumentation, I will add another blog post outlining some grittier examples, but here are a few quick ones:
“There are no objective truths; everything is just perspective.” To assert that claim as true is already to step outside relativism. If everything is just perspective, then that very claim is also “just perspective”, which means it carries no binding force on anyone else’s reasoning.
“All moral claims are expressions of emotion, not truth-apt statements.” If moral claims are never truth-apt, then this very claim (“no moral claims are truth-apt”) is, on its own account, also not truth-apt, so why should anyone take it seriously?