So, I agree that at some level of abstraction any ought can be rationalized with an is. But at some point agents need to define meta-strategies for dealing with uncertain situations: for example, all the decision theories and thought experiments needed to ground the rational frameworks by which we evaluate what an agent should do to maximize expected outcome, given the utility functions we think that agent ought to have with respect to the world.
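To make the kind of framework I mean concrete, here is a minimal sketch of the standard expected-utility schema (the names, and the `prob` and `utility` callables, are placeholders I am inventing for illustration; where those two inputs come from is exactly the question above):

```python
# Minimal expected-utility sketch. `prob` and `utility` are assumed inputs:
# a probability model over outcomes given an action, and a utility function
# over outcomes. The whole "ought" debate is about where these come from.

def expected_utility(action, outcomes, prob, utility):
    """Utility of each outcome, weighted by its probability given the action."""
    return sum(prob(outcome, action) * utility(outcome) for outcome in outcomes)

def best_action(actions, outcomes, prob, utility):
    """The action with the highest expected utility under the given model."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, prob, utility))
```

Nothing in the schema tells you what `utility` should be; that is the part we can only argue about.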
While there is no scientific justification or explanation for value beyond what we ascribe (and thus no ontological basis for morals), we generally agree that reality is self-aware through our conscious experience. And unless everything is fundamentally conscious, or consciousness does not exist at all, the various loci of subjectivity (however you want to define them) form the rational basis for any value calculus. So isn't the debate over what constitutes consciousness, the 'camps' that argue over its definition, and the conclusions we draw from it, exactly what we would use to derive the intended utility recipients of a decision framework such as CEV? And isn't that a moral philosophy and a meta-ethical practice in and of itself? Until that's settled, the Camp #2 framework gives you a taxonomy of the structures to which meta-ethics should be applied (without even importing mysticism), and Camp #1 uses a language that keeps morality ontologically (or at least linguistically) inert.
At some point we adopt and agree on axioms where science does not give us the data to reason from, and those axioms should be whatever we agree is likely to have the highest utility. But because they cannot be determined by experiment beforehand, we can only settle on them through counterfactual reasoning, and the counterfactual itself ('we ought to have done this because we will do this') is equally up for debate.
I share your uncertainty about whether a lobster, let alone a carrot, feels anything like I do, and I distrust one-number ethics.
What puzzles me is the double standard. We cheerfully use words like blue, harm, or value even though we can't know that our private images line up, yet when the word is qualia we demand lab-grade inter-subjective proof before letting it into the taxonomy.
Why the extra burden? Physics kept “heat” on the books long before kinetic theory; the placeholder helped and never hurt. Likewise, qualia is a rough pointer that stops us from editing felt experience out of the ontology just because we can’t yet measure it.
A future optimiser that tracks disk-thrashing but not suffering will tune for the former and erase the latter. Better an imperfect pointer to the phenomenon of felt valence than a brittle catalogue of “beings that can hurt.” Qualia names the capacity-for-hurt-or-joy; identity-independent, like heat, and present wherever the right physical pattern appears.
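To make that worry concrete, a toy sketch (the metric names and numbers are invented for illustration): an objective that measures disk-thrashing but not suffering will happily trade the unmeasured quantity away.

```python
# Toy mis-specified objective: the optimiser only "sees" what its cost
# function measures. Suffering is present in the world model but absent
# from the objective, so it gets no weight at all.

candidate_plans = [
    {"disk_thrashing": 0.9, "suffering": 0.0},
    {"disk_thrashing": 0.1, "suffering": 0.8},
]

def measured_cost(plan):
    # Only disk-thrashing is measured; suffering contributes nothing.
    return plan["disk_thrashing"]

chosen = min(candidate_plans, key=measured_cost)
print(chosen)  # picks the low-thrashing, high-suffering plan
```

The point of a pointer like qualia is to get the unmeasured term into the objective at all, however crudely we can estimate it today.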
If you had to draft a first-pass rule today, which observable features would you check to decide whether an AI system, a lobster, or a human fetus belongs in the “moral-patient” set? And what language would you use for those features?