I agree that more needs to be done in the way of consciousness research (and general research on what it is to be a moral patient in a broader sense). I also consider it a bad idea to potentially instill values that drift toward self-exemption. Thank you for your work.
What I don’t see is how any outcome of the consciousness question is as action-guiding as you claim. If I take it on moral reasons alone, I struggle to find a reason to prioritize acting on it over, for example, factory farming. There are much stronger claims there, and they don’t seem to meaningfully shape policy or the actions of the general populace. I still eat meat, and I know I shouldn’t.
This leaves the Frankenstein argument (“if we abuse it, it will abuse us”). Has this been rigorously argued? It seems to be taken at face value, and I am curious to what degree anthropomorphic bias plays into its derivation. An AI may consider us a completely different category of thing in its internal reasoning; perhaps much of learned abuse is contingent on growing up as the same category of thing as one’s abuser. I think you are probably right, but I can’t find the load-bearing priors driving the rhetoric’s conviction.
Lastly, I don’t see that it’s implied that an AI actually being conscious would causally increase its propensity for harm escalation. A p-zombie will seek retribution under the same terms a non-zombie would, as far as we know. If it walks like a duck and talks like a duck, it will behave like a duck. Whether or not there is an inner audience has yet to carry any consequential descriptive power in science; only the claim of it has.
In the end, this seems like a long-winded argument for either a new class of moral concern, which may stall or complicate the alignment frontier while much broader categories of certain suffering still exist, or an indirect approach to value loading and social interventionism, rather than for something rationally implicated.
What am I missing?