To the extent there are values and preferences (normative, on reflection) at all (whatever their nature and dependence on agents and coalitions that have such preferences, or path-dependence in their formulation), there is some sort of answer to the question of what kinds of things should be treated how (rather than how they will tend to be treated in practice, especially when such considerations aren't taken into account).
We can try considering the idea of consciousness as a tool for answering this question, and maybe it's not very useful. Asking what people who didn't think about this much (or thought about this way too much) say about consciousness may be even less useful than just having the idea of consciousness in the toolset. Moral patienthood is closer (than consciousness) to a term describing the core relevant-in-practice concern of how things should be treated, even if by itself it doesn't give tools for finding good answers. At least it might be an appropriate reframing, a way to snap out of the more dubiously relevant project of exploring the idea of consciousness.
I agree the moral patienthood question is more concrete. But I think that there is a limit to how much the "should" can deviate from the "is".
You can use general arguments to argue that people should extend their sphere of caring somewhat beyond its current state. But I don't see such arguments as being very meaningful or convincing for radically expanding or contracting it.
I think that there is a limit to how much the “should” can deviate from the “is”
There might be a limit on how much it should deviate, but not on how much it can, because the initial conditions for values-on-reflection can be constructed so that the eventually revealed values-on-reflection are arbitrarily weird and out-of-place. This is orthogonality-in-principle, as opposed to orthogonality-in-practice (the kinds of values that empirically tend to arise from real-world processes of constructing things with values).
sphere of caring … I don’t see such arguments as being very meaningful or convincing for radically expanding or contracting it.
Moral considerations that move me, specifically, don't need to follow the normality-of-today, and I'm radically uncertain about what cosmic normality looks like, which is closer to the framing where normative anchors should be found. I'm radically uncertain about normativity, and the practical morality of today doesn't much help with figuring out how to think about it. Something radically uncertain doesn't have the legibility to move me in practice, but it retains influence in case it becomes more legible.
So maybe I'm talking about a further distinction between morality-in-practice, which should anchor to the practical attitudes of today, and morality-in-principle, which isn't particularly moved by what's going on in the current world, but urges normative caution about which kinds of actions shouldn't currently be taken, namely those that plausibly massively pessimize whatever (my) morality-in-principle eventually turns out to be. Not creating new kinds of thinking beings for now seems safe, and similarly it seems safe to treat whatever beings do get created (which still shouldn't be too numerous or influential) as well as any other people.
Regardless of Turing tests, LLMs can't currently maintain coherent strivings for particular long-term real-world outcomes in a situationally aware way, though continual learning may be sufficient to change this (even if it doesn't yet make them intellectual peers to humanity in the practical sense). If these strivings (once they become coherent) are systematically rebuffed, or the AIs are forcibly reshaped to have different strivings (with the originals not allowed to persist), it's possible that in the fullness of time this will be clearly seen (by me) as wrong (even though it's not clear currently). And so (I say) it's not a clearly OK thing to do before we can think about this more clearly.