OK. It seems you are arguing that anything that presents as if it is conscious is thereby conscious. You are not arguing about whether the structure of LLMs can give rise to consciousness.
But then your argument is a social argument. I'm fine with a social definition of consciousness; after all, our actions depend to a large degree on social feedback, and morals (about which beings have value) have differed greatly across times and are thus socially constructed.
But then why are you making a structural argument about LLMs in the end?
P.S. In fact, I commented on the filler symbol paper when Xixidu posted about it, and I don't think it's a good comparison.
>It seems you are arguing that anything that presents as if it is conscious is thereby conscious.
No? That’s definitely not what I’m arguing.
>But what ultimately matters is what this thing IS, not how it came to be that way. If this thing internalized that conscious type of processing from scratch, without having it natively, then the resulting mind is no worse than the one that evolution engineered with more granularity. It doesn't matter if this human was assembled atom by atom by a molecular assembler; it's still a conscious human.
Look, here I'm talking about pathways to acquire that “structure” inside you, not its outward appearance.
>If this thing internalized that conscious type of processing from scratch, without having it natively, then the resulting mind is no worse than the one that evolution engineered with more granularity.
OK. I guess I had trouble parsing this. Esp. “without having it natively”.
My understanding of your point is now that you see consciousness from “hardware” (“natively”) and consciousness from “software” (learned in some way) as equal, which makes intuitive sense: the substrate shouldn't matter.
Corollary: A social system (a corporation?) should also be able to be conscious if the structure is right.