> Given sentience’s open status, each party’s definition should not be expected to be given in detail until the discussion starts to hinge on such details, and that is when I gave it.
It seems to me that the discussion started to hinge on that as soon as you claimed that paperclips are sentient, or when JGWeisman started talking about the ability to react to the environment at the very latest.
> I also dispute that there is little to no overlap—have you thought about my definition, and does it pass the test of correctly classifying the things you deem sentient and non-sentient in canonical cases?
Given that I don’t believe there’s an ultimate purpose of existence, your definition doesn’t properly parse at all. If I use my usual workaround for such cases and parse it as if you’d said “structured such that X is, or could converge on through self-modification, the ‘ultimate purpose of existence’, however the speaker defines ‘ultimate purpose of existence’”, it still doesn’t match how I use the word ‘sentience’, nor how I see it used by most speakers. (You may be thinking of the word ‘sapience’, though even that’s not an exact match.)