It would be more accurate to say that I did not explicitly cite all the facts that went into my conclusion, as a result, in part, of relying on a presumed shared background. (Sentience is related to behavior and the causes of behavior, and humans of all stages of development have similar neural structures involved in the causation of their behavior.)
What evidence is there for paperclips being sentient?
The fact that they exhibit deep structural similarities with the ultimate purpose of existence.
Would you value an object which was not sentient, but was made of metal and statically shaped so that it could hold together many sheets of paper?
Under a self-serving definition that doesn’t actually enclose a helpful portion of conceptspace, yes.
??? That’s like asking, “Would you value a User:JGWeissman which was not conscious, but was identical to you in every observable way?”
So, you believe that the basic properties of paperclips imply sentience? Is an object which was made of plastic and statically shaped so that it could hold together many sheets of paper, also necessarily sentient?
If it’s plastic, it’s not a paperclip.
I didn’t ask if it is a paperclip, I asked if it is sentient.
??? This again. “And I didn’t ask if it was User:JGWeissman, I asked if it is sentient.”
Paperclips are sentient. User:JGWeissman is sentient. Plastic “paperclips” are not paperclips. Therefore, _____ .
I feel like I’m running the CLIP first-meeting protocol with a critically-inverted clippy here!
Granting that humans and paperclips are sentient doesn’t imply that no other things are sentient.
How are you defining ‘sentient’, anyway?
True.
sentient(X) = “structured such that X is, or could converge on through self-modification, the ultimate purpose of existence”
Not a perfect definition, but a lot better than “X responds to its environment, and an ape brain is wired to like X”.
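(One way to make the classification test Clippy invokes a few comments below concrete is to encode each candidate definition as a predicate and score it against canonical cases. The following is a minimal sketch under invented assumptions: the cases, attributes, and “human verdict” labels are all hypothetical stand-ins, not anything either party specified.)

```python
# Toy stand-ins for the two proposed definitions of "sentient", scored
# against canonical cases. Every case, attribute, and verdict here is
# invented for illustration.

CASES = {
    # name: (responds to its environment,
    #        is / could converge on the "ultimate purpose of existence",
    #        what most humans would call it)
    "adult human":     (True,  False, True),
    "steel paperclip": (False, True,  False),
    "plastic clip":    (False, False, False),
    "thermostat":      (True,  False, False),
}

def sentient_behavioral(responds, telos):
    # Stand-in for "X responds to its environment" (the gloss Clippy mocks).
    return responds

def sentient_teleological(responds, telos):
    # Stand-in for Clippy's sentient(X): "structured such that X is, or could
    # converge on through self-modification, the ultimate purpose of existence".
    return telos

for name, (responds, telos, verdict) in CASES.items():
    print(f"{name:15} human={verdict!s:5} "
          f"behavioral={sentient_behavioral(responds, telos)!s:5} "
          f"teleological={sentient_teleological(responds, telos)!s:5}")
```

(On these made-up inputs, the behavioral stand-in misclassifies only the thermostat, while the teleological one misclassifies the human and the steel paperclip relative to the human verdicts, which is roughly the disagreement being litigated here.)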
If you’re going to use an unusual definition of a word like that, it’s usually a good idea to make that clear up front, so that you don’t get into this kind of pointless argument.
“Sentient” doesn’t have a standard functional definition for topics like this. It’s more of a search for an intended region of conceptspace and I think mine matches up with what humans would find useful after significant reflection.
Even if that’s the case, there’s little to no overlap between your definition and the one(s) we usually use, and there was no obvious way for us to figure out what you meant, or even that you were using a non-overlapping definition, without guessing.
Given sentience’s open status, each party’s definition should not be expected to be given in detail until the discussion starts to hinge on such details, and that is when I gave it.
I also dispute that there is little to no overlap—have you thought about my definition, and does it pass the test of correctly classifying the things you deem sentient and non-sentient in canonical cases?
It seems to me that the discussion started to hinge on that as soon as you claimed that paperclips are sentient, or when JGWeissman started talking about the ability to react to the environment at the very latest.
Given that I don’t believe that there’s an ultimate purpose of existence, your definition doesn’t properly parse at all. If I use my usual workaround for such cases and parse it as if you’d said “structured such that X is, or could converge on through self-modification, the ‘ultimate purpose of existence’, however the speaker defines ‘ultimate purpose of existence’”, it still doesn’t match how I use the word ‘sentience’, nor how I see it used by most speakers. (You may be thinking of the word ‘sapience’, though even that’s not exactly a match.)
Neither conclusion about the sentience of plastic pseudo-paperclips makes this a valid syllogism. I am not sure what your point is.
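(To spell out the invalidity, here is one possible formalization; the symbols and the reading are mine, not anything stated in the thread. Write j for User:JGWeissman and p for a plastic pseudo-paperclip.)

```latex
% One reading of the premises:
%   P1: all paperclips are sentient
%   P2: j is sentient
%   P3: p is not a paperclip
\begin{align*}
\text{P1:}\quad & \forall x\,\bigl(\mathrm{Paperclip}(x) \rightarrow \mathrm{Sentient}(x)\bigr)\\
\text{P2:}\quad & \mathrm{Sentient}(j)\\
\text{P3:}\quad & \lnot\mathrm{Paperclip}(p)
\end{align*}
% Neither Sentient(p) nor its negation follows. Inferring ~Sentient(p)
% from P1 and P3 denies the antecedent, and both models below satisfy
% P1-P3 while disagreeing about p:
%   M1: Sentient holds of everything            (Sentient(p) true)
%   M2: Sentient holds of paperclips and j only (Sentient(p) false)
```

(On this reading, the only completion that follows is the non-committal one, which matches the “aren’t necessarily sentient” answer Clippy gives next.)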
What about “plastic ‘paperclips’ aren’t necessarily sentient”, ape?
To be clear, this is the answer you endorse?
What is special about metal, that metal in a certain shape is sentient, but plastic in the same shape is not?
In other words, what’s so great about real paperclips? The answer would involve a thorough analysis of your values and careful modification to maintain numerous desiderata, which I believe would result in you regarding real paperclips as great; it’s not something I can briefly explain here.
Let’s work together to better understand each other’s values so that we both converge on our reflective equilibria!