So, I want to see what happens if I try to divorce all of my anthropocentric assumptions about self, desires, wants, etc. I want to measure a thing and then, by some set of criteria, declare that thing to be intelligent.
Sure, that makes perfect sense. I haven’t really given this a whole lot of thought; you are getting the fresh start. :)
The "self" in "self-referential" isn't meant to be me or you or any form of "I". Whatever source of identity you feel comfortable with can serve as the self being referenced. In the case of your intelligent pencil, it may very well be that the pencil is updating itself in order to achieve what you are calling a goal.
A “want” can describe nonhuman behavior, so I am not convinced the term is a problem. It does seem that I am beginning to place atypical restrictions on its definition, however, so perhaps “goal” would work better in the end.
The main points I am working with:
An entity can have a goal without being intelligent (perhaps I am confusing goal with purpose or behavior?)
A non-intelligent entity can become intelligent
Some entities have the ability to change, add, or remove goals
These changes, additions, and deletions are likely governed by other goals; there is a toy sketch of this after the list. (Perhaps I am confusing goals with wants or desires? Or merely causation itself?)
The "original" goal could be deleted without making an entity unintelligent. The pencil could pick a different spot on the ground, but this would not cause you to doubt its intelligence.
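Just to make that layering concrete for myself, here is a toy Python sketch. Every name in it (Entity, meta_goals, the pencil's "spots") is made up purely for illustration; it isn't meant as a model of intelligence, only of goals being rewritten by other goals:

```python
# Purely illustrative toy: an entity that holds goals, plus "meta-goals"
# (here just functions) that are allowed to add, remove, or replace those
# goals. Having a goal says nothing about intelligence by itself; the
# point is the second layer that governs changes to the goal set.

class Entity:
    def __init__(self, goals):
        self.goals = set(goals)   # e.g. {"land on spot A"}
        self.meta_goals = []      # rules that rewrite self.goals

    def add_meta_goal(self, rule):
        """rule: a function taking the current goal set and returning a new one."""
        self.meta_goals.append(rule)

    def update(self):
        # Every change, addition, or deletion of a goal happens only
        # through a meta-goal, matching the point in the list above.
        for rule in self.meta_goals:
            self.goals = rule(self.goals)


pencil = Entity({"land on spot A"})

# A meta-goal that swaps the "original" goal for a different spot.
# Deleting the original goal doesn't delete the goal-updating machinery.
pencil.add_meta_goal(lambda goals: (goals - {"land on spot A"}) | {"land on spot B"})
pencil.update()
print(pencil.goals)   # {'land on spot B'}
```

Whether any of that counts as intelligence is exactly the open question; the sketch only separates "has a goal" from "has goals about its goals".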
Please note that I am not trying to disagree (or agree) with you. I am just talking because I think the subject is interesting and I haven’t really given it much thought. I am certainly no authority on the subject. If I am obviously wrong somewhere, please let me know.