Thoughts about intelligence.
My hope is that some altruistic person will read this comment, see where I am wrong and point me to the literature I need to read. Thanks in advance.
I’ve been thinking about the problem of general intelligence. Before going too deeply into it, I wanted to see if I had a handle on what intelligence is, period.
It seems to me that the people sitting in the library with me now are intelligent and that my pencil is not. So what is the minimum my pencil would have to do before I suddenly thought that it was intelligent?
Moving alone doesn’t count. If I drop the pencil it will fall towards the table. You could say that I caused the pencil to move, but I am not sure this isn’t begging the question.
Now suppose that the first time I drop the pencil, it falls to the floor. I go to drop it a second time, but this time over the table. However, the pencil flies around the table and hits the same spot on the floor.
Now it’s got my attention. But maybe it’s something about the table. So I drop the pencil but put my hand in the way. Still the pencil goes around my hand.
I put my foot over the spot on the floor and drop the pencil. It flies around my foot and then into the crevice between my foot and the floor and gets stuck. As soon as I lift my foot the pencil goes to the same spot.
I believe I should now conclude that my pencil is intelligent. This has something to do with the following facts.
1) The pencil kept going to the same spot as if it had a “goal”
2) The pencil was able to respond to “obstacles” in ways not predicted by my original simple theory of pencil behavior.
I believe that I would say the pencil is more intelligent if it could pass through more “complicated” obstacles.
Here are some of my basic problems:
1) What is a “goal” beyond what my intuition says?
2) Similarly, what is an “obstacle”?
3) And what is “complicated”?
I have some sense that “obstacle” is related to reducing the probability that the goal will be reached.
I have some sense that “complicated” has to do with the degree to which that probability is reduced.
Thoughts? Suggestions for readings?
You are talking about control systems.
A control system has two inputs (called its “perception” and “reference”) and one output. The perception is a signal coming from the environment, and the output is a signal that has an effect on the environment. For artificial control systems, the reference is typically set by a human operator; for living systems it is typically set within the organism.
What makes it a control system is that firstly, the output has an effect, via the environment, on the perception, and secondly, the feedback loop thus established is such as to cause the perception to remain close to the reference, in spite of all other influences from the environment on that perception.
The answers to your questions are:
A “goal” is the reference input of a control system.
An “obstacle” is something which, in the absence of the output of the control system, would cause its perception to deviate from its reference.
“Complicated” means “I don’t (yet) understand this.”
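To make that concrete, here is a minimal toy sketch of a single control loop (the names and numbers are my own illustration, not any standard library or published model). The reference is the “goal”; the disturbance is an “obstacle”, something that would push the perception away from the reference if the output did nothing.

```python
def run_control_loop(reference, disturbance, gain=2.0, steps=50, dt=0.1):
    """Proportional negative-feedback loop: the output pushes the perception toward the reference."""
    perception = 0.0  # the signal coming in from the environment
    for _ in range(steps):
        error = reference - perception   # how far the perception is from the goal
        output = gain * error            # the output acts back on the environment
        # The environment sums the system's output with all other influences on
        # the perception, including the disturbance (the "obstacle").
        perception += (output + disturbance) * dt
    return perception

print(run_control_loop(reference=5.0, disturbance=0.0))  # settles near 5.0
print(run_control_loop(reference=5.0, disturbance=3.0))  # ~6.5: nudged off the goal,
# but held far closer to it than an uncontrolled variable would be
```

A purely proportional loop like this leaves a small steady-state error against a constant disturbance; real controllers usually add an integral term to drive that error to zero, but the point here is just the shape of the loop.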
Suggestions for readings.
And a thought: “Romeo wants Juliet as the filings want the magnet; and if no obstacles intervene he moves towards her by as straight a line as they. But Romeo and Juliet, if a wall be built between them, do not remain idiotically pressing their faces against its opposite sides like the magnet and the filings with the card. Romeo soon finds a circuitous way, by scaling the wall or otherwise, of touching Juliet’s lips directly. With the filings the path is fixed; whether it reaches the end depends on accidents. With the lover it is the end which is fixed, the path may be modified indefinitely.”
-- William James, “The Principles of Psychology”
Richard, do you believe that the quest for FAI could be framed as a special case of the quest for the Ideal Ultimate Control System (IUCS)? That is, intelligence in and of itself is not what we are after, but control. Perhaps FAI is the only route to an IUCS, but perhaps not?
Note: Originally I wrote Friendly Ultimate Control System but the acronym was unfortunate.
The neurology of human brains and the architecture of modern control systems are remarkably similar, with layers of feedback and adaptive modelling of the problem space, in addition to the usual dogged iron-filing approach to goal seeking. I have worked on control systems which, as they add (even minor) complexity at higher layers of abstraction, take on eerie behaviors that seem intelligent within their own small fields of expertise. I don’t personally think we’ll find anything different or ineffable or more, when we finally understand intelligence, than just layers of control systems.
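For what it’s worth, here is a toy sketch of what “layers of control systems” can look like (my own illustration, not anyone’s published architecture): the higher loop never touches the world directly; its output is the reference handed down to the lower loop.

```python
def inner_step(velocity, velocity_ref, gain=2.0, dt=0.1, drag=-0.5):
    """Low-level loop: pushes its perception (velocity) toward the reference it is given."""
    output = gain * (velocity_ref - velocity)
    return velocity + (output + drag) * dt  # drag is a constant disturbance

def outer_output(position, position_ref, gain=0.5):
    """High-level loop: controls position by choosing a velocity reference."""
    return gain * (position_ref - position)

position, velocity, target = 0.0, 0.0, 10.0
for _ in range(500):
    velocity_ref = outer_output(position, target)  # outer loop sets the inner loop's goal
    velocity = inner_step(velocity, velocity_ref)  # inner loop pursues it against the drag
    position += velocity * 0.1

print(round(position, 2))  # ~9.5: near the target despite the drag; purely
# proportional loops leave a small residual offset
```

Even two layers of this are enough to get a bit of that “it keeps finding its way there” flavor; as I understand it, Powers’s hierarchies stack many more levels, with slower and more abstract variables controlled at the top.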
Consciousness, I hope, is something more and different in kind, and maybe that’s what you were really after in the original post, but it’s a subjective beast. OTOH, if it is “mere” complex behavior we’re after, something measurable and Turing-testable, then intelligence is about to be within our programming grasp any time now.
I LOVE the Romeo reference but a modern piece of software would find its way around the obstacle so quickly as to make my dog look dumb, and maybe Romeo, too.
I had conceived of something like the Turing test, but for intelligence, period, not just general intelligence.
I wonder if general intelligence is about the range of domains in which a control system can perform.
I also wonder whether “minds” is too limiting a criterion for the goals of FAI.
Perhaps the goal could be stated as an IUCS. However, we don’t know how to build an IUCS. So perhaps we can build a control system whose reference point is the IUCS. But we don’t know how to build that either, so we build a control system whose reference point is a control system whose reference point . . . until we get to something that we can build. Then we press start.
Maybe this is a more general formulation?
I don’t want to tout control systems as The Insight that will create AGI in twenty years, but if I was working on AGI, hierarchical control systems organised as described by Bill Powers (see earlier references) are where I’d start from, not Bayesian reasoning[1], compression[2], or trying to speed up a theoretically optimal but totally impractical algorithm[3]. And given the record of toy demos followed by the never-fulfilled words “now we just have to scale it up”, if I was working on AGI I wouldn’t bother mentioning it until I had a demo of a level that would scare Eliezer.
Friendliness is a separate concern, orthogonal to the question of the best technological-mathematical basis for building artificial minds.
1. LessWrong, passim.
2. Marcus Hutter’s Compression Prize.
3. AIXItl and the Gödel machine.
If I were standing there catching the pencil and directing it to the spot on the floor, you wouldn’t consider the pencil intelligent. The observed behavior does not point to the pencil in particular being intelligent.
Just my two cents.
I don’t know anything about the concept of intelligence being defined as being able to pursue goals through complicated obstacles. If I had to guess at the missing piece it would probably be some form of self-referential goal making. Namely, this takes the form of the word, “want.” I want to go to this spot on the floor. I can ignore a goal but it is significantly harder to ignore a want.
At some point, my wants begin to dictate and create other wants. If I had to start pursuing a definition of intelligence, I would probably start here. But I don’t know anything about the field, so this could have already been tried and failed.
Well I would consider the Pencil-MrHen system as intelligent. I think further investigation would be required to determine that the pencil is not intelligent when it is not connected to MrHen, but that MrHen is intelligent when not connected to the pencil. It then makes sense to say that the intelligence originates from MrHen.
The problem with the self-referential approach, from my perspective, is that it presumes a self.
It seems to me that ideas like “I” and “want” graft humanness onto other objects.
So, I want to see what happens if I try to divorce all of my anthropocentric assumptions about self, desires, wants, etc. I want to measure a thing and then, by a set of criteria, declare that thing to be intelligent.
Sure, that makes perfect sense. I haven’t really given this a whole lot of thought; you are getting the fresh start. :)
The self in self-referential isn’t implied to be me or you or any form of “I”. Whatever source of identity you feel comfortable with can use the term self-referential. In the case of your intelligent pencil, it very well may be the case that the pencil is self-updating in order to achieve what you are calling a goal.
A “want” can describe nonhuman behavior, so I am not convinced the term is a problem. It does seem that I am beginning to place atypical restrictions on its definition, however, so perhaps “goal” would work better in the end.
The main points I am working with:
An entity can have a goal without being intelligent (perhaps I am confusing goal with purpose or behavior?)
A non-intelligent entity can become intelligent
Some entities have the ability to change, add, or remove goals
These changes, additions, deletions are likely governed by other goals. (Perhaps I am confusing goals with wants or desires? Or merely causation itself?)
The “original” goal could be deleted without making an entity unintelligent. The pencil could pick a different spot on the ground but this would not cause you to doubt its intelligence.
Please note that I am not trying to disagree (or agree) with you. I am just talking because I think the subject is interesting and I haven’t really given it much thought. I am certainly no authority on the subject. If I am obviously wrong somewhere, please let me know.
Some food for philosophical thought: an oil drop that “solves” a maze.
TL;DR: it follows a chemical gradient, which changes its surface tension.
I’d read something on the intentional stance.
If you don’t mind a slightly mathy article, I thought Legg & Hutter’s Universal Intelligence was nice. It talks about machine intelligence, but I believe it applies to all forms of intelligence. It also addresses some of the points you made here.
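For reference, the measure they propose is (roughly, as I recall it) the agent’s expected performance summed over all computable environments, weighted by simplicity:

```latex
% Legg & Hutter's universal intelligence measure, as I recall it:
% \Upsilon(\pi) is the intelligence of agent \pi, E the set of computable
% environments, K(\mu) the Kolmogorov complexity of environment \mu, and
% V^{\pi}_{\mu} the expected total reward \pi earns in \mu.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Their informal gloss is that intelligence measures an agent’s ability to achieve goals in a wide range of environments, which lines up well with the pencil example: the wider the range of obstacles it can get past, the higher it scores.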
So if something is capable, contrary to expectations, of achieving a constant state despite varying conditions, it’s probably intelligent?
I guess that in space, everything is intelligent.