I’m looking forward to seeing more from you on this. NLP has a couple of bits and bobs of control theory in it, most notably the foundational ideas that the way to get a person to change (or any other result) is to be more flexible in your behavior than any other part of the system, and that you need to be able to measure yourself relative to a well-defined outcome. Even Robert Fritz’s “creative process” books emphasize a concept of structural tension, which is the distance between a goal state and reality. My thoughts-into-action video is based on initiating internal measurement of the distance between a clean desk and a messy one, then standing back and letting the control system do its job.
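(If you want that in control-loop terms, here’s a minimal sketch; the “clutter” measurement, the tidy() step, and all the numbers are just invented for illustration.)

```python
# Minimal negative-feedback loop: measure the gap between goal state and
# current state ("structural tension"), act to shrink it, repeat until it's gone.

GOAL_CLUTTER = 0      # clean desk: zero items out of place (made-up units)
clutter = 37          # messy desk: hypothetical starting measurement

def tidy(current):
    """One corrective action: put away a few items (stand-in for real behavior)."""
    return max(current - 5, 0)

while clutter > GOAL_CLUTTER:   # structural tension still present
    clutter = tidy(clutter)     # act to close the gap
    print(f"items out of place: {clutter}")
# The loop ends when perception matches the goal; the control system has done its job.
```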
Btw, while it isn’t necessary for a control system to predict, remember, or model anything, in humans predictive modeling is an important part of the control system nonetheless. (See, e.g., the experiments showing that humans can detect probability patterns without even having conscious awareness of them.)
Actually, risk homeostasis is another good example of a human control system that requires a predictive model in order to establish a set-point… heck, I imagine you can’t even catch a ball unless you can predict where it’s going to be.
Interesting anecdote: I recently read a Wired article about perception that mentions a professional pickpocket (entertainer/magician) who found that the way to have your hands be quicker than someone’s eyes is to move your hands in a curve—because if you move in a straight line, the person’s eyes go to where your hands are going to be, rather than tracking where they are.
You could view all of these things as simply setting goals for a control system, but I find Hawkins’ HTM model of the cortex more compelling from an evolutionary point of view. A design with predictive memory control systems “all the way down” is easier to evolve than one that needs a bunch of collaborating components to produce the same behaviors: an HTM-based cortex can just get bigger and add more layers. And at the early end of the evolutionary chain, incrementally adding memory/prediction to existing control systems is an equally incremental win, i.e., “easy to evolve”.
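To make “incrementally adding memory/prediction to existing control systems” concrete, here’s a rough sketch of the shape I have in mind; the classes and numbers are mine and purely illustrative, not Hawkins’ actual HTM algorithm.

```python
class Controller:
    """Plain feedback controller: nudge things toward a set-point."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def act(self, measurement):
        return 0.5 * (self.setpoint - measurement)   # proportional correction


class PredictiveController(Controller):
    """The same controller with a memory/prediction stage bolted on: it
    remembers recent inputs, extrapolates the next one, and corrects
    against that expected input instead of the raw current one."""
    def __init__(self, setpoint):
        super().__init__(setpoint)
        self.history = []

    def act(self, measurement):
        self.history.append(measurement)
        if len(self.history) >= 2:
            # naive memory-based prediction: assume the recent trend continues
            measurement = measurement + (self.history[-1] - self.history[-2])
        return super().act(measurement)
```

The point isn’t the arithmetic; it’s that the prediction layer is a strictly additive change, so a lineage that already has the plain controller has a small, step-by-step path to the predictive one.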
I imagine you can’t even catch a ball unless you can predict where it’s going to be.
Mmm… does “you” mean a person or does “you” mean anything? Catching a ball can easily be done without predicting its final location and was discussed in a different thread.
That depends on what you mean by “predict”. I don’t mean a conscious prediction; I just mean a model that tells you how to get there. Even if that model is an algorithm, it’s still a prediction.
Consider the ball player who runs to catch the ball, and then realizes he’s not going to make it and stops trying. How is that not a prediction?
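Back-of-the-envelope version of that “not going to make it” judgment, assuming the fielder can estimate the ball’s current velocity (the physics is textbook projectile motion; all the numbers are invented):

```python
import math

def can_make_it(ball_pos, ball_vel, fielder_pos, fielder_speed, g=9.8):
    """Predict where and when the ball lands, then check whether the fielder
    can cover that ground in the time available."""
    x, y = ball_pos        # horizontal position and height of the ball (m)
    vx, vy = ball_vel      # horizontal and vertical velocity (m/s)
    # time until the ball comes back down to ground level
    t_land = (vy + math.sqrt(vy**2 + 2 * g * y)) / g
    landing_x = x + vx * t_land            # the prediction
    return abs(landing_x - fielder_pos) <= fielder_speed * t_land

# hypothetical numbers: ball leaves the bat at the origin, fielder 40 m away, runs 7 m/s
print(can_make_it(ball_pos=(0.0, 1.0), ball_vel=(25.0, 15.0), fielder_pos=40.0, fielder_speed=7.0))
```

If that comes back False, the sensible move is the one the ball player makes: stop running.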
I just mean a model that tells you how to get there.
Oh, okay. I misunderstood what you meant.
Consider the ball player who runs to catch the ball, and then realizes he’s not going to make it and stops trying. How is that not a prediction?
That has little to do with what I was talking about. Something that “predicts” by thinking “If I am not holding the ball, move closer” has no concept of being able to “make it” to the landing spot. It couldn’t care less where the ball ends up. All it needs to know is whether it is currently holding the ball and how to get closer. The “how to get closer” is the predictor.
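In code form, the whole “controller” is something like this; nothing in it knows or cares where the ball will land (positions and step size are made up):

```python
def pursue(ball_x, fielder_x, holding, step=1.0):
    """One tick of the loop: if not holding the ball, move toward where it is
    right now. No landing-point prediction anywhere."""
    if holding or ball_x == fielder_x:
        return fielder_x                       # nothing left to correct
    return fielder_x + step if ball_x > fielder_x else fielder_x - step

# each tick sees only the ball's current position, never a predicted future one
fielder = 0.0
for ball in [10.0, 9.0, 8.5, 8.0, 7.5]:        # hypothetical observed positions
    fielder = pursue(ball, fielder, holding=False)
```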
As I said, I understand you can make a control system that works that way. I’m just saying that humans don’t appear to work that way, and possibly cortically-driven behaviors in general (across different species) don’t work that way either.
Edit to add: see also the Memory-prediction Framework page on Wikipedia, for more info on feed-forward predictive modeling in the neocortex, e.g.:
The central concept of the memory-prediction framework is that bottom-up inputs are matched in a hierarchy of recognition, and evoke a series of top-down expectations encoded as potentiations. These expectations interact with the bottom-up signals to both analyse those inputs and generate predictions of subsequent expected inputs.
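As a very loose illustration of that quote (a toy, not Hawkins’ actual algorithm): a single “layer” that remembers which input tends to follow which, sends that expectation back down, and flags a surprise when the bottom-up input doesn’t match it.

```python
from collections import defaultdict, Counter

class SequenceMemory:
    """Toy memory-prediction loop: learn transitions between inputs, emit a
    top-down expectation, and compare it with the next bottom-up input."""
    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.previous = None

    def observe(self, token):
        expected, surprise = None, False
        if self.previous is not None:
            counts = self.transitions[self.previous]
            if counts:
                expected = counts.most_common(1)[0][0]   # top-down expectation
                surprise = expected != token             # mismatch with bottom-up input
            counts[token] += 1                           # learn the transition
        self.previous = token
        return expected, surprise

mem = SequenceMemory()
for t in "abcabcabx":          # a familiar pattern, then a violation at the end
    print(t, mem.observe(t))
```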
I’m just saying that humans don’t appear to work that way, and possibly cortically-driven behaviors in general (across different species) don’t work that way either.
Yeah, this makes sense and that is why I asked the question about who “you” was.
Mmm… does “you” mean a person or does “you” mean anything?