This post is an example of how words can go wrong. Richard hasn’t clearly specified what this ‘model’ or ‘implicit model’ stuff is, yet throughout the post he repeats that it’s not in control systems. What is the content of this assertion? If I accept it, or if I reject it, how is this belief going to pay its rent? What do I anticipate differently?
Can anything be a ‘model’? How do I know that there is a model somewhere?
The word itself is so loaded that, without additionally specifying what you mean, it can only weakly suggest a property, not strongly assert one.
Any property you see in a system is actually in your interpretation of the system, in its semantics (you see a map, not the territory; this is not a pipe). Interpretation, and the procedure of establishing it given a system, are sometimes called a ‘model’ of the system; this is a general theme in what is usually meant by a model. Interpretation doesn’t need to happen in anyone’s head: it may exist in another system, for example in a computer program, or it can be purely mathematical, arising formally from the procedure that specifies how to build it.
In this sense, to call something a model is to interpret it as an interpretation of something else. Even a rock may be said to be a model of the universe, under the right interpretation, albeit a very abstract model, not useful at all. Of course, you can narrow down this general theme to assert that rocks can’t model the universe, in particular because they can’t simulate certain properties, or because your interpretation procedure breaks down when you present it with a rock. But you actually have to state the meaning of your terms in the cases like this, hopefully with a definition-independent goal to accomplish by finally getting the message through.
This is exactly why I tried to restate the situation in terms of the more precise concept of “mutual information” in Richard’s last topic, although I guess I was a bit vague at points as to how it works.
So in the context of Bayesian inference, and rationality in general, we should start with:
“A controller has a model (explicit or implicit) of its environment iff there is mutual information between the controller and the environment.”
This statement is equivalent to:
“A controller has a model (explicit or implicit) of its environment iff, given the controller, you require a shorter message to describe its environment (than if you could not reference the controller).”
From that starting point, the question is easier to answer. Take the case of the thermostat. If the temperature sensor is considered part of this controller, then yes, it has a model of its environment. Why? Because if you are given the sensor reading, you can more concisely describe the environment: “That reading, plus a time/amplitude shift.”
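To make that concrete, here is a minimal sketch (a toy setup of my own, not anything from Richard’s post) that estimates the mutual information between a simulated sensor reading and the outside temperature, with an unrelated “rock” signal for comparison:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Environment: outside temperature drifting over time.
env = 20 + 5 * np.sin(np.linspace(0, 20, n)) + rng.normal(0, 1, n)

# Sensor: the environment plus an amplitude shift and measurement noise
# ("that reading, plus a time/amplitude shift").
sensor = env - 2.0 + rng.normal(0, 0.5, n)

def mutual_information(x, y, bins=30):
    """Plug-in MI estimate from a 2-D histogram, in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum())

print(mutual_information(sensor, env))              # clearly positive
print(mutual_information(rng.normal(size=n), env))  # near zero: the "rock"
```

On this reading the sensor carries a (crude, implicit) model of the temperature and the rock does not, which is exactly the content the bare word “model” fails to carry on its own.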
Richard puts a lot of emphasis on how cool it is that the thermostat doesn’t need to know if the sun is shining. This point can be rephrased as:
“A controller does not need to have mutual information with all of its environment to work.” Or,
“Learning a controller, and the fact that it works, does not suffice to tell you everything about its environment.”
I think that statement sums up what Richard is trying to say here.
And of course you can take this method further and discuss the mutual information between a) the controller, b) the output, c) the environment. That is, do a) and b) together suffice to tell you c)?
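Sketching that last question in the same toy style (all three variables invented for illustration): estimate H(environment | controller, output) from empirical counts; if it is near zero, then a) and b) together do suffice to tell you c).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

env = rng.integers(0, 4, n)                     # c) the environment
controller = (env + rng.integers(0, 2, n)) % 4  # a) a noisy view of it
output = (env >= 2).astype(int)                 # b) the controller's output

def entropy(*cols):
    """Empirical joint entropy in bits."""
    _, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# H(env | controller, output) = H(env, controller, output) - H(controller, output)
residual = entropy(env, controller, output) - entropy(controller, output)
print(residual)  # > 0 here: a) and b) together still don't pin down c)
```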
EDIT: Some goofs
My belief will pay rent as follows: I no longer expect by default to find computers inside any mechanism that exhibits complex behavior. For clarity let me rephrase the discussion, substituting some other engineering concept in place of “model”.
RichardKennaway: Hey guys, I found this nifty way of building robots without using random access memory!
Vladimir_Nesov: WTF is “random access memory”? Even a rock could be said to possess it if you squint hard enough. Your words are meaningless. Here, study this bucket of Eliezer’s writings.
The substitution is not equivalent; people are more likely to agree whether something contains “random access memory” than whether it contains “a model”.
I think philosophers could easily blur the definition of “random access memory”; they just haven’t gotten around to it yet. A competent engineer can peek inside a device and tell you whether it’s running a model of its surroundings, so the word “model” does carry some meaning regardless of what philosophers say. If you want a formal definition, we could start with something like this: does the device contain independent correlata for independent external concepts of interest?
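For what it’s worth, here is one way that closing question could be operationalized; this is my own reading of “independent correlata”, not a definition anyone in the thread has committed to. Each external concept of interest should have its own internal signal tracking it, with no single signal doing double duty:

```python
import numpy as np

def has_independent_correlata(internal, external, threshold=0.5):
    """internal, external: arrays of shape (n_samples, n_signals).
    True iff every external concept is tracked by some internal signal
    and no internal signal is the best correlate of two concepts."""
    k = internal.shape[1]
    corr = np.corrcoef(internal.T, external.T)[:k, k:]  # cross-correlations
    best = np.abs(corr).argmax(axis=0)                  # best signal per concept
    strong = np.abs(corr).max(axis=0) > threshold
    return bool(strong.all() and len(set(best)) == len(best))

# A thermostat: one internal signal driven by two external concepts.
rng = np.random.default_rng(2)
sun = rng.normal(size=1000)
door = rng.normal(size=1000)
temp_signal = sun + door + rng.normal(0, 0.1, 1000)

print(has_independent_correlata(temp_signal[:, None],
                                np.stack([sun, door], axis=1)))  # False
```

On this criterion the thermostat fails: “sun shining” and “door open” both collapse into the same temperature signal.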
He wrote at the top,

“There are signals within the control system that are designed to relate to each other in the same way as do corresponding properties of the world outside. That is what a model is.”
Is this definition inadequate? To me it seems to capture (up to English language precision) what it means to have a control system with a model in it.
This is a very broad definition, with the flexibility hiding in the word ‘corresponding’ and in the choice of properties to model. In a thermostat, for example, the state of the thermometer, together with the fact that its readings correspond to the temperature of the world outside, seems to satisfy this definition (one signal, no internal structure). This fact is explicitly denied in the article, but without a clear explanation as to why. A stricter definition would of course be able to win this argument.
I was going to say that any stateful controller has a model — the state constitutes the model — but reading this comment made me realize that that would be arguing about whether the tree makes a sound.
What I think Richard is denying is that a thermostat models certain things. It’s not that it doesn’t have a model; rather, it’s a model in the same sense that a line is a conic section (degenerate). It does not predict the future, it does not remember the past, and there is nothing in it that resembles Bayesian probabilities. It knows what the temperature is, but that knowledge is wired directly into the next action. Thought tends to involve a few intermediate stages.
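In code, the degenerate case Richard seems to be pointing at (my illustration, with invented names) is a stateless map from the current reading straight to the next action, as opposed to a controller with even one intermediate stage between sensing and acting:

```python
def thermostat(reading: float, setpoint: float = 20.0) -> str:
    # No memory, no prediction, no probabilities: the reading is
    # wired directly into the next action.
    return "heat_on" if reading < setpoint else "heat_off"

class EstimatingThermostat:
    """Even one intermediate stage changes the picture: this controller
    acts on a remembered estimate, not on the raw reading."""
    def __init__(self, setpoint: float = 20.0):
        self.setpoint = setpoint
        self.estimate = setpoint  # state that persists between steps

    def step(self, reading: float) -> str:
        self.estimate = 0.9 * self.estimate + 0.1 * reading  # remember the past
        return "heat_on" if self.estimate < self.setpoint else "heat_off"
```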
Scholastics strikes again!
The belief will pay its rent as follows: previously, whenever I saw a mechanism that exhibited complex and apparently goal-directed behavior, I expected to find a computer inside. Now I don’t.
Also, I could dismantle “rational reasoning” with the same ease as you dismantled “model”. How do you tell if a system contains reasoning? Ooh, it’s all so subjective.