Richard’s post is similar to something I was thinking about a few months ago. I tried to attack the problem of AI by looking at very simple systems that can be said to accomplish “goals” without all the fancy stuff people typically think they have to put into AI, and asking how that works.
For example, a mass hanging from a spring returns to its equilibrium position without doing any of the things listed in 2). But here, Richard is asking an easier question in 4), since he’s asking about systems that are specifically designed to track some reference, rather than systems that happen to do so as a consequence of their other properties.
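To make the spring example concrete, here’s a toy sketch (my own made-up constants, nothing from Richard’s post): a damped mass on a spring drifts back to equilibrium with no sensing, modeling, or planning anywhere in the loop, just F = -kx - cv.

```python
# Toy damped mass-on-a-spring: reaches its "goal" (equilibrium at x = 0)
# purely through its dynamics. All parameters are illustrative.

def simulate_spring(x=1.0, v=0.0, k=4.0, c=0.8, m=1.0, dt=0.01, steps=2000):
    for _ in range(steps):
        a = (-k * x - c * v) / m  # Hooke's law plus damping
        v += a * dt               # semi-implicit Euler step
        x += v * dt
    return x

print(simulate_spring())  # ~0.0: displacement has decayed to equilibrium
```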
In that case, the answer (about how an arational system accomplishes the goals of rationality) is pretty simple: the system has been physically set up in a way that exploits the laws of nature to create mutual information between the system and its environment. If you view Bayescraft as a way to increase the mutual information between yourself (hopefully meaning the brain part!) and your environment, then the system is in fact doing that, so it is not arational. Its design implements Bayesian inference.
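The mutual-information claim can be checked numerically. A minimal sketch, with assumed toy numbers: a sensor that physically responds to its environment ends up sharing bits with it, which we can measure directly as I(env; sensor).

```python
# Toy model: sensor reading = true ambient temperature + thermal noise.
# For jointly Gaussian variables, I(X; Y) = 0.5 * log2(1 / (1 - rho^2)).
import numpy as np

rng = np.random.default_rng(0)
env = rng.normal(20.0, 3.0, 100_000)           # ambient temperature, deg C
sensor = env + rng.normal(0.0, 1.0, env.size)  # physical coupling + noise

rho = np.corrcoef(env, sensor)[0, 1]
print(f"I(env; sensor) ~ {0.5 * np.log2(1 / (1 - rho**2)):.2f} bits")
# ~1.66 bits: the coupling alone created mutual information.
```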
In the case of the thermostat, the temperature sensor, via heat transfer, becomes entangled with its environment, a natural process that happens to have an isomorphism to Bayes’ theorem. Then, something else senses the reading, causing another set of effects that determines what temperature air to blow out.
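In the simplest Gaussian case that isomorphism can be written out directly. A minimal sketch with made-up parameters: Newton’s law of cooling nudges the sensor a fixed fraction toward ambient each step, which has the same algebraic form as a Bayesian (Kalman-style) posterior-mean update with that fraction as the gain.

```python
# physics:   dT_sensor ∝ (T_ambient - T_sensor)     (Newton's law of cooling)
# inference: mean <- mean + gain * (evidence - mean) (posterior-mean update)
# The two are the same update rule; k plays the role of the gain.

T_ambient = 22.0   # true environment temperature (the hidden variable)
T_sensor = 10.0    # sensor's current reading, i.e. its "prior mean"
k = 0.3            # heat-transfer coefficient, acting as the update gain

for _ in range(10):
    T_sensor += k * (T_ambient - T_sensor)

print(round(T_sensor, 2))  # ~21.66: converging on 22.0 as entanglement builds
```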
The next question is why this mutual information keeps the temperature within a specific range rather than letting it spiral out of control. The answer to that part, as others have mentioned, is that the person who set up the system chose rules that happened to work. That required another kind of entanglement with the environment, one that does not need to be repeated while the thermostat operates.
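To see why the designer’s choice of rule matters, here’s a toy bang-bang thermostat (all constants assumed, not a real HVAC model): with hysteresis around the setpoint the temperature settles into a band, whereas a badly chosen rule on the same plant would never hold it there.

```python
# Toy room: heat leaks toward 10 C; heater adds heat when on. The designer's
# rule (heat when too cold, stop when too warm) keeps the room in a band.
import random

room, setpoint, band = 15.0, 20.0, 0.5
heater_on = False
for _ in range(500):
    if room < setpoint - band:
        heater_on = True    # designer's rule: heat when too cold
    elif room > setpoint + band:
        heater_on = False   # ...and stop when too warm
    heating = 0.6 if heater_on else 0.0
    room += heating - 0.05 * (room - 10.0) + random.gauss(0, 0.02)

print(round(room, 1))  # settles near 20, inside the chosen band
```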
Well, as long as the assumptions it’s based on don’t change too much...