I think I tried to solve a similar problem before: that of looking at the simplest possible stable control system and seeing how I can extract the system’s “beliefs” and “values” that result in it remaining stable. Then, see if I can find a continuous change between the structure of that system, and a more complex system, like a human.
For example, consider the simple spring-mass-damper system. If you move the mass from its equilibrium position xe, it will return. What do the concepts of “belief” and “value” map onto here? For beliefs, I used the concept of mutual information: what about the system could you look at to learn whether the mass is at xe or not? How does the system know it’s not at xe?
The information is contained in the force the spring exerts. However, that same force also determines which direction the mass moves, i.e., its “value”. So it looks like the beliefs and values are fully mixed: the same thing that tells you what it believes, tells you what it does. In that case, at what point in the structural transition from the spring to intelligent agents does the distinction between values and beliefs start to form, if at all?
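To make the point concrete, here is a minimal simulation sketch (my own illustration, not from the original discussion; the parameter values are arbitrary). Note how a single quantity, the spring force, plays both roles: its sign tells you which side of equilibrium the mass is on (the “belief”), and it is also exactly what pushes the mass back (the “value”):

```python
# Damped spring-mass system, integrated with explicit Euler.
# Assumed parameters: m=1, k=4, c=1 (underdamped), chosen for illustration.
m, k, c = 1.0, 4.0, 1.0   # mass, spring constant, damping coefficient
x_e = 0.0                  # equilibrium position
x, v = 1.0, 0.0            # start displaced from x_e, at rest
dt = 0.01                  # time step

for _ in range(5000):      # simulate 50 seconds
    f_spring = -k * (x - x_e)   # this one quantity encodes WHERE the mass is
    f = f_spring - c * v        # ...and is also WHAT restores it to x_e
    v += (f / m) * dt
    x += v * dt

print(abs(x - x_e) < 1e-3)  # the mass has (approximately) returned to x_e
```

There is no separate register holding an estimate of the displacement and another holding a goal; the restoring force is the system’s entire “epistemic” and “motivational” state at once, which is the fully-mixed situation described above.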
Incidentally, I only just now read Vladimir_Nesov’s post, because previously I hadn’t bothered to make the equations readable, since they don’t render properly in the browser I used. “Beware trivial inconveniences”, indeed!
(Separate post because of the different issues and to avoid tl;dr.)