Preferences and goals
It might be interesting to build, on top of this theory, something that deals more with utilities or something similar. Since this theory is basically a calculus of what agents could do, it seems likely that we could say interesting things by layering on top of it an analysis of what agents should do.
I don’t think that’s what you had in mind, but one reason I am interested in learning more about Cartesian Frames is that I think they might prove useful for formalizing the locality of goals. The idea is to capture whether the goal a system follows is really about its inputs, or about the state of the world.
One way to understand this distinction is through wireheading. For example, I consider my own goals to be about the world, because I wouldn’t want to wirehead into believing that I had accomplished them. Having the goal of always being happy, on the other hand, means being completely okay with wireheading, and thus having a goal about my inputs rather than about what truly happens in the world.
Intuitively, this distinction seems to depend on how the boundary between the system/agent and the environment is drawn, as well as on the interface between them, which is where I see a possible connection with Cartesian Frames. But I’m not sure whether they can actually be used for that purpose.
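To make the intuition concrete, here is a minimal sketch in Python. It assumes only that a frame can be summarized as a set of agent options, a set of environment options, and an evaluation map from pairs of options to worlds; the specific options, worlds, and the observation map are invented for illustration. The point is just that an "input" goal factors through what the agent observes at the interface, while a "world" goal does not, which is why wireheading satisfies one but not the other.

```python
# Hypothetical illustration (not from the Cartesian Frames sequence): agent
# options A, environment options E, an evaluation map A x E -> worlds, and an
# observation map from worlds to the agent's inputs.

A = ["work", "wirehead"]
E = ["easy", "hard"]

def world(a: str, e: str) -> str:
    """Evaluation map of the frame: which world results from the pair (a, e)."""
    if a == "wirehead":
        return "goal_unmet_but_feels_met"   # wireheading never changes the world
    return "goal_met" if e == "easy" else "goal_unmet"

def observation(w: str) -> str:
    """What the agent sees on its inputs in world w (the interface)."""
    return "looks_met" if w in ("goal_met", "goal_unmet_but_feels_met") else "looks_unmet"

def u_world(w: str) -> float:
    """A goal about the world: utility depends on the world state itself."""
    return 1.0 if w == "goal_met" else 0.0

def u_input(w: str) -> float:
    """A goal about the inputs: utility depends only on what the agent observes."""
    return 1.0 if observation(w) == "looks_met" else 0.0

for a in A:
    worlds = [world(a, e) for e in E]
    print(a,
          "world-utility:", [u_world(w) for w in worlds],
          "input-utility:", [u_input(w) for w in worlds])

# "wirehead" guarantees maximal input-utility in every environment but never
# achieves any world-utility; "work" only succeeds when the environment
# cooperates. The input goal factors through the observation map, the world
# goal does not.
```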