It looks like uncertainty about your own actions in other possible worlds is entirely analogous to uncertainty about mathematical facts: in both cases, the answer lies in the denotation of the structure you already have at hand, so it doesn’t seem like the question of your own actions should be treated differently from any other logical question.
(The following is moderately raw material and runs the risk of being nonsense; I don’t understand it well enough.)
One perspective that wasn’t mentioned, and that I suspect may be important, is treating interaction between different processes (or agents) as working by the same mechanism as shared partial histories between alternative versions of the same agent. If you can have logical knowledge about your own actions in other possible states that grow, in time and in possibility, out of your current structure, the same treatment can be given to the possible states of the signals you send out, in either time direction, that is, to the consequences of actions and to observations. One step further: any knowledge (properly defined) you have about something else gives you the same power of mutual coordination with that something as a shared partial history gives you with alternative or at-different-times versions of yourself.
This problem seems deeply connected to logic and theoretical computer science, in particular models of concurrency.
By the way, you say “partial histories of sense data and actions”. I’ve tried considering this problem in a time-reversible dynamic; it adds a lot of elegance, and there actions are not part of the history, but rather something that is removed from the history. The state of the agent doesn’t accumulate from actions and observations; instead, it is added to by observations and taken away from by actions. The point at which something counts as an observation or an action, rather than as part of the agent’s state, is itself rather arbitrary, and both can be seen as shifts of the boundary of what is considered part of the agent. (None of this is agent-specific; it applies to processes in general.)
Everything you said sounds correct, except the last bit, which is just unclear to me. I’d welcome a demonstration (or formal definition) some day:
By the way, you say “partial histories of sense data and actions”. I’ve tried considering this problem in a time-reversible dynamic; it adds a lot of elegance, and there actions are not part of the history, but rather something that is removed from the history. The state of the agent doesn’t accumulate from actions and observations; instead, it is added to by observations and taken away from by actions. The point at which something counts as an observation or an action, rather than as part of the agent’s state, is itself rather arbitrary, and both can be seen as shifts of the boundary of what is considered part of the agent. (None of this is agent-specific; it applies to processes in general.)
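One way to make the “added to by observations, taken away from by actions” picture concrete is a minimal, entirely speculative Python sketch (not from the original comments; all names are hypothetical). The agent’s state is just a bag of facts: an observation moves a fact from the environment into the state, an action moves it back out, and each step exactly inverts the other, so the dynamic is reversible.

```python
# Hypothetical sketch of a time-reversible agent dynamic.
# Observations ADD to the agent's state; actions REMOVE from it.
# Each operation is the exact inverse of the other, so any history
# can be unwound by replaying it in reverse.

class ReversibleAgent:
    def __init__(self):
        self.state = []  # facts currently inside the agent

    def observe(self, fact):
        """Observation: a fact passes from the environment into the agent."""
        self.state.append(fact)

    def act(self, fact):
        """Action: a fact passes from the agent back into the environment."""
        self.state.remove(fact)  # inverse of observe(fact)
        return fact

agent = ReversibleAgent()
agent.observe("signal")         # state grows by observation
emitted = agent.act("signal")   # state shrinks by action
assert agent.state == []        # the round trip restores the initial state
```

The arbitrariness of the agent/environment boundary shows up here too: calling `observe` versus `act` is just a choice about which side of the boundary a given fact is bookkept on, not a property of the fact itself.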
Just curious, did you get the name “ambient control” from ambient calculi?