I’m confused by your comment, but I’ll try to answer anyway.
As an agent in an environment, you can consider the environment in behavioral semantics: the environment is the equivalence class of all the things that behave the same as what you see. Instead of a minimal model, this gives a maximal model. Everything about the territory remains a black box, except the structure imposed by the way you see the territory: the way you observe things, perform actions, and value strategies. This dissolves the question of what the territory “really is”.
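A minimal sketch of this equivalence-class view, with made-up territory classes: two environments with different internals are indistinguishable under every observation the agent can make, so behaviorally they are the same black box. The class and function names here are illustrative, not from any existing library.

```python
class TerritoryA:
    """Counts up by storing an integer directly."""
    def __init__(self):
        self._n = 0
    def act(self):          # the agent's only action
        self._n += 1
    def observe(self):      # the agent's only observation
        return self._n

class TerritoryB:
    """Counts up by appending to a list: different internals, same behavior."""
    def __init__(self):
        self._events = []
    def act(self):
        self._events.append(None)
    def observe(self):
        return len(self._events)

def behaviorally_equal(t1, t2, action_sequence):
    """An observer who can only compare observations after each action."""
    for _ in action_sequence:
        t1.act()
        t2.act()
        if t1.observe() != t2.observe():
            return False
    return True
```

On this view, since `behaviorally_equal(TerritoryA(), TerritoryB(), range(100))` holds for every probe the agent can run, the two territories fall under the same symbol; the difference in their internals is absent from the semantics.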
Your answer strikes me as unsatisfactory: if we apply it to humans, we lose interest in electricity, atoms, quarks, etc. An agent can opt to dig deeper into reality to find the base-level stuff, or it can “dissolve the question” and walk away satisfied. Why would you want to do the latter?
The agent has preferences over these black boxes (or the strategies that instantiate them), and digging deeper may be a good idea. To get rid of the (unobservable) structure in the environment, the preferences for the elements of the environment have to be translated into preferences over whole situations. The structure of the environment becomes the structure of the preferences over the black boxes.
Two models can behave the same as what you’ve seen so far, but diverge in future predictions. Which model should you give greater weight to? That’s the question I’m asking.
The current best answer we know of seems to be to write each consistent hypothesis in a formal language, weight longer explanations inverse-exponentially, and renormalize so that your total probability sums to 1. Look up AIXI and the universal prior.
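The weighting scheme just described can be sketched concretely. This is a toy simplicity prior, not AIXI itself: description-string length stands in for program length, and the hypothesis names are made up for illustration.

```python
from fractions import Fraction

def simplicity_prior(hypotheses):
    """Weight each hypothesis by 2^(-description length), then renormalize.

    `hypotheses` maps a name to its description, a string in some formal
    language. Exact fractions are used so the renormalized weights sum
    to exactly 1.
    """
    raw = {name: Fraction(1, 2 ** len(desc))
           for name, desc in hypotheses.items()}
    total = sum(raw.values())
    return {name: weight / total for name, weight in raw.items()}
```

For two hypotheses consistent with the data so far, say `{"short": "x+1", "long": "x+1 if x<100 else 0"}`, the shorter one receives almost all of the probability mass, which is how the prior breaks ties between models that agree on past observations but diverge on future ones.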
In the behavioral interpretation, you weight observations, or the effects of possible strategies (on observations/actions), not the way the territory is. The base level is the agent and the rules of its game with the environment. Everything else describes the form of this interaction, and answers questions not about the underlying reality, but about how the agent sees it. If the distinction you are making doesn’t reach the level of influencing what the agent experiences, it’s absent from this semantics: no weighting, no moving parts, no distinction at all.
For a salient example: if an agent in the same fixed internal state is instantiated multiple times (in the same environment at the same time, at different times, or even in different environments, with different probabilities for some notion of that), all of these instances and possibilities together go under one atomic black-box symbol for the territory corresponding to that state of the agent, with no internal structure. That structure can, however, be represented in the preferences over strategies, or sets of strategies, for the agent.
Vladimir, are you proposing this “behavioral interpretation” for an AI design, or for us too? Is this an original idea of yours? Can you provide a link to a paper describing it in more detail?
There are many similarities (or dualities) between algebras and coalgebras which are often useful as guiding principles. But one should keep in mind that there are also significant differences between algebra and coalgebra. For example, in a computer science setting, algebra is mainly of interest for dealing with finite data elements – such as finite lists or trees – using induction as main definition and proof principle. A key feature of coalgebra is that it deals with potentially infinite data elements, and with appropriate state-based notions and techniques for handling these objects. Thus, algebra is about construction, whereas coalgebra is about deconstruction – understood as observation and modification.
A rule of thumb is: data types are algebras, and state-based systems are coalgebras. But this does not always give a clear-cut distinction. For instance, is a stack a data type or does it have a state? In many cases however, this rule of thumb works: natural numbers are algebras (as we are about to see), and machines are coalgebras. Indeed, the latter have a state that can be observed and modified.
[...]
Initial algebras (in Sets) can be built as so-called term models: they contain everything that can be built from the operations themselves, and nothing more. Similarly, we saw that final coalgebras consist of observations only.
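The quoted contrast between construction and observation can be sketched in code, under the usual rule of thumb: a fold consumes finite data built from constructors (the algebra/induction side), while an unfold produces a potentially infinite stream of observations from a state (the coalgebra side). The function names are illustrative.

```python
from itertools import islice

def fold_nat(n, zero, succ):
    """Algebra side: a natural number is built from zero and succ,
    and consumed by folding (structural induction)."""
    acc = zero
    for _ in range(n):
        acc = succ(acc)
    return acc

def unfold_stream(state, head, tail):
    """Coalgebra side: a stream is deconstructed by repeatedly
    observing the current state (head) and updating it (tail);
    the resulting data is potentially infinite."""
    while True:
        yield head(state)
        state = tail(state)
```

For instance, `fold_nat(3, 0, lambda x: x + 2)` constructs `6` from the term `succ(succ(succ(zero)))`, while `unfold_stream(1, lambda s: s, lambda s: 2 * s)` is an infinite stream of observations of which we can only ever inspect a finite prefix, e.g. with `islice`.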
I’m generalizing/analogizing from the stuff I read on coalgebras, and in this case I’m pretty sure the idea makes sense; it has probably been explored elsewhere. You can start here, or directly from Introduction to Coalgebra: Towards Mathematics of States and Observations (PDF).