A decision-making algorithm can only care about things accessible in its mind. The “actual world” is not one of them.
The purpose of this post is not to defend realism, and I think that it would take me far afield to do so now. For example, on my view, the agent is not identical to its decision-making algorithm, if that is to be construed as saying that the agent is purely an abstract mathematical entity. Rather, the agent is the actual implementation of that algorithm. The universe is not purely an algorithm. It is an implementation of that algorithm. Not all algorithms are in fact run.
I haven’t given any reasons for the position that I just stated. But I hope that you can recognize it as a familiar position, however incoherent it seems to you. Do you need any more explanation to understand the viewpoint that I’m coming from in the post?
The actual world is not epistemically accessible to the agent. It’s a useless concept for its decision-making algorithm. An ontology (a logic of actions and observations) that describes possible worlds, and in which you can interpret observations, is useful; the actual world is not.
An ontology is not a “logic of actions and observations” as I am using the term. I am using it in the sense described in the Stanford Encyclopedia of Philosophy.
At any rate, what I’m calling the ontology is not part of the decision theory. I consider different ontologies that the agent might think in terms of, but I am explicit that I am not trying to change how the UDT itself works when I write, “I suggest an alternative conception of a UDT agent, without changing the UDT formalism.”
I give up.