The “rightness” and “actual world” properties you ascribe to this opaque box don’t seem to be actually present.
They aren’t present as part of what we must know to predict the agent’s actions. They are part of a “stance” (like Dennett’s intentional stance) that we can use to give a narrative framework within which to understand the agent’s motivation. What you are calling a black box isn’t supposed to be part of the “view” at all. Instead of a black box, there is a socket where a particular program and “preference vector”, together with the UDT formalism, can be plugged in.
ETA: The reference to a “preference vector” was a misreading of Wei Dai’s post on my part. What I (should have) meant was the utility function U over world-evolution vectors.
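For concreteness, here is a minimal sketch of the “socket” picture, in Python. All names here (udt_decision, world_programs, U) are illustrative toys of mine, not anything from Wei Dai’s post: the point is only that the decision procedure itself is fixed and transparent, and you get a particular agent once you plug in a set of world programs and a utility function U over the resulting world-evolutions.

```python
from itertools import product

def udt_decision(inputs, outputs, world_programs, U):
    """Toy UDT-style socket: pick the input->output mapping that
    maximizes U over the world-evolutions produced when each world
    program runs an agent implementing that mapping."""
    best_mapping, best_utility = None, float("-inf")
    # Enumerate every candidate policy (mapping from inputs to outputs).
    for assignment in product(outputs, repeat=len(inputs)):
        mapping = dict(zip(inputs, assignment))
        agent = lambda x, m=mapping: m[x]
        # Each world program, given the agent, yields a world-evolution;
        # U scores the resulting vector of evolutions.
        evolutions = [w(agent) for w in world_programs]
        utility = U(evolutions)
        if utility > best_utility:
            best_mapping, best_utility = mapping, utility
    return best_mapping

# Toy usage: one world, one input; U rewards outputting "cooperate".
world = lambda agent: "good" if agent("obs") == "cooperate" else "bad"
U = lambda evs: 1.0 if evs[0] == "good" else 0.0
print(udt_decision(["obs"], ["cooperate", "defect"], [world], U))
# -> {'obs': 'cooperate'}
```

Nothing in this sketch needs a “rightness” or “actual world” property stored anywhere; those only show up in the stance we take when narrating what the optimization is doing.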
I don’t understand this.