A strange attitude towards the physical world can be reframed as caring only about some abstract world that happens to resemble the physical world in some ways. A chess AI could be said to act on some specific physical chessboard within the real world while carefully avoiding all concern about everything else, but it’s more naturally described as acting on just the abstract chessboard, nothing else. I think values/preferences (for some arbitrary agent) should not just assign probutility to the physical world, but should also specify which world they are talking about. Then different agents don’t just normatively disagree about the relative value of events, but about which worlds are worth caring about (not just possible worlds within some space of nearby possible worlds, but fundamentally very different abstract worlds), and therefore about what kinds of events (from which sample spaces) ought to serve as semantics for possible actions, before the value of those actions can even be considered.
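As a minimal sketch of this framing (not from the post; all names here are illustrative), a preference can be represented as a pair of a named world with its sample space and a utility over events in that space, so that two agents may fail to disagree about event values at all, simply because their preferences are about different worlds:

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Hashable

# An event is a set of outcomes drawn from some sample space.
Event = FrozenSet[Hashable]

@dataclass(frozen=True)
class Preference:
    """A preference that names the world it is about, not just values over events."""
    world: str                          # which abstract world this preference concerns
    sample_space: FrozenSet[Hashable]   # outcomes that give semantics to actions
    utility: Callable[[Event], float]   # value of events within that world only

chess_states = frozenset({"win", "draw", "loss"})
weather_states = frozenset({"rain", "sun"})

chess_agent = Preference("abstract chessboard", chess_states,
                         lambda e: 1.0 if "win" in e else 0.0)
weather_agent = Preference("local weather", weather_states,
                           lambda e: 1.0 if "sun" in e else 0.0)

# The disagreement here is not about the relative value of shared events:
# there are no shared events, because the sample spaces are disjoint.
assert chess_agent.sample_space.isdisjoint(weather_agent.sample_space)
```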
A world model (such as an LLM with frozen weights) is already an abstraction: its data is not the physical world itself, but it is coordinated with the physical world to some extent, similarly to how an abstract chessboard is coordinated with a specific physical chessboard in the real world (learning is coordination, adjusting the model so that the model and the world have more shared explanations for their details). Thus acting within an abstract world given by a world model (as opposed to within the physical world itself) might be a useful framing for systematically ignoring some aspects of the physical world, and world models could be intentionally crafted to emphasize particular aspects.
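A sketch of what “acting within the abstract world given by a world model” could mean, assuming for simplicity a one-step lookahead and treating the model and value function as opaque hypothetical callables:

```python
from typing import Callable, Iterable, TypeVar

State = TypeVar("State")
Action = TypeVar("Action")

def act_in_abstract_world(
    state: State,
    actions: Iterable[Action],
    model: Callable[[State, Action], State],  # frozen world model: its dynamics, not physics
    value: Callable[[State], float],          # value defined over model states only
) -> Action:
    """Choose an action by consulting only the abstract world the model defines.

    The physical world enters only through whatever coordination was baked
    into `model` during learning; everything the model omits is, by
    construction, ignored when valuing actions.
    """
    return max(actions, key=lambda a: value(model(state, a)))
```

On this sketch, crafting the world model is where the “systematic ignoring” happens: the planner never sees any aspect of the physical world that the model’s state space leaves out.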