Consider adding to the paper a high-level/simplified description of the environments to which the following sentence from the abstract applies: “We prove that for most prior beliefs one might have about the agent’s reward function [...] one should expect optimal policies to seek power in these environments.” (If it’s the set of environments in which the “vast majority” of RSDs are only reachable by following a subset of policies, consider clarifying that in the paper.) It’s hard (at least for me) to infer that from the formal theorems/definitions.
It isn’t the size of the object that matters here; the key considerations are structural. In the unrolled model, each unrolled state factors into an action history and a world state. This is not true in general for other parts of the environment.
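To make that factoring concrete, here’s a minimal Python sketch of the unrolling construction (all names here are mine, not from the paper): each unrolled state is literally a pair (world state, action history), so the factorization holds by construction rather than being a property the environment happens to satisfy.

```python
# Minimal sketch of the "unrolling trick": an unrolled state is a pair
# (world_state, action_history), so the factorization is explicit by construction.

def unroll_step(step, unrolled_state, action):
    """step: the original transition function, world_state x action -> world_state."""
    world_state, action_history = unrolled_state
    next_world_state = step(world_state, action)
    # The action log grows monotonically; the world-state component evolves as before.
    return (next_world_state, action_history + (action,))

# Toy two-state world: "move" toggles the state, "stay" leaves it unchanged.
def toy_step(world_state, action):
    return 1 - world_state if action == "move" else world_state

s = (0, ())                            # start: world state 0, empty action history
s = unroll_step(toy_step, s, "move")   # -> (1, ("move",))
s = unroll_step(toy_step, s, "stay")   # -> (1, ("move", "stay"))
print(s)
```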
My “unrolling trick” argument doesn’t require an easy way to factor states into [action history] and [the rest of the state, from which the action history can’t be inferred]. A sufficient condition for my argument is that the complete action history can be inferred from every reachable state. When this condition holds, the environment implicitly contains an action log (for the purposes of my argument), and thus the POWER (IID) of all states is equal. And as I’ve argued before, this condition seems plausible for sufficiently complex real-world-like environments. BTW, any deterministic, time-reversible environment fulfills this condition, except in cases where multiple actions can yield the same state transition (in which case we may not be able to infer which of those actions was chosen at the relevant time step).
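To illustrate why the POWERs become equal, here’s a toy Monte Carlo sketch (my own construction, not the paper’s POWER formula): if every reachable state encodes the full action history, no state is ever revisited, so the environment is a tree in which every state roots an identical fresh |A|-ary subtree. Under an IID reward prior, the expected optimal discounted value, the quantity POWER (IID) is built from, is then the same function of (depth, |A|, γ) for every state.

```python
# Toy check: expected optimal value of a depth-limited |A|-ary tree with IID
# U[0,1] rewards. Every state of the unrolled MDP roots the same kind of tree,
# so this estimate doesn't depend on which state we root it at.

import random

def optimal_value(depth, n_actions, gamma):
    """Optimal discounted return of a depth-limited |A|-ary tree with IID U[0,1] rewards."""
    if depth == 0:
        return 0.0
    # Each action leads to a fresh, never-before-seen state with an
    # independently sampled reward, so we sample rewards on the fly.
    return max(random.random() + gamma * optimal_value(depth - 1, n_actions, gamma)
               for _ in range(n_actions))

def estimate(depth=10, n_actions=2, gamma=0.9, n_samples=2000):
    return sum(optimal_value(depth, n_actions, gamma) for _ in range(n_samples)) / n_samples

# The estimate is a function of (depth, n_actions, gamma) only, not of any
# particular state -- which is the sense in which all states have equal POWER (IID).
print(estimate())
```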
It’s easier to find reward functions that incentivize a given action sequence if the complete action history can be inferred from every reachable state (and just how easy depends on how easily the action history can be computed from the state). I don’t see how this fact relates to instrumental convergence supposedly disappearing for “most objectives” [EDIT: when using a simplicity prior over objectives; otherwise, instrumental convergence may not apply regardless]. Generally, if an action log constitutes a tiny fraction of the environment, its existence shouldn’t affect properties of “most objectives” (regardless of whether we use the uniform prior or a simplicity prior).
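As a concrete version of the first sentence, here’s a sketch (the helper `history` is hypothetical, and the construction is mine): given any function that recovers the action log from a state, we can write down a reward function whose optimal policies reproduce any target action sequence, and the complexity of that reward function is essentially the complexity of `history`.

```python
# Sketch: a reward function that incentivizes a target action sequence,
# assuming the full action log is inferable from every state via `history`.

def make_reward(target, history):
    """Return a state-reward function that pays 1.0 exactly while the
    recovered action log is a prefix of `target`, and 0.0 otherwise."""
    def reward(state):
        log = history(state)                     # assumed: the action log is inferable
        return 1.0 if log == target[:len(log)] else 0.0
    return reward

# Usage with the unrolled toy environment above, where the log *is* a state component:
reward = make_reward(("move", "stay"), history=lambda s: s[1])
print(reward((1, ("move",))))   # 1.0 -- on track with the target sequence
print(reward((0, ("stay",))))   # 0.0 -- deviated from the target sequence
```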