How important are MDPs for AGI (Safety)?

I don’t think finite-state MDPs are a particularly powerful conceptual tool for designing strong RL algorithms. I’ll consider the case of no function approximation first.

It is certainly easier to do RL in a finite-state MDP. The benefit of modeling an environment as a finite-state MDP, and then using an MDP-inspired RL algorithm, is that when the agent searches for plans to follow, it doesn’t evaluate the same plans twice.

Instead, it caches the (approximate) “value” for each possible “state”, and then if a plan would take it to a state that it has already evaluated, it doesn’t have to re-evaluate what the plan would be from that point on. It already knows, more or less, how much utility it could get thereafter. Compare that to the naïve approach of using a world-model to do full expectimax search at each timestep.
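To make that concrete, here is a minimal sketch (mine, not from any standard library) of the contrast, assuming a hypothetical `transitions(state, action)` world model that returns `(probability, next_state, reward)` triples, a toy action set, and hashable states:

```python
from functools import lru_cache

ACTIONS = [0, 1]  # assumed toy action set
GAMMA = 0.9       # discount factor

def transitions(state, action):
    """Hypothetical world model: returns a list of (prob, next_state, reward)."""
    raise NotImplementedError

def naive_expectimax(state, depth):
    # Re-evaluates a state from scratch every time a different plan reaches it.
    if depth == 0:
        return 0.0
    return max(
        sum(p * (r + GAMMA * naive_expectimax(s2, depth - 1))
            for p, s2, r in transitions(state, a))
        for a in ACTIONS
    )

@lru_cache(maxsize=None)
def cached_value(state, depth):
    # The "MDP approach" in miniature: the value of each (state, depth) pair is
    # computed once and reused, so no region of the plan space is searched twice.
    # (Requires states to be hashable.)
    if depth == 0:
        return 0.0
    return max(
        sum(p * (r + GAMMA * cached_value(s2, depth - 1))
            for p, s2, r in transitions(state, a))
        for a in ACTIONS
    )
```

Both functions compute the same finite-horizon value; the only difference is whether work on already-visited states gets repeated.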

The model-the-environment-as-finite-state-MDP-then-do-dynamic-programming approach, or just “the MDP approach” for short, is, I think, all about not searching the same region of the planning search space twice. This is clearly a good thing, but I don’t think the MDP approach in RL contains much more conceptual progress toward AGI than that. If I were to try to do a pre-natum of a fairly advanced RL agent, that is, if I tried to anticipate a response to “things went well; why did that happen?”, my guess would be that a big part of the answer would be:

It avoids searching much of the planning search space even once, let alone twice.

The MDP approach with function approximation is more powerful, depending on how good the function approximation is. There’s no upper bound on how good the MDP approach with function approximation could be, because buried inside the function approximation (whether that’s approximation of the value, or the optimal policy, or both) could be some clever RL algorithm that does most of the work on its own. A good function approximator that is able to generate accurate predictions of the value and/or the optimal policy might appear to us to “generalize” well across “similar states”. But it’s not clear to me to what extent it is a useful abstraction to say that the function approximator thinks in terms of the agent bouncing around a set of states that it classifies as more or less similar to each other.
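As a caricature of what “generalizing across similar states” could mean, here is a minimal sketch with assumed names: a linear model over a hypothetical `features(state)` map stands in for whatever (possibly far more sophisticated) approximator the agent actually uses.

```python
import numpy as np

def features(state):
    """Hypothetical feature map from a state to a fixed-length vector."""
    raise NotImplementedError

class LinearValueFunction:
    def __init__(self, num_features, lr=0.01):
        self.w = np.zeros(num_features)
        self.lr = lr

    def value(self, state):
        return float(self.w @ features(state))

    def update(self, state, target):
        # Move the prediction for this state toward the target; states that are
        # nearby in feature space move with it, which is the "generalization".
        phi = features(state)
        self.w += self.lr * (target - self.value(state)) * phi
```

The post’s point is that once the approximator is powerful enough, whatever it does internally may bear little resemblance to this picture of values being smeared across similar states.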

I don’t mean to say that the MDP approach is useless. I’m certainly not against using a TD-style update instead of a full Monte Carlo rollout for training a function approximator; it’s better than not using one and effectively searching parts of the planning search space many times over. I just don’t think it’s that big a deal conceptually.
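For concreteness, a minimal sketch (toy, assumed interfaces) of the two kinds of training target: a Monte Carlo target computed from a full rollout of rewards, versus a TD-style target that bootstraps from the cached value estimate of the next state.

```python
GAMMA = 0.9  # discount factor

def monte_carlo_targets(rewards):
    # Full Monte Carlo rollout: each step's target is the observed discounted
    # return, computed by scanning the whole remainder of the episode.
    targets, g = [], 0.0
    for r in reversed(rewards):
        g = r + GAMMA * g
        targets.append(g)
    return list(reversed(targets))

def td_target(value_fn, reward, next_state):
    # TD-style update: bootstrap from the current estimate of the next state's
    # value instead of re-evaluating everything downstream of it.
    return reward + GAMMA * value_fn(next_state)
```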

I think this is one small, very disputable argument against defaulting to a finite-state MDP formalism in AGI safety work. A natural alternative is to consider the agent’s entire interaction history as the state, and suppose that the agent is still somehow using clever, efficient heuristics for approximating expectimax planning, with or without built-in methods for caching plans that have already been evaluated. None of this says that there’s any cost to using a finite-state MDP formalism for AGI safety work, only that the benefits don’t seem so great as to make it a “natural choice”.
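To spell out that alternative framing, a minimal sketch under assumed interfaces: the “state” is the entire interaction history, a hypothetical `predict(history, action)` world model plays the role of the transition function, and any caching is keyed on whole histories rather than on a hand-picked finite state space (such a cache rarely gets hits, since histories almost never repeat, so any efficiency has to come from heuristics that prune the search instead).

```python
from functools import lru_cache

ACTIONS = [0, 1]  # assumed toy action set
GAMMA = 0.99      # discount factor

def predict(history, action):
    """Hypothetical world model: list of (prob, observation, reward) given
    the full interaction history and a candidate action."""
    raise NotImplementedError

@lru_cache(maxsize=None)
def value_of_history(history, depth):
    # `history` is a tuple of (action, observation, reward) triples; nothing is
    # collapsed into a finite state, and the cache is keyed on whole histories.
    if depth == 0:
        return 0.0
    return max(
        sum(p * (r + GAMMA * value_of_history(history + ((a, o, r),), depth - 1))
            for p, o, r in predict(history, a))
        for a in ACTIONS
    )
```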