Time’s arrow ⇒ decision theory


Debates on which decision theory (EDT/​CDT/​UDT/​FDT/​etc.) is “rational” seem to revolve around how one should model “free will”. Do we optimize individual actions or entire policies? Do we model our choice as an evidential update or a causal intervention?

Physics tells us that the Universe evolves deterministically and reversibly, so that the microscopic laws do not distinguish past from future. From the present state of the Universe, Laplace’s demon can accurately predict both the past and the future. It makes no sense to consider alternative actions or policies that you could have taken, because no such actions or policies exist. And yet, free will is an empirical fact: that is to say, our world is full of agents whose actions and policies optimize for some notion of future well-being. Instead of getting caught in the semantics of whether free will is “real” or not, we should seek to explain the empirical reality that inspires this term.

Our cover article presents, I believe for the first time, a mathematically rigorous class of dynamical systems that resolves the century-old paradox between microscopic reversibility and macroscopic causality. After establishing that the system’s macroscopic statistics are described by Bayes nets with timelike directed edges, we show that Pearlian causal interventions arise as an effective way to model exogenous influences on a subsystem. From there, we can argue that Darwinian natural selection might create agents who follow Causal Decision Theory (CDT):
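To make the distinction concrete, here is a toy sketch of how an intervention differs from an evidential update on a Bayes net with timelike edges. The net itself (Rain → Sprinkler → WetGrass, with made-up probabilities) is my own illustrative assumption, not from the article: conditioning on an observed sprinkler state revises our belief about its past cause, while a do()-style intervention cuts the incoming edge and leaves the past alone.

```python
import random

# Hypothetical net with timelike edges: Rain -> Sprinkler -> WetGrass.
# All probabilities below are illustrative assumptions.

def sample(intervene_sprinkler=None):
    rain = random.random() < 0.3                        # P(rain) = 0.3
    if intervene_sprinkler is None:
        sprinkler = random.random() < (0.1 if rain else 0.6)
    else:
        sprinkler = intervene_sprinkler                 # do(): ignore the parent
    wet = rain or (sprinkler and random.random() < 0.9)
    return rain, sprinkler, wet

random.seed(0)
runs = [sample() for _ in range(100_000)]
# Conditioning: observing sprinkler=on is evidence it did not rain.
p_rain_given_on = sum(r for r, s, _ in runs if s) / sum(s for _, s, _ in runs)
# Intervening: forcing sprinkler=on tells us nothing about the rain.
do_runs = [sample(intervene_sprinkler=True) for _ in range(100_000)]
p_rain_do_on = sum(r for r, _, _ in do_runs) / len(do_runs)
print(p_rain_given_on)  # well below the 0.3 prior
print(p_rain_do_on)     # stays near the 0.3 prior
```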

  1. For simplicity, we model ourselves as following a deterministic algorithm encoded in our genes. There is no magic force of “free will” from outside the Universe; instead, the algorithm determines our action.

  2. Initially, random variation may lead to simple instinctual behaviors, which amount to algorithms such as “Always do action A” or “Always do action B”. If action B reaps higher rewards, Darwinian selection will favor the latter algorithm.

  3. Now consider a more complex environment, in which it would be infeasible to maintain a direct lookup table of optimal actions. There, a successful algorithm may adapt to its situation by reasoning as follows: “Let’s imagine that I were an agent that responds to this situation with action A, and simulate the result. Now let’s imagine instead that I were an agent that responds with action B, and simulate that result. I cannot choose to change which kind of agent I am, but I see that I prefer the result of action B. Therefore, my action is B.”
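The selection dynamic in step 2 can be sketched as a toy replicator simulation. The payoff numbers and the mutation-free dynamics are my own illustrative assumptions:

```python
# Toy illustration of step 2: two hard-coded instinctual algorithms compete,
# and the one reaping higher rewards comes to dominate the population.
# Reward values are illustrative assumptions, not from the article.

REWARD = {"always_A": 1.0, "always_B": 1.5}

def step(pop):
    """One generation of discrete replicator dynamics: each algorithm's
    population share grows in proportion to its reward relative to the mean."""
    mean = sum(share * REWARD[alg] for alg, share in pop.items())
    return {alg: share * REWARD[alg] / mean for alg, share in pop.items()}

pop = {"always_A": 0.99, "always_B": 0.01}   # B starts as a rare variant
for _ in range(50):
    pop = step(pop)
print(round(pop["always_B"], 3))  # B's share ends up near 1.0
```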

I argue that “free will” is precisely this process of considering multiple alternatives and outputting the action whose outcome best meets some internal criteria. Note that the counterfactual action A never occurs, because the algorithm deterministically chooses its preferred action B. Thus, there is no contradiction between determinism and choice.
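The deterministic choice procedure of step 3 can be sketched in a few lines. The world-model and reward numbers here are hypothetical; the point is only the shape of the algorithm:

```python
# A deterministic algorithm that "chooses": it simulates the outcome of each
# candidate action inside its world-model and outputs the action whose
# simulated outcome it prefers. The counterfactual branch is evaluated only
# inside the simulation; it never occurs in the world.

def simulate(world_model, action):
    """Predicted payoff of responding to the situation with `action`."""
    return world_model[action]

def decide(world_model, actions):
    # "Imagine I were an agent that does A... now instead B..." — then
    # output the preferred one. Same inputs always yield the same output,
    # so determinism and choice coexist.
    return max(actions, key=lambda a: simulate(world_model, a))

world_model = {"A": 1.0, "B": 2.0}      # hypothetical predicted rewards
print(decide(world_model, ["A", "B"]))  # → B
```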

I am not claiming that CDT is always optimal; it’s just the most straightforward to analyze. In settings containing Newcomb-like problems, CDT agents may well be selected against. Nonetheless, I hope the concepts presented here are helpful in furthering the development of decision theory.

I find that debates involving decision theory often get stuck on preconceived notions of physicality or rationality. As a more grounded alternative, I propose that a theory of rational agency should be based on an idealized notion of what’s naturally selected for.

Some of the comments on my previous post suggest this might not work in a post-ASI world, if the ASI is too powerful to face selective pressures. In such a world, where the above arguments fall apart, is it still meaningful to define rationality? Can the role of selection instead be played by the training methodology that produces the ASI?
