**Timeless decision theory** (TDT) is a decision theory developed by Eliezer Yudkowsky which, in slogan form, says that agents should decide as if they are determining the output of the abstract computation that they implement. This theory was developed in response to the view that rationality should be about winning (that is, about agents achieving their desired ends) rather than about behaving in a manner that we would intuitively label as rational. Prominent existing decision theories (including causal decision theory, or CDT) fail to choose the winning decision in some scenarios and so there is a need to develop a more successful theory.

## Timeless Decision Theory has been replaced by Functional Decision Theory

TDT has been superseded by functional decision theory (FDT), developed by Eliezer Yudkowsky and Nate Soares, which generalizes and refines the core insight of TDT (and of updateless decision theory). Readers interested in the current state of this research program should consult the FDT literature; the material below is retained for historical and expository purposes.

## TDT and Newcomb’s problem

A better sense of the motivations behind, and form of, TDT can be gained by considering a particular decision scenario: Newcomb’s problem. In Newcomb’s problem, a superintelligent artificial intelligence, Omega, presents you with a transparent box and an opaque box. The transparent box contains $1000 while the opaque box contains either $1,000,000 or nothing. You are given the choice to either take both boxes (called two-boxing) or just the opaque box (one-boxing). However, things are complicated by the fact that Omega is an almost perfect predictor of human behavior and has filled the opaque box as follows: if Omega predicted that you would one-box, it filled the box with $1,000,000 whereas if Omega predicted that you would two-box it filled it with nothing.

Many people find it intuitive that it is rational to two-box in this case. As the opaque box is already filled, you cannot influence its contents with your decision so you may as well take both boxes and gain the extra $1000 from the transparent box. CDT formalizes this style of reasoning. However, one-boxers win in this scenario. After all, if you one-box then Omega (almost certainly) predicted that you would do so and hence filled the opaque box with $1,000,000. So you will almost certainly end up with $1,000,000 if you one-box. On the other hand, if you two-box, Omega (almost certainly) predicted this and so left the opaque box empty. So you will almost certainly end up with $1000 (from the transparent box) if you two-box. Consequently, if rationality is about winning then it’s rational to one-box in Newcomb’s problem (and hence CDT fails to be an adequate decision theory).
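The payoff comparison above can be made concrete with a short expected-value calculation. This is an illustrative sketch only: the predictor accuracy of 0.99 is an assumed figure standing in for "almost perfect," not part of the original problem statement.

```python
# Expected-value comparison for Newcomb's problem.
# ACCURACY is an assumed stand-in for Omega's near-perfect prediction accuracy.
ACCURACY = 0.99

OPAQUE_PRIZE = 1_000_000   # opaque box contents if Omega predicted one-boxing
TRANSPARENT_PRIZE = 1_000  # transparent box contents (always present)

# If you one-box, Omega almost certainly predicted it and filled the opaque box.
ev_one_box = ACCURACY * OPAQUE_PRIZE

# If you two-box, Omega almost certainly predicted it and left the opaque box
# empty, but you always collect the transparent box.
ev_two_box = (1 - ACCURACY) * OPAQUE_PRIZE + TRANSPARENT_PRIZE

print(ev_one_box, ev_two_box)
```

One-boxing comes out ahead (roughly $990,000 versus roughly $11,000), and remains ahead for any prediction accuracy above about 50.05%.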

TDT will endorse one-boxing in this scenario and hence endorses the winning decision. When Omega predicts your behavior, it carries out the same abstract computation as you do when you decide whether to one-box or two-box. To make this point clear, we can imagine that Omega makes this prediction by creating a simulation of you and observing its behavior in Newcomb’s problem. This simulation will clearly decide according to the same abstract computation as you do as both you and it decide in the same manner. Now, given that TDT says to act as if deciding the output of this computation, it tells you to act as if your decision to one-box can determine the behavior of the simulation (or, more generally, Omega’s prediction) and hence the filling of the boxes. So TDT correctly endorses one-boxing in Newcomb’s problem as it tells the agent to act as if doing so will lead them to get $1,000,000 instead of $1,000.
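The simulation story above can be sketched in code. Here both Omega and the agent invoke the same `decide` function, standing in for the shared abstract computation; the function names and the simulation setup are illustrative assumptions, not part of TDT's formal statement.

```python
# Toy model of Newcomb's problem in which Omega predicts the agent by
# running the very same decision computation the agent runs.

def decide():
    """The agent's decision procedure -- the shared abstract computation."""
    return "one-box"

def omega_fill_boxes(predict):
    """Omega fills the opaque box based on its prediction of the agent."""
    prediction = predict()  # Omega runs the same computation as the agent
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    return opaque, transparent

def play():
    opaque, transparent = omega_fill_boxes(decide)  # boxes are filled first
    choice = decide()  # the agent then decides -- the same computation again
    return opaque if choice == "one-box" else opaque + transparent

print(play())  # 1000000
```

Because the box contents and the agent's choice are two calls to one computation, "choosing" the output of `decide` fixes both at once: an agent whose computation outputs `"one-box"` walks away with $1,000,000, while one whose computation outputs `"two-box"` walks away with $1,000.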

## TDT and other decision scenarios

TDT also wins in a range of other cases including medical Newcomb’s problems, Parfit’s hitchhiker, and the one-shot prisoner’s dilemma. However, there are other scenarios where TDT does not win, including counterfactual mugging. This suggests that TDT still requires further development if it is to become a fully adequate decision theory. Given this, there is some motivation to also consider alternative decision theories alongside TDT, like updateless decision theory (UDT), which also wins in a range of scenarios but has its own problem cases. It seems likely that both of these theories draw on insights which are crucial to progressing our understanding of decision theory. So while TDT requires further development to be entirely adequate, it nevertheless represents a substantial step toward developing a decision theory that always endorses the winning decision.

## Formalization of TDT

Coming to fully grasp TDT requires an understanding of how the theory is formalized. Very briefly, TDT is formalized by supplementing causal Bayesian networks, which can be thought of as graphs representing causal relations, in two ways. First, these graphs should be supplemented with nodes representing abstract computations and an agent’s uncertainty about the result of these computations. Such a node might represent an agent’s uncertainty about the result of a mathematical sum. Second, TDT treats decisions as the abstract computation that underlies the agent’s decision process. These two features transform causal Bayesian networks into timeless decision diagrams. Using these supplemented diagrams, TDT is able to determine the winning decision in a whole range of decision scenarios. For a more detailed description of the formalization of TDT, see Eliezer Yudkowsky’s timeless decision theory paper.
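The structural idea behind timeless decision diagrams can be sketched as follows: the agent's act and Omega's prediction are both downstream of a single abstract-computation node, so intervening on that node (which is what TDT's "act as if determining the output" amounts to) changes both at once. This is a minimal sketch under assumed payoffs, not Yudkowsky's full formalism, which handles general networks and uncertainty over computation outputs.

```python
# Minimal sketch of a timeless decision diagram for Newcomb's problem.
# A single shared node (the abstract computation's output) feeds both the
# "prediction" node and the "act" node; the payoffs are illustrative.

def payoff(computation_output):
    """Payoff when intervening on the shared abstract-computation node."""
    prediction = computation_output  # Omega's prediction node (downstream)
    act = computation_output         # the agent's act node (also downstream)
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if act == "one-box" else opaque + 1_000

# TDT: set the shared node to whichever output maximizes the payoff.
best = max(["one-box", "two-box"], key=payoff)
print(best, payoff(best))  # one-box 1000000
```

A CDT-style intervention, by contrast, would sever only the act node from the shared computation, leaving the prediction fixed, and would therefore recommend two-boxing; the shared node is exactly what the timeless diagram adds.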

## Further Reading

A Comparison of Decision Algorithms on Newcomblike Problems, by Alex Altair

Problem Class Dominance in Predictive Dilemmas, by Danny Hintze

## Notable Posts

Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives by Anna Salamon

## External Links

Timeless Decision Theory (2010) by Eliezer Yudkowsky

An Introduction to Timeless Decision Theory at Formalised Thinking
