
Updateless Decision Theory


Motivation

Updateless Decision Theory (UDT) is a decision theory meant to deal with a fundamental problem in existing decision theories: the need to treat the agent as part of the world in which it makes its decisions. In contrast, in the most common decision theory today, Causal Decision Theory (CDT), the deciding agent is not part of the world model: its decision is the output of the CDT algorithm, but within the world model that decision is "magic". At the moment of deciding, no causal links feed into the chosen action; the agent acts as though its decision were causeless, as in some dualist free-will theories.

Getting this issue right is critical in building a self-improving artificial general intelligence, as such an AI must analyze its own behavior and that of any next-generation agent it may build.

Updateless Decision Theory was invented by Wei Dai and first described in Towards a New Decision Theory.


Content

UDT specifies that the optimal agent is the one with the best algorithm—the best mapping from observations to actions—across a probability distribution over all world-histories. ("Best" here, as in other decision theories, means one that maximizes a utility/reward function.)
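As a rough sketch of that optimization (the toy observations, actions, payoffs, and function names below are illustrative assumptions, not a canonical formalization), one can enumerate every mapping from observations to actions and score each mapping by its expected utility under a fixed prior over world-histories:

```python
from itertools import product

OBSERVATIONS = ["obs_A", "obs_B"]
ACTIONS = ["act_0", "act_1"]

# Each toy "world-history" is an observation the agent will receive, its prior
# probability, and the utility of each action taken in response.
WORLDS = [
    ("obs_A", 0.5, {"act_0": 10, "act_1": 0}),
    ("obs_B", 0.5, {"act_0": 1, "act_1": 5}),
]

def expected_utility(policy):
    """Score a policy (dict mapping observation -> action) under the prior."""
    return sum(p * utils[policy[obs]] for obs, p, utils in WORLDS)

# UDT-style selection: rank whole observation-to-action maps, not single actions.
policies = [dict(zip(OBSERVATIONS, actions))
            for actions in product(ACTIONS, repeat=len(OBSERVATIONS))]
best = max(policies, key=expected_utility)
print(best, expected_utility(best))
# {'obs_A': 'act_0', 'obs_B': 'act_1'} 7.5
```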

This definition may seem trivial, but in contrast, CDT says that an agent should choose the best *action* at any given moment, based on that action's effects. As in Judea Pearl's definition of causality, CDT ignores any causal links inbound to the decider, treating the agent as an uncaused cause. The agent is unconcerned with what evidence its decision may provide about its own mental makeup—evidence which may suggest that it will make suboptimal decisions in other cases.

Evidential Decision Theory (EDT) is the other leading decision theory today. It says that the agent should make the choice for which the expected utility, as calculated with Bayes' Rule, is the highest. EDT avoids CDT's pitfalls but has its own flaw: it ignores the distinction between causation and correlation. In CDT the agent is an uncaused cause; in EDT, the converse: it is caused, but not a cause.

One valuable insight from EDT is reflected in “UDT 1.1” (see the article by McAllister in references), a variant of UDT in which the agent takes into account that some of its algorithm (mapping from observations to actions) may be prespecified and not entirely in its control, so that it has to gather evidence and draw conclusions about part of its own mental makeup. The difference between UDT 1.0 and 1.1 is that UDT 1.1 iterates over policies, whereas UDT 1.0 iterates over actions.
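A toy coordination example (made up for illustration, not taken from the referenced posts) shows why ranking whole policies can matter: two copies of the same agent see different observations, and the best joint outcome requires them to act differently. A search over complete observation-to-action maps finds that asymmetric optimum directly, whereas each copy choosing its own action in isolation faces exactly the kind of coordination problem the policy-level fix is meant to address.

```python
from itertools import product

OBSERVATIONS = ["room_1", "room_2"]  # one copy of the agent wakes up in each room
ACTIONS = ["A", "B"]

# Shared payoff as a function of (room_1 copy's action, room_2 copy's action).
# The best outcome requires the two copies to act differently.
PAYOFF = {("A", "A"): 5, ("A", "B"): 10, ("B", "A"): 0, ("B", "B"): 5}

def utility(policy):
    """Both copies run the same observation -> action map."""
    return PAYOFF[(policy["room_1"], policy["room_2"])]

# Policy-level search: rank every complete map and keep the best one.
policies = [dict(zip(OBSERVATIONS, acts))
            for acts in product(ACTIONS, repeat=len(OBSERVATIONS))]
best = max(policies, key=utility)
print(best, utility(best))  # {'room_1': 'A', 'room_2': 'B'} 10
```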

Both UDT and Timeless Decision Theory (TDT) make decisions on the basis of what you would have pre-committed to. The difference is that UDT asks what you would have pre-committed to without the benefit of any observations you have made about the universe, while TDT asks what you would have pre-committed to given all information you’ve observed so far. This means that UDT pays in Counterfactual Mugging, while TDT does not.
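The usual Counterfactual Mugging numbers make the contrast concrete (the $100 and $10,000 figures are the standard illustrative ones; the function name below is mine). Evaluated from the prior, pre-observation standpoint that UDT uses, the paying policy comes out ahead, even though paying looks like a pure loss once tails has been observed:

```python
PRIOR_HEADS = 0.5

def value_from_prior_standpoint(pays_when_asked: bool) -> float:
    """Expected value of a policy, evaluated before the coin flip (UDT's view)."""
    heads_payoff = 10_000 if pays_when_asked else 0  # Omega rewards predicted payers
    tails_payoff = -100 if pays_when_asked else 0    # cost of actually paying on tails
    return PRIOR_HEADS * heads_payoff + (1 - PRIOR_HEADS) * tails_payoff

print(value_from_prior_standpoint(True))   # 4950.0 -> the paying policy wins
print(value_from_prior_standpoint(False))  # 0.0

# Conditioned on having already observed tails, paying is just a -100 outcome,
# which is why a theory that updates on the observation declines to pay.
```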

UDT is very similar to Functional Decision Theory (FDT), but there are differences. FDT doesn't include the UDT 1.1 fix, and Nate Soares states: "Wei Dai doesn't endorse FDT's focus on causal-graph-style counterpossible reasoning; IIRC he's holding out for an approach to counterpossible reasoning that falls out of evidential-style conditioning on a logically uncertain distribution". Rob Bensinger says that he has heard UDT described as "FDT + a theory of anthropics".

Since UDT is formalised in terms of input-output maps rather than situations, it allows us to make predictions about what an agent would do given an input representing an inconsistent situation, which can be important when dealing with perfect predictors.

Logical Uncertainty

A robust theory of logical uncertainty is essential to a full formalization of UDT. A UDT agent must calculate probabilities and expected values for the outcomes of its possible actions in all possible worlds—sequences of observations and its own actions. However, it does not know its own actions in all possible worlds. (The whole point is to derive its actions.) On the other hand, it does have some knowledge about its actions, just as you know that you are unlikely to walk straight into a wall the next chance you get. So the UDT agent models itself as an algorithm, and its probability distribution over what it itself will do is an important input into its maximization calculation.
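As a minimal sketch of how such self-uncertainty could enter the calculation (the world model, probabilities, and names below are assumptions for illustration, not a proposed theory of logical uncertainty), the expected-utility sum can average over a distribution for what the agent's own algorithm will output instead of a known action:

```python
# Toy worlds: observation, prior probability, utility of each response.
WORLDS = [
    ("obs_wall_ahead", 0.5, {"walk_forward": -10, "turn": 0}),
    ("obs_clear_path", 0.5, {"walk_forward": 5, "turn": 0}),
]

# The agent's beliefs about its own outputs: a stand-in for a real theory of
# logical uncertainty. It is confident, but not certain, it won't walk into the wall.
SELF_MODEL = {
    "obs_wall_ahead": {"walk_forward": 0.01, "turn": 0.99},
    "obs_clear_path": {"walk_forward": 0.90, "turn": 0.10},
}

def expected_utility_under_self_model() -> float:
    """Average utility over the prior on worlds and over the agent's
    distribution on its own actions."""
    return sum(p_world * p_action * utils[action]
               for obs, p_world, utils in WORLDS
               for action, p_action in SELF_MODEL[obs].items())

print(expected_utility_under_self_model())  # 0.5*(-0.1) + 0.5*(4.5) = 2.2
```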

Logical uncertainty is an area which has not yet been properly formalized, and much UDT research is focused on this area.

Blog posts

Relevant Comments

In addition to whole posts on UDT, a number of comments contain important information, often on posts that are otherwise less relevant.

External links

Towards a New Decision Theory. Wei Dai, 13 Aug 2009. 83 points, 148 comments.
Torture vs. Dust vs. the Presumptuous Philosopher: Anthropic Reasoning in UDT. Wei Dai, 3 Sep 2009. 36 points, 29 comments.
The Absent-Minded Driver. Wei Dai, 16 Sep 2009. 45 points, 150 comments.
Why (and why not) Bayesian Updating? Wei Dai, 16 Nov 2009. 35 points, 26 comments.
What Are Probabilities, Anyway? Wei Dai, 11 Dec 2009. 48 points, 89 comments.
Explicit Optimization of Global Strategy (Fixing a Bug in UDT1). Wei Dai, 19 Feb 2010. 55 points, 38 comments.
What is Wei Dai's Updateless Decision Theory? AlephNeil, 19 May 2010. 52 points, 69 comments.
Another attempt to explain UDT. cousin_it, 14 Nov 2010. 69 points, 56 comments.
Where do selfish values come from? Wei Dai, 18 Nov 2011. 67 points, 62 comments.
List of Problems That Motivated UDT. Wei Dai, 6 Jun 2012. 42 points, 11 comments.
An implementation of modal UDT. Benya_Fallenstein, 11 Feb 2015. 8 points, 0 comments.
Updatelessness and Son of X. Scott Garrabrant, 4 Nov 2016. 17 points, 8 comments.
UDT as a Nash Equilibrium. cousin_it, 6 Feb 2018. 18 points, 17 comments.
UDT can learn anthropic probabilities. cousin_it, 24 Jun 2018. 54 points, 10 comments.
A Short Note on UDT. Chris_Leong, 8 Aug 2018. 11 points, 9 comments.
"UDT2" and "against UD+ASSA". Wei Dai, 12 May 2019. 50 points, 7 comments.
Conceptual Problems with UDT and Policy Selection. abramdemski, 28 Jun 2019. 61 points, 16 comments.
FDT defects in a realistic Twin Prisoners' Dilemma. Sylvester Kollin, 15 Sep 2022. 37 points, 1 comment.
FDT is not directly comparable to CDT and EDT. Sylvester Kollin, 29 Sep 2022. 36 points, 8 comments.
Logical Decision Theories: Our final failsafe? Noosphere89, 25 Oct 2022. −7 points, 8 comments.
An explanation of decision theories. metachirality, 1 Jun 2023. 20 points, 4 comments.
Open-minded updatelessness. 10 Jul 2023. 65 points, 21 comments.
UDT shows that decision theory is more puzzling than ever. Wei Dai, 13 Sep 2023. 195 points, 51 comments.
Disentangling four motivations for acting in accordance with UDT. Julian Stastny, 5 Nov 2023. 33 points, 3 comments.
Updatelessness doesn't solve most problems. Martín Soto, 8 Feb 2024. 124 points, 43 comments.
The lattice of partial updatelessness. Martín Soto, 10 Feb 2024. 21 points, 4 comments.