Let’s Discuss Functional Decision Theory
I’ve just finished reading through Functional Decision Theory: A New Theory of Rationality, but some rather basic questions are left unanswered, since the paper focuses on comparing FDT to Causal Decision Theory and Evidential Decision Theory:
How is Functional Decision Theory different from Timeless Decision Theory? All I can gather is that FDT intervenes on the mathematical function, rather than on the agent. What problems does it solve that TDT can’t? (Apparently it solves Mechanical Blackmail with an imperfect predictor, so presumably it can also solve Counterfactual Mugging?)
How is it different from Updateless Decision Theory? What’s the simplest problem in which they give different results?
Functional Decision Theory seems to require counterpossibles, where we imagine that a function outputs a result different from what it actually outputs. The paper says this is a problem that isn’t yet solved. What approaches have been tried so far, and what are the key open problems in this space?
Your third question is the most interesting one. Many variants of UDT require logical counterfactuals, but nobody knows how to make them work reliably. MIRI folks are currently looking into logical inductors, exploration, and logical updatelessness, so maybe Abram or Alex or Scott could explain these ideas. I’ve done some past work on formalizing logical counterfactuals in toy settings like proof search and provability logic; those settings are quite crisp and probably the easiest way to approach the problem.
Quoting from: https://intelligence.org/files/DeathInDamascus.pdf
So given Jessicata’s comment that functional is basically updateless reformulated, this PDF does a good job of clarifying its differences from timeless (see page 4). Basically, timeless updates on the current situation, while updateless doesn’t. So in Counterfactual Mugging, timeless completely ignores the heads case once it sees that the coin came up tails, while updateless carries out the decision it would have committed to before learning this information, namely the input-output mapping with the highest expected value.
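To put numbers on that, here’s a minimal sketch of the expected-value comparison, using the standard Counterfactual Mugging payoffs from the literature (Omega pays you $10,000 on heads iff it predicts you’d pay, and asks you for $100 on tails); the figures aren’t in this thread, so treat them as illustrative.

```python
# A minimal sketch of why the updateless policy pays in Counterfactual Mugging.
# The $10,000 / $100 payoffs are the standard ones from the literature,
# not taken from this thread.

REWARD_IF_PREDICTED_TO_PAY = 10_000  # received in the heads world
COST_OF_PAYING = 100                 # handed over in the tails world

def expected_value(pays_on_tails: bool) -> float:
    """Expected value of a policy, evaluated from the prior (before the flip)."""
    heads_payoff = REWARD_IF_PREDICTED_TO_PAY if pays_on_tails else 0
    tails_payoff = -COST_OF_PAYING if pays_on_tails else 0
    return 0.5 * heads_payoff + 0.5 * tails_payoff

print(expected_value(pays_on_tails=True))   # 4950.0 -- the policy updateless picks
print(expected_value(pays_on_tails=False))  # 0.0    -- what you get by updating on tails
```

Evaluated from the prior, paying wins by $4,950 in expectation, which is exactly the mapping updateless locks in; an agent that updates on seeing tails only compares −$100 to $0 and refuses.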
Also, Counterfactuals, thick and thin has some interesting content related to logical counterfactuals, though it isn’t really an introductory post as it assumes you understand certain mathematical concepts.
They’re the same thing; it’s just a branding change.
[EDIT: never mind, I was wrong] Come to think of it, I don’t know why the FDT paper did not make any reference to UDT or its main inventor Wei Dai (which I infer based on a cursory ctrl-F of the paper).
Nate says: “The main datapoint that Rob left out: one reason we don’t call it UDT (or cite Wei Dai much) is that Wei Dai doesn’t endorse FDT’s focus on causal-graph-style counterpossible reasoning; IIRC he’s holding out for an approach to counterpossible reasoning that falls out of evidential-style conditioning on a logically uncertain distribution. (FWIW I tried to make the formalization we chose in the paper general enough to technically include that possibility, though Wei and I disagree here and that’s definitely not where the paper put its emphasis. I don’t want to put words in Wei Dai’s mouth, but IIRC, this is also a reason Wei Dai declined to be listed as a co-author.)”
I actually think it’s a downgrade. It doesn’t include the fix that Wei calls UDT 1.1: quantifying over all possible observation-action maps instead of over possible actions for the observation you’ve actually received. The FDT paper has a footnote saying the fix would only matter for multi-agent problems, which is wrong. All my posts about UDT assume the fix as a matter of common sense.
Nate says: “You may have a scenario in mind that I overlooked (and I’d be interested to hear about it if so), but I’m not currently aware of a situation where the 1.1 patch is needed that doesn’t involve some sort of multi-agent coordination. I’ll note that a lot of the work that I (and various others) used to think was done by policy selection is in fact done by not-updating-on-your-observations instead. (E.g., FDT agents refuse blackmail because of the effects this has in the world where they weren’t blackmailed, despite how their observations say that that world is impossible.)”
Say there’s some logical random variable O you’re going to learn, which is either 0 or 1, with a prior 50% probability of being 1. After learning the value of this variable, you take action 0 or 1. A predictor doesn’t know the value of this variable, but does know your source code. The predictor predicts p0 = P(you take action 1 | O = 0) and p1 = P(you take action 1 | O = 1). Your utility depends only on these predictions; specifically, it is p0 − 100(p0 − p1)^2.
This is a continuous coordination problem, and CDT-like graph intervention isn’t guaranteed to solve it, while policy selection is.
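To make the gap concrete, here’s a minimal sketch (my own illustration, not from the thread) that enumerates the deterministic policies for this game. Joint policy selection finds the global optimum, while optimizing each observation’s action separately, holding the other fixed, can stall at a strictly worse fixed point.

```python
from itertools import product

def utility(p0: float, p1: float) -> float:
    """u = P(act 1 | O=0) - 100 * (P(act 1 | O=0) - P(act 1 | O=1))^2"""
    return p0 - 100 * (p0 - p1) ** 2

# Deterministic policies, written as (action on O=0, action on O=1).
policies = list(product([0, 1], repeat=2))

# UDT 1.1 / policy selection: pick the whole observation-action map at once.
best = max(policies, key=lambda p: utility(*p))
print(best, utility(*best))  # (1, 1) with utility 1

# Per-observation choice: starting from (0, 0), the O=0 copy holds the other
# input's action fixed and compares utility(0, 0) = 0 with utility(1, 0) = -99,
# so it stays put -- (0, 0) is a stable but suboptimal fixed point.
print(utility(0, 0), utility(1, 0))  # 0 vs -99
```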
I think the 1.1 patch is needed to solve problems with coordination/amnesia/prediction, and moreover these are all the same set of problems.
Coordination: two people wake up in rooms painted different colors (red and blue). Each is asked to choose a button (A or B). If they choose different buttons, both get $100. One possible winning strategy is red->A, blue->B.
Amnesia: on two consecutive days, you wake up with amnesia in rooms painted different colors and need to choose a button. If you choose different buttons on different days, you get $100. Winning strategy is same as above.
Prediction: you wake up in a room painted either red or blue and are asked to choose a button. At the same time, a predictor predicts what you would do if the room color was different. If that would lead to you choosing a different button, you get $100. Winning strategy is same as above.
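All three reduce to the same policy-selection problem. Here’s a minimal sketch (my own illustration) that scores every observation-action map for the red/blue version; the two symmetry-breaking maps are exactly the winners.

```python
from itertools import product

def payoff(policy: dict) -> int:
    """Both copies/players run the same map; $100 iff the two rooms pick different buttons."""
    return 100 if policy["red"] != policy["blue"] else 0

for acts in product("AB", repeat=2):
    policy = dict(zip(("red", "blue"), acts))
    print(policy, payoff(policy))
# Only {'red': 'A', 'blue': 'B'} and {'red': 'B', 'blue': 'A'} pay $100 --
# choosing an action for each observation in isolation can't guarantee you
# land on one of them, but choosing over whole maps can.
```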
Your comment here makes it sound like the FDT paper said “the difference between UDT 1.1 and UDT 1.0 isn’t important, so we’ll just endorse UDT 1.0”, where what the paper actually says is:
I don’t know why it claims the difference only crops up in multi-agent dilemmas, if that’s wrong.
Yes, good catch.
It does cite them, on page 2: “Ideas reminiscent of FDT have been explored by many, including Spohn (2012), Meacham (2010), Yudkowsky (2010), Dai (2009), Drescher (2006), and Gauthier (1994).”
My model is that ‘FDT’ is used in the paper instead of ‘UDT’ because:
The name ‘UDT’ seemed less likely to catch on.
The term ‘UDT’ (and ‘modifier+UDT’) had come to refer to a bunch of very different things over the years. ‘UDT 1.1’ is a lot less ambiguous, since people are less likely to think that you’re talking about an umbrella category encompassing all the ‘modifier+UDT’ terms; but it’s a bit of a mouthful.
I’ve heard someone describe ‘UDT’ as “FDT + a theory of anthropics”—i.e., it builds in the core idea of what we’re calling “FDT” (“choose by imagining that your (fixed) decision function takes on different logical outputs”), plus a view to the effect that decisions+probutilities are what matter, and subjective expectations don’t make sense. Having a name for the FDT part of the view seems useful for evaluating the subclaims separately.
The FDT paper introduces the FDT/UDT concept in more CDT-ish terms (for ease of exposition), so I think some people have also started using ‘FDT’ to mean something like ‘variants of UDT that are more CDT-ish’, which is confusing given that FDT was originally meant to refer to the superset/family of UDT-ish views. Maybe that suggests that researchers feel more of a need for new narrow terms to fill gaps, since it’s less often necessary in the trenches to crisply refer to the superset.
I don’t suppose you could be clearer about how anthropics works in FDT? Like, are there any write-ups of how it solves any of the traditional anthropic paradoxes? And why don’t subjective expectations make sense?
I read it several months ago and was wondering the same thing. That said, it’s more than just a branding change: FDT is much more clearly put and much easier to understand.
Thanks heaps, that really makes it much less confusing!