[Question] Decisions with Non-Logical Counterfactuals: request for input

A lightning-fast recap of counterfactuals in decision theory: generally, when facing a decision problem, a natural approach is to consider questions of the form “what would happen if I took action $a$?”. This construction is ambiguous in its use of the word “I” to refer to both the real and the counterfactual agent, which is the root of the 5-and-10 problem (in which an agent reasoning about its own choice between $5 and $10 can use a spurious proof about the action it doesn’t take to talk itself into taking the $5). The Functional Decision Theory of Yudkowsky and Soares (2017) provides a more formal version of this construction, but suffers from (a more explicit version of) the same ambiguity by taking as its counterfactuals logical statements about its own decision function, some of which must necessarily be false. This poses an issue for an embedded agent implementing FDT: knowing itself to be part of its environment, it might be able to determine the full structure of its own decision function, and subsequently run into trouble when it can derive a contradiction from its counterlogical assumptions.
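To see where the necessarily-false statements come from, here is a rough schematic of the kind of expression involved (my own gloss, not the paper’s notation). Writing $D$ for the agent’s decision function, the agent evaluates, for each available action $a$, the expected utility of the hypothetical in which its own decision function outputs $a$, schematically

$$a^* \;=\; \operatorname*{arg\,max}_{a}\ \mathbb{E}\big[\,U \;\big|\; D(\text{situation}) = a\,\big],$$

and for every candidate $a$ other than the $a^*$ actually output, the supposition $D(\text{situation}) = a$ is a false statement about a fully specified function; this is exactly the counterlogical at issue.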

Soares and Levinstein (2017) explore this issue but defer it to future development of “a non-trivial theory of logical counterfactuals” (p. 1). I’ve been trying to explore a different approach: replacing the core counterfactual question of “what if my algorithm had a different output?” with “what if an agent with a different algorithm were in my situation?”. Yudkowsky and Soares (2017) consider a related approach, but dismiss it:

In attempting to avoid this dependency on counterpossible conditionals, one might suggest a variant FDT′ that asks not “What if my decision function had a different output?” but rather “What if I made my decisions using a different decision function?” When faced with a decision, an FDT′ agent would iterate over functions $g$ from some set $G$, consider how much utility she would achieve if she implemented that function instead of her actual decision function, and emulate the best $g$. Her actual decision function $f$ is the function that iterates over $G$, and $f \notin G$.

However, by considering the behavior of FDT′ in Newcomb’s problem, we see that it does not save us any trouble. For the predictor predicts the output of $f$, and in order to preserve the desired correspondence between predicted actions and predictions, FDT′ cannot simply imagine a world in which she implements $g$ instead of $f$; she must imagine a world in which all predictors of $f$ predict as if $f$ behaved like $g$—and then we are right back to where we started, with a need for some method of predicting how an algorithm $f$ would behave if (counterpossibly) $f$ behaved different from usual. (p. 7)
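As a concrete illustration of the quoted loop, here is a minimal Python sketch under my own naming (`candidate_functions`, `imagine_world`, and `utility_of` are placeholders, not anything from the paper); the point is just to mark where the counterpossible conditional re-enters at the scoring step:

```python
# Illustrative sketch only; all names here are placeholders of mine.

def fdt_prime(situation, candidate_functions, imagine_world, utility_of):
    """Iterate over candidate decision functions g (the set G), score the
    world in which the agent behaves like g, and emulate the best one."""
    best_g, best_score = None, float("-inf")
    for g in candidate_functions:
        # The problematic step: the predictors in `situation` were built to
        # predict f (the function running this loop), so scoring g requires
        # imagining, counterpossibly, that predictors of f predict as if f
        # behaved like g.
        score = utility_of(imagine_world(situation, as_if_f_behaved_like=g))
        if score > best_score:
            best_g, best_score = g, score
    return best_g(situation)  # emulate the best g
```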

I think it should be possible to avoid this objection by eliminating $f$ from consideration entirely. By only considering a separate agent that implements $g$ directly, we don’t need to imagine predictors of $f$ attempting to act as though $f$ were $g$; we can just imagine predictors of $g$ instead.

More concretely, we can consider an embedded agent $A$, with a world-model $M$ of type $\mathcal{W}$ (a deliberately-vague container for everything the agent believes about the real world, constructed by building up from sense-data and axiomatic background assumptions by logical inference), and a decision function $D$ of type $\mathcal{W} \to \mathcal{A}$ (interpreted as “given my current beliefs about the state of the world, what action should I take?”, with $\mathcal{A}$ the set of available actions). Since $A$ is embedded, we can assume $M$ contains (as a background assumption) a description of $A$, including the world-model $M$ (itself) and the decision function $D$. For a particular possible action $a$, we can simultaneously construct a counterfactual world-model $M_a$ and a counterfactual decision function $D_a$ such that $D_a$ maps $M_a$ to $a$, and otherwise invokes $D$ (with some diagonalisation-adjacent trickery to handle the necessary self-references), and $M_a$ is the result of replacing all instances of $D$ in $M$ with $D_a$. $M_a$ is a world-model similar to $M$, but incrementally easier to predict, since instead of the root agent $A$, it contains a counterfactual agent $A_a$ with world-model $M_a$ and decision function $D_a$, which by construction immediately takes action $a$. Given further a utility function $U$ of type $\mathcal{W} \to \mathbb{R}$, we can complete a description of $D$ as “return the $a$ that maximizes $U(M_a)$”.
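To make the shape of this concrete, here is a minimal Python sketch under some crude simplifying assumptions of my own: a world-model is just a dictionary, the embedded agent appears in it under a single key, and the diagonalisation-adjacent trickery is stood in for by an object-identity check. None of the names below are part of the proposal itself; they are purely illustration.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

WorldModel = Dict[str, Any]                  # crude stand-in for the type W
Action = str
DecisionFn = Callable[[WorldModel], Action]  # crude stand-in for W -> A


@dataclass
class Agent:
    world_model: WorldModel
    decision_fn: DecisionFn


def make_counterfactual(m: WorldModel, d: DecisionFn, a: Action):
    """Construct (M_a, D_a): D_a maps M_a to a and otherwise invokes D;
    M_a is M with the root agent replaced by the counterfactual agent A_a."""

    def d_a(w: WorldModel) -> Action:
        # The identity check stands in for the diagonalisation-adjacent
        # trickery: on the counterfactual world-model, return a; elsewhere,
        # defer to D.
        return a if w is m_a else d(w)

    m_a: WorldModel = dict(m)
    # A_a carries world-model M_a and decision function D_a, so by
    # construction it immediately takes action a.
    m_a["agent"] = Agent(world_model=m_a, decision_fn=d_a)
    return m_a, d_a


def decide(m: WorldModel, d: DecisionFn, actions: List[Action],
           utility: Callable[[WorldModel], float]) -> Action:
    """Return the a that maximizes U(M_a). (In the real construction D would
    refer to itself here; passing it in as `d` sidesteps that for the sketch.)"""
    return max(actions, key=lambda a: utility(make_counterfactual(m, d, a)[0]))
```

The relevant feature is that $D_a$ is an ordinary, fully defined function: anything represented inside $M_a$ (a predictor, say) that wants to know what $A_a$ does can simply evaluate $D_a$ and find that it returns $a$, with no counterlogical supposition anywhere.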

I have a large number of words of draft-thoughts exploring this idea in more detail, including a proper sketch of how to construct $M_a$ and $D_a$, and an exploration of how it seems to handle various decision problems. In particular $A$ seems to give an interesting response to Newcomb-like problems (particularly the transparent variant), where it is forced to explicitly consider the possibility that it might be being simulated, and prescribes only a conditional one-boxing in the presence of various background beliefs about why it might be being simulated (and is capable of two-boxing without self-modification if it believes itself to be in a perverse universe where two-boxers predictably do better than one-boxers). But that seems mostly secondary to the intended benefit, which is that none of the reasoning about the counterfactual agents requires counterlogical assumptions—the belief that $D_a$ returns action $a$ needn’t be assumed, but can be inferred by inspection, since $D_a$ is a fully defined function that could literally be written down if required.

Before I try to actually turn those words into something postable, I’d like to make sure (as far as I can) that it’s not an obviously flawed idea / hasn’t already been done before / isn’t otherwise uninteresting, by running it past the community here. Does anyone with expertise in this area have any comments on the viability of this approach?