# Two Major Obstacles for Logical Inductor Decision Theory

In this post, I describe two major obstacles for logical inductor decision theory: **untaken actions are not observable** and **no updatelessness for computations**. I will concretely describe both of these problems in a logical inductor framework, but I believe that both issues are general enough to transcend that framework.

**Obstacle 1: Untaken Actions are not Observable**

Consider the following formalization of the 5 and 10 problem:

Let $\{\mathbb{P}_n\}$ be a logical inductor. Let $A_n$ be an agent which uses this logical inductor to output either 5 or 10 as follows. The utility function for agent $A_n$ is simply $U_n=\frac{A_n}{10}$, and the source code for agent $A_n$ is given by

$$A_n:=\begin{cases}10&\text{if }\mathbb{E}_n(U_n\mid A_n=10)>\mathbb{E}_n(U_n\mid A_n=5)\\ 5&\text{otherwise.}\end{cases}$$

Ideally, we would be able to say something like $\lim_{n\to\infty}\mathbb{E}_n(U_n)=1$. Unfortunately, this is not the case. There exists a logical inductor such that $A_n=5$ for all $n$. Consider a construction of a logical inductor similar to the one in the paper, but for which there is a single trader that starts with most of the wealth. This trader spends all of its wealth on conditional contracts forcing $\mathbb{E}_n(U_n\mid A_n=5)\approx\frac{1}{2}$ and $\mathbb{E}_n(U_n\mid A_n=10)\approx 0$. Note that the bets made conditioned on $A_n=5$ are accurate, while the bets made conditioned on $A_n=10$ do not matter, since the condition will be false. (No other trader will have enough wealth to substantially change the expectations.) This trader will therefore lose no money, and will be able to do the same thing again next round. (This assumes that the values of $A_n$ and $U_n$ are computed in time for round $n+1$ of the deductive process. If this is not the case, we could do the same trick on a subsequence with this property.)
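To make the failure mode concrete, here is a small toy simulation (entirely my own construction, not a real logical inductor): the market's conditional expectations are pinned by a hypothetical wealthy trader at $\mathbb{E}(U\mid A=5)=\frac12$ and $\mathbb{E}(U\mid A=10)=0$, and the argmax agent then takes 5 on every round, while the trader's conditional bets never pay out wrong.

```python
# Toy illustration (not a real logical inductor): a market whose conditional
# expectations are pinned by a wealthy trader at E(U | A=5) = 0.5 and
# E(U | A=10) = 0, so the argmax agent picks 5 forever.

def agent(cond_exp):
    """Take 10 iff the market expects more utility conditioned on 10."""
    return 10 if cond_exp[10] > cond_exp[5] else 5

def run(rounds=100):
    pinned = {5: 0.5, 10: 0.0}  # trader-enforced conditional expectations
    history = []
    for _ in range(rounds):
        a = agent(pinned)
        u = a / 10  # true utility U_n = A_n / 10
        # The bet conditioned on A=5 is settled and is exactly right (u == 0.5),
        # so the trader loses nothing; the bet conditioned on A=10 is voided.
        assert u == pinned[a]
        history.append(a)
    return history

print(run(10))  # -> [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]
```

The inner assertion is the point: the only conditional bets that ever settle are the accurate ones, so nothing ever pushes the market away from this equilibrium.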

This same phenomenon has been observed in many other contexts. The spurious counterfactuals that can arise in proof-based systems are another manifestation of the same problem.

One attempt at a fix is epsilon exploration. (The analogous fix in the proof-based world is the chicken rule.) Here, you take every possible action with probability at least $\varepsilon$. Then, when conditioning on taking an action you normally wouldn't take, you will have some data on what happened when simpler versions of yourself randomly explored and took that action. The epsilon exploration version of the above agent is

$$A_n:=\begin{cases}10&\text{if }\mathbb{P}_n(A_n=10)<\varepsilon\\ 5&\text{if }\mathbb{P}_n(A_n=5)<\varepsilon\\ 10&\text{if }\mathbb{E}_n(U_n\mid A_n=10)>\mathbb{E}_n(U_n\mid A_n=5)\\ 5&\text{otherwise.}\end{cases}$$

This agent uses pseudorandomness to explore, and does in fact converge to choosing 10 all but an epsilon proportion of the time (the lower density of taking the 5 is at most $\varepsilon$). This fix has major problems. The obvious problem is that taking a bad action with probability epsilon could be disastrous for an agent that makes many different decisions.
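A minimal sketch of why exploration fixes the original 5 and 10 problem (the simple empirical-average "market" and all names here are my own stand-ins; a real logical inductor is far subtler): because each action keeps being sampled with probability about $\varepsilon$, the agent accumulates data on both conditional utilities and learns that 10 is better.

```python
import random

# Epsilon-exploration sketch: explore each action with probability eps,
# otherwise take the argmax of crude empirical conditional-utility estimates.

def simulate(rounds=2000, eps=0.05, seed=0):
    rng = random.Random(seed)
    totals = {5: [0.0, 1], 10: [0.0, 1]}  # [sum of observed U, count]
    actions = []
    for _ in range(rounds):
        est = {a: s / c for a, (s, c) in totals.items()}
        if rng.random() < 2 * eps:         # exploration clause:
            a = rng.choice([5, 10])        # each action explored w.p. eps
        else:                              # exploit: argmax of estimates
            a = 10 if est[10] > est[5] else 5
        u = a / 10                         # true utility U_n = A_n / 10
        totals[a][0] += u
        totals[a][1] += 1
        actions.append(a)
    return actions

acts = simulate()
# After convergence the agent takes the 5 only when exploring (~eps of rounds).
print(sum(1 for a in acts[-500:] if a == 5) / 500)
```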

There is a larger problem with this approach. There are now two different ways you could take any given action. You could take that action because it produces the highest expected utility, or you could take it because your exploration clause was triggered. These two different ways of taking the action could be very different. Trivially, they could have different runtimes. More importantly, they could affect the extent to which your action is predictable.

This problem will, for example, cause a logical inductor EDT agent to defect in a prisoner's dilemma against a similarly powerful agent that is trying to imitate it. If such an agent were to start out cooperating, random defection would be uncorrelated with the opponent's prediction. Thus the explored defection will be profitable, and the agent will learn to defect. The opponent will learn this and start predicting defection more and more, but in the long run, the agent will view this as independent of its action.

To see concretely how epsilon exploration fails, consider a different utility function $U_n=\frac{A_n}{10}+9\,\mathbb{P}_n(A_n=5)$, and the same agent above with exploration. This agent gets a reward for taking the 10, but a much bigger reward for being predicted to take the 5. This is basically Newcomb's problem, but the predictor is no more powerful than the agent, and in fact has the same epistemic state as the agent.

The problem is that in this expression for $U_n$, the term $\mathbb{P}_n(A_n=5)$ is a constant that the agent has arbitrary access to. It is not changed by whether the agent takes the 5 or the 10 when exploring, since $\mathbb{P}_n$ does not know if the agent will explore.

Thus, if the agent almost always took the 5, the rare occasions where it took the 10 would produce more utility, and similarly if the agent almost always took the 10, the rare occasions where it took the 5 would produce less utility. The agent will converge to almost always taking the 10, just like last time, and will miss out on the large reward for being predicted to take the 5.
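This dynamic can be run as a deterministic toy (all names and constants here, including the coefficient 9 and the predictor's update rule, are my own illustrative choices). Because the prediction term is the same known constant under both conditionals, the agent's conditional expectations differ only by the $A_n/10$ term, so it always prefers 10, and the predictor duly learns to predict 10.

```python
# Toy sketch of the Newcomb-like failure: at decision time P_n(A_n = 5) is a
# constant the agent can read off, so exploring can never move it.

def conditional_expectation(action, p5):
    # E_n(U_n | A_n = a) = a/10 + 9 * P_n(A_n = 5): the prediction term is
    # identical under both conditionals, so it never favors taking the 5.
    return action / 10 + 9 * p5

def run(rounds=1000):
    p5 = 0.5  # predictor's initial credence that the agent takes the 5
    total = 0.0
    for _ in range(rounds):
        a = 10 if conditional_expectation(10, p5) > conditional_expectation(5, p5) else 5
        total += a / 10 + 9 * p5                      # realized utility
        p5 = 0.9 * p5 + 0.1 * (1.0 if a == 5 else 0.0)  # predictor updates
    return total / rounds

print(run())  # -> just above 1.0; always taking the 5 would average ~9.5
```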

**Obstacle 2: No Updatelessness for Computations**

The second obstacle is related to the counterfactual mugging problem. There is an agent $A$ and a predictor $P$. The predictor is much more powerful than the agent, except that there is a bit $b$ that the agent knows but the predictor does not. The predictor's beliefs about $b$ are uniform. Again, the agent can take the 5 or the 10. However, if $b=1$, then the agent will get a large reward if $P$ predicts that, conditioned on $b=0$, the agent takes the 5.

We will formalize this by giving the agent and the predictor two different logical inductors. The agent will use $\{\mathbb{P}_n\}$ and the predictor will use $\{\mathbb{P}'_n\}$. There will be a sequence of bits $b_n$, which are sentences mutually independent with each other and with everything else. They will be given truth values independently, uniformly at random. $\{\mathbb{P}_n\}$ will use a deductive process that knows the value of $b_m$ for $m\le n$, and further $b_n$ will be inexplicable by traders that have oracle access to $b_m$ for $m<n$ on day $n$. (Technically, logical inductors have computable deductive processes and traders, but it is not a problem to modify the definition to give oracle access to the traders and the program that computes the deductive process.)

$\{\mathbb{P}'_n\}$ will be a very powerful logical inductor, but it will receive the bits much more slowly. $\mathbb{P}'_n$ will know all the bits $b_m$ with $f(m)\le n$, for some fast-growing function $f$. $\{\mathbb{P}'_n\}$ will also have perfectly accurate counterfactuals about what the agent will do if fed various fake values for $b_n$.

The utility function will be $U_n=\frac{A_n}{10}+10\,b_n\,\mathbb{P}'_n(A_n=5\mid b_n=0)$.

First, observe that if $b_n$ were not yet known by the agent, everyone would assign probability $\frac{1}{2}$ to $b_n$, and this problem would be similar to the Newcomblike problem above. However, by making the predictor very powerful, we eliminated the difficulties above. The predictor can predict when the agent will explore, so exploring into the 5 is rewarded and exploring into the 10 is punished, and the agent would converge to almost always taking the 5. However, by adding the hidden bit, we created new problems.

In particular, if $b_n=1$, the agent has no control over what it would do if $b_n=0$, and if $b_n=0$, the agent does not care about what it is predicted to do. This agent will again converge to taking the 10 almost always, and miss out on the large reward (if $b_n=1$) for being predicted to take the 5 if $b_n=0$. Ideally, the agent would take the 10 if $b_n=1$, and take the 5 if $b_n=0$.
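A toy expected-utility comparison makes the gap concrete (the reward coefficient 10 and all names are my own illustrative choices; the predictor is idealized as knowing the agent's policy exactly, so $\mathbb{P}'(A=5\mid b=0)$ is just an indicator of what the policy does at $b=0$).

```python
# Compare the updateful policy (always take 10) with the ideal policy
# (take 5 when b=0, take 10 when b=1) under U = A/10 + 10*b*P'(A=5 | b=0),
# with b uniform on {0, 1} and a predictor that knows the policy.

def expected_utility(policy):
    p5_given_b0 = 1.0 if policy(0) == 5 else 0.0  # predictor's counterfactual
    total = 0.0
    for b in (0, 1):
        total += 0.5 * (policy(b) / 10 + 10 * b * p5_given_b0)
    return total

updateful = lambda b: 10                    # takes the larger payout either way
updateless = lambda b: 5 if b == 0 else 10  # sacrifices at b=0 to be
                                            # predictably cooperative
print(expected_utility(updateful))   # -> 1.0
print(expected_utility(updateless))  # -> 5.75
```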

Although this problem may seem contrived, it is very important. This kind of thing actually does show up all the time. If you do not know a secret, it might be a good idea to keep plausible deniability about whether you know one. This might incur a social cost, which you are willing to pay, since it causes you to act the same way regardless of whether or not you know a secret, and thus causes you, counterfactually, to be better able to keep the secret if you had one. Poker is all about this phenomenon.

More importantly, this problem needs to be understood for reflective stability. If an agent does not know the value of $b_n$ yet, but knows that it will take the 10 either way, the agent might want to commit to taking the 5 if $b_n=0$. This is a failure of reflective stability. The agent would prefer to modify itself to use a different decision theory. The fact that this happens even in theory is a bad sign for any decision theory, and is an especially bad sign for our ability to understand the output of that decision theory.

In a Bayesian framework, this would be solved using Updateless Decision Theory. The agent would not update on its observation of $b_n$. It would instead use its prior about $b_n$ to choose a policy, a function from its observation $b_n$ to its action $A_n$. This strategy would work, and the agent would take the 10 only if $b_n=1$.
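Updateless policy selection can be sketched in a few lines (constants and names are my own, matching the illustrative reward coefficient 10 above): instead of acting on the observed bit, score every policy $b\mapsto A$ under the prior $b\sim\text{uniform}\{0,1\}$ and pick the best.

```python
from itertools import product

# Updateless policy selection for the hidden-bit problem: the predictor is
# idealized as knowing the chosen policy, so P'(A=5 | b=0) is an indicator.

def prior_score(policy):
    # policy is a dict {0: action, 1: action};
    # U = A/10 + 10 * b * [policy takes 5 at b=0], with b uniform on {0, 1}.
    predicted_5 = 1.0 if policy[0] == 5 else 0.0
    return sum(0.5 * (policy[b] / 10 + 10 * b * predicted_5) for b in (0, 1))

policies = [dict(zip((0, 1), acts)) for acts in product((5, 10), repeat=2)]
best = max(policies, key=prior_score)
print(best)  # -> {0: 5, 1: 10}: take the 5 if b=0, the 10 if b=1
```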

Unfortunately, we do not know how to combine this strategy with logical uncertainty. The beliefs of a logical inductor do not look like Bayesian beliefs, where you can go back to your prior. (Universal Inductors were an attempt to do this, but they do not work for this purpose.)


I’ll just note that in a modal logic or halting oracle setting you don’t need the chicken rule, as we found in this old post: https://agentfoundations.org/item?id=4 So it seems like at least the first problem is about the approximation, not the thing being approximated.

Yeah, the 5 and 10 problem in the post actually can be addressed using provability ideas, in a way that fits in pretty naturally with logical induction. The motivation here is to work with decision problems where you can't prove statements $A=a\to U=u$ for agent $A$, utility function $U$, action $a$, and utility value $u$, at least not with the amount of computing power provided, but you want to use inductive generalizations instead. That isn't necessary in this example, so it's more of an illustration.

To say a bit more, if you make logical inductors propositionally consistent, similarly to what is done in this post, and make them assign things that have been proven already probability 1, then they will work on the 5 and 10 problem in the post.

It would be interesting if there was more of an analogy to explore between the provability oracle setting and the inductive setting, and more ideas could be carried over from modal UDT, but it seems to me that this is a different kind of problem that will require new ideas.

In obstacle 1, is the example of a trader that has accumulated most of the wealth representative of the fundamental difficulty, or are there other ways that the naive decision theory fails? If it is representative, would it be possible to modify the logical inductor such that, when facing a decision, traders are introduced with sufficient wealth betting on all outcomes so that each outcome's probability is at least epsilon, forcing the problematic trader to lose its wealth (making sure that every decision starts from a position of thinking that each action could be taken with probability at least epsilon, rather than forcing that this is the outcome of the decision)?

It’s hard to analyze the dynamics of logical inductors too precisely, so we often have to do things that feel like worst-case analysis, like considering an adversarial trader with sufficient wealth. I think that problems that show up from this sort of analysis can be expected to correspond to real problems in superintelligent agents, but that is a difficult question. The malignancy of the universal prior is part of the reason.

As to your proposed solution, I don't see how it would work. Scott is proposing that the trader make conditional contracts, which are in effect voided if the event they are conditioned on doesn't happen, so the trader doesn't actually lose anything in this case. (It isn't discussed in this post, but conditional contracts can be built out of the usual sort of bets, with payoffs given by the definition of conditional probability.) So, in order to make the trader lose money, the events need to actually happen sometimes, not just be expected to happen with some non-negligible probability by the market.
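The parenthetical construction can be sketched concretely (my own framing of the standard construction; function names are mine). To bet on $X$ given $Y$ at the conditional price $p=\mathbb{P}(X\wedge Y)/\mathbb{P}(Y)$: buy one share of "$X\wedge Y$" and sell $p$ shares of "$Y$". The portfolio has net cost zero, and if $Y$ is false both legs cancel, voiding the bet.

```python
# A conditional bet built from two unconditional bets. The trader buys one
# share of "X and Y" (price P(X & Y)) and sells p shares of "Y" (price
# p * P(Y)); with p = P(X & Y) / P(Y) the two prices cancel exactly.

def conditional_bet_payoff(x, y, p):
    """Net payoff of the two-leg portfolio once truth values x, y settle."""
    long_leg = 1.0 if (x and y) else 0.0  # bought share pays on X & Y
    short_leg = p * (1.0 if y else 0.0)   # sold shares owe p on Y
    return long_leg - short_leg

p = 0.6  # the market's conditional price P(X & Y) / P(Y)
print(conditional_bet_payoff(True, True, p))   # Y true, X true:  1 - 0.6 =  0.4
print(conditional_bet_payoff(False, True, p))  # Y true, X false: 0 - 0.6 = -0.6
print(conditional_bet_payoff(True, False, p))  # Y false: bet voided ->     0.0
```

This is why the adversarial trader in obstacle 1 is safe: whenever the condition fails, its payoff is exactly zero, so only conditions that actually occur can cost it anything.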