# My Current Take on Counterfactuals

I’ve felt like the problem of counterfactuals is “mostly settled” (modulo some math working out) for about a year, but I don’t think I’ve really communicated this online. Partly, I’ve been waiting to write up more formal results. But other research has taken up most of my time, so I’m not sure when I *would* get to it.

So, the following contains some “shovel-ready” problems. If you’re convinced by my overall perspective, you may be interested in pursuing some of them. I think these directions have a high chance of basically solving the problem of counterfactuals (including logical counterfactuals).

Another reason for posting this rough write-up is to get feedback: am I missing the mark? Is this not what counterfactual reasoning is about? Can you illustrate remaining problems with decision problems?

I expect this to be *much more difficult to read* than my usual posts. It’s a brain-dump. I make a lot of points which I have not thought through sufficiently. Think of it as a frozen snapshot of a work in progress.

# Summary

I can Dutch-book any agent whose subjective counterfactual expectations don’t equal their *conditional* expectations. I conclude that counterfactual expectations should equal conditional probabilities. IE, evidential decision theory (EDT) gives the correct counterfactuals.

*However*, the Troll Bridge problem is real and concerning: EDT agents are doing silly things here.

Fortunately, there appear to be ways out. One way out is to maintain that *subjective counterfactual expectations should equal conditional expectations* while also maintaining a distinction between those two things: counterfactuals are not *computed from* conditionals. As we shall see, this allows us to ensure that the two are always equal in *real* situations, while strategically allowing them to differ in some *hypothetical* situations (such as Troll Bridge). This seems to solve all the problems!

However, I have not yet concretely constructed any way out. I include detailed notes on open questions and potential avenues for success.

# What does it mean to pass Troll Bridge?

It’s important to briefly clarify the nature of the goal. First of all, Troll Bridge really relies on the agent “respecting logic” in a particular sense.

Learning agents who don’t reason logically, such as RL agents, can be fooled by a troll who punishes exploration. However, that doesn’t seem to get at the point of Troll Bridge. It just seems like an unfair problem, then.

**The first test of “real Troll Bridge” is:** does the example make crossing totally impossible? Or does it depend on the agent’s starting beliefs? If the probabilistic troll bridge only concluded that you won’t cross *if you have a sufficiently high probability that PA is inconsistent*, this would come off as totally reasonable rather than insane. It’s a refusal to cross *regardless of prior* which seems so problematic.

If an RL agent starts out thinking crossing is good, then this impression will continue to be reinforced (if the troll is only punishing exploration). Such an example only shows that *some* RL agents cannot cross, due to the combination of low prior expectation that crossing is good, and exploration being punished. This is unfortunate, and illustrates a problem with exploration if exploration can be detected by the environment; but it is not as severe a problem as Troll Bridge proper.

**The second test** is whether we can strengthen the issue to 100% failure *while still having a sense that the agent has a way out, if only it would take it*. In the RL example, we can strengthen the exploration-punisher by also punishing *unjustified crossing* more generally, in the sense of crossing without an empirical history of successful crosses to generalize from. If your prior suggests crossing is bad, you’ll be punished every time you cross because it’s exploration. If your prior suggests crossing is good, you’ll be punished every time you cross because your pro-crossing stance is not empirically justified. So no agents are rewarded for crossing. This passes the first test of “real Troll Bridge”. But there is no longer any sense that the agent is crazy. In original Troll Bridge, there’s a strong intuition that “the agent could just cross, and all would be well”. Here, there is nothing the agent can possibly do.

Secondly, an agent could reason logically but with some looseness. This can fortuitously block the Troll Bridge proof. However, the approach seems worryingly unprincipled, because we can “improve” the epistemics by tightening the relationship to logic, and get a decision-theoretically much worse result.

The problem here is that we have some epistemic principles which suggest tightening up is good (it’s free money; the looser relationship doesn’t lose much, but it’s a dead-weight loss), and no epistemic principles pointing the other way. So it feels like an unprincipled exception: “being less dutch-bookable is generally better, but hang loose in this one case, would you?”

Naturally, this approach is still very interesting, and could be pursued further—especially if we could give a more principled reason to keep the observance of logic loose in this particular case. But this isn’t the direction this document will propose. (Although you *could* think of the proposals here as giving more principled reasons to let the relationship with logic be loose, sort of.)

So here, we will be interested in solutions which “solve troll bridge” in the stronger sense of getting it right while fully respecting logic. IE, updating to probability 1 (/0) when something is proven (/refuted).

There is another “easy” way to pass Troll Bridge, though: just be CDT. (By CDT, I don’t mean classical causal decision theory—I mean decision theory which uses *any* notion of counterfactuals, be it based on physical causality, logical causality, or what-have-you.)

# The Subjective Theory of Counterfactuals

Sam presented Troll Bridge as an argument in favor of CDT. For a long time, I regarded this argument with skepticism: yes, CDT allows us to solve it, but what is logical causality?? What are the correct counterfactuals?? I was incredulous that we could get real answers to such questions, so I didn’t accept CDT as a real answer.

I gradually came to realize that Sam didn’t see us as needing all that. For him, counterfactuals were simply a more general framework, a generality which happened to be needed to encode what humans see as the correct reasoning.

Look at probability theory as an analogy.

If you were trying to invent the probability axioms, you would be led astray if you thought too much about what the “objectively correct” beliefs are for any given situation. Yes, there is a very interesting question of what the prior should be, in Bayesianism. Yes, there are things we can say about “good” and “bad” probability distributions for many cases. However, it was important that at some point someone sat down and worked out the theory of probability under the assumption that those questions were *entirely subjective*, and the only *objective* things we can say about probabilities are the basic coherence constraints, such as P(~A)=1-P(A), etc.

Along similar lines, the subjectivist theory of counterfactuals holds that *we have been led astray* by looking too hard for some kind of correct procedure for taking logical counterfactuals. Instead, starting from the assumption that a very broad range of counterfactuals can be subjectively valid, we should seek the few “coherence” constraints which distinguish rational counterfactual beliefs from irrational.

In this perspective, getting Troll Bridge right isn’t particularly difficult. The Troll Bridge argument is blocked at the step where [the agent proves that crossing implies bad stuff] implies [the agent doesn’t cross]. The agent’s *counterfactual* expected value for crossing can still be high, even if it has proven that crossing is bad. Counterfactuals have a lot of freedom to be different from what you might expect, so they don’t have to respect proofs in that way.

Following the analogy to probability theory, we still want to know what the axioms *are*. How are rational counterfactual beliefs constrained?

I’ll call the minimalist approach “Permissive CDT”, because it makes a strong claim that “almost any” counterfactual reasoning can be subjectively valid (ie, rational):

## Permissive CDT

What I’ll call “Permissive CDT” (PCDT) has the following features:

- There is a basic counterfactual conditional, C(A|B).
- This counterfactual conditional obeys the axiom C(A|B)&B → A.
- There may be additional axioms, but they are weak enough to allow 2-boxing in Newcomb as subjectively valid.
- There is no chicken rule or forced exploration rule; agents always take the action which looks counterfactually best.

Note that *this isn’t totally crazy*. C(A|B)&B → A means that counterfactuals had better take the actual world to the actual world. This means a counterfactual hypothesis sticks its neck out, and can be disproven if B is true (so, if B is an action, we can *make* it true in order to test).

Note that I’ve excluded exploration from PCDT. This means we can’t expect as strong of a learning result as we might otherwise. However, *with* exploration, we would eventually take disastrous actions. For example, if there was a destroy-the-world button, the agent would eventually press it. So, we probably don’t want to force exploration just for the sake of better learning guarantees!

Instead, we want to use “VOI-exploration”. This just means: PCDT naturally chooses some actions which are suboptimal in the short term, due to the long-term value of information. (This is just a fancy way of saying that it’s worthwhile to do experiments sometimes.) To vindicate this approach, we would want some sort of VOI-exploration result. For example, we may be able to prove that PCDT successfully learns under some restrictive conditions (EG, if it knows no actions have catastrophic consequences). Or, even better, we could characterize *what it can learn* in the absence of nice conditions (for example, that it explores everything except actions it thinks are too risky).
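To make VOI-exploration concrete, here is a minimal toy calculation (hypothetical payoffs of my own, not from any formal result): over two rounds, an arm that is worse in immediate expectation is still worth trying once, because the agent keeps it if it turns out good.

```python
# Toy VOI calculation (hypothetical payoffs): a safe arm pays 1 for sure;
# a gamble pays 3 with probability 3/10, else 0. Trying the gamble once is
# short-term suboptimal (EV 9/10 < 1) but optimal over two rounds.
from fractions import Fraction as F

def expected_value(try_gamble_first, p_good=F(3, 10),
                   safe=1, good=3, bad=0):
    if not try_gamble_first:
        return 2 * safe                          # play safe both rounds
    round1 = p_good * good + (1 - p_good) * bad  # 9/10: worse than safe
    # round 2: keep the gamble if it paid off, otherwise fall back to safe
    round2 = p_good * good + (1 - p_good) * safe
    return round1 + round2

print(expected_value(False))  # 2
print(expected_value(True))   # 5/2
```

No exploration is being forced here; experimenting simply maximizes expected value once future rounds are counted.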

*I claim PCDT is wrong.* I think it’s important to set it up properly in order to *check* that it’s wrong, since belief in some form of CDT is still widespread, and since it’s actually a pretty plausible position. But I think my Dutch Book argument is fairly damning, and (as I’ll discuss later) I think there are other arguments as well.

Sam advocated the PCDT-like direction for some time, but eventually, he came to agree with my Dutch Book argument. So, I think Sam and I are now mostly on the same page, favoring a version of subjective counterfactuals which requires more EDT-like expectations.
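To spell out the mechanics of the Dutch Book, here is a toy sketch (my own numerical rendering of the argument; the prices are made up): if the counterfactual expectation and the conditional expectation for an action differ, a bookie buys the cheaper contract and sells the dearer one, with all trades called off unless the action is actually taken.

```python
# Toy Dutch Book: the agent prices a counterfactual contract on crossing at
# c_expect and a conditional contract at e_expect. The bookie buys the
# cheaper and sells the dearer, all trades refunded unless "cross" happens.
# Since counterfactuals on the actual action must match the actual outcome
# (C(A|B) & B -> A), both contracts settle at the same observed utility.

def bookie_profit(c_expect, e_expect, action_taken, observed_u):
    if action_taken != "cross":
        return 0.0                      # all trades called off; no exposure
    buy = min(c_expect, e_expect)       # bookie bought this contract
    sell = max(c_expect, e_expect)      # bookie sold this contract
    # both settle at observed_u, so the net payoff is sell - buy >= 0
    return (observed_u - buy) - (observed_u - sell)

print(bookie_profit(10.0, 5.0, "cross", 7.0))  # 5.0, whatever U turns out
print(bookie_profit(10.0, 5.0, "stay", 7.0))   # 0.0: bookie never loses
```

The bookie risks nothing and profits whenever the action is taken and the two expectations differ, which is the sense in which the disparity is irrational.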

There are a number of formal questions about PCDT which would be useful to answer. This is part of the “shovel-ready” work I promised earlier. The list is quite long; I suggest skipping to the next section on your first read-through.

**Question: further axioms for PCDT?** What else might we want to assume about the basic counterfactuals? What can’t we assume? (What assumptions would force EDT-like behavior, clashing with the desideratum of allowing 2-boxing? What assumptions lead to EDT-like failure in Troll Bridge? What assumptions allow/disallow sensible logical counterfactuals? What assumptions force inconsistency?)

We can formalize PCDT in logical induction, by adding a basic counterfactual to the language.

We might want counterfactuals to give sensible probability distributions on sentences, so that C(A|B) and C(¬A|B) are treated as mutually exclusive and jointly exhaustive (their probabilities summing to 1). But we definitely don’t want too many “basic logic” assumptions like that, since it could make counterlogicals undefinable, leading us back to problems taking counterfactuals when we know our own actions.

**Question: Explore PCDT and Troll Bridge.** Examine which axioms for PCDT are compatible with successfully passing Troll Bridge (in the sense sketched earlier).

**Question: Learning theory for PCDT?** A learning theory is important for a theory of counterfactuals, because it gives us a story about why we might expect counterfactual reasoning to be correct/useful. If we have strong guarantees about how counterfactual reasoning will come to reflect the true consequences of actions according to the environment, then we can trust counterfactual reasoning. Again, we can formalize PCDT via logical induction. What, then, does it learn? PCDT should have a learning theory somewhat similar to InfraBayes, in the sense of relying on VOI exploration instead of forcing exploration with an explicit mechanism.

Some sort of no-traps assumption is needed.

Weak version: assume that the logical inductor is 1-epsilon confident that the environment is trap-free, or something along those lines (and also assume that the true environment *is* trap-free).

Strong version: don’t assume anything about the initial beliefs. Show that the agent ends up exploring sufficiently *if it believes it’s safe to do so*; that is, show that belief in traps is in some sense the *only* obstacle to exploring enough (and therefore learning). (Provided that the discount rate is near enough to 1, of course; and provided the further learnability assumptions I’ll mention below.)

Stretch goal: deal with human feedback about whether there are traps, venturing into alignment theory rather than just single-agent decision theory.

See later section: applications to alignment.

Some sort of “good feedback” assumption is needed, ensuring that the agent gets enough information about the utility.

The most obvious thing is to assume an RL environment with discounting, much like the learning theory of InfraBayes.

It might be interesting to generalize further, having a broader class of utility functions, with feedback which narrows down the utility incrementally. RL is just a special case of this where the discounting rate determines the amount that any given prefix narrows down the ultimate utility.

Generalizing even further, it could be interesting to abandon the utility as a function of history at all, and instead rely only on subjective expectations (like the Orthodox Case Against Utility Functions suggests).

I suspect this is good for the “stretch goal” mentioned previously, of dealing with human feedback about whether there are traps. See later section: applications to alignment.

Some sort of no-newcomblike assumption is probably needed.

This is very similar to how Sam’s tiling result required a no-newcomb assumption, and asymptotic decision theory required a no-newcomb assumption.

In other words, a “CDT=EDT” assumption. (A Newcomblike problem is precisely one where CDT and EDT differ.)

Like the no-traps assumption, there are two different ways to try and do this:

Weaker: assume the logical inductor is very confident that the environment isn’t Newcomblike.

Stronger: don’t assume anything, but show that the agent ends up differentiating between hypotheses *to the extent it’s possible to do so*. Newcomblike hypotheses make payoffs of actions themselves depend on what actions are taken, and so, are impossible to distinguish via experiment alone. But it should be possible to show that, based on VOI exploration, the agent *eliminates the eliminable hypotheses*, and distinguishes between the rest based on subjective plausibility; and (according to PCDT, but not according to me) this is the best we can hope to do.

Realizability?

There should be a good result assuming realizability, at least. And perhaps that’s enough for a start—it would still be an improvement in our philosophical understanding of counterfactuals, particularly when combined with other results in the research program I’m outlining here.

But I’m also suspicious that there’s *some* degree of ability to deal with unrealizable cases. The right assumption has to do with there being a trader capable of tracking the environment sufficiently.

Unlike Bayes or InfraBayes, we don’t have to worry about hypotheses competing; any predictively useful constraint on beliefs will be learned.

Bayes “has to worry” in the sense that non-realizable cases can create oscillation between hypotheses, due to there not being a unique best hypothesis for predicting the environment. (This might harm decision theory, by oscillating between competing but incompatible strategies.)

InfraBayes doesn’t seem to have *that* worry, since it applies to non-realizable cases. (Or does it? Is there some kind of non-oscillation guarantee? Or is non-oscillation part of what it means for a set of environments to be learnable—IE it can oscillate in some cases?) But InfraBayesian learning is still a one-winner type system, in that we don’t learn *all* applicable partial models; only the *most useful* converges to probability 1.

Logical induction, on the other hand, guarantees that *all* applicable partial models are learned (in the sense of *finitely many violations*). But, to what extent can we translate this to a decision-theoretic learning result?

As an example, I think it should be possible to learn to use a source of randomness in rock-paper-scissors against someone who can perfectly predict your decision, but not the extra randomness.

I’m imagining discretized choices, so there’s a finite number of options of the form “½ ½ 0”, “1 0 0”, etc.

If the adversary were only trying to do the best in each round individually, this is basically a multi-armed bandit problem, where the button “play ⅓ ⅓ ⅓” has the best payoff. But we also need to show that the adversary can’t use a long-con strategy to mislead the learning.

I think one possible proof is to consider a trader who predicts that every option will be at best as good as ⅓ ⅓ ⅓ on average, in the long term. If this trader does poorly, then the adversary must be doing a poor job. If this trader does well, then (because we learn the payoff of ⅓ ⅓ ⅓ correctly for sure) the agent must converge to playing ⅓ ⅓ ⅓. So, either way, the agent must eventually do at least as well as the optimal strategy.
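As a sanity check on the multi-armed-bandit framing, here is a small computation (my own sketch; it covers only the myopic adversary who best-responds each round, seeing which mixture-button was pressed but not the random draw—the long-con adversary is what the trader argument above addresses):

```python
# Discretized mixtures over (rock, paper, scissors) in steps of 1/2, plus
# the "1/3 1/3 1/3" button. The myopic adversary sees the chosen mixture,
# but not the randomness, and plays the pure response that minimizes the
# agent's expected payoff.
from fractions import Fraction as F

# payoff[i][j]: agent plays i, adversary plays j (0=rock, 1=paper, 2=scissors)
payoff = [[0, -1,  1],
          [1,  0, -1],
          [-1, 1,  0]]

def value_against_best_response(p):
    return min(sum(p[i] * payoff[i][j] for i in range(3)) for j in range(3))

mixtures = [(F(a, 2), F(b, 2), F(2 - a - b, 2))
            for a in range(3) for b in range(3 - a)]
mixtures.append((F(1, 3), F(1, 3), F(1, 3)))

best = max(mixtures, key=value_against_best_response)
print(best, value_against_best_response(best))  # the uniform mixture, value 0
```

Against this myopic adversary, every non-uniform button has strictly negative expected payoff, so “play ⅓ ⅓ ⅓” really is the best arm.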

**Question: tiling theory for PCDT?**

It seems like this would admit some version of Sam’s tiling result and/or Diffractor’s tiling sketch.

As with the other results, a major motivation (from where I’m currently sitting) is to show that PCDT is worse than more EDT-like alternatives. I strongly suspect that tiling results for PCDT will be more restrictive than for the decision theory I’m going to advocate for later, *precisely because* tiling must require a no-newcomb type restriction. PCDT, faced with the possibility of encountering a Newcomblike problem at some point, *should absolutely* self-modify in some way.

Also, the tiling result necessarily excludes updateless-type problems, such as counterfactual mugging. None of the proposals considered here will deal with this.

This concludes the list of questions about PCDT. As I mentioned earlier, PCDT is being presented in detail primarily to contrast with my real proposal.

But, before I go into that, I should discuss *another* theory I don’t believe: what I see as the “opposite” of PCDT. My real view will be a hybrid of the two.

# The Inferential Theory of Counterfactuals

The inferential theory is what I see as the core intuition behind EDT.

The intuition is this: *we should reason about the consequences of actions in the same way that we reason about information which we add to our knowledge.*

Another way of putting this is: *hypothetical reasoning and counterfactual reasoning are one and the same*. By *hypothetical *reasoning, I mean temporarily adding something to the set of things you know, in order to see what would follow.

In classical Bayesianism, we add new information to our knowledge by performing a Bayesian update. Hence, the inferential theory says that we Bayes-update on possible actions to examine their consequences.

In logic, adding new information means adding a new axiom from which we can derive consequences. So in proof-based EDT (aka MUDT), we examine what we could prove if we added an action to our set of axioms.

So the inferential theory gives us a way of constructing a version of EDT from a variety of epistemic theories, not just Bayesianism.

**I think the inferential theory is probably wrong.**

Almost any version of the inferential theory will imply getting Troll Bridge wrong, just like proof-based decision theory and Bayesian EDT get it wrong. That’s because the inference **[action *a* implies bad stuff] & [action *a*] ⊢ [bad stuff]** is valid. So the Troll Bridge argument is likely to go through.

Jessica Taylor talks about something she calls “counterfactual nonrealism”, which sounds a lot like what I’m calling the subjective theory of counterfactuals. However, she appears to *also* wrap up the inferential theory in this one package. I’m surprised she views these theories as being so close. I think they’re starting from very different intuitions. Nonetheless, I do think what we need to do is combine them.

# Walking the Line Between CDT and EDT

So, I’ve claimed that PCDT is wrong, because any departure from EDT (and thus the inferential theory) is dutch-book-able. Yet, I’ve also claimed that the inferential theory is itself wrong, due to Troll Bridge. So, what do I think is right?

Well, one way of putting it is that *counterfactual reasoning should match hypothetical reasoning in the real world, but shouldn’t necessarily match it hypothetically.*

This is precisely what we need in order to block the Troll Bridge argument. (At least, that’s one way to block the argument—there are other steps in the argument we could block.)

As a simple proof of concept, consider a CDT agent whose counterfactual expectations for crossing and not crossing *just so happen* to be the same as its evidential expectations, namely: cross = +10, not-cross = 0. This isn’t Dutch-bookable, since the counterfactuals and conditionals agree.

In the Troll Bridge hypothetical, we *prove* that [cross]->[U=-10]. This will make the *conditional* expectations poor. But this *doesn’t have to change the counterfactuals*. So (within the hypothetical), the agent can cross anyway. And crossing gets +10. So, the Löbian proof doesn’t go through. Since the proof doesn’t go through, the conditional expectations can *also* consistently expect crossing to be good; so, we never *really* see a disparity between counterfactual expectation and conditional expectation.

Now, you might be thinking: couldn’t the troll use the disparity between counterfactual and conditional expectation as its trigger to blow up the bridge? I claim not: the troll would, then, be punishing anyone who made decisions in a way different from EDT. Since we know EDT doesn’t cross, it would be obvious that no one should cross. So we lose the sense of a dilemma in such a version of the problem.

OK, but how do we accomplish this? Where does the nice coincidence between the counterfactuals and evidential reasoning come from, if there’s no internal logic requiring them to be the same?

My intuition is that we want something similar to PCDT, but with more constraints on the counterfactuals. I’ll call this **restrictive counterfactual decision theory** (RCDT):

RCDT should have extra constraints on the counterfactual expectations, sufficient to *guarantee that the counterfactuals we eventually learn will be in line with the conditional probabilities we eventually learn*. IE, the two asymptotically approach each other (at least in circumstances where we have good feedback; probably not otherwise).

The constraints *should not* force them to be exactly equal at all times. *In particular*, the constraints must **not** force counterfactuals to “respect logic” in the sense that would force failure on Troll Bridge. For example, if counterfactuals had to respect proofs of [cross]->[U=-10], then a proof that crossing the bridge is bad could stop us from crossing it. We can’t let RCDT do that.

To build intuition, let’s consider how PCDT and RCDT learn in a Newcomblike problem.

Let’s say we’re in a Newcomb problem where the small box contains $1 and the big box may or may not contain $10, depending on whether a perfect predictor believes that the agent will 1-box.

Suppose our PCDT agent starts out mainly believing the following counterfactuals (using C() for counterfactual expectations, counting just the utility of the current round):

C(U|1-box) = 10 * P(1-box)

C(U|2-box) = 1 + 10 * P(1-box)

In other words, the classic physical counterfactuals. I’ll call this hypothesis PCH for physical causality hypothesis.

We also have a trader who thinks the following:

C(U|1-box) = 10

C(U|2-box) = 1

I’ll call this LCH for logical causality hypothesis.
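Restating the two hypotheses as a quick sketch (toy code, just the formulas above written as functions of the inductor’s P(1-box)):

```python
# PCH (physical causality): box contents are fixed by the prediction
# P(1-box), so 2-boxing always looks exactly $1 better than 1-boxing.
def pch(p1box):
    return {"1-box": 10 * p1box, "2-box": 1 + 10 * p1box}

# LCH (logical causality): the predictor tracks the action itself,
# so each action's payoff is fixed regardless of P(1-box).
def lch(p1box):
    return {"1-box": 10.0, "2-box": 1.0}

for p in (0.0, 0.5, 1.0):
    print(p, pch(p), lch(p))
```

At P(1-box)=1 the two hypotheses agree about the value of 1-boxing (10) and disagree only about the untaken action (11 vs. 1); near P(1-box)=0.5 they disagree about both actions, which is the uncertain case discussed next.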

Now, if the agent’s *overall* counterfactual expectation (including value for future rounds, which includes exploration value) is quite different for the two actions, then the logical inductor should be quite clear on which action the agent will take (the one with higher utility); and if that’s so, then LCH and PCH will agree quite closely on the expected utility of said action. (They’ll just disagree about the *other* action.) So little can be learned on such a round. As long as PCH is dominating things, that action would *have* to be 2-boxing, since an agent who mostly believes PCH would only ever 1-box for the VOI—and there’s no VOI here, since in this scenario the two hypotheses agree.

But that all seems well and good—no complaints from me.

Now suppose instead that the overall counterfactual expectation is *quite similar *for the two actions, to the point where traders have trouble predicting which action will be taken.

In that case, LCH and PCH have quite different expectations:

(Figure: even though the x-axis is a boolean variable, I drew lines that are connected across the two sides so that one can easily see how the uncertain case is the average of the two certain cases.)

The significant difference in expectations between LCH and PCH in the uncertain case makes it look as if we can learn something. We don’t know which action the agent will actually take, but we do know that LCH ends up being correct about the value of whatever action is taken. So it looks like PCH traders should lose money.

However, that’s not the case.

Because we have a “basic counterfactual” proposition for what would happen if we 1-box *and* what would happen if we 2-box, and *both of those propositions stick around*, LCH’s bets about what happens in either case *both* matter. This is unlike conditional bets, where if we 1-box, then bets conditional on 2-boxing disappear, refunded, as if they were never made in the first place.

When the logical inductor observes that the agent 1-boxes, and sees the +10, the expected value of *that* counterfactual payoff must move to +10 (since counterfactuals on what actually happens must match what actually happens). However, the *other* counterfactual—the one on 2-boxing—moves to *+11*, because PCH is still the dominant belief; the agent learned that it indeed 1-boxed, so, it now believes that it *would have* received 11 by 2-boxing.

Since 2-boxing is a counterfactual scenario which we’ll never get any solid feedback on, the belief about what reward we could have gotten can stay around 11 forever. Any money LCH bet on a payoff of 1 from 2-boxing is probably lost for good.
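The asymmetry between basic-counterfactual contracts and refundable conditional bets can be spelled out with a toy settlement rule (my own sketch, reusing this example’s numbers):

```python
# After the agent 1-boxes and observes +10, the two contract types treat
# the *untaken* action (2-boxing) very differently.

def settle_counterfactual(actions, taken, observed_u, market_value):
    """Basic counterfactual contracts: the contract on the taken action
    settles at the observed utility; the contract on the untaken action
    sticks around, marked at the market's (PCH-dominated) valuation."""
    return {a: (observed_u if a == taken else market_value[a])
            for a in actions}

def settle_conditional(actions, taken, observed_u):
    """Conditional bets: bets on untaken actions are refunded (None),
    as if they were never made in the first place."""
    return {a: (observed_u if a == taken else None) for a in actions}

market = {"1-box": 10, "2-box": 11}   # PCH's belief after seeing the 1-boxing
print(settle_counterfactual(["1-box", "2-box"], "1-box", 10, market))
print(settle_conditional(["1-box", "2-box"], "1-box", 10))
```

Under the first rule, LCH’s bet that 2-boxing pays 1 is marked against a persistent valuation of 11, a loss that never reverses; under the second rule, that same bet is simply cancelled.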

So it doesn’t appear that LCH traders can make a profit.

**Question: Verify/explore this.**

Can PCDT really favor PCH forever? Is there no LCH strategy which could make profit?

Does LCH necessarily bleed money, or are there versions of LCH which break even here? Does PCH become increasingly confident, or does it just remain stable?

Can we give a rational justification for this behavior?

EG, at one point, Sam claimed that this was a perfectly reasonable epistemic state for an agent who thinks Omega is rewarding it with +10 *for exploring* (because the only way it gets the +10 from 1-boxing is if it does so for VOI, since it would never do so for the sake of the value it gets in that round), but *would not reward it similarly if the action were taken for its own sake* (because on non-exploration rounds, the agent thinks the value of 1-boxing would be 0).

It’s clear that learning is impossible under such a scenario. Yet, this does not necessarily justify a learning rule which lets PCH dominate over LCH forever; we want to be able to learn 5&10 correctly by simply experimenting, and PCH is essentially stopping us from doing that here.

I’m interested in further thoughts on this. If we adopt a learning rule which *doesn’t* favor PCH in the scenario given (IE, straight Newcomb), does it then cause pathological behavior in Sam’s scenario (exploration-rewarding Omega)? If so, how concerning is this? Are there alternative arguments in favor of the PCDT learning behavior that I’m calling pathological?

**Question: how can we avoid this pathology?**

Option 1: More constraints on counterfactuals.

Is there some way to add axioms to the PCDT counterfactuals, to make them (A) learn LCH, while still (B) passing Troll Bridge?

Option 2: Something more like conditional bets.

The desired behavior is like conditional-bet behavior.

However, standard conditional bets won’t quite do it.

If we make decisions by looking at the conditional probabilities of a logical inductor (as defined by the ratio formula), then these will be responsive to proofs of action->payoff, and therefore, subject to Troll Bridge.

What we want to do is look at conditional bets *instead of* raw conditional probabilities, with the hope of escaping the Troll Bridge argument while keeping expectations empirically grounded.

Normally conditional bets A|B are constructed by betting on A&B, but also hedging against ¬B by betting on that, such that a win on ¬B exactly cancels the loss on A&B.

In logical induction, such conditional bets must be *responsive* to a proof of B->A; that is, since B->A means ¬(B&¬A), the bet on A&B must now be worth what a bet on just B would be worth. More importantly, a bet on B&¬A must be worthless, making the conditional probability of ¬A given B zero.
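In toy numerical form (made-up prices, not a real logical-inductor market), the construction and its responsiveness to proofs look like this:

```python
# Standard construction: a conditional bet A|B is built from bets on the
# boolean combinations A&B and B&~A. The ratio formula then prices it.

def conditional_price(p_a_and_b, p_b):
    """Ratio formula: P(A|B) = P(A&B) / P(B)."""
    return p_a_and_b / p_b

# Before any proof: the market prices the conjunctions separately.
p_b = 0.5
p_a_and_b = 0.3
p_b_and_not_a = 0.2            # note p_b = p_a_and_b + p_b_and_not_a
print(conditional_price(p_a_and_b, p_b))   # 0.6

# After a proof of B->A, ie ~(B & ~A): the bet on B&~A is now worthless,
# and a bet on A&B must be worth exactly what a bet on B is worth.
p_b_and_not_a = 0.0
p_a_and_b = p_b
print(conditional_price(p_a_and_b, p_b))   # 1.0
```

This is the responsiveness we would need to *remove*: for Troll Bridge purposes, the proof of [cross]->[bad] must not be allowed to drag the conditional on crossing down in this way.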

So I see this as a challenge to make “basic” conditional bets, rather than conditional bets derived from boolean combinations as above, and make them such that they aren’t responsive to proofs of B->A like this.

Intuitively, a conditional bet A|B is just a contract which pays out given A&B, but which is refunded if ¬B.

I think something like this has even been worked out for logical induction at some point, but Scott and Sam and I weren’t able to quickly reconstruct it when we talked about this.

(And it was likely responsive to proofs of B->A.)

A notion of conditional bet which isn’t responsive to B->A isn’t totally crazy, I claim.

In many cases, the inductor might learn that a proof of B->A makes A|B a really good bet.

But if crossing the bridge never results in a bad outcome in reality, it should be possible to maintain an exception for cross->bad.

Philosophically, this is a radical rejection of the ratio formula for conditional probability.

Rejection of the ratio formula has been discussed in the philosophical literature, in part to allow for conditioning on probability-zero events. EG, the Lewis axioms for conditional probabilities.

As far as I’ve seen, philosophers still endorse the ratio formula *when it meaningfully applies*, ie, when you’re not dividing by zero. It’s just rejected *as the definition* of conditional probability, since the ratio formula isn’t well-defined in some cases where the conditional probabilities do seem well-defined.

However, I suspect the rejection of the inference from B->A to A|B constitutes a more radical departure than usual.

We most likely still need the chicken rule here, unlike with basic counterfactuals.

The *desired behavior* we’re after here, in order to give LCH an advantage, is to nullify bets in the case that their conditions turn out false. This doesn’t seem compatible with usable conditionals on zero-probability events. (But it would be nice if this turned out otherwise!)

At first it might sound like using chicken rule spells doom for this approach, since chicken rule is the original perpetrator in Troll Bridge. But I think this is not the case.

In the step in Troll Bridge where the agent examines its own source code to see why it might have crossed, we see that the chicken rule triggered *or* the agent had higher conditional-contract expectation on crossing. So it’s possible that the agent crosses for entirely the right reason, blocking the argument from going through.

We could try making the troll punish chicken-rule crossing *or* crossing based on conditional-contract expectations which differ from the true conditional probabilities; but this seems exactly like the case we examined for PCDT. Crossing because crossing looks like a good idea in some sort of expectation is *a good reason to cross*; if we deny the agent this possibility, then it just looks like an impossible problem. The troll would just be blowing up the bridge for anyone who doesn’t agree with EDT; but EDT doesn’t cross.

If this modified-conditional-bet option works out, to what extent does this vindicate the inferential theory of counterfactuals?

Option 3: something else?

**Question: Performance on other decision problems?**

It’s not updateless, so obviously, it can’t get everything. But, EG, how does it do on XOR?

**Question: What is the learning theory of the working options?** How do they compare?

If we can get a version which favors LCH in the iterated Newcomb example, then the learning theory should be much like what I outlined for PCDT, with the exception of the no-newcomb clause.

It would be great to get a really good picture of the differences in optimality conditions for the different alternatives. EG, PCDT can’t learn when it’s suspicious of Newcomblike situations. But perhaps RCDT can’t seriously maintain the hypothesis that Omega is tricking it on exploration rounds specifically (as Sam conjectured at one point), while PCDT can; so there may be some class of situations where, though neither can learn, PCDT has an advantage in terms of being able to perform well if it wins the correct-belief lottery.

**Question: What is the tiling theory of the working options?** How do they compare?

Like PCDT, RCDT should have some tiling proof along the lines of Sam’s tiling result and/or Diff’s tiling sketch.

Again, it would be interesting to get a really good comparison between the options.

I suspect that PCDT has really poor tiling in Newcomblike situations, whereas RCDT does not. I really want that result, to show the strength of RCDT on tiling grounds.

# Applications to Alignment

Remember how Sam’s tiling theorem requires feedback on counterfactuals? That’s implausible for a stand-alone agent, since you don’t get to see what happens for untaken actions. But if we consider an agent getting feedback from a human, suddenly it becomes plausible.

However, human feedback does have some limitations.

- It should be sparse. A human doesn’t want to give feedback on every counterfactual for every decision. But the human could focus attention on counterfactual expectations which look very wrong.
- Humans aren’t great at giving reward-type feedback. (Citation: I’ve heard this from people at CHAI, I think.)
- Humans are even worse at giving full-utility feedback. This would require humans to evaluate what’s likely to happen in the future, from a given state.

So, we have to come up with feedback models which could work for humans.

Simpler models like non-sparse human feedback on (counterfactual) rewards could still be developed for the sake of incremental progress, of course.

One model I think is unrealistic but interesting: humans providing better and better bounds for overall expected utility. This is similar to providing rewards (because a reward bounds the utility), but also allows for providing some information about the future (humans might be able to see that a particular choice would destroy such and such future value).
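A minimal sketch of this feedback model (the interface is hypothetical, chosen only for illustration): the agent keeps an interval of possible utilities, and each round of human feedback intersects a new bound into it.

```python
def refine_bounds(current, feedback):
    """Intersect current utility bounds with a new human-provided bound.

    Both arguments are (low, high) pairs; the result is the tightest
    interval consistent with both. Raises on contradictory feedback.
    """
    lo = max(current[0], feedback[0])
    hi = min(current[1], feedback[1])
    if lo > hi:
        raise ValueError("feedback inconsistent with earlier bounds")
    return (lo, hi)
```

On this picture, a reward raises the lower bound, while foresight about destroyed future value lowers the upper bound.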

Approval feedback is easier for humans to give, although approval learning doesn’t so much allow the agent to use its own decision theory (and especially, its own world model and planning).

Obviously some of Vanessa’s work provides relevant options to consider.

As Vanessa has pointed out, this can help deal with traps (provided the supervisor has good information about traps in some sense). This is obviously a major factor in the theory, since traps are part of what blocks nice learning-theoretic results.

I would like to consider options which **allow for human value uncertainty.**

One model of particular interest to me is **modeling the human as a logical inductor**. This has several interesting features:

- The feedback given at any one time does not need to be accurate in any sense, because logical inductors can be terrible at first.
- If utility is a LUV, there can be no explicit utility function at all. This in-effect allows for uncomputable utility functions, such as the one in the procrastination paradox, as I discussed in *an orthodox case against utility functions*.
- The convergence behavior can be thought of as a model of human philosophical deliberation.

It would be super cool to have a combined alignment+tiling result.

I don’t particularly expect this to solve wireheading or human manipulation; it’ll have to mostly operate under the assumption that feedback has not been corrupted.

# Why Study This?

I suspect you might be wondering what the value of this is, in contrast to a more InfraBayesian approach. I think a substantial part of the motivation for me is just that I am curious to see how LIDT works out, especially with respect to these questions relating to CDT vs EDT. However, I think I can give some reasons why this approach might be necessary.

Radical Probabilism and InfraBayes are plausibly *two orthogonal dimensions of generalization for rationality*. Ultimately we want to generalize in both directions, but to do that, working out the radical-probabilist (ie, logical induction) decision theory in more detail might be necessary.

The payoff in terms of alignment results for this approach might give some benefits which can’t be gotten the other way, thanks to the study of subjectively valid LUV expectations which don’t correspond to any (computable) explicit utility function. How could a pure InfraBayes approach align with a user who has LUV values?

This approach offers insights into the big questions about counterfactuals which are at best only implicit in the InfraBayes approach.

The VOI exploration insight is the same for both of them, but it’s possible that the theory is easier to work out in this case. I think learnability here can be stated in terms of a no-trap assumption and (for PCDT) a no-newcomb assumption. AFAIK the conditions for learnability in the InfraBayes case are still pretty wide open.

I don’t know how to talk about the CDT vs EDT insight in the InfraBayes world.

For example: the way PCDT seems to pathologically fail at learning in Newcomb, and the insight about how we have to learn in order to succeed.

Perhaps more importantly, the Troll Bridge insights. As I mentioned in the beginning, in order to meaningfully solve Troll Bridge, it’s necessary to “respect logic” in the right sense. InfraBayes doesn’t do this, and it’s not clear how to get it to do so.

# Conclusion

Provided the formal stuff works out, this might be “all there is to know” about counterfactuals from a purely decision-theoretic perspective.

This wouldn’t mean we’re done with embedded agent theory. However, I think things factor basically as follows:

- Decision Theory
  - Counterfactuals
    - Classic Newcomb’s problem
    - 5&10
    - Troll Bridge
    - Death in Damascus
    - ...
  - Logical Updatelessness
    - XOR Blackmail
    - Transparent Newcomb
    - Parfit’s Hitchhiker
    - Counterfactual Mugging
    - ...
  - Multiagent Rationality
    - Prisoner’s Dilemma
    - Chicken
    - ...

I’ve expressed many reservations about logical updatelessness in the past, and it may create serious problems for multiagent rationality, but it still seems like the best hope for solving the class of problems which includes XOR Blackmail, Transparent Newcomb, Parfit’s Hitchhiker, and Counterfactual Mugging.

If the story about counterfactuals in this post works out, and the above factoring of open problems in decision theory is right, then we’d “just” have logical updatelessness and multiagent rationality left.


I only skimmed this post for now, but a few quick comments on links to infra-Bayesianism:

It’s true that these questions still need work, but I think it’s rather clear that something like “there are no traps” is a sufficient condition for learnability. For example, if you have a finite set of “episodic” hypotheses (i.e. time is divided into episodes, and no state is preserved from one episode to another), then a simple adversarial bandit algorithm (e.g. Exp3) that treats the hypotheses as arms leads to learning. For a more sophisticated example, consider Tian et al, which is formulated in the language of game theory, but can be regarded as an infra-Bayesian regret bound for infra-MDPs.
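For concreteness, here is a bare-bones Exp3 sketch treating hypotheses as arms (the `reward_fn` interface is my own stand-in for playing out an episode while trusting a given hypothesis):

```python
import math
import random

def exp3(num_arms, reward_fn, rounds, gamma=0.1):
    """Exp3 adversarial bandit over a finite set of hypotheses ("arms").

    reward_fn(t, arm) must return a reward in [0, 1]; here it stands in
    for running episode t under hypothesis `arm`.
    Returns the total reward collected.
    """
    weights = [1.0] * num_arms
    total = 0.0
    for t in range(rounds):
        w_sum = sum(weights)
        # mix the weight distribution with uniform exploration
        probs = [(1 - gamma) * w / w_sum + gamma / num_arms for w in weights]
        arm = random.choices(range(num_arms), weights=probs)[0]
        reward = reward_fn(t, arm)
        total += reward
        # importance-weighted estimate keeps the update unbiased
        estimate = reward / probs[arm]
        weights[arm] *= math.exp(gamma * estimate / num_arms)
    return total
```

When one hypothesis reliably yields higher reward per episode, its weight grows exponentially, which is the “simple adversarial bandit leads to learning” point above.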

True, but IMO the way to incorporate “radical probabilism” is via what I called Turing RL.

I’m not sure what precisely you mean by “CDT vs EDT insight”, but our latest post might be relevant: it shows how you can regard infra-Bayesian hypotheses as joint beliefs about observations *and* actions, EDT-style.

Is there a way to operationalize “respecting logic”? For example, a specific toy scenario where an infra-Bayesian agent would fail due to not respecting logic?

“Respect logic” means either (a) assigning probability one to tautologies (at least, to those which can be proved in some bounded proof-length, or something along those lines), or, (b) assigning probability zero to contradictions (again, modulo boundedness). These two properties should be basically equivalent (ie, imply each other) provided the proof system is consistent. If it’s inconsistent, they imply different failure modes.

My contention isn’t that infra-bayes could fail due to not respecting logic. Rather, it’s simply not obvious whether/how it’s possible to make an interesting troll bridge problem for something which doesn’t respect logic. EG, the example I mentioned of a typical RL agent—the obvious way to “translate” Troll Bridge to typical RL is for the troll to blow up the bridge if and only if the agent takes an exploration step. But, this isn’t sufficiently like the original Troll Bridge problem to be very interesting.

By no means do I mean to indicate that there’s an argument that agents have to “respect logic” buried somewhere in this write-up (or the original troll-bridge writeup, or my more recent explanation of troll bridge, or any other posts which I linked).

If I want to argue such a thing, I’d have to do so separately.

And, in fact, I don’t think I want to argue that an *agent* is defective if it doesn’t “respect logic”. I don’t think I can pull out a decision problem it’ll do poorly on, or such.

I *a little bit* want to argue that a *decision theory* is less revealing if it doesn’t *represent* an agent as respecting logic, because I *tend to think* logical reasoning is an important part of an agent’s rationality. EG, a highly capable general-purpose RL agent should be interpretable as using logical reasoning internally, even if we can’t see that in the RL algorithm which gave rise to it. (In which case you might want to ask how the RL agent avoids the troll-bridge problem, even though the RL algorithm itself doesn’t seem to give rise to any interesting problem there.)

As such, I find it quite plausible that InfraBayes and other RL algorithms end up handling stuff like Troll Bridge just fine *without* giving us insight into the correct reasoning, because they eventually kick out any models/hypotheses which fail Troll Bridge.

Whether it’s necessary to “gain insight” into how to solve Troll Bridge (as an agent which respects some logic internally), rather than merely *solve* it (by providing learning algorithms which have good guarantees), is a separate question. I won’t claim this has a high probability of being a necessary kind of insight (for alignment). I will claim it seems like a pretty important question to answer for someone interested in counterfactual reasoning.

I don’t think Turing RL addresses radical probabilism at all, although it plausibly addresses a major motivating force for being interested in radical probabilism, namely logical uncertainty.

From a radical-probabilist perspective, the complaint would be that Turing RL still uses the InfraBayesian update rule, which might not always be necessary to be rational (the same way Bayesian updates aren’t always necessary).

Naively, it seems very possible to combine infraBayes with radical probabilism:

Starting from radical probabilism, which is basically “a dynamic market for beliefs”, infra seems *close* to the insight that prices can have a “spread”. (In the same way that interval probability is *close to* InfraBayes, but not all the way.)

Starting from Infra, the question is how to add in the market aspect.

However, I’m not sure what formalism could unify these.

I guess we can try studying Troll Bridge using infra-Bayesian modal logic, but atm I don’t know what would result.

Ah, but there is a sense in which it doesn’t. The radical update rule is equivalent to updating on “secret evidence”. And in TRL we have such secret evidence. Namely, if we only look at the agent’s beliefs about “physics” (the environment), then they would be updated radically, because of secret evidence from “mathematics” (computations).

I agree that radical probabilism can be thought of as bayesian-with-a-side-channel, but it’s nice to have a more general characterization where the side channel is black-box, rather than an explicit side-channel which we explicitly update on. This gives us a picture of the space of rational updates. EG, the logical induction criterion allows for a large space of things to count as rational. We get to argue for constraints on rational behavior by pointing to the existence of traders which enforce those constraints, while being agnostic about what’s going on inside a logical inductor. So we have this nice picture, where rationality is characterized by non-exploitability wrt a specific class of potential exploiters.

Here’s an argument for why this is an important dimension to consider:

Human value-uncertainty is not particularly well-captured by Bayesian uncertainty, as I imagine you’ll agree. One particular complaint is realizability: we have no particular reason to assume that human preferences are within any particular space of hypotheses we can write down.

One aspect of this can be captured by InfraBayes: it allows us to eliminate the realizability assumption, instead only assuming that human preferences fall within some set of constraints which we can describe.

However, there is another aspect to human preference-uncertainty: human preferences change over time. Some of this is irrational, but some of it is legitimate philosophical deliberation.

And, somewhat in the spirit of logical induction, humans do tend to eventually address the most egregious irrationalities.

Therefore, I tend to think that toy models of alignment (such as CIRL, DRL, DIRL) should model the human as a radical probabilist; not because it’s a perfect model, but because it constitutes a major incremental improvement wrt modeling what kind of uncertainty humans have over our own preferences.

Recognizing preferences as a thing which *naturally* changes over time seems, to me, to take a lot of the mystery out of human preference uncertainty. It’s hard to picture that I have some true platonic utility function. It’s much easier to interpret myself as having some preferences right now (which I still have uncertainty about, but which I have some introspective access to), but also being the kind of entity who shifts preferences over time, and mostly in a way which I myself endorse. In some sense you can see me as converging to a true utility function; however, this “true utility function” is a (non-constructive) consequence of my process of deliberation, and the process of deliberation takes a primary role.

I recognize that this isn’t exactly the same perspective captured by my first reply.

I’m not convinced this is the right desideratum for that purpose. Why should we care about exploitability by traders if making such trades is not actually possible given the environment and the utility function? IMO epistemic rationality is subservient to instrumental rationality, so our desiderata should be derived from the latter.

Actually I am rather skeptical/agnostic on this. For me it’s fairly easy to picture that I have a “platonic” utility function, except that the time discount is dynamically inconsistent (not exponential).

I am in favor of exploring models of preferences which admit all sorts of uncertainty and/or dynamic inconsistency, but (i) it’s up to debate how much degrees of freedom we need to allow there and (ii) I feel that the case that logical induction is the right framework for this is kinda weak (but maybe I’m missing something).

This does make sense to me, and I view it as a weakness of the idea. However, the productivity of dutch-book type thinking in terms of implying properties which seem appealing for other reasons speaks heavily in favor of it, in my mind. A formal connection to more pragmatic criteria would be great.

But also, maybe I can articulate a radical-probabilist position without any recourse to dutch books… I’ll have to think more about that.

I’m not sure how to double crux with this intuition, unfortunately. When I imagine the perspective you describe, I feel like it’s rolling all dynamic inconsistency into time-preference and ignoring the role of deliberation.

My claim is that there is a type of change-over-time which is due to boundedness, and which looks like “dynamic inconsistency” from a classical bayesian perspective, but which isn’t inherently dynamically inconsistent. EG, if you “sleep on it” and wake up with a different, firmer-feeling perspective, without any articulable thing you updated on. (My point isn’t to dogmatically insist that you haven’t updated on anything, but rather, to point out that it’s useful to have the perspective where we don’t need to suppose there was evidence which justifies the update as Bayesian, in order for it to be rational.)

It’s clear that you understand logical induction pretty well, so while I feel like you’re missing something, I’m not clear on what that could be.

I think maybe the more fruitful branch of this conversation (as opposed to me trying to provide an instrumental justification for radical probabilism, though I’m still interested in that) is the question of describing the human utility function.

The logical induction picture isn’t strictly at odds with a platonic utility function, I think, since we can consider the limit. (I only claim that this isn’t the best way to think about it in general, since Nature didn’t decide a platonic utility function for us and then design us such that our reasoning has the appropriate limit.)

For example, one case which to my mind argues in favor of the logical induction approach to preferences: the procrastination paradox. All you want to do is ensure that the button is pressed at some point. This isn’t a particularly complex or unrealistic preference for an agent to have. Yet, it’s unclear how to make computable beliefs think about this appropriately. Logical induction provides a theory about how to think about this kind of goal. (I haven’t thought much about how TRL would handle it.)
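Spelled out (my notation), the goal in the procrastination paradox is simply:

```latex
U \;=\;
\begin{cases}
1 & \text{if } \exists t.\ \mathrm{pressed}(t) \\
0 & \text{otherwise}
\end{cases}
```

The unbounded existential is the source of the trouble: at no finite time is the utility settled, so computable beliefs struggle with it even though the preference itself is simple to state.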

Agree or disagree: agents can sensibly pursue Δ2 objectives? And, do you think that question is cruxy for you?

I lean towards some kind of finitism or constructivism, and am skeptical of utility functions which involve unbounded quantifiers. But also, how does LI help with the procrastination paradox? I don’t think I’ve seen this result.

So, one point is that the InfraBayes picture still gives epistemics an important role: the kind of guarantee arrived at is a guarantee that you won’t do too much worse than the most useful partial model expects. So, we can think about generalized partial models which update by thinking longer in addition to taking in sense-data.

I suppose TRL can model this by observing what those computations would say, in a given situation, and using partial models which only “trust computation X” rather than having any content of their own. Is this “complete” in an appropriate sense? Can we always model a would-be radical-infrabayesian as a TRL agent observing what that radical-infrabayesian would think?

Even if true, there may be a significant computational complexity gap between just doing the thing vs modeling it in this way.

Yes, I’m pretty sure we have that kind of completeness. Obviously representing all hypotheses in this opaque form would give you poor sample and computational complexity, but you can do something midway: use black-box programs as components in your hypothesis but also have some explicit/transparent structure.

To further elaborate, this post discusses ways a Bayesian might pragmatically prefer non-Bayesian updates. Some of them don’t carry over, for sure, but I expect the general idea to translate: InfraBayesians need some unrealistic assumptions to reflectively justify the InfraBayesian update in contrast to other updates. (But I am not sure which assumptions to point out, atm.)

Yes, I think TRL captures this notion. You have some Knightian uncertainty about the world, and some Knightian uncertainty about the result of a computation, and the two are entangled.

Wow that’s exciting! Very interesting that you think that.

Now I feel like I should have phrased it more modestly, since it’s really “settled modulo math working out”, even though I feel fairly confident some version of the math should work out.

In order to do that, you have to think of doing that. (Seeing randomness might be hard—seeing ‘I have information I don’t think they have, and I don’t think they can read minds, so they can’t predict this’ makes more sense intuitively.) In practice, I don’t think people conduct exploration like this.

Similarly, I think a lot of agents consider possibilities after they encounter them at least once. This might help solve the cost of simulation/computation.

I’m glad this was highlighted.

(Side note: There’s an aspect to the notion of “causal counterfactual” that I think it’s worth distinguishing from what’s discussed here. This post seems to take causal counterfactuals to be a description of top-level decision reasoning. A different meaning is that causal counterfactuals refer to an aspiration / goal. Causal interventions are supposed to be interventions that “affect nothing but what’s explicitly said to be affected”. We could try to describe actions in this way, carefully carving out exactly what’s affected and what’s not; and we find that we can’t do this, and so causal counterfactuals aren’t, and maybe can’t possibly, be a good description (e.g. because of Newcomb-like problems). But instead we could view them as promises: if I manage to “do X and only X” then exactly such and such effects result. In real life if I actually do X there will be other effects, but they must result from me having done something other than just exactly X. This seems related to the way in which humans know how to express preferences data-efficiently, e.g. “just duplicate this strawberry, don’t do any crazy other stuff”.)

I’m not really sure what you’re getting at.

This seems like a really bad description to me. For example, suppose we have the causal graph x→y→z. We intervene on y. We don’t want to “affect nothing but y”—we affect z, too. But we don’t get to pick and choose; we couldn’t choose to affect x and y without affecting z.
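As a concrete toy version of this (the mechanisms below are arbitrary, chosen only for illustration):

```python
def run(x, do_y=None):
    """Structural model x -> y -> z.

    Intervening on y (via do_y) overrides y's mechanism, severing the
    x -> y edge, but z still responds because it sits downstream of y.
    """
    y = 2 * x if do_y is None else do_y  # intervention replaces y's mechanism
    z = y + 1                            # z always follows y
    return x, y, z
```

Here `run(3)` yields (3, 6, 7), while `run(3, do_y=0)` yields (3, 0, 1): the intervention leaves x untouched but changes z, because z is downstream of y.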

So I’d rather say that we “affect nothing but what we intervene on and what’s *downstream* of what we intervened on”.

Not sure whether this has anything to do with your point, though.

A fair clarification.

My point is very tangential to your post: you’re talking about decision theory as top-level naturalized ways of making decisions, and I’m talking about some non-top-level intuitions that could be called CDT-like. (This maybe should’ve been a comment on your Dutch book post.) I’m trying to contrast the aspirational spirit of CDT, understood as “make it so that there’s such a thing as ‘all of what’s downstream of what we intervened on’ and we know about it”, with descriptive CDT, “there’s such a thing as ‘all of what’s downstream of what we intervened on’ and we can know about it”. Descriptive CDT is only sort of right in some contexts, and can’t be right in some contexts; there’s no fully general Archimedean point from which we intervene.

We can make some things more CDT-ish though, if that’s useful. E.g. we could think more about how our decisions have effects, so that we have in view more of what’s downstream of decisions. Or e.g. we could make our decisions have fewer effects, for example by promising to later reevaluate some algorithm for making judgements, instead of hiding within our decision to do X also our decision to always use the piece-of-algorithm that (within some larger mental context) decided to do X. That is, we try to hold off on decisions that have downstream effects we don’t understand well yet.

I don’t understand this part. Your explanation of PCDT, at least, didn’t prepare me for it; it doesn’t mention betting. And why is the payoff for the counterfactual-2-boxing determined by the beliefs of the agent after 1-boxing?

And what I think is mostly independent of that confusion: I don’t think things are as settled.

I’m more worried about the embedding problems with the trader in dutch book arguments, so the one against CDT isn’t as decisive for me.

But how is the counterfactual supposed to actually think? I don’t think just having the agent unrevisably believe that crossing is counterfactually +10 is a reasonable answer, even if it doesn’t have any instrumental problems in this case. I think it ought to be possible to get something like “whether to cross in troll bridge depends only on what you otherwise think about PA’s consistency” with some logical method. But even short of that, there needs to be some method to adjust your counterfactuals if they fail to really match your conditionals. And if we had an actual concrete model of counterfactual reasoning instead of a list of desiderata, it might be possible to make a troll based on the consistency of whatever is inside this model, as opposed to PA.

I also think there is a good chance the answer to the cartesian boundary problem won’t be “heres how to calculate where your boundary is”, but something else of which boundaries are an approximation, and then something similar would go for counterfactuals, and then there won’t be a counterfactual theory which respects embedding.

These later two considerations suggest the leftover work isn’t just formalisation.

Not sure how to best answer. I’m thinking of all this in an LIDT setting, so all learning occurs through traders making bets. The payoff for 2-boxing is dependent on beliefs after 1-boxing because *all share prices update every market day* and the “payout” for a share is essentially what you can sell it for. Similarly, if a trader buys a share of an undecidable sentence (let’s say, the consistency of PA) then the only “payoff” is whatever you can sell it for later, based on future market prices, because the sentence will never get fully decided one way or the other.

My claim is: eventually, if you observe enough cases of “crossing” in similar circumstances, your expectation for “cross” should be consistent with the empirical history (rather than, say, −10 even though you’ve never experienced −10 for crossing). To give a different example, I’m claiming it is irrational to persist in thinking 1-boxing gets you less money in expectation, if your empirical history continues to show that it is better on average.

And I claim that *if there is a persistent disagreement* between counterfactuals and evidential conditionals, then the agent *will* in fact experimentally try crossing infinitely often, due to the value-of-information of testing the disagreement (that is, this will be the limiting behavior of reduced temporal discounting, under the assumption that the agent isn’t worried about traps). So the two *will indeed* converge (under those assumptions).

The hope is that we can block the troll argument *completely* if proving B->A does not imply cf(A|B)=1, because *no matter what predicate the troll uses*, the inference from P to cf fails. So what we concretely need to do is give a version of counterfactual reasoning which lets cf(A|B) not equal 1 in some cases where B->A is proved.

Granted, there could be some *other* problematic argument. However, if my learning-theoretic ideas go through, this provides another safeguard: Troll Bridge is a case where the agent never learns the empirical distribution, due to refusing to observe a specific case. If we know this never happens (given the learnability conditions), then this blocks off a whole range of Troll-Bridge-like arguments.

This is a sensible position. I think this is similar to Scott G’s take on my direction.

My argument would not be that the dutch book *should* be super compelling, but rather, that it appears we can do everything without questioning *so many* assumptions.

For example, Scott would argue that probability is for things spacelike separated from you, so we need a different concept for thinking about consequences of actions. My argument is not anything against Scott’s concrete reasons to cast doubt on the broad applicability of probabilistic thinking; rather, my argument is “look at all the things we can do with probabilistic reasoning” (at least, suitably generalized).

In particular, good learning-theoretic results can address concerns about decision-theoretic paradoxes; a convincing optimality result could and should systematically rule out a wide range of decision-theoretic paradoxes. So, if true, it could become difficult to motivate any additional worries about cartesian frames etc.