I think this is a strong argument that EDT agents shouldn’t do Bayesian updates on empirical observations. I thought that it might still be OK to change your mind on the basis of logical arguments and reasoning (not empirical data or observations). But I think a very similar argument bites against that.
Example:
There are 2 mathematical propositions, Y1 and Y2, each of which you think has an independent 50% probability of being true.
The proposition X = “Y1 and Y2”.
Presumably you assign 25% to X being true.
Let’s say you try to prove Y1 to be true, and succeed. You don’t have time to prove Y2. Naively, you’d now expect to assign 50% to X being true, and be willing to bet at 1:1 odds.
However, let’s say that there are many copies of you across the universe, and that equally many of them tried to prove Y1 as tried to prove Y2. For simplicity, let’s say everyone who tried to prove a true statement succeeded, and no one had time to attempt more than one statement.
Given an opportunity to bet on X being true, and thinking about your odds, you reason:
If X is true, then Y2 will be true (in addition to Y1).
So if X is true, and I bet on X, then everyone will bet on X and everyone will win. (Assuming that someone who proved Y2 is relevantly in the same position as me, so that my choosing to bet provides strong evidence that they will bet.)
If X is false, then Y2 will be false (since I’ve proven Y1 to be true).
So if X is false, and everyone in my position bets on X, only half as many people will be betting (just the ones who proved Y1), and they’ll all lose.
So the stakes are twice as high if X is true as they are if X is false.
Since I assign X a 50% chance (or 1:1) of being true, I will bet at (1:1) * (2:1) = (2:1) odds that X is true. I.e., from this perspective, the EV calculation becomes:
EV(bet at 2:1 odds) = 50% [that X is true] * 2 [people who win if X is true] * 1 [payout per winner] + 50% [that X is false] * 1 [person who loses if X is false] * (-2) [payout per loser] = 0.
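(Spelling the same arithmetic out as a quick sketch in Python; the variable names are mine, the numbers are just the ones above.)

```python
# A quick check of the agent's own EV calculation at 2:1 odds (numbers as in the text).
p_x = 0.5                 # agent's credence in X after proving Y1
winners_if_true = 2       # if X is true, both the Y1-provers and the Y2-provers bet and win
losers_if_false = 1       # if X is false, only the Y1-provers bet, and they lose
payout_win, payout_loss = 1, -2   # 2:1 odds: risk 2 to win 1

ev = p_x * winners_if_true * payout_win + (1 - p_x) * losers_if_false * payout_loss
print(ev)   # 0.0 -- so the agent is indifferent at exactly 2:1 odds
```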
This strategy will lose money in expectation.
From the ex-ante perspective, there are 4 equiprobable worlds where (Y1,Y2) have different truth values. In 1 of them, neither is true; in 2 of them, exactly 1 is true; and in 1 of them, both are true. From the ex-ante perspective, there are 2 people who prove their statement when X is false, and 2 people who prove their statement when X is true. If they all bet at 2:1 odds that X is true, they’ll lose money in expectation.
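Here is a minimal sketch of that ex-ante check, assuming (as in the example) two copies per world, one assigned to each statement, and that a copy bets at 2:1 on X exactly when it proves its assigned statement:

```python
from itertools import product

# Ex-ante perspective: 4 equiprobable worlds over (Y1, Y2).
# Assumption for illustration: two copies per world, one assigned to each
# statement; a copy proves its statement iff it is true, and then bets at
# 2:1 odds on X (risk 2 to win 1).

total_payoff = 0
total_bets = 0
for y1, y2 in product([True, False], repeat=2):
    x = y1 and y2
    for statement_is_true in (y1, y2):   # one copy per statement
        if statement_is_true:            # this copy found a proof, so it bets
            total_bets += 1
            total_payoff += 1 if x else -2

print(total_bets)        # 4 bets placed across the 4 worlds
print(total_payoff)      # -2: +2 in the both-true world, -2 in each exactly-one-true world
print(total_payoff / 4)  # expected payoff per world = -0.5 < 0
```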
One difference from the empirical case in the post above is that you need to perceive yourself as correlated with people who proved a different statement than you did.
Edit 2026-04-20:
I significantly simplified the example above.
I want to flag that I think the argument against logical updates is somewhat weaker than the argument against empirical updates. In particular, this even more unappealing argument doesn’t apply to the logical case as far as I can tell. (And the toy example above — a conjunction of logical statements where you’ve proven one — is more rare than unreliable empirical evidence of logical facts, as in the calculator example above.)