Somewhat against “just update all the way”

Sometimes, a person’s credence in a proposition follows a trajectory like this over a long timespan:

I.e. the person finds the proposition more and more plausible over time, never really encountering evidence against it, but the updates come in small steps rather than big jumps. For instance, someone might be gradually increasing their credence that AI risk is serious.

In such cases, I have sometimes seen rationalists complain that the updates are happening too slowly, and claim that the person should notice the trend in their updates and “just update all the way”.

I suspect this sentiment is inspired by the principle of Conservation of expected evidence, which states that your current belief should equal the expectation of your future beliefs. It’s an understandable mistake to make, because that principle can sound like it tells you to extrapolate a trend in your beliefs and update straight to its endpoint.
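
In its standard form, the principle is just the law of total probability applied to your future posterior: writing $E$ for the evidence you are about to observe,

$$P(H) = \sum_e P(E = e)\, P(H \mid E = e) = \mathbb{E}\big[P(H \mid E)\big].$$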

Reasons to not update all the way

Suppose you start with a belief that either an AI apocalypse will happen, or someone at some random point in time will figure out a solution to alignment.

In that case, for each time interval that passes without a solution to alignment, you have some slight evidence against the possibility that a solution will be found (because the time span it can be solved in has narrowed), and some slight evidence in favor of an AI apocalypse. This makes your credence follow a pattern somewhat like the previous graph.
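
To make this concrete, here is a minimal sketch of such a model in Python. The numbers are illustrative assumptions only (a 50% prior on the apocalypse branch, and a solution time uniform over the next 50 years), not anything this post argues for:

```python
# Toy version of the model above: either an AI apocalypse happens, or a
# solution to alignment arrives at a time uniform over the next N years.
# The numbers are illustrative assumptions, not forecasts.

p_doom = 0.5   # assumed prior probability of the "apocalypse" branch
N = 50         # assumed horizon (years) within which a solution could arrive

def p_doom_given_no_solution(t):
    """P(apocalypse | no solution in the first t years), by Bayes' rule."""
    # In the "solution" branch, the chance of still seeing no solution by
    # year t is (N - t) / N, since the solution time is uniform over N years.
    no_solution_if_solvable = (N - t) / N
    return p_doom / (p_doom + (1 - p_doom) * no_solution_if_solvable)

for t in range(0, N + 1, 10):
    print(t, round(p_doom_given_no_solution(t), 3))
# 0 0.5, 10 0.556, 20 0.625, 30 0.714, 40 0.833, 50 1.0 --
# a slow upward creep, while an observed solution would send it straight to 0.
```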

However, if someone comes up with a credible and well-proven solution to AI alignment, then that would (under your model) disprove the apocalypse, and your credence would go rapidly down:

So the continuous upward trajectory in probability satisfies conservation of expected evidence: the probable slight upward movement is counterbalanced by an improbable but large downward movement.
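
Continuing the toy sketch above (same assumed numbers), the balance can be checked numerically: the expected credence one year from now equals the current credence, because the likely small step up and the unlikely collapse to zero cancel exactly.

```python
# Conservation of expected evidence in the toy model above (same assumed numbers).
p_doom, N, t = 0.5, 50, 10   # check the balance from year t = 10

def p_doom_given_no_solution(t):
    return p_doom / (p_doom + (1 - p_doom) * (N - t) / N)

p_now = p_doom_given_no_solution(t)

# From today's viewpoint, the chance that a solution appears during the next year:
p_solution_next_year = (1 - p_now) / (N - t)

p_if_no_solution = p_doom_given_no_solution(t + 1)   # probable small step up
p_if_solution = 0.0                                   # improbable collapse to zero

expected_future_credence = (1 - p_solution_next_year) * p_if_no_solution \
                           + p_solution_next_year * p_if_solution
assert abs(expected_future_credence - p_now) < 1e-9   # equals today's credence
```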

Reasons to update all the way

Your beliefs may of course rest on many other models, and some of those models do give reasons to update all the way. For instance, in the case of AI doom, you might believe that some specific thing is a blocker for dangerous AI, and if that specific thing gets disproven, you ought to update all the way. There are good reasons that the Sequences warn against Fighting a rearguard action against the truth.

I just want to warn people not to force this perspective in cases where it doesn’t belong. I think it can be hard to tell from the outside whether others ought to update all the way, because it is rare for people to share their full models and derivations, and even when they do, it is rare for others to read them in full.

Thanks to Justis Mills for providing proofreading and feedback.