Translations Should Invert

Suppose that Alice is arguing for ZFC set theory as the proper foundation of mathematics, while Bob is arguing for some specific intuitionistic type theory. Alice uses classical logic and Bob uses intuitionistic logic; this is a big part of their disagreement, but not the whole of it, since set theory vs type theory is another part, and (perhaps most importantly) so are the specific axioms each endorses.

The best way Bob can try to understand Alice is by translating her classical-logic statements into intuitionistic statements using Gödel’s “double negation translation”, which works by sprinkling double negations throughout a sentence. The translated sentence is classically equivalent to the original, but intuitionistically distinct. So Bob basically understands Alice as always speaking in the double negative (with some complications).
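The double-negation translation can be made concrete as a recursive rewrite on formulas. Below is a minimal sketch of the Gödel–Gentzen version; the nested-tuple AST encoding is my own illustration, not anything from the post.

```python
def neg(a):
    return ("not", a)

def dn_translate(f):
    """Gödel–Gentzen translation: classical provability of f corresponds
    to intuitionistic provability of dn_translate(f)."""
    if isinstance(f, str):            # atom: P becomes ¬¬P
        return neg(neg(f))
    op = f[0]
    if op == "not":                   # ¬A becomes ¬N(A)
        return neg(dn_translate(f[1]))
    if op == "and":                   # conjunction translates componentwise
        return ("and", dn_translate(f[1]), dn_translate(f[2]))
    if op == "or":                    # A ∨ B becomes ¬(¬N(A) ∧ ¬N(B))
        return neg(("and", neg(dn_translate(f[1])),
                           neg(dn_translate(f[2]))))
    if op == "implies":               # implication translates componentwise
        return ("implies", dn_translate(f[1]), dn_translate(f[2]))
    raise ValueError(f"unknown connective: {op}")

# Excluded middle, P ∨ ¬P, which Bob rejects as stated — its translation
# is intuitionistically provable even though the original is not:
lem = ("or", "P", ("not", "P"))
print(dn_translate(lem))
```

Note how disjunction is the interesting case: it is replaced by a negated conjunction of negations, which is exactly the “sprinkling” that makes classical reasoning intuitionistically acceptable.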

The best way Alice can try to understand Bob is by a similar translation in the other direction (I believe the Gödel–McKinsey–Tarski translation into the modal logic S4), which works by sprinkling “provability” operators everywhere. So Alice basically understands Bob to always be talking about provability rather than simple truth (with some complications).
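This direction can be sketched in the same style: the Gödel–McKinsey–Tarski translation maps intuitionistic formulas into classical S4, reading the box as “provable”. Again, the tuple AST encoding is my own illustration.

```python
def box(a):
    return ("box", a)

def s4_translate(f):
    """Map an intuitionistic formula to a classical S4 formula so that
    intuitionistic provability matches S4 provability of the image."""
    if isinstance(f, str):              # atom: P becomes □P
        return box(f)
    op = f[0]
    if op == "not":                     # ¬A becomes □¬T(A)
        return box(("not", s4_translate(f[1])))
    if op in ("and", "or"):             # ∧ and ∨ translate componentwise
        return (op, s4_translate(f[1]), s4_translate(f[2]))
    if op == "implies":                 # A → B becomes □(T(A) → T(B))
        return box(("implies", s4_translate(f[1]), s4_translate(f[2])))
    raise ValueError(f"unknown connective: {op}")

print(s4_translate(("implies", "P", "P")))
```

Comparing the two sketches makes the asymmetry visible: one direction inserts double negations, the other inserts boxes, and neither rewrite undoes the other.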

These translations work pretty well for allowing Alice and Bob to follow each other’s arguments in isolation. Alice and Bob can find no flaw with the translations as translations; Alice sees that Bob will never miss or add important implications (except by virtue of the other axioms Bob and Alice want to debate), and vice versa. So it might seem like Alice and Bob have found a good way to avoid debating classical logic vs intuitionistic logic, so that they can focus on debating set theory vs type theory.

However, it seems to me like there is a severe problem.

One danger is that the use of translations allows Alice and Bob to put off an important part of their debate, which is the debate between intuitionism and classical logic. But maybe this is OK.

The larger danger is that Alice and Bob cannot agree on what they are debating at any given time. Since their translations aren’t inverses of each other, when Bob wants to debate X, Alice translates it into Y and responds to that; when Bob translates her response back, he gets some Z distinct from X, and so sees Alice as changing the topic. From each of their perspectives, the other person is always changing the topic, which feels like “avoiding the problem I’m trying to point to—refusing to deal with it, always insisting on dealing with some other, somewhat analogous, but more complex thing”.

Imagine Alice wants to argue for the axiom of choice, C. Bob translates this to C’ by adding double negations, and has some counterargument against C’. Alice translates this to C″ by adding provability predicates, and says, OK, that’s not exactly what I meant, but I can modify my argument as follows to show that you are wrong in this case also. Bob translates to C‴ and adapts his own argument as needed. They keep going back and forth, never quite agreeing on the topic of debate, each always satisfying themselves that they’ve responded to the other, but then “somehow” the other person comes up with a “new” concern...
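The escalation above can be caricatured with pure string bookkeeping: each round-trip wraps the previous statement in another layer of translation, so the topic never returns to where it started. (This is only a toy illustration of the shape of the exchange, not real logic.)

```python
def bob_reads(s):      # Bob's double-negation gloss of Alice's claim
    return f"¬¬({s})"

def alice_reads(s):    # Alice's provability gloss of Bob's claim
    return f"□({s})"

topic = "C"                      # Alice argues for choice, C
seen = [topic]
for _ in range(3):
    topic = bob_reads(topic)     # Bob responds to C′, not C
    seen.append(topic)
    topic = alice_reads(topic)   # Alice responds to C″, not C′
    seen.append(topic)

print(seen[:4])
# the formulas only grow; no round trip recovers the original "C"
assert len(set(seen)) == len(seen)
```

If the two glosses were inverses, the round trip `alice_reads(bob_reads(s))` would return `s` and the debate would stay anchored to a single claim; because they are not, every exchange shifts the topic one layer further out.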

It isn’t clear who should change behavior and how. They can both see that each other’s translations individually work fine (so neither one can argue too strongly against the other’s translations). But it does seem plausible to claim that they should settle on translations which invert each other.


This post is based on a real discussion I had, about decision theory. I was making an argument for a claim X in terms of pure actions, and the other person was translating to mixed strategies to argue against it. My view is that the other person’s argument simply doesn’t work; but in order to re-apply my own argument to refute their take, I have to treat the choice of mixed strategy as itself being a “pure action” (i.e., you select a number from 0 to 1 and feed it into a randomizing device, which then selects the first action for you with that probability and otherwise the second).

But for the other person to analyze my new application of my argument, from their own perspective, they would have to again translate to mixed actions; so now the agent isn’t just settling on one number, but rather selecting a probability distribution over numbers. (Effectively, we now have mixtures of mixtures.) Having done the translation, they re-state their argument against my claim X.
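The frame-shifting in the anecdote can be sketched as a type-lifting operation. The function names and framing below are my own illustration of the idea, not anything from the actual discussion.

```python
import random

def act_pure(options, choice):
    """Pure-action frame: just pick one option."""
    return options[choice]

def act_mixed(options, p, rng=random.random):
    """Mixed frame: a pure 'action' is now a number p in [0, 1],
    fed to a randomizing device that picks between two options."""
    return options[0] if rng() < p else options[1]

def draw_p(dist, rng):
    """Draw p from dist, a list of (weight, p) pairs with weights summing to 1."""
    r, acc = rng(), 0.0
    for w, p in dist:
        acc += w
        if r < acc:
            return p
    return dist[-1][1]

def act_mixed_mixed(options, dist, rng=random.random):
    """Mixture of mixtures: first draw p from a distribution over
    probabilities, then run the randomizing device with that p."""
    return act_mixed(options, draw_p(dist, rng), rng)
```

Each “translation” adds a layer: the overall chance of each option can be preserved (a mixture of mixtures collapses to its expected p), which is why neither of us considers the choice of frame critical — yet the object being chosen keeps changing type, which is exactly what keeps the arguments from meeting.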

This could just iterate infinitely without going anywhere, even if both of us think we have an argument that is strong enough that we can always reply to the other person, no matter what they try to do to wriggle out of it.

Neither of us thinks the choice between modeling agents as deciding pure vs mixed actions is particularly critical, since it is so easy to translate back and forth between the two frames. But nonetheless it seems to be harming our ability to see how our arguments connect with each other.