Intuitively, this is very similar to previous approaches to salvaging decision theory (e.g. see mine here), but the whole thing is basically the same as playing chicken with the universe, which just corresponds to using a very low temperature.
I am still not able (or don’t have the time) to closely follow these proofs, or even the statements of the results. It looks to me like your goal is to formalize the basic intuitive arguments, and that the construction works by a similar diagonalization. If that’s not the case, it may be worth calling out the differences explicitly.
Yeah, what I’m doing here is more or less a formalisation of the ideas in your writeup, with the added technical complication that the “math intuition model” is nondeterministic, so you need to use matrix counterfactuals. In order to get UDTish rather than CDTish behavior, I am going to make the agent select some sort of “logical policy” instead of an action (i.e. something that reduces to a metathreat in a game-theoretic setting).
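For readers unfamiliar with the “playing chicken with the universe” move mentioned above: the idea is that the agent defies any formal prediction of its own action, so that its formal system can never prove what it will do (the diagonalization step). Here is a minimal toy sketch of that rule; the `proves` callable stands in for a real proof search, and all names here are illustrative, not part of the actual construction.

```python
def chicken_agent(actions, proves):
    """Toy sketch of the 'chicken rule': if the formal system proves
    the agent takes some action, take a different one instead.

    `actions` is a list of available actions; `proves(claim)` is an
    assumed stand-in for proof search in a fixed formal system.
    """
    for a in actions:
        # Diagonalization: defy any provable prediction of our own choice.
        if proves(f"agent() == {a!r}"):
            return next(x for x in actions if x != a)
    # No prediction is provable; fall back to the agent's ordinary
    # decision rule (utility maximization, elided here: pick the first).
    return actions[0]
```

If a (buggy or malicious) proof searcher claims to prove `agent() == 'swerve'`, the agent returns `'straight'`, making the proof false; a sound system therefore proves nothing about the agent’s choice, and the fallback branch runs.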