I’m kind of scared of this approach because I feel that unless you really nail everything, there is going to be a gap that an attacker can exploit.
I think that not every gap is exploitable. Most types of bias in the prior would only promote simulation hypotheses whose baseline universes conform to that bias, and attackers who evolved in such universes will also tend to share the bias, so they will target universes conformant to it, which makes them no more competitive against the true hypothesis. In other words, most types of bias affect both ϵ and δ in a similar way.
More generally, I guess I’m more optimistic than you about solving all such philosophical liabilities.
I think of this in contrast with my approach based on epistemic competitiveness, where the idea is not necessarily to identify these considerations in advance, but to be epistemically competitive with an attacker (inside one of your hypotheses) who has noticed an improvement over your prior.
I don’t understand the proposal. Is there a link I should read?
This is very similar to what I first thought about when going down this line. My instantiation runs into trouble with “giant” universes that do all the possible computations you would want, and then use the “free” complexity in the bridge rules to pick out the computations you actually wanted.
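For concreteness, here is a minimal sketch of what “a universe that does all the possible computations” looks like operationally, i.e. dovetailing: interleave all programs with growing step budgets so every halting program eventually finishes. The `run_step` interface and the toy programs below are my own stand-ins for illustration, not anything from the discussion.

```python
def dovetail(run_step, max_rounds):
    """Interleave execution of programs 0, 1, 2, ...
    run_step(i) advances program i by one step and returns its output
    if it has halted, else None. At round k we step programs 0..k-1,
    so every program eventually receives unboundedly many steps.
    Yields (program_index, output) pairs as programs halt."""
    halted = set()
    for k in range(1, max_rounds + 1):
        for i in range(k):
            if i in halted:
                continue
            out = run_step(i)
            if out is not None:
                halted.add(i)
                yield i, out

# Toy stand-in programs: program i halts after i+1 steps, outputting i*i.
counters = {}
def toy_step(i):
    counters[i] = counters.get(i, 0) + 1
    return i * i if counters[i] > i else None

results = dict(dovetail(toy_step, 10))  # programs 0..4 halt within 10 rounds
```

The bridge rule’s job in the “giant universe” construction would then be to filter this interleaved output stream down to the outputs of the one program you actually care about.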
So, you can let your physics be a dovetailing of all possible programs, and delegate to the bridge rule the task of filtering out the outputs of only one program. But the bridge rule is not “free complexity”, because it’s not coming from a simplicity prior at all. For a program of length n, you need a particular DFA of size Ω(n), whereas the actual DFA has expected size m with m ≫ n. The probability of the DFA you need being embedded in it is something like (m!/(m−n)!)·m^(−2n) ≈ m^(−n) ≪ 2^(−n). So moving everything into the bridge makes a much less likely hypothesis.
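A quick numeric sanity check (my own, not from the thread) of the estimate (m!/(m−n)!)·m^(−2n) ≈ m^(−n) ≪ 2^(−n): the falling factorial m!/(m−n)! is roughly m^n for m ≫ n, so the product collapses to about m^(−n). Working in log2 space avoids underflow for large n.

```python
from math import perm, log2

# Concrete values chosen only for illustration, with m much larger than n.
m, n = 1000, 20

# log2 of (m!/(m-n)!) * m^(-2n); perm(m, n) is the falling factorial.
log2_p = log2(perm(m, n)) - 2 * n * log2(m)

# log2_p should sit near -n*log2(m), far below -n, confirming
# that the embedded-DFA hypothesis pays far more than 2^(-n).
```

With these values log2_p lands around −199.6, versus −n = −20 for the direct hypothesis, so the bridge-heavy hypothesis is astronomically less likely.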