Thanks for writing this up! Your “amplified weak” version of the conjecture (with complexity bounds increasing exponentially in $1/\varepsilon$) seems plausible to me. So if you could amplify the original (weak) conjecture to this unconditionally, it wouldn’t significantly decrease my credence in the principle. But it would be nice to have this bound on what the dependence on $\varepsilon$ would need to be.
By “unconditionally”, do you mean without an additional assumption, such as Reduction-Regularity?
I actually have a result in the other direction (from an older version of this post): with a slightly stronger assumption, we could derive an upper bound on the time-complexity of the verifier with a better dependence on $1/\varepsilon$. The stronger assumption is just a stronger version of Reduction-Regularity, requiring $\Pr\big(f(C) \in \overline{V_0} \mid f(C) \in \overline{P}\big) \ge \delta$; that is, $\overline{V_0}$ replaces $\overline{L}$ in the inequality. It could still lead to an exponential bound in $1/\varepsilon$ if the complexity of the reduction $f_m$ increases exponentially with $m$. In the version presented in this post, the exponential bound relies on the complexity of $f_m$ being $O(|C|^\gamma)$ for a fixed $\gamma$, regardless of $m$, which is admittedly optimistic. If we assume this together with the stronger version of Reduction-Regularity, then I think the upper bound on the time-complexity of the verifier would increase only linearly with $m = O(\log 1/\varepsilon)$, which is a huge improvement (I can add something to the appendix about this if you are interested in the details). In the end, I opted for presenting the weakest assumption here, even though it corresponds to a worse complexity bound.
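To give a rough sense of where the linear growth in $m$ comes from, here is a back-of-the-envelope sketch (not the argument from the post; I write $f^{(m)}$ for the $m$-fold application of the reduction and treat the rounds as independent trials that each succeed with probability at least $\delta$, which is roughly what the stronger assumption buys):

$$\Pr\big(f^{(m)}(C) \notin \overline{V_0}\big) \le (1-\delta)^m \le e^{-\delta m} \le \varepsilon \quad \text{once} \quad m \ge \frac{\ln(1/\varepsilon)}{\delta},$$

so $m = O(\log 1/\varepsilon)$ rounds suffice, and if each round costs $O(|C|^\gamma)$, the overhead on top of $V_0$ is $m \cdot O(|C|^\gamma) = O(|C|^\gamma \log 1/\varepsilon)$, i.e. linear in $m$.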
Echoing Jacob, yeah, thanks for writing this!
Since there are only exponentially many circuits, having the time-complexity of the verifier grow only linearly in $m = O(\log 1/\varepsilon)$ would mean that you could get a verifier that never makes mistakes. So (if I’m not mistaken) if you’re right about that, then the stronger version of Reduction-Regularity would imply that our conjecture is equivalent to NP = coNP.
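For concreteness, the counting step I have in mind (a sketch; the standard $2^{O(n \log n)}$ bound on the number of size-$n$ circuits is the only input):

$$\#\{\text{circuits of size } n\} \le 2^{cn \log n} \ \text{ for some constant } c, \qquad \text{so take } \varepsilon < 2^{-cn \log n}.$$

A verifier that errs on at most an $\varepsilon$-fraction of size-$n$ circuits then errs on fewer than one circuit, i.e. on none; and since $m = O(\log 1/\varepsilon) = O(n \log n)$, a verifier whose time grows only linearly in $m$ is still polynomial-time.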
I haven’t thought enough about the Reduction-Regularity assumption to have a take on its plausibility, but based on my intuition about our no-coincidence principle, I think it’s pretty unlikely to be equivalent to NP = coNP in a way that’s easy-ish to show.
For the time-complexity of the verifier to grow only linearly in $m = O(\log 1/\varepsilon)$, we also need to assume that the complexity of the reduction $f_m$ is $O(|C|^\gamma)$ for a fixed $\gamma$, regardless of $m$, which I admit is quite optimistic. If the time-complexity of the reduction $f_m$ is doubly exponential in $m$, the (upper bound on the) time-complexity of the verifier will be exponential in $1/\varepsilon$, even with the stronger version of Reduction-Regularity.
Correcting myself: “doubly” just added above.
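Spelling out the arithmetic behind the doubly-exponential claim (just substitution, taking $m = c \log_2(1/\varepsilon)$ for a constant $c$):

$$2^{2^m} = 2^{2^{c \log_2(1/\varepsilon)}} = 2^{(1/\varepsilon)^c},$$

which is exponential in a power of $1/\varepsilon$; for $c = 1$ it is exactly $2^{1/\varepsilon}$.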
Yes, by “unconditionally” I meant “without an additional assumption”. I don’t currently see why the Reduction-Regularity assumption ought to be true (I may need to think about it more).