Exactly. It’s such a simple tweak that it’s embarrassing I didn’t notice it in the first place. It’s just that in average-case complexity theory, negligible errors seem much more popular. The price to pay is that some theorems require stronger assumptions: something that had to be bounded by a polynomial now has to be bounded by a constant. On the other hand, Theorem 9 demonstrates that optimal predictors cannot cover all of VerNP (for example), whereas quasi-optimal predictors might. Also, I think that in VerNP there is a universal construction for uniform quasi-optimal predictors (as opposed to optimal predictors), similar to Levin’s universal search, although I haven’t fleshed out the details yet (in any case, such a construction would be theoretically valid but highly inefficient in practice).
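To make the analogy concrete, here is a minimal Python sketch of the Levin-style dovetailing that such a universal construction would rest on. The program representation (zero-argument generators) and the `verify` callback are my own illustrative assumptions, not part of the actual construction: the point is only the exponentially decaying time shares across candidate programs.

```python
from itertools import count

def levin_universal_search(programs, verify):
    """Dovetail over candidate programs, giving program i a budget of
    2^(t-i-1) steps in phase t, and return the first output that the
    verifier accepts.  Each element of `programs` is a zero-argument
    generator function yielding (done, answer) pairs."""
    gens = [p() for p in programs]
    failed = [False] * len(gens)
    for t in count(1):
        # In phase t, only the first t programs are active.
        for i, g in enumerate(gens[:t]):
            if failed[i]:
                continue
            budget = 2 ** (t - i - 1)  # exponentially decaying time shares
            for _ in range(budget):
                try:
                    done, ans = next(g)
                except StopIteration:
                    failed[i] = True
                    break
                if done:
                    if verify(ans):
                        return ans
                    failed[i] = True
                    break

# Toy usage: search for a nontrivial factor of 91 by trial division,
# verified independently of how the candidate program found it.
def trial_division():
    n = 91
    for d in range(2, n):
        yield (False, None)      # one "step" of computation
        if n % d == 0:
            yield (True, d)
            return

answer = levin_universal_search([trial_division], lambda d: 91 % d == 0)
# answer == 7
```

As in Levin’s original search, the total overhead over the fastest successful program is only a constant factor per program index, which is why the construction is theoretically valid yet hopeless in practice: that constant is exponential in the program’s description length.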