That average case=worst case headline is so wild. Consider a simple lock and key algorithm:
if input = A, run BB(6); else, halt.
Where A is some fixed incompressible number, i.e. K(A) ≈ |A|, with no description much shorter than writing A out (Python sketch at the end of this comment).
Sure seems like worst case >> average case here. Anyone know what’s going on in their paper that disposes of such examples?
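Concretely, what I had in mind is something like this (a minimal Python sketch; busy_beaver_6, KEY, and lock_and_key are just my illustrative names for the BB(6) computation, the hard-coded random number A, and the algorithm T):

    import sys

    # KEY plays the role of the hard-coded random number A: fixed once and for all,
    # with (we pretend) no description much shorter than writing it out.
    KEY = 0xC0FFEE0DDBA115EED

    def busy_beaver_6() -> None:
        """Placeholder for a computation that runs ~BB(6) steps and then halts."""
        raise NotImplementedError("stand-in for an astronomically long but finite computation")

    def lock_and_key(x: int) -> None:
        if x == KEY:
            busy_beaver_6()  # worst case: the one "key" input triggers ~BB(6) steps
        # any other input: halt immediately, in constant time

    if __name__ == "__main__":
        # almost every input falls through to the constant-time branch
        lock_and_key(int(sys.argv[1]) if len(sys.argv) > 1 else 0)

Worst case is ~BB(6) steps (on input KEY), but a uniformly random input essentially never hits the lock, so the naive average looks constant.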
Yes: assuming what you described is a fixed algorithm T, the complexity of A is just a constant, so the universal distribution assigns input A a constant probability (roughly 2^-K(A)), and the BB(6) branch therefore still contributes a constant fraction of the worst-case runtime to T's average case.
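Spelled out (my notation, not the paper's): write m(x) ~ 2^-K(x) for the universal distribution and t(x) for T's running time on input x. Then

    average time = sum_x m(x)·t(x) >= m(A)·t(A) ~ 2^-K(A)·BB(6),

and since K(A) is a fixed constant once T is fixed, that lower bound is already a constant fraction of the worst case, so average and worst case agree up to a constant factor.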
More generally: the algorithm has to be fixed (uniform); it can't be parameterized by the input size. The results of the paper are asymptotic.
Oh, I see; asymptotically, BB(6) is just O(1), and immediately halting is also O(1). I was really confused because their abstract said "the same order of magnitude," which must mean complexity class in their jargon (I first read it as "within a factor of 10").