Yes—assuming what you described is a fixed algorithm T, the complexity of A is just a constant, and the universal distribution samples input A for T a constant fraction of the time, meaning that this still dominates the average-case runtime of T.
More generally: the algorithm has to be fixed (uniform); it can't be parameterized by the input size. The results of the paper are asymptotic.
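To make the domination argument concrete, here's a toy numerical sketch. All numbers are hypothetical, and the distribution here is a simplified stand-in for the universal distribution (which really weights inputs by roughly 2^-K(x)); the point is only the lower-bound arithmetic.

```python
# Toy illustration: if a single input A is sampled with constant
# probability p, then T's runtime on A alone lower-bounds the
# average-case runtime:
#   E[time] = sum_x Pr[x] * time(x)  >=  p * time(A).

p = 0.25            # hypothetical constant probability of drawing input A
time_on_A = 10**9   # hypothetical (huge) runtime of T on input A
time_elsewhere = 1  # hypothetical runtime of T on everything else

expected = p * time_on_A + (1 - p) * time_elsewhere
assert expected >= p * time_on_A  # the single heavy input dominates
```

No matter how fast T is on every other input, the expected runtime stays within a constant factor of p * time_on_A, which is why a single badly-behaved constant-probability input controls the average case.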
Oh, I see; asymptotically, BB(6) is just O(1), and immediately halting is also O(1). I was really confused because their abstract said "the same order of magnitude," which must mean complexity class in their jargon (I first read it as "within a factor of 10").