@ John: can you really not see the difference between “this is guaranteed to succeed” and “this has only a tiny likelihood of failure”? Those aren’t the same statement.
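For concreteness, here are the two statements side by side. (I’m assuming the “n-bit problem” we’ve been arguing about is the usual promise version: an n-bit string that is either all 0s or contains exactly n/2 ones, and you have to say which. The function names are mine.)

    import random

    def randomized_test(bits, k=64, rng=random):
        # Probe k uniformly random positions. On a half-ones input, each
        # probe misses every 1 with probability 1/2, so we wrongly report
        # "all zeros" with probability at most 2**-k: tiny, but not zero.
        n = len(bits)
        for _ in range(k):
            if bits[rng.randrange(n)] == 1:
                return "has ones"       # seeing a 1 is conclusive
        return "all zeros"              # wrong with probability <= 2**-k

    def deterministic_test(bits):
        # Probe n//2 + 1 fixed positions. A half-ones input has only n/2
        # zeros, so at least one probe must hit a 1: this is *guaranteed*
        # correct under the promise, at the cost of linearly many queries.
        for i in range(len(bits) // 2 + 1):
            if bits[i] == 1:
                return "has ones"
        return "all zeros"

One of these can fail (with probability 2**-64), the other cannot. Different statements.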
“If you play the game this way”—but why would anyone want to play a game the way you’re describing? Why is that an interesting game to play, an interesting way to compare algorithms? It’s not about worst case in the real world, it’s not about average case in the real world. It’s about performance on a scenario that never occurs. Why judge algorithms on that basis?
As for predicting the random bits … Look, you can do whatever you want inside your algorithm. Your queries on the input bits are like sensors into an environment. Why can’t I place the bits after you ask for them, and then just move the 1 bits away from wherever you happened to ask?
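Concretely, the adaptive adversary I’m describing is something like this (same assumed promise problem as in my earlier sketch; the class is just mine for illustration):

    class Adversary:
        # Fix each bit only at the moment it is queried, answering 0 for
        # as long as the promise can still be satisfied by the rest.
        def __init__(self, n):
            self.n = n
            self.fixed = {}     # position -> bit already revealed

        def query(self, i):
            if i not in self.fixed:
                zeros = sum(1 for b in self.fixed.values() if b == 0)
                # Answer 0 while fewer than n/2 zeros have been handed
                # out; only then must the remaining bits contain the 1s.
                self.fixed[i] = 0 if zeros < self.n // 2 else 1
            return self.fixed[i]

Note that query() never looks at how you chose i. It doesn’t care whether your probe positions came from coin flips or a for-loop: the first n/2 answers are 0 either way, and afterwards I’m still free to complete the string as all-zeros or as half-ones, whichever makes your answer wrong.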
The point is that you decided on a scenario with zero relevance to the real world, did some math about that scenario, and concluded you’d learned something about these algorithms that’s useful when applying them in the real world.
But you didn’t. Your math is irrelevant to how these things will perform in the real world, because your scenario has nothing to do with any actual scenario we see in deployment.
(Just as an example: you still haven’t acknowledged the difference between real random sources—like a quantum counter—and PRNGs—which are actually deterministic! Yet if I presented you with a “randomized algorithm” for the n-bit problem that actually used a PRNG, I suspect you’d say “great! good job! good complexity”. Even though the algorithm, as it actually runs, is deterministic, and goes against everything you’ve ostensibly been arguing during this whole thread. You need to understand that the real key is the expected (anti-)correlations between the deterministic choices of the algorithm and the input data. PRNGs are sufficient to drop those expected (anti-)correlations low enough for us to be happy.)
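To make the PRNG point concrete (the seed and generator are arbitrary choices of mine): seeding the tester from my earlier sketch makes it a completely deterministic procedure, yet it performs exactly as the randomized analysis predicts on real inputs.

    import random

    def prng_test(bits, k=64, seed=0xC0FFEE):
        # random.Random(seed) is a PRNG: the probe sequence is a fixed,
        # fully deterministic function of the seed, identical on every run.
        rng = random.Random(seed)
        n = len(bits)
        for _ in range(k):
            if bits[rng.randrange(n)] == 1:
                return "has ones"
        return "all zeros"

An adversary who knew the seed could lay out the n/2 ones to dodge all k probes. Deployment inputs don’t know the seed, so the expected (anti-)correlation between the probe sequence and the data stays negligible, and that’s the property that actually does the work.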