I wonder if the reason that polynomial time algorithms tend to be somewhat practical (not runtime n^100) is just that we aren’t smart enough to invent polynomial time algorithms that are really, necessarily complicated.
Like, the obvious way to get n^100 is to nest 100 for loops. A problem which can only be solved in polynomial time by nesting 100 for loops (presumably doing logically distinct things that cannot be collapsed!) is a problem that I am not going to solve in polynomial time…
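As a toy sketch of what I mean (my own illustration, not any real problem): each genuinely independent nested loop multiplies in another factor of n, so 100 of them would give n^100.

```python
# Minimal sketch: each nested loop over the input multiplies the runtime
# by another factor of n. Three loops -> n^3 iterations; a problem truly
# requiring 100 non-collapsible loops would be n^100.
def triple_loop_count(xs):
    n = len(xs)
    count = 0
    for i in range(n):          # factor of n
        for j in range(n):      # factor of n^2
            for k in range(n):  # factor of n^3
                # stand-in for "logically distinct work" at each level
                count += 1
    return count

assert triple_loop_count(list(range(5))) == 5 ** 3
```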
Reasons I deem more likely:
1. Selection effect: if it’s infeasible you don’t work on it / don’t hear about it; in my personal experience, n^3 is already slow
2. If k in n^k is high, you probably have some representation where k is a parameter, and so you say the algorithm is exponential in k, not that it’s polynomial
1: Not true, I hear about exponential time algorithms! People work on all sorts of problems only known to have exponential time algorithms.
2: Yes, but the reason k only shows up as something we would interpret as a parameter, rather than as the degree of an algorithm invented for a natural problem, is perhaps my original point: we can only invent the algorithm if the problem has structure that suggests it, in which case the algorithm is collapsible, and k can be separated out as an additional input to a simpler algorithm.
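To illustrate the kind of collapse I mean (a toy sketch of my own, not a real algorithm): if the k nested loops all have the same shape, they collapse into a single loop over k-tuples, at which point k is just an ordinary input and the n^k runtime reads as exponential in the parameter k.

```python
from itertools import product

# Sketch of the "collapse": k same-shaped nested loops become one loop
# over k-tuples. k is now an ordinary input, and the n^k iteration count
# looks exponential in the parameter k rather than polynomial in n.
def collapsed_loops(xs, k):
    n = len(xs)
    count = 0
    for _ in product(range(n), repeat=k):  # n^k tuples
        count += 1
    return count

assert collapsed_loops(list(range(5)), 3) == 5 ** 3
```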
I think the canonical high-degree polynomial problem is high-dimensional search. We usually don’t implement exact grid search because we can deploy Monte Carlo methods or gradient descent instead. I wonder if there are any hard lower bounds on approximation hardness for polynomial-time problems.
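As a rough sketch of the tradeoff (the objective here is hypothetical, chosen only so evaluations are easy to count): exact grid search pays m^d evaluations in d dimensions, while Monte Carlo sampling spends a fixed budget regardless of d, trading exactness for feasibility.

```python
import random
from itertools import product

# Toy objective on [0, 1]^d; its minimum is at (0.5, ..., 0.5).
def f(x):
    return sum((xi - 0.5) ** 2 for xi in x)

def grid_search_min(d, m):
    # Exact grid search: m points per axis -> m^d evaluations of f.
    grid = [i / (m - 1) for i in range(m)]
    return min(f(x) for x in product(grid, repeat=d))

def monte_carlo_min(d, budget, rng=random.Random(0)):
    # Monte Carlo: a fixed budget of random points, independent of d.
    return min(f([rng.random() for _ in range(d)]) for _ in range(budget))

# d = 4, m = 10: grid search does 10^4 evaluations; Monte Carlo gets
# close to the optimum here with only 1000.
print(grid_search_min(4, 10), monte_carlo_min(4, 1000))
```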
My guess would just be that it’s more of a strong law of small numbers type thing. Look at, e.g., this.