What kind of example do you have in mind? Even for algorithmic problems with relatively small communities (and low $ invested) I think it’s still pretty rare to have big jumps like this (e.g. measuring performance for values of n at the scale where it can be run on conventional computers). I’m thinking of domains like combinatorial optimization and constraint satisfaction, convex optimization, graph algorithms. In most cases I think you get to a pretty OK algorithm quite quickly and further progress is slow. Exceptions tend to be cases like “there was an algorithm that works well in practice but not in the worst case,” or “the old algorithm got an approximation ratio of 0.87 but the new one gets an approximation ratio of 0.92, and if you absolutely require 0.9 the new one is very much faster,” or extremely niche problems. But my knowledge isn’t that deep.
(And maybe what I’d say quantitatively is that something like ~10% of the log-space progress on problems people care about comes from big jumps vs 90% from relatively smooth improvements, with the number getting lower for stricter notions of “people care about” and the step size for “relatively smooth improvement” being defined by how many people are working on it.)
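(For concreteness, with purely illustrative numbers: if the practical cost of solving a problem dropped by a factor of 10^6 over some period, that’s 6 orders of magnitude of log-space progress, and the claim would be that big jumps account for ~0.6 of them, i.e. a factor of about 4x, with the remaining ~10^5.4, roughly 250,000x, coming from the smooth improvements.)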
(I’m sure you know more than I do about algorithms.)
What kind of example do you have in mind?
~10% of the log-space progress on problems people care about comes from big jumps vs 90% from relatively smooth improvements,
I’m thinking of the difference between insertion sort / bubble sort vs radix sort / merge sort.
(Knuth gives an interesting history here (The Art of Computer Programming, Vol. 3, section 5.5, p. 383); apparently in 1890 the US census data was processed using the radix sorting algorithm running on a mechanical-electronic-human hybrid. There was an order-preserving card-stack merging machine in 1938. Then in 1945, von Neumann wrote down a merge sort, while independently Zuse wrote down an insertion sort.)
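To give a sense of the size of that jump, here’s a minimal sketch (pure Python, with arbitrary input sizes chosen only for illustration) contrasting the two: insertion sort does O(n²) work while merge sort does O(n log n), so the gap keeps widening as n grows.

```python
import random
import time


def insertion_sort(a):
    """O(n^2): shift each new element left until it sits in order."""
    a = list(a)
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a


def merge_sort(a):
    """O(n log n): sort each half recursively, then merge."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out


if __name__ == "__main__":
    # Sizes are arbitrary; insertion sort's time roughly quadruples each
    # time n doubles, while merge sort's grows only slightly faster than
    # linearly.
    for n in (1_000, 4_000, 8_000):
        data = [random.random() for _ in range(n)]
        t0 = time.perf_counter()
        insertion_sort(data)
        t1 = time.perf_counter()
        merge_sort(data)
        t2 = time.perf_counter()
        print(f"n={n:>6}: insertion {t1 - t0:6.2f}s  merge {t2 - t1:6.2f}s")
```

(Radix sort isn’t shown since it needs assumptions about the keys, but the insertion-vs-merge gap already captures the qualitative point.)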
I guess we’re talking past each other because we’re answering different versions of “What is continuous in what?”. Performance on a task can be, and is, much more continuous in time than “ideas” are continuous in time, because translating ideas into performance on a task takes resources (money, work, more ideas). So I concede that what I said here:
I think often performance on a task gets jumps
was mostly incorrect, if we don’t count the part where
you get to a pretty OK algorithm quite quickly
So one question is: is TAI driven by ideas that have a stage where they get to a pretty okay version quite quickly once the “idea” is there, or not, or something else? Another question is: do you think “ideas” are discontinuous?