But at the same time, humans are able to construct intricate logical artifacts like the general number field sieve, which seems to require many more steps of longer inferential distance, and each step could only have been made by one of the small number of specialists in number theory or algebraic number theory who were around and thinking about factoring algorithms at the time. (Unlike the step in the OP, which seemingly anyone could have made.)
Can you make sense of this?
Here’s a crack at it:
The space of possible inferential steps is very high-dimensional, most steps are difficult, and there’s no known way to strongly bias your policy towards making simple-but-useful steps. Human specialists, therefore, can at best pick a rough direction that leads towards accomplishing some goal they have, and then attempt random steps roughly pointed in that direction. Most of those random steps are difficult. A human succeeds if the step’s difficulty is below some threshold, and otherwise fails and goes back to square one. Over time, this results in a biased-random-walk process that stumbles upon a useful application once in a while. If one then looks back, one often sees a sequence of very difficult steps that led to this application (with a bias towards steps at the very upper end of what humans can tackle).
In other words: the space of steps is more high-dimensional than human specialists are numerous, and our motion through it is fairly random. If one picks some state of human knowledge and considers all directions in which anyone has ever attempted to move from that state, that wouldn’t produce a comprehensive map of that state’s neighbourhood. There’s therefore no reason to expect that all the “low-hanging fruit” has been picked, because locating a low-hanging fruit is often harder than picking some high-hanging one.
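To make that selection effect concrete, here’s a tiny toy simulation (my own sketch with made-up numbers, not something from the thread): candidate steps are mostly too difficult, a specialist only completes a step whose difficulty falls below their threshold, and any failed step sends the attempt back to square one. Among the rare chains that do get completed, step difficulties pile up toward the threshold, which is one way to read the “bias towards steps at the very upper end of what humans can tackle”.

```python
import random

SKILL = 2.0        # difficulty threshold the specialist can clear (made-up)
CHAIN_LENGTH = 3   # successful steps needed to reach a "useful application"
N_ATTEMPTS = 200_000

def draw_difficulty(rng):
    # Most of the candidate-step mass sits above SKILL: "most steps are difficult".
    return max(0.0, rng.gauss(5.0, 2.0))

def attempt_chain(rng):
    """Try to string CHAIN_LENGTH steps together; one too-hard step aborts the attempt."""
    steps = []
    for _ in range(CHAIN_LENGTH):
        d = draw_difficulty(rng)
        if d > SKILL:
            return None  # back to square one
        steps.append(d)
    return steps

rng = random.Random(0)
completed = [c for _ in range(N_ATTEMPTS) if (c := attempt_chain(rng)) is not None]
difficulties = [d for chain in completed for d in chain]

print(f"chains completed: {len(completed)} / {N_ATTEMPTS}")
# Histogram of difficulties that made it into completed chains: they skew toward
# SKILL, i.e. toward the upper end of what the specialist could tackle.
bins = [0.0, 0.5, 1.0, 1.5, 2.0]
for lo, hi in zip(bins, bins[1:]):
    count = sum(lo <= d < hi for d in difficulties)
    print(f"  difficulty in [{lo:.1f}, {hi:.1f}): {count}")
```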
Generally agree, with the caveat that the difficulty of a step is somewhat dependent on contingent properties of a given human mind.
At this point, I’m not surprised by this sort of thing at all, only semi-ironically amused, but I’m not sure I can convey why it no longer surprises me (although I surely would have been surprised if somebody had made it salient to me some 5 or 10 years ago).
Perhaps I just got inoculated by reading about people making breakthroughs with concepts that are simple or obvious in hindsight, or even by hearing ideas from other people that I thought were obviously relevant/valuable to have in one’s portfolio of models, even though for some reason I hadn’t had them myself until then, or at least they had been less salient to me than they should have been.
Anders Sandberg said that he had had all the pieces of the Grabby Aliens model on the table and had simply failed to think of an obvious way to put them together.
One frame (of unclear value) I have for this kind of thing is that the complexity/salience/ease-of-finding of an idea is different before and after you’ve found it because, well, a bunch of stuff in the mind is different.