Computational complexity may place strong limits on how much recursive self-improvement can occur, especially in a software context. See e.g. this prior discussion and this ongoing one. In particular, if P is not equal to NP in a strong sense, this may place serious limits on software improvement.
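To make the worry concrete, here is a toy sketch (my own illustration, not from the linked discussions): if a task requires brute-force search over exponentially many candidates, as strong forms of the exponential time hypothesis conjecture for problems like 3-SAT, then multiplying available compute only adds a constant to the largest feasible instance size.

```python
# Toy hardness model: suppose a task needs ~ 2**(c*n) operations at
# instance size n. Then a k-fold compute gain only grows the largest
# feasible n additively, by log2(k)/c. The budgets below are hypothetical.

import math

def max_feasible_n(ops_budget: float, c: float = 1.0) -> float:
    """Largest n with 2**(c*n) <= ops_budget (toy model)."""
    return math.log2(ops_budget) / c

base = max_feasible_n(1e18)           # hypothetical baseline compute budget
boosted = max_feasible_n(1e18 * 1e6)  # after a million-fold improvement
print(f"baseline n ~ {base:.1f}, boosted n ~ {boosted:.1f}")
# A 10^6x gain in raw power buys only ~20 extra variables: exponential
# problems turn multiplicative resource gains into additive capability gains.
```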
> In particular, if P is not equal to NP in a strong sense, this may place serious limits on software improvement.
Why oh why do you still believe this? In my mind, this is strongly analogous to pointing out that there are physical limits on how intelligent an AI can get, which is true, but for all practical purposes irrelevant, since those limits are far above what humans can do, given our state of knowledge. The objection would only make sense if we saw a specific reason that no algorithm can exhibit superintelligent competence in the real world (as opposed to the ability to solve randomly generated standard-form problems whose complexity can be analyzed by human mathematicians), but we don't understand intelligence nearly well enough to carry out such inferences.
> Why oh why do you still believe this? In my mind, this is strongly analogous to pointing out that there are physical limits on how intelligent an AI can get, which is true, but for all practical purposes irrelevant, since those limits are far above what humans can do, given our state of knowledge.
This is not a good analogy at all. The probable scale of the difference is what matters here. Where physical limits are concerned, we are extremely far from them mattering, as one can see, for example, from the fact that Koomey's law could continue for about forty years before hitting physical limits. (It will likely break down before then, but that's not the point.) In contrast, the limits suggested by computational complexity are stricter in some respects, though weaker in others. Conjectured limits such as strong versions of the exponential time hypothesis place much more severe bounds on what can occur.
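For concreteness, here is a back-of-the-envelope version of that forty-year figure. The Landauer bound and the roughly 1.57-year efficiency-doubling time reported by Koomey are standard; the baseline efficiency figure is an illustrative assumption on my part, not a measured value.

```python
# Sketch of the Koomey's-law headroom calculation. The baseline efficiency
# below is an ASSUMPTION chosen for illustration; the physical constants
# and the doubling time are the standard figures.

import math

k_B = 1.380649e-23                      # Boltzmann constant, J/K
T = 300.0                               # room temperature, K
landauer_J = k_B * T * math.log(2)      # min energy per irreversible bit op
limit_ops_per_joule = 1.0 / landauer_J  # ~3.5e20 bit ops per joule

baseline_ops_per_joule = 1e13   # ASSUMPTION: illustrative current efficiency
doubling_years = 1.57           # doubling time reported by Koomey et al.

doublings = math.log2(limit_ops_per_joule / baseline_ops_per_joule)
years = doublings * doubling_years
print(f"~{doublings:.0f} doublings of headroom, ~{years:.0f} years at trend")
# With these assumptions: ~25 doublings, i.e. roughly forty years before
# irreversible computing runs into the Landauer limit.
```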
It is important to note here that these sorts of limits are relevant primarily in the context of a software-only, or primarily software-only, recursive self-improvement. For essentially the reasons you outline (the large amount of apparent room for physical improvement), it seems likely that this will not matter much for an AGI that has much ability to discover or construct new physical systems. (Some limits of that form do still apply, but they are likely to be comparatively weak.)