It might be worthwhile to note that cogent critiques of the proposition that a machine intelligence might very suddenly “become a singleton Power” do not deny that the inefficiencies of the human cognitive architecture offer room for improvement via recursive introspection and recoding, nor do they deny the gains readily available through substitution of more capable hardware and expansion of I/O.
They do, however, highlight the distinction between a vastly powerful machine madly exploring vast reaches of a much vaster “up-arrow” space of mathematical complexity, and a machine of the same power whose growth in intelligence—by definition necessarily relevant intelligence—is bounded by starvation for relevant novelty in its environment of interaction.
If, Feynman-like, we imagine the present state of knowledge about our world as a distribution of vertical domains, like silos—some broad, with relevance to many diverse facets of real-world interaction; some thin, towering into the haze of leading-edge mathematical reality—then we can imagine the powerful machine quickly identifying and making a multitude of latent connections and meta-connections, filling in the space between the silos and even somewhat above. But to what extent, given the inevitable diminishing returns among the latent, and the resulting starvation for the novel?
Given such boundedness, speculation is redirected to growth in ecological terms, and the Red Queen’s Race continues ever faster.