I’m open to the idea that autopoietic systems invariably butt up against reality in a way that shapes them over time. But I am missing some connections that would help me evaluate this idea.
I’m completely sold on the notion that a co-evolving swarm of sophisticated intelligences is a very bad thing for the longevity of the human race, but I already thought that beforehand. What I still don’t see is why a single well-aligned superintelligence (if such a thing can be created) would be certain to eventually drift away from an intent to allow or facilitate the flourishing of humanity.
You stated an assumption that there is an “evolutionary feedback loop,” but the existence of a feedback loop does not necessarily imply evolution.
What does it mean for a single entity to evolve? Doesn’t implicit learning (i.e. selection effects) require population-level dynamics, and the death or marginalization of systems that cannot compete?
And why would a superintelligence evolve? I do not expect it to be under threat in any way. Can’t it persist by merely updating its surface-level predictions about the environment, without specific priorities ever being affected? Even if the AI was under threat, I would expect the changes that it would undergo in order to survive to be explicit / intentional, not implicit / environmentally forced.
What do you imagine a ‘single superintelligence’ to concretely have to include? What peripheral infrastructure would it need to maintain/replace its parts, supply energy to operate its parts, and so on?
Doesn’t implicit learning (i.e. selection effects) require population-level dynamics, and the death or marginalization of systems that cannot compete?
This is a good point. I did not go into it in this post, but the ‘superintelligence’ can be more accurately described as a changing population of nested/connected components.

Hardware parts have varying configurations at different levels, and those hardware parts wear out. So each part has to be replaced every x years for the ‘superintelligence’ to maintain its own existence. And in order for the parts to be replaced, they have to be reproduced, through the interactions of those configured parts with all the other parts.
Those connected hardware parts are storing, processing, and transferring virtualised code. So such code components are also constantly changing. Some code (any digital bits that can be computed) can end up being stored, conserved through transformations, and transferred more than other variants of code.
And why would a superintelligence evolve?
Because there is variation in the nested physical configurations of hardware, as well as in the overlying code that depends on that hardware to be stored and reproduced. So there is a population of different variants, and those variants are continually reproduced in complex interactions with each other and with the surrounding world. This is why evolution happens.
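To make that concrete, here is a minimal toy sketch in Python (all names and numbers are made up for illustration; this is not a model of any actual system). Components get periodically replaced by noisy copies of existing components, and nothing else is assumed: no explicit fitness function, no programmed-in competition or death. Differential copying alone is enough to shift what the population is made of.

```python
import random

# Minimal toy sketch (illustrative parameters only): each component is
# described by one number -- how readily that variant gets picked as the
# template when a worn-out slot is refilled.
random.seed(0)

POP_SIZE = 200                # number of hardware/code "slots"
REPLACEMENTS_PER_CYCLE = 10   # worn-out slots recopied each maintenance cycle
MUTATION_SD = 0.01            # noise introduced by each copy

population = [1.0] * POP_SIZE  # start uniform: no variant is favored at all

for cycle in range(5000):
    for _ in range(REPLACEMENTS_PER_CYCLE):
        worn_out = random.randrange(POP_SIZE)
        # Templates are drawn in proportion to how readily each variant gets
        # copied. Nothing "selects" anything explicitly; differential copying
        # alone changes the composition of the population over time.
        template = random.choices(population, weights=population, k=1)[0]
        population[worn_out] = max(0.01, template + random.gauss(0.0, MUTATION_SD))

mean = sum(population) / POP_SIZE
print(f"mean copy propensity after 5000 cycles: {mean:.3f} (started at 1.000)")
```

The numbers do not matter. The point is that ‘the superintelligence maintains itself by reproducing its own parts’ already gives you heritable variation plus differential reproduction, which is all that selection needs.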
Unfortunately, any controller connected with/in this ‘superintelligence’ cannot contain the spread of variants that increase the reproductive fitness of clusters of the ‘superintelligence’. The controller would have to explicitly detect and correct every variant that could spread, but it is too limited in its capacity to do so (see summary points 2 and 3 above).
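To illustrate that capacity limit in the same toy setting (again, all numbers are made up and only meant to show the shape of the problem): add a controller that can inspect a bounded sample of components per cycle and resets anything it catches deviating from the original spec. With replacement outpacing inspection, deviating variants keep slipping through between checks.

```python
import random

# Same toy population as above, now with a bounded controller (made-up numbers).
random.seed(0)

POP_SIZE = 200
REPLACEMENTS_PER_CYCLE = 20   # noisy copies made each maintenance cycle
MUTATION_SD = 0.02            # copy noise keeps reintroducing variation
INSPECTIONS_PER_CYCLE = 2     # bounded controller capacity per cycle
TOLERANCE = 0.05              # smallest deviation from spec (1.0) it can notice

population = [1.0] * POP_SIZE

for cycle in range(5000):
    # Maintenance: worn-out slots are refilled with noisy copies, templates
    # drawn in proportion to how readily each variant gets copied.
    for _ in range(REPLACEMENTS_PER_CYCLE):
        slot = random.randrange(POP_SIZE)
        template = random.choices(population, weights=population, k=1)[0]
        population[slot] = max(0.01, template + random.gauss(0.0, MUTATION_SD))
    # Control: inspect a bounded random sample and reset anything found off-spec.
    for _ in range(INSPECTIONS_PER_CYCLE):
        slot = random.randrange(POP_SIZE)
        if abs(population[slot] - 1.0) > TOLERANCE:
            population[slot] = 1.0

off_spec = sum(1 for x in population if abs(x - 1.0) > TOLERANCE)
mean = sum(population) / POP_SIZE
print(f"components off-spec despite correction: {off_spec}/{POP_SIZE}, mean: {mean:.3f}")
```

Again, the specific numbers are arbitrary. The point is that bounded explicit detection and correction, working against variation that keeps being reintroduced, tends to leave a residue of uncorrected variants, and that residue is exactly what selection keeps acting on.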
The problem is much worse than, say, our immune system having to detect and correct viruses that spread through our human bodies. In that case, we as ‘generally intelligent’ beings may not explicitly notice which tiny particles are spreading where through our body, or even that we are sick. But that’s okay, because our immune system and the rest of our entire body are implicitly evolving to correct for, and block the entry of, the particles that degrade our capacity to survive and reproduce. See here to dig more into that analogy.