I think the “singleton” case is generally not sufficiently analyzed in the literature. It is treated as something magical, without an internal structure that could be discussed. A rationalist analysis should aim to do better than that.
Nobody is asking what might be inside: would it still be a Minsky-style “society of mind”, and if so, what would the relationships be between the various components of that “society of mind”, and so on.
In particular, how would it evolve its own internal structure, its distribution of goals, and so on?
People seem to be hypnotized by the image of an “all-powerful God”, which somehow prevents them from trying to think about how it might actually work (given that the Universe will still not be fully known, there will still be quite a bit of value in open-endedness, in discovery, and so on).
But none of this implies that we can rely on stratification into individuals being the most likely default scenario.
Still, the bulk of the risk is self-destruction of the whole ecosystem of super-intelligent AIs together with everything else, regardless of how it is structured and stratified. A singleton is just as likely to stumble into unsafe experiments in fundamental physics, as long as its internal critics are not strong enough.
An ecosystem of super-intelligent AIs (regardless of how it is structured and stratified) that is decent enough to navigate this main risk is not a bad starting point from the viewpoint of human interests either. Something within it is sufficiently healthy if it can reliably avoid self-destruction; see my earlier note for more details: https://www.lesswrong.com/posts/WJuASYDnhZ8hs5CnD/exploring-non-anthropocentric-aspects-of-ai-existential