I’ll coin the term Monolithic Multipolarity for what I think you mean here: one stable structure that has different modes activated at different times, and these modes don’t share goals — like a human, especially a schizophrenic one.
The problem with Monolithic Multipolarity is that it is fragile. In humans, whatever causes us to behave differently and want different things at different times is not accessible for revision; otherwise, each party would have an incentive to steal the other’s time. An AI would not need to deal with such a constraint, since, by the definition of explosive recursive self-improvement, it can rewrite itself.
We need other people, but Bostrom doesn’t leave simple considerations like this out easily.
One mode could have the goal of acting like the graphite moderator in a nuclear reactor: preventing an unmanaged explosion.
At this point I just wanted to improve our estimate of the probability that there is only one superintelligence in the starting period.