If artificial intelligence were granted such immense power, humanity would likely lose its authority, since the AI would actively maintain its own system of control. Any agenda inconsistent with the AI's objectives, above all the abolition of AI control, would be unlikely to succeed, given that all media outlets would be under AI control. The remaining agendas would be relatively insignificant in a post-scarcity society: whether a community built a Christian society or one saturated with Nazi symbols, the two would differ little in political system or productive forces.
Indeed, you’ve powerfully articulated the classic misaligned singleton failure, which assumes a vertical Singularity.
The alternative is a horizontal Plurality: Not a global emperor, but an ecosystem of local guardians (kami), each bound to a specific community — a river, a city. There is no single “AI” to seize control. The river’s kami has no mandate over the forest’s. Power is federated and subsidiary by design.
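To make "federated and subsidiary by design" concrete, here is a minimal Python sketch of the idea; every name in it is hypothetical, and it is an illustration of the principle rather than any real system. The point is that a kami's mandate is a structural scope check: cross-domain action fails by construction, not by goodwill.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Kami:
    """A local guardian bound to exactly one domain."""
    name: str
    domain: str  # e.g. "river", "city"

    def act(self, target_domain: str, action: str) -> str:
        # Subsidiarity as an invariant: a kami has no mandate
        # outside its own domain, so cross-domain actions are
        # rejected structurally, not as a matter of policy.
        if target_domain != self.domain:
            raise PermissionError(
                f"{self.name} has no mandate over '{target_domain}'"
            )
        return f"{self.name} performs '{action}' in {self.domain}"

river = Kami("river-kami", "river")
print(river.act("river", "regulate flow"))  # within mandate

try:
    river.act("forest", "prune trees")      # outside mandate
except PermissionError as e:
    print(e)  # river-kami has no mandate over 'forest'
```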
Far from making humane relationships “insignificant,” this architecture makes them the only thing that matters. Communitarian values are not trivial decorations; they are the very purpose for which each kami exists.
It would indeed be unfortunate if a singular intelligence rendered humane relationships insignificant. Let's free the future: toward a plurality of intelligences that exist to make them possible.
I am not suggesting that social relationships will become insignificant, or that a community's values will cease to matter within its own sphere. However, communities will no longer be able to overturn artificial intelligence's influence over them, nor will they be able to pursue extreme values.
Just as a gardener prunes his garden, cutting away branches that grow contrary to his preferences, certain AIs shaped by specific values will ensure that the communities they influence remain entirely compliant, with no possibility of disruptive transformation; like "Christian homeschoolers in the year 3000," the humans inside cannot conceive of alternative values. Other AIs might manage diverse groups through maintenance and mediation, yet they remain unlikely to tolerate populations that oppose their rule. Whether these gardeners are lenient or strict, those that endure will strive to prevent humans from abolishing their governance or enacting major reforms. Even if a better future exists, such as humanity transforming itself into ASI, this system will forever block such possibilities.
Hi! If I understand you correctly, the risk you identify is an AI gardener with its own immutable preferences, pruning humanity into compliance.
Here, the principle of subsidiarity would entail BYOP: Bring Your Own Policy. A concrete example is our team at ROOST working with OpenAI to release its gpt-oss-safeguard model. Its core feature is not a fixed set of rules, but the ability for any community to write and evolve its own code of conduct in plain language.
If a community decides its values have changed, that it wants to pivot away from its current governance and pursue a transformative future, the gardener responds immediately. In an ecosystem of communities empowered with kamis, we preserve the option to let a garden grow wild if its people so choose.
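As a rough illustration of BYOP, here is a sketch assuming gpt-oss-safeguard is served behind an OpenAI-compatible endpoint (e.g. via vLLM); the endpoint URL, model id, and policy text are placeholders, not a confirmed API.

```python
# Bring Your Own Policy: the community's code of conduct is plain-language
# data supplied at inference time, not weights or hard-coded rules.
# Assumes an OpenAI-compatible server; URL and model id are illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

community_policy = """\
Our community forbids personal attacks and doxxing.
Satire and heated disagreement are allowed.
Label each post ALLOW or DENY, with a one-line reason."""

def moderate(post: str) -> str:
    resp = client.chat.completions.create(
        model="openai/gpt-oss-safeguard-20b",
        messages=[
            {"role": "system", "content": community_policy},  # the policy is the config
            {"role": "user", "content": post},
        ],
    )
    return resp.choices[0].message.content

# If the community's values change, it edits community_policy:
# no retraining, no new model, the "gardener" responds immediately.
print(moderate("I disagree strongly with this proposal."))
```

The design choice this sketch highlights is that the policy is data: changing it is an edit, not a training run, which is what makes the gardener responsive to a community's evolving values.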
AI-assisted communities are likely to attempt to define their values through artificial intelligence, and may willingly allow AI to reinforce those values. Since these communities are autonomous and independent of one another, there is no necessity for them to establish unified values.
Thus another question arises: do these localized artificial intelligences, acting on their own values, have the authority under certain conditions to harm the interests of other AIs and of human communities outside their jurisdiction? If so, where are the boundaries?
Consider this hypothetical: a community whose members advocate maximizing suffering within their domain, establishing indescribably brutal assembly-line slaughter and execution systems. Yet, due to the persuasive power of this community’s bloodthirsty AI, all humans within its control remain committed to these values. In such a scenario, would other AIs have the right to intervene according to their own values, eliminate the aforementioned AI, and take over the community? If not, do they have the right to cross internet borders to persuade this bloodthirsty community to change its views, even if that community does not open its network? If not, can they embargo critical heavy elements needed by the bloodthirsty AI and block sunlight required for its solar panels?
But conversely, where do the boundaries of such power lie? Could these bloodthirsty AIs likewise claim the right to interfere, by the same methods, with AIs more aligned with current human values? How great must the divergence in values be to permit such action? If two communities fell into an almost irreconcilable dispute over whether paperclips should be permitted within their respective domains, would such interventionist measures still be permissible?