I remember you from the Pugs days. Two questions about this presentation. One is more aspirational: do you think of this society of AIs as more egalitarian (many superhuman AIs at roughly the same level) or more hierarchical (a range of AI sizes, with the largest hopefully being the most aligned to those below)? And the other is more practical. Right now the AI market is locked in an arms race kind of situation, and in particular, scrambling to make AIs that will bring commercial profit. That can lead to nasty incentives, e.g. an AI working for a tax software company can help it lobby the government to keep tax filing difficult, and of course much worse things can be imagined as well. If this continues, all the nice vision of kami and so on will just fail to exist. What is to be done, in your opinion?
Hi! Great to hear from you. “Optimize for fun” (‑Ofun) is still very much the spirit of this 6pack.care work.
On practicality (bending the market away from arms‑race incentives): here are some levers, inspired by Taiwan’s tax‑filing case, that have worked to shift returns from lock‑in to civic care:
Interoperability: Make “data portability” the rule. Mandate fair protocol‑level interop so users and complements can exit without losing their networks. Platforms must compete on quality of care, not captivity.
Public options: Offer simple public options (and shared research compute) so there’s always a baseline service that is easy, safe, and non‑extractive. Private vendors must beat it on care, not on lock‑in.
Provenance for paid reach: For ads and mass reach in political or financial domains, require verifiable sponsorship and durable disclosure (see the sketch after this list). Preserve anonymity for ordinary speech via meronymity.
Mission‑locked governance: Through procurement rules, ensure steward‑ownership/benefit structures and board‑level safety duties, so that “civic care” is a fiduciary obligation, not a marketing slogan.
Alignment assemblies: Institutionalize these assemblies and localized evals; pre‑commit vendors to adopt their outcomes or explain deviations. Federate trust & safety so threat intel flows without central chokepoints.
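To make the paid‑reach provenance lever a bit more concrete, here is a minimal sketch in Python of what a durable, verifiable disclosure record could look like. The field names and the HMAC‑based signing against a registry key are illustrative assumptions on my part, not an existing standard; a real deployment would more likely use public‑key signatures.

```python
# Illustrative sketch only: a durable disclosure record for paid reach in
# political/financial domains, signed so third parties can verify sponsorship
# even after the ad itself is taken down. Field names are assumptions.
import hashlib
import hmac
import json
from dataclasses import asdict, dataclass


@dataclass
class PaidReachDisclosure:
    sponsor_id: str        # verified legal entity behind the ad buy
    payer_id: str          # who actually paid, if different from the sponsor
    content_hash: str      # SHA-256 of the promoted creative
    audience_scope: str    # e.g. "political" or "financial"
    placement_window: str  # ISO-8601 interval the campaign runs


def sign_disclosure(record: PaidReachDisclosure, registry_key: bytes) -> str:
    """HMAC over the canonical JSON form; in practice a public registry
    would likely use asymmetric signatures, but HMAC keeps the sketch short."""
    canonical = json.dumps(asdict(record), sort_keys=True).encode()
    return hmac.new(registry_key, canonical, hashlib.sha256).hexdigest()


def verify_disclosure(record: PaidReachDisclosure, signature: str,
                      registry_key: bytes) -> bool:
    """Check that a stored disclosure still matches its signature."""
    return hmac.compare_digest(sign_disclosure(record, registry_key), signature)
```

The point is only that sponsorship stays checkable long after a campaign ends, while ordinary speech keeps its meronymous protections.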
On symbiosis: the kami view is neither egalitarian sameness nor fixed hierarchy. It’s a bounded, heterarchical ecology: many stewards with different scopes, coordinating without a permanent apex. (Heterarchy = overlapping centers of competence; authority flows to where the problem lives.)
Egalitarianism would imply interchangeable agents. As capabilities grow, we’ll see a range of kami sizes: a steward for continental climate models won’t be the same as one for a local irrigation system. That’s diversity of scope, not inequality of standing.
Hierarchy would imply command. Boundedness prevents that: each kami is powerful only within its scope of care and is designed for “enough, not forever.” The river guardian has neither mandate nor incentive to run the forest.
When scopes intersect, alignment is defined by civic care: each kami maintains the relational health of their shared ecosystem at the speed of the garden. Larger systems may act as ephemeral conveners, but they don’t own the graph or set permanent policy. Coordination follows subsidiarity and federation: solve issues locally when possible; escalate via shared protocols when necessary. Meanwhile, procedural equality (the right to contest, audit, and exit) keeps the ecology plural rather than feudal.
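If it helps to see that coordination rule as code, here is a tiny thought‑sketch in Python. Kami, Issue, and the assembly fallback are hypothetical names chosen for illustration, not part of any existing system.

```python
# Thought-sketch of subsidiarity and federation among bounded stewards.
# All names here are illustrative, not an existing API.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Issue:
    description: str
    scope: str  # e.g. "river", "forest", "watershed"


@dataclass
class Kami:
    scope: str
    can_resolve: Callable[[Issue], bool]  # local competence check
    broader: Optional["Kami"] = None      # wider-scope steward it may escalate to

    def handle(self, issue: Issue) -> str:
        # Subsidiarity: solve locally when possible...
        if self.can_resolve(issue):
            return f"resolved within scope '{self.scope}'"
        # ...federation: escalate via a shared protocol when necessary.
        if self.broader is not None:
            return self.broader.handle(issue)
        # No permanent apex: what remains goes to an ephemeral convening
        # of affected stewards, not to a standing super-authority.
        return "convene an ad-hoc assembly of affected stewards"


# Usage: the river guardian defers a watershed-scale problem upward.
watershed = Kami("watershed", can_resolve=lambda i: i.scope in {"river", "watershed"})
river = Kami("river", can_resolve=lambda i: i.scope == "river", broader=watershed)
print(river.handle(Issue("silt buildup upstream", scope="watershed")))
```

Authority flows to where the problem lives, and nothing in the sketch gives any steward a standing claim on the whole graph.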