I briefly read the 6pack.care website and your post. It sounds to me like an idea supplementary to existing AI safety paradigms rather than one that solves the core problem of aligning AIs. Looking at your website, I see it is already assumed that AI is mostly aligned, and issues with rogue AIs are not mentioned in the risks section, and so on.
A midsize city is hit by floods. The city launches a simple chatbot to help people apply for emergency cash. Here is what attentiveness looks like in action:
Listening. People send voice notes or texts, or visit a kiosk. Messages stay in the original language, with a clear translation beside them. Each entry records where it came from and when.
Mapping. The team (and the bot) sort the needs into categories: housing, wage loss, and medical care. They keep disagreements visible — renters and homeowners need different proofs.
Receipts. Every contributor gets a link to see how their words were used and a button to say “that’s not what I meant.” (A sketch of how such a record might be kept follows below.)
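As an illustration only, here is one way such an entry could be represented in code. Everything in this sketch (the class names, the field names, the file_objection helper) is my own assumption, not something specified by 6pack.care:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class Need(Enum):
    """Hypothetical categories from the flood example; a real team would extend these."""
    HOUSING = "housing"
    WAGE_LOSS = "wage_loss"
    MEDICAL = "medical"


@dataclass
class Entry:
    """One contribution: a voice note, a text message, or a kiosk visit."""
    original_text: str                       # kept in the contributor's own language
    translation: str                         # shown beside the original, never replacing it
    source: str                              # provenance: e.g. "kiosk-03", "sms", "voice"
    received_at: datetime                    # provenance: when it arrived
    needs: list[Need] = field(default_factory=list)         # filled in by the mapping step
    disagreements: list[str] = field(default_factory=list)  # kept visible, not averaged away
    receipt_url: str = ""                    # link showing how the words were used
    contested: bool = False                  # set by "that's not what I meant"


def file_objection(entry: Entry, note: str) -> None:
    """The receipts step: a contributor contests how their words were used."""
    entry.contested = True
    entry.disagreements.append(note)
```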
Indeed, there are two classes of alignment problems. The first is vertical: making a single agent loyal. The second is horizontal: ensuring a society of loyal agents doesn’t devolve into systemic conflict. 6pack.care is a framework for the second challenge, as articulated by CAIF (the Cooperative AI Foundation).
It posits that long-term alignment is not a static property but a dynamic capability: alignment-by-process. The core mechanism is a tight feedback loop of civic care that turns interactions into a form of coherent blended volition. That is why the flood-bot example, simple as it is, already illustrates the fractal process.
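To make “alignment-by-process” concrete, here is a deliberately naive sketch of that feedback loop, reusing the Entry record from the sketch above. The loop structure and the names (civic_care_loop, publish_receipt, revise_mapping) are assumptions for illustration, not part of the framework:

```python
import queue


def civic_care_loop(inbox: queue.Queue, publish_receipt, revise_mapping,
                    max_rounds: int = 100) -> None:
    """A caricature of alignment-by-process: each turn listens, maps,
    issues a receipt, and folds objections back in. 'Alignment' is the
    ongoing behaviour of the loop, not a property frozen at any one step."""
    for _ in range(max_rounds):              # bounded by design, not open-ended
        if inbox.empty():
            break
        entry = inbox.get()                  # listening
        entry.needs = revise_mapping(entry)  # mapping; disagreements stay attached
        publish_receipt(entry)               # receipts: the contributor can contest
        if entry.contested:
            entry.contested = False
            inbox.put(entry)                 # objections re-enter the loop
```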
This process-based approach is also our primary defense against power-seeking. Rather than trying to police an agent’s internal motives, we design the architecture for boundedness (kami) and federated trust (e.g. ROOST.tools), making unbounded optimization an anti-pattern. The system selects for pro-sociality.
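A rough sketch of what boundedness could look like at the interface level. The Scope and BoundedAgent types and the is_trusted_peer callback are invented for illustration; the callback merely stands in for a federated trust registry of the kind ROOST.tools-style infrastructure aims to provide, and is not ROOST’s actual API:

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass(frozen=True)
class Scope:
    """A bounded mandate: what the agent may touch, where, and for how long."""
    domain: str          # e.g. "flood-relief-cash"
    jurisdiction: str    # e.g. the one city that launched the bot
    expires_days: int    # mandates lapse and must be renewed, never assumed


class BoundedAgent:
    """Boundedness at the interface level: an action outside the scope, or one
    involving an untrusted peer, is simply not expressible as a success path,
    so acquiring wider reach is an anti-pattern rather than a motive to police."""

    def __init__(self, scope: Scope, is_trusted_peer: Callable[[str], bool]):
        self.scope = scope
        self.is_trusted_peer = is_trusted_peer   # stand-in for a federated trust registry

    def act(self, action: str, domain: str, peer: Optional[str] = None) -> bool:
        if domain != self.scope.domain:
            return False                          # out of mandate: refuse, do not expand
        if peer is not None and not self.is_trusted_peer(peer):
            return False                          # trust is consulted, not accumulated
        return True                               # within the mandate: proceed
```

The refusal paths return False rather than escalating, which is the point of the sketch: expansion beyond the mandate is not an error condition to handle, it is outside the agent’s vocabulary.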
This bridges two philosophical traditions: EA offers a powerful consequentialist framework, while Civic Care provides the process-based virtue ethic needed for a pluralistic world. The result is a more robust paradigm: EA/CC (Effective Altruism with Civic Care).