Hi, thanks for writing this up. As someone who did their dissertation on the motivations behind AI development, I came to the conclusion that our world’s information-gathering and coordination mechanisms are already inextricably tied to AI usage. It is therefore very heartening to see a healthy and cooperative vision for AI applications that is not focused on power-seeking and instrumental convergence. Contra other commenters, I think such designs do meaningfully reduce the risk of rogue AI systems, mostly because they would be built out of constrained AI systems that are inherently less agentic. The improved coordination capacity for society would also help us rein in the unconstrained race towards AGI in the name of capitalist or international competition. This was my most recent attempt at the problem of “AI for governance/coordination,” produced for a hackathon, and I’m glad to hear others are also working on similar problems!
Hi! Thank you for this note and for sharing your work on “Witness.”
You’re right: AI already underwrites coordination. The lever is not whether we use it, but how. Shifting from unbounded strategic agents to bounded, tool‑like “kami of care” makes safety architectural: scope, latency, and budget are capped, so power‑seeking becomes an anti‑pattern rather than an attractor.
Witness cleanly instantiates Attentiveness: a push channel that turns lived experience into common knowledge with receipts, so alignment happens by process and at the speed of society. This is d/acc applied to epistemic security in practice: ground‑up shared reality paired with stronger collective steering.
Glad to hear that you found Witness relevant to your work. I would be very happy to discuss further, since we appear to have a strong area of overlap on AI for governance and decentralised governance. My contact details are on my website. I also understand if you are already quite overloaded with work and requests to connect, in which case I’m simply happy that you took the time to respond to my comment :)