The goal of this research is to help build the conceptual infrastructure needed to navigate conversations around agency and alignment.
Thanks for the post — really interesting. That said, I’d argue that as soon as you introduce agency, you’re already in dangerous territory, and we may need to think further outside the box. I sketched a rough idea of the problem here: How a Non-Dual Language Could Redefine AI Safety.
It resonates with your point about needing to cover all agentic lenses to avoid blind spots, though I’m suggesting an even more radical shift.