I’ve been exploring exactly that. I’m developing what I call a “value alignment protocol”: a structured dialogue process where humans and AI define values together through inquiry, testing, and refinement. It treats alignment more like relationship-building than engineering, focusing on how we create shared understanding rather than perfect obedience. The interesting challenge is designing systems that can evolve their ethical frameworks through conversation while still building a robust long-term value system.
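To make the loop concrete, here’s a minimal Python sketch of one inquiry → testing → refinement cycle. Everything in it is hypothetical illustration, not an existing implementation: the names (`ValueStatement`, `alignment_round`) and the idea of passing in `ask`/`probe`/`revise` callables (which in practice would be human prompts and model calls) are my assumptions about how such a protocol might be structured.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ValueStatement:
    """A candidate shared value, plus the dialogue that produced it."""
    text: str
    confidence: float = 0.5          # how settled both parties are on it
    history: list[str] = field(default_factory=list)

def inquiry(ask: Callable[[str], str]) -> ValueStatement:
    """Elicit an initial value through open-ended questioning."""
    answer = ask("What matters to you in this situation, and why?")
    return ValueStatement(text=answer, history=[answer])

def test(value: ValueStatement, probe: Callable[[str], str]) -> str:
    """Stress the value against a concrete scenario and record the result."""
    scenario = probe(f"Describe a case where '{value.text}' conflicts with another value.")
    value.history.append(scenario)
    return scenario

def refine(value: ValueStatement, revision: str) -> ValueStatement:
    """Fold the tested scenario back into a revised, more robust statement."""
    value.text = revision
    value.confidence = min(1.0, value.confidence + 0.1)
    value.history.append(revision)
    return value

def alignment_round(ask, probe, revise) -> ValueStatement:
    """One inquiry -> testing -> refinement cycle of the protocol."""
    value = inquiry(ask)
    scenario = test(value, probe)
    return refine(value, revise(value.text, scenario))

if __name__ == "__main__":
    # Stub callables standing in for a real human/AI exchange.
    v = alignment_round(
        ask=lambda q: "Be transparent about uncertainty",
        probe=lambda q: "A user asks for reassurance you cannot honestly give",
        revise=lambda text, case: f"{text}, even when reassurance is expected",
    )
    print(v.text, v.confidence)
```

The point of the sketch is the shape of the loop: each cycle keeps the dialogue history attached to the value, so refinement is grounded in the cases that stressed it rather than overwriting it, and repeated rounds can run until confidence stabilizes.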