I agree with the vast majority of your post, but I genuinely think that with agentic AI systems we need to verify the reasoning layer, where the actual harm originates, and that layer currently produces no auditable outputs. We're shipping these systems much faster than we're governing them; that's where the gap is.