Systems architect by day (Northrop Grumman Fellow, Chief Architect for Digital Ecosystems), independent researcher in AI philosophy by inclination. Currently pursuing an MS in Applied AI. My research programme examines foundational questions I think the AI field mostly ignores: not how to scale systems, or even how to make them reason better, but what they’re for and how their outputs relate to reality. I’ve published 14 preprints on Zenodo developing a three-axis framework (horizontal/vertical/grounding) and the origination-derivation distinction between what humans do and what AI systems do. The enterprise angle isn’t academic to me; I’ve spent 20+ years watching organizations deploy complex systems, and the widely cited 70-95% failure rates for enterprise AI projects look like grounding problems I’ve seen before in different dress. Interested in Lakatos, epistemology, the philosophy of science applied to AI discourse, and building frameworks that actually help practitioners make better deployment decisions. I think most AI alignment work addresses the right concerns on the wrong axis. Happy to be shown I’m wrong about that.
ORCID: 0009-0009-1383-7698