A design concept for superintelligent machines (and Popper’s critique of induction)


This is my first post on LessWrong, and I apologize for its length. I thought someone here might be interested in reading or critiquing it.
The blog post is my attempt to explain why we do not yet have AGI and to sketch a possible short path for getting there. The ideas are extrapolated from an interpretation of Karl Popper and David Miller’s critique of inductive probability. My view is that a world model composed of formal statements (theories) can only be constrained by observation (including observations in the form of induction); theories cannot be supported by evidence, they can only be consistent with it or not.
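To make that claim concrete, here is a minimal sketch of my own (illustrative names and data, not code from the post): theories are treated as formal predicates over observations, and an observation can only eliminate a theory that contradicts it; no surviving theory accumulates "support".

```python
# Candidate theories as predicates over observations (names are hypothetical).
theories = {
    "all_swans_white": lambda obs: not (obs["kind"] == "swan" and obs["color"] != "white"),
    "swans_any_color": lambda obs: True,
}

observations = [
    {"kind": "swan", "color": "white"},
    {"kind": "swan", "color": "black"},  # refutes "all_swans_white"
]

def constrain(theories, observations):
    """Keep only theories consistent with every observation; consistency is binary."""
    return {
        name: theory
        for name, theory in theories.items()
        if all(theory(obs) for obs in observations)
    }

print(sorted(constrain(theories, observations)))  # ['swans_any_color']
```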

I try to clearly define two categories of knowledge and their distinct properties, derive some principles for building an explanatory world model, and share a toy example of how an LLM might be used to generate a formal explanatory world model (which I believe will be the foundation for AGI).

I am writing from the perspective of a physician with some background in philosophy and physics, not a software engineer. I will respond to any serious feedback. The text is a draft, and I do intend to fix the typos.