ω-inconsistency isn’t exactly the same thing as being false in the standard model. Being ω-inconsistent requires both that the theory prove the statement P(n) for every standard natural number n and that it prove there is an n for which P(n) fails. So a theory could be ω-consistent merely because it fails to prove some instance P(n), even though that instance is true in the standard model; such a theory can still prove the false statement that a counterexample exists. Hence even if we could check ω-consistency, we could take PA, add an axiom T, and end up with an ω-consistent theory which is nonetheless not true in the standard model.
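To spell out the gap, here is a sketch of the two definitions in my own notation (with \overline{n} for the numeral of n; nothing here is beyond the standard textbook formulation):

```latex
% omega-inconsistency of a theory T, witnessed by some formula P:
% T proves every numerical instance, yet also proves a counterexample exists.
\[
  T \vdash P(\overline{n}) \ \text{for every standard } n
  \quad\text{and}\quad
  T \vdash \exists x\, \neg P(x).
\]
% By contrast, T is false in the standard model iff
% \mathbb{N} \not\models \varphi for some theorem \varphi of T.
%
% A theory that proves \exists x\, \neg P(x), where every P(\overline{n}) is
% true in \mathbb{N}, is false in the standard model; but it is
% omega-inconsistent only if it *also* proves each instance P(\overline{n}).
% Failing to prove even one instance restores omega-consistency without
% restoring truth.
```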
By the way, there are some papers that model adding random (true) axioms to PA. “Are Random Axioms Useful?” treats some fairly specific cases, but shows that in those situations a random axiom is generally unlikely to tell you anything you wanted to know.
Everyone seems to be taking the phrase “human Gödel sentence” (and, for that matter, “the Gödel sentence of a Turing machine”) as if it’s widely understood, so perhaps it’s a piece of jargon I’m not familiar with. I know what the Gödel sentence of a computably enumerable theory is, which is the usual formulation. And I know how to get from a computably enumerable theory to the Turing machine which outputs the statements of that theory. But not every Turing machine is of this form, so I don’t know what it means to talk about the Gödel sentence of an arbitrary Turing machine. For instance, what is the Gödel sentence of a universal Turing machine?
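The direction that does make sense is easy to sketch. Here is a minimal Python sketch, assuming two hypothetical helpers (`enumerate_axioms` and `is_valid_proof` are names I’m making up, not anyone’s actual implementation): the machine just dovetails over candidate proofs and yields each theorem it verifies.

```python
from itertools import count

def theorems(enumerate_axioms, is_valid_proof):
    """Enumerate the theorems of a computably enumerable theory.

    enumerate_axioms(k) -> list of the first k axioms (hypothetical helper)
    is_valid_proof(axioms, proof, sentence) -> bool (hypothetical proof checker,
        where proofs and sentences are coded as natural numbers)

    Dovetail: at stage k, check every proof code < k of every sentence
    code < k against the first k axioms, so nothing is missed.
    """
    seen = set()
    for k in count(1):
        axioms = enumerate_axioms(k)
        for proof in range(k):
            for sentence in range(k):
                if sentence not in seen and is_valid_proof(axioms, proof, sentence):
                    seen.add(sentence)
                    yield sentence
```

Every computably enumerable theory gives rise to a machine of this shape, and that machine has a Gödel sentence because the theory does. But an arbitrary Turing machine’s output need not be the theorem set of any theory at all, which is why the question about a universal machine has no evident answer.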
Some posters seem to be taking the human Gödel sentence to mean something like the Gödel sentence of the collection of things that person will ever believe, but that collection has absolutely no need to be consistent, since people can (and should!) sometimes change their minds.
(This is primarily an issue with the original anti-AI argument; I don’t know how defenders of that argument clarify their definitions.)