I don’t know anything about Go. But the fact that following it helps you reminds me of In praise of fake frameworks: while “good shape” isn’t a fully accurate way to calculate the best move, it’s more “computationally useful” in most situations (similar to calculating physics with Newton’s laws vs. general relativity and quantum mechanics). (The author also mentions using “ki”, which makes no sense from a physics perspective, to get better at aikido.)
I think it’s just important to remember that the “model” is only a map of the “reality” (the rules of the game).
I don’t really doubt that increasing intelligence while preserving values is nontrivial, but I wonder just how nontrivial it is: are the regions of the brain for intelligence and values separate? Actually, writing that out, I realize that (at least for me) values are a “subset” of intelligence: the “facts” we believe about science/math/logic/religion are generated in basically the same way as our moral values. The difference seems obvious to us humans, but it really is, well, nontrivial. The paper-clip-maximizing AI is a good example: even if it weren’t about “moral values”—even if you wanted to maximize something as simple as paper clips—you’d still run into trouble preserving that goal.