> If we could be rightfully confident that our random search through mindspace with modern ML methods
I understand this to connote “ML is ~uninformatively-randomly-over-mindspace sampling ‘minds’ with certain properties (like low loss on the training set).” If so, this is not how ML works, not even in an approximate sense: the optimizer and architecture impose strong inductive biases, so the solutions found are far from a uniform draw over all low-loss models. If this is genuinely your view, it might be helpful to first ponder why statistical learning theory mispredicted that overparameterized networks can’t generalize.