A model trained on properly distributed data can be easily tuned with a smaller, more robust dataset.
I think this aligns with human instinct. While it’s not always true, humans seem compelled to constantly work to condense what we know. (An instinctual byproduct of knowledge portability and knowledge retention.)
I’m reading a great book right now that talks about this and other things in neuroscience. It has some interesting insights for my work life, not just my interest in artificial intelligence.
Forgot to mention: the principle behind this intuition, which also largely operates in my project, is the Pareto principle.
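One hypothetical way to picture the Pareto idea applied to training data: score every example by some informativeness measure and keep only the small high-value slice. This is just an illustrative sketch, not anything from my actual project; the function name, the loss-as-score stand-in, and the 20% cutoff are all my own assumptions here.

```python
# Hypothetical sketch of the 80/20 idea applied to training data:
# score every example, then keep only the small "high-value" slice.
def pareto_subset(examples, score, keep_fraction=0.2):
    """Return the top `keep_fraction` of examples by score.

    `score` is any per-example informativeness measure -- e.g. the
    model's loss on that example, or a diversity/novelty score.
    """
    ranked = sorted(examples, key=score, reverse=True)
    cutoff = max(1, int(len(ranked) * keep_fraction))
    return ranked[:cutoff]

# Toy usage: "loss" here is just a stand-in number attached to each item.
data = [{"id": i, "loss": (i * 37) % 100} for i in range(50)]
subset = pareto_subset(data, score=lambda ex: ex["loss"])
# len(subset) == 10: the 20% highest-"loss" examples survive
```

The point is only the shape of the workflow: a cheap scoring pass over a large pool, then tuning on the condensed remainder.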
By the way, on novelties: we seem to be wired for curiosity. That very thing terrifies me, because a future AGI will be superior at exercising curiosity. But if that same mechanic can be steered, I see in the novelty aspect a possible route to alignment, or at least to a conceptual approach to it...
For instance, I was surprised to learn that someone has worked out the mathematics to measure novelty. There’s a related Wired article and a link to a paper on the dynamics of correlated novelties.
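If I understand that line of work correctly, one of its core models is a Polya urn with "triggering": drawing a never-before-seen color not only reinforces it but also injects brand-new colors into the urn, so each novelty expands the space of possible future novelties. Here is a minimal simulation sketch under that assumption; the parameter names (`rho`, `nu`) and the exact update rule are my reading of the model, not something quoted from the paper.

```python
import random

def urn_with_triggering(steps, rho=2, nu=1, seed=0):
    """Simulate a Polya urn with triggering (novelty) dynamics.

    Each step we draw a ball and reinforce its color with `rho` copies.
    If that color has never been drawn before (a novelty), we also add
    `nu + 1` balls of entirely new colors, so each novelty makes
    further novelties more reachable.
    """
    rng = random.Random(seed)
    urn = [0]           # start with a single color, id 0
    next_color = 1
    seen = set()
    distinct = []       # cumulative count of distinct colors drawn
    for _ in range(steps):
        ball = rng.choice(urn)
        urn.extend([ball] * rho)            # reinforcement
        if ball not in seen:                # a novelty occurred
            seen.add(ball)
            urn.extend(range(next_color, next_color + nu + 1))
            next_color += nu + 1
        distinct.append(len(seen))
    return distinct

d = urn_with_triggering(5000)
```

Plotting `d` against time should show sublinear, Heaps'-law-like growth of novelties, which is the kind of regularity that makes novelty measurable at all.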