So you’ve noted a need for labeling things? I’d posit that a slightly different hybrid approach would be very useful in machine learning. In computer science there is a concept called memoization, which is simply storing the result of a calculation you might need to make again; in many cases it can dramatically improve performance. Make a hybrid that can write its own entries into the knowledge base (and retrieve them, of course). Seed it with things you know are useful, like how math works and a dictionary or two (plus, perhaps, a less firmly believed knowledge base like the one you’re describing here), but let the algorithm add as many new entries as it likes. I’m not sure why people are willing to spend such crazy amounts on training the thing, and so little on memory/facts/concepts.
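For what it’s worth, here’s a minimal Python sketch of the idea: classic memoization via `functools.lru_cache`, plus a toy writable knowledge base that is seeded with firm facts and that the program can extend on its own. The names (`knowledge_base`, `remember`) and the seed entries are purely illustrative, not any real system.

```python
from functools import lru_cache

# Classic memoization: cache the result of a calculation
# you might need to make again.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# A toy writable knowledge base, seeded with things we "know are useful".
# Keys and values here are illustrative placeholders.
knowledge_base = {
    ("math", "2+2"): 4,  # firmly believed seed entry
    ("dict", "memoization"): "caching results of prior computations",
}

def remember(key, compute):
    """Return a stored entry, or compute it once, store it, and return it."""
    if key not in knowledge_base:
        knowledge_base[key] = compute()
    return knowledge_base[key]

# The algorithm adds its own entry the first time, then just retrieves it.
print(remember(("math", "fib(30)"), lambda: fib(30)))  # computes and stores
print(remember(("math", "fib(30)"), lambda: fib(30)))  # pure lookup now
```

The point of the sketch is the division of labor: the cache handles repeated computation transparently, while the knowledge base is an explicit, inspectable store the program both reads from and writes to.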