It seems like the applications of DL that have generated useful products so far have been in the areas in which a useful result is easy or economical to verify, safe to test, close to the research itself, and in areas where small failures are inconsequential. Gwern’s list of applications indicates that this lies mostly in the realm of software engineering infrastructure, particularly for consumer products.
Unfortunately, it seems that the technologies that would most impress us are not bottlenecked by the fast-and-facile intelligence of a GPT-3.
One area that I would have hoped GPT-3 could contribute to would be learning: an automated personal tutor could revolutionize education in a way that MOOCs cannot. Imagine a chatbot with GPT-3's conversational abilities that could also draw diagrams like DALL-E.
Unfortunately, GPT-3 just isn't reliable enough for that. Actually, it's still deeply problematic, because its explanations and answers to technical questions seem plausible to a novice, but are incorrect and lack deep understanding. So it's currently smart enough to mislead, but not smart enough to educate.
Seconded. AI is good at approximate answers, and bad at failing gracefully. This makes it very hard to apply to some problems, or requires specialized knowledge and implementation effort that there often isn't enough expertise or time for.