I think a Loebner Silver Prize is still out of reach of current tech; GPT-4 is bad at most board games, which a judge could easily test over text.
I won’t make any bets about GPT-5 though!
If OpenAI is already capable of building a weakly general AI by this process, then I guess most of the remaining uncertainty is about when it becomes worthwhile for them, or someone like them, to actually do it.
I believe you’re underrating the difficulty of the Loebner Silver Prize. See my post on the topic. The other criteria are relatively easy, although it would be amusing if a text-based system failed on the technicality of not being able to play Montezuma’s Revenge.
I think even basic LLMs, well short of general AI, can be powerfully helpful as the central node in a network that constitutes a mind-like system for less intelligent but still very useful robots, like home assistants.