At the time of writing, the Metaculus community assigns a 25% chance that by July 2024 there will be a system of Loebner-silver-prize capability (along with the other resolution criteria). It is hard for me to imagine how this could happen.
Focus on imagining how we could get complete AI software R&D automation by then; that's both more important than Loebner-silver-prize capability and implies it (more precisely, it implies that the capability will be reached after a brief period sometimes called an "intelligence explosion").
On a longer time horizon, full AI R&D automation does seem like a possible intermediate step to Loebner silver. For July 2024, though, that path is even harder to imagine.
The trouble is that July 2024 is so soon that even GPT-5 likely won’t be released by then.
Altman stated a few days ago that they have no plans to start training GPT-5 within the next 6 months. That puts the earliest training start at Dec 2023.
We don't know much about how long GPT-4 pre-trained for, but let's say 4 months. Given that frontier models have taken progressively longer to train, we should expect no shorter for GPT-5, which puts the earliest end of its pre-training at Mar 2024.
GPT-4 spent 6 months on fine-tuning and testing before release, and Brockman has stated that future models should be expected to take at least that long. That puts GPT-5's earliest release around Sep 2024.
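The date arithmetic above can be sketched in a few lines. All the inputs are the assumptions already stated (a Dec 2023 training start, at least 4 months of pre-training, at least 6 months of fine-tuning and testing); `add_months` is a small helper written for this illustration, not a real library function.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Advance a date by a whole number of months (day fixed to the 1st)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, 1)

# Assumed earliest-plausible milestones from the argument above:
train_start = date(2023, 12, 1)             # Altman: no training start for ~6 months
pretrain_done = add_months(train_start, 4)  # assume >= 4 months of pre-training
release = add_months(pretrain_done, 6)      # assume >= 6 months fine-tuning/testing

print(pretrain_done)  # 2024-04-01, i.e. pre-training runs through Mar 2024
print(release)        # 2024-10-01, i.e. earliest release around Sep 2024
```

Under these assumptions the earliest release lands a couple of months past the July 2024 resolution date, which is the whole point: even the most optimistic version of this schedule doesn't fit.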
Without GPT-5 as a possibility, it would need to be some other project (Gato 2? Gemini?) or some extraordinary system built on top of existing models (via fine-tuning, retrieval, inner thoughts, etc.). As I discussed in the post, though, the gap between existing chatbots and Loebner-silver seems huge; none of those approaches seems up to the challenge.
Full AI R&D automation would face all of the above hurdles, perhaps with the added challenge of being even harder than Loebner-silver. After all, the Loebner-silver fake human doesn’t need to be a genius researcher, since very few humans are. The only aspect in which the automation seems easier is that the system doesn’t need to fake being a human (such as by dumbing down its capabilities), and that seems relatively minor by comparison.