I have no idea what the community consensus is. I doubt they’re lying.
For anyone who already had short timelines, this couldn’t shorten them much. 2027 or 2028 is very soon, and https://ai-2027.com/ already assumed there would be successful research done along the way. So for me, this is very little more “yikes” than yesterday.
It does not seem to me like this is the last research breakthrough needed for full-fledged AGI, either. LLMs are superhuman at tasks with little or no context buildup, but they haven’t solved context management (be that through long context windows, memory-retrieval techniques, online learning, or anything else).
I also don’t think it’s surprising that these research breakthroughs keep happening. Remember that their last breakthrough (Strawberry, i.e. o1) was “make RL work”. This one might be something like “make reward prediction and MCTS work”, as in MuZero, or some other banal thing that worked on toy cases in the ’80s but was nontrivial to reimplement in LLMs.
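For concreteness, here’s what the MCTS half of that guess looks like in its most banal textbook form: plain UCT search with random rollouts on toy Nim. This is a sketch of the decades-old algorithm, not anyone’s actual system; MuZero’s contribution was replacing the random rollouts with learned reward/value/dynamics models, and whatever OpenAI actually did is unknown. All names in the code are made up for the example.

```python
import math
import random

# Vanilla UCT-flavored MCTS on toy Nim (take 1-3 stones; whoever takes
# the last stone wins). Purely illustrative -- not any lab's method.

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones      # stones remaining after `move` was played
        self.parent = parent
        self.move = move          # the move that produced this node
        self.children = []
        self.visits = 0
        self.wins = 0.0           # from the perspective of the player who just moved

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2, 3) if m <= self.stones and m not in tried]

def uct_select(node, c=1.4):
    # Standard UCT: exploit average win rate, explore rarely visited children.
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def random_playout(stones):
    # True if the player to move from `stones` wins under uniformly random play.
    to_move_wins = True
    while True:
        stones -= random.choice([m for m in (1, 2, 3) if m <= stones])
        if stones == 0:
            return to_move_wins
        to_move_wins = not to_move_wins

def mcts(root_stones, iterations=5000):
    root = Node(root_stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down fully expanded nodes via UCT.
        while not node.untried_moves() and node.children:
            node = uct_select(node)
        # 2. Expansion: add one untried child, unless the node is terminal.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random rollout from the new node.
        if node.stones == 0:
            result = 1.0  # the player who just moved took the last stone and won
        else:
            result = 0.0 if random_playout(node.stones) else 1.0
        # 4. Backpropagation: flip the perspective at every level.
        while node is not None:
            node.visits += 1
            node.wins += result
            result = 1.0 - result
            node = node.parent
    # The most-visited root child is the recommended move.
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(10))  # usually prints 2: leaving a multiple of 4 is the known winning play
```

The whole loop fits in ~60 lines and the ideas are old; grafting something like it onto an LLM’s action and reward structure would be the genuinely nontrivial part.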