>If you get strongly superhuman LLMs, you can trivially accelerate scientific progress on agentic forms of AI like Reinforcement Learning by asking it to predict continuations of the most cited AI articles of 2024, 2025, etc.
A question that might be at the heart of the issue is what is needed for AI to produce genuinely new insights. As a layman, I can see how an LM might become even better at generating human-like text, might become super-duper good at remixing and rephrasing things it “read” before, yet hit a wall when it comes to reaching AGI. Maybe to get genuine intelligence we need more than a “predict-next-token” kind of algorithm plus obscene amounts of compute and human data, and should instead mimic more closely how actual people think?
Perhaps local AI alarmists (it’s not a pejorative, I hope? OP does declare alarm, after all) would like to try to persuade me otherwise, be it in their own words or by doing their best to hide their condescension while pointing me to the numerous places where this idea has been discussed before?
“Recently, a group of Russian biohackers recently performed...”
Just reporting a little mistake here.
Good overview.