I think it depends on whether the intelligences in charge at any point find a way to globally refrain from trying a promising idea. If not, then it doesn’t matter that much whether LLMs are capable of superintelligence or just AGI. (If they aren’t capable of AGI, of course that matters, because it could lead to a proper fizzle.) What really matters is whether they are the optimal design for superintelligence. If they aren’t, and no way is found to refrain from trying a promising idea, then my mental model of the next 50 years includes many transitions in what the architecture of the smartest optimizer is, each as different from the others as evolution is from neuron-based brains, or brains from silicon gradient descent. Then the details of the motivations of silicon token predictors are more a hint at the breadth of variety of goals we will see than a crux.