This might seem like a ton of annoying nitpicking.
You don’t need to apologize for having a less optimistic view of current AI development. I’ve never heard anyone driving the hype train apologize for their opinions.
It’s mostly that, to my aesthetic senses, a refutation of a position X that consists of different detailed special-case arguments fielded against different types of evidence for X sounds weak (overfit to the current crop of evidence). Unless, that is, it offers some compact explanation of why we would expect to see all of this disparate evidence for X that is secretly not evidence for X.
Yes, a single strong, simple argument or piece of evidence that could refute the whole LLM approach would be more effective, but as of now no one knows whether the LLM approach will lead to AGI or not. Still, I think you’ve meaningfully addressed interesting and important details that are often overlooked in broad hype statements, which get repeated and thrown around as if they were universal facts and evidence for “AGI within the next 3-5 years”.