Suppose that some technology requires 10 components to work. Over the last decades, you’ve seen people gradually figure out how to build each of these components, one by one. Now you’re looking at the state of the industry, and you see that we know how to build 9 of them. Do you feel that the technology is still a long time away, because we’ve made “zero progress” toward figuring out that last component?
This seems pretty underspecified, so I don’t know, but I wouldn’t be very confident it’s close:
Am I supposed to assume the difficulty of the last component should reflect the difficulty of the previous ones?
I’m guessing you’re assuming the pace of building components hasn’t been decreasing significantly. I’d probably grant you this, based on my impression of progress in AI, although it could depend on what specific components you have in mind.
What if the last component is actually made up of many components?
I agree with the rest of your comment, but it doesn’t really give me much reason to believe it’s close, rather than merely closer than it was before.
Yeah, it was pretty underspecified, I was just gesturing at the idea.
Even more informally: Just look at GPT-4. Imagine that you’re seeing it with fresh eyes, setting aside all the fancy technical arguments. Does it not seem like it’s almost there? Whatever the AI industry is doing, it sure feels like it’s moving in the right direction, and quickly. And yes, it’s possible that common sense is deceptive here; but it usually isn’t.
Or, to make a technical argument: The deep-learning paradigm is a pretty broad-purpose trick. Stochastic gradient descent isn’t just some idiosyncratic method of training neural networks; it’s a way to automatically generate software that meets certain desiderata. And it’s compute-efficient enough to generate software approaching human brains in complexity. Thus, I don’t expect that we’ll need to move beyond it to get to AGI — general intelligence is reachable by doing SGD over some architecture.
I expect we’ll need advancement(s) on the order of “fully-connected NN → transformers”, not “GOFAI → DL”.
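To unpack the “automatically generate software that meets certain desiderata” framing: here is a toy sketch of my own (not from the comment above) in which gradient descent recovers a target program, a line y = 2x + 1, purely from examples of the desired behavior. The function and hyperparameters are illustrative assumptions, not anything specific to how large models are trained.

```python
import random

def sgd_fit(data, lr=0.01, epochs=200):
    """Fit w, b to minimize squared error via stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        random.shuffle(data)  # "stochastic": visit examples in random order
        for x, y in data:
            err = (w * x + b) - y   # derivative of 0.5*err^2 w.r.t. prediction
            w -= lr * err * x       # dL/dw = err * x
            b -= lr * err           # dL/db = err
    return w, b

# Desideratum: reproduce y = 2x + 1 on the sample points.
data = [(x, 2 * x + 1) for x in [-2, -1, 0, 1, 2]]
w, b = sgd_fit(data)
```

Nothing about the loop is specific to lines: swap in a neural network and a richer loss, and the same procedure searches over a vastly larger space of programs, which is the sense in which SGD is a broad-purpose software generator rather than an idiosyncratic trick.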
I would say it seems like it’s almost there, but it also seems to me to already have some fluid intelligence, and that might be why it seems close. If it doesn’t have fluid intelligence, then my intuition that it’s close may not be very reliable.