Manipulating the physical world is a very different problem from invention, and current LLM-based architectures are not suited for it. … Friction, all the consequences of a lack of knowledge about the problem; friction, all the million little challenges that need to be overcome; friction, that which is smoothed over the second and third and fourth times something is done. Friction, that which is inevitably associated with the physical world. Friction, that which only humans can handle.
This OP is about “AGI”, as defined in my 3rd & 4th paragraphs as follows:
By “AGI” I mean here “a bundle of chips, algorithms, electricity, and/or teleoperated robots that can autonomously do the kinds of stuff that ambitious human adults can do—founding and running new companies, R&D, learning new skills, using arbitrary teleoperated robots after very little practice, etc.”
Yes I know, this does not exist yet! (Despite hype to the contrary.) Try asking an LLM to autonomously write a business plan, found a company, then run and grow it for years as CEO. Lol! It will crash and burn! But that’s a limitation of today’s LLMs, not of “all AI forever”. AI that could nail that task, and much more beyond, is obviously possible—human brains and bodies and societies are not powered by some magical sorcery forever beyond the reach of science. I for one expect such AI in my lifetime, for better or worse. (Probably “worse”, see below.)
So…
“The kinds of stuff that ambitious human adults can do” includes handling what you call “friction”, so “AGI” as defined above would be able to do that too.
“The kinds of stuff that ambitious human adults can do” includes manipulating the physical world, so “AGI” as defined above would be able to do that too. (As a more concrete example, adult humans, after just a few hours’ practice, can get all sorts of things done in the physical world using even quite inexpensive makeshift teleoperated robots, therefore AGI would be able to do that too.)
I am >99% confident that “AGI” as defined above is physically possible, and will be invented eventually.
I am like 90% confident that it will be invented in my lifetime.
This post is agnostic on the question of whether such AGI will or won’t have anything to do with “current LLM-based architectures”. I’m not sure why you brought that up. But since you asked, I think it won’t; I think it will be a different, yet-to-be-developed, AI paradigm.
As for the rest of your comment, I find it rather confusing, but maybe that’s downstream of what I wrote here.
Understood & absolutely: in that frame the rest of my comment falls apart & your piece coheres. I was making the same error this piece is about: treating “AGI” and “AI” as lazy approximations of each other.
My apologies for a lazy comment.