My understanding is that “labor” & “capital” cannot, & should not, be applied outside the manufacturing context. “Labor”, indeed, involves flesh-&-blood, but exists specifically in relationship to “capital”, as the necessary component for its production and maintenance.
“Capital”, oft-misunderstood, is by nature local, limited, and operationally immobile. (It is also worth noting that obtaining capital should cost orders of magnitude more than obtaining common services, especially when specialized labor is required to set up the capital.) Capital, by definition, is that which does not scale. A mini-lathe or a common press brake is not capital. A through-coolant 5-axis lathe+mill, or a CNC press brake, is capital. The building that shelters them and provides them ventilation, power, and cooling is capital. A COTS 3D printer is not capital (although 1,000 of them may be). A WeWork is not capital for those who use it. The tunnels of a coal mine, or the deep pits of a copper mine, are capital. The explosives used to carve them are not. Etc.
In this vein, I think we first need to construct a labor-theory of a) the service economy and specifically b) the digital service economy.
I do not think Marxist theory applies to a restaurant, a Starbucks, a McDonald’s, Spotify, or Uber. There is very little capital present. Most of it is some real estate. AWS & its acres of data centers are capital. Proprietary source code is not. It is trivial to copy a repo. Capital cannot be trivial to duplicate—its scarcity is protected by nature, not litigation.
I do not think the labor is the same either, because trained-or-coordinated workers are not required to maintain the capital against destruction. In manufacturing, a mistake of labor can destroy capital. In services, not only is there little capital to destroy, what little exists cannot be destroyed by negligence.
Some new description of the relationship between service-business & service-workers needs to be developed. Then, an extended description of the relationship between digital-service-business & digital-service-programmers needs to be developed. Then, both need to be re-interpreted in the context of the service workers/programmers being nonhuman. I suspect this final step will be rather easy.
EDIT: The point of all this is to say: capital & invention are two different things. AI is a fascinating thing: an invention that can do invention. But I do not think it is capital, and I do not think it can do capital-L Labor. Manipulating the physical world is a very different problem from invention, and current LLM-based architectures are not suited for this. I can elaborate on my personal beliefs about intelligence & cognition, but they are not very relevant here. Philosophically, I want to emphasize the primacy of the physical world, something I often tend to forget as a child of the digital age & something I suspect some of us never consider.
EDIT EDIT: I need to develop a theory of friction. Capabilities that previously required capital & labor can become not-capital & not-labor when they become infrastructurized, because infrastructurization dramatically reduces the friction associated with the capability. However, really doing something, that is, doing something without infrastructure (which, it should be noted, includes setting up new infrastructure), involves overcoming a LOT of friction. Friction, all the consequences of a lack of knowledge about the problem; friction, all the million little challenges that need to be overcome; friction, that which is smoothed over the second and third and fourth times something is done. Friction, that which is inevitably associated with the physical world. Friction—that which only humans can handle.
I don’t think AIs can handle friction until they can handle fucking and fighting. Thus, I think that while AI can replace services (woe to you, America & her post-industrial service economy!), it cannot replace labor. Labor involves overcoming friction.
Manipulating the physical world is a very different problem from invention, and current LLM-based architectures are not suited for this. … Friction, all the consequences of a lack of knowledge about the problem; friction, all the million little challenges that need to be overcome; friction, that which is smoothed over the second and third and fourth times something is done. Friction, that which is inevitably associated with the physical world. Friction—that which only humans can handle.
This OP is about “AGI”, as defined in my 3rd & 4th paragraph as follows:
By “AGI” I mean here “a bundle of chips, algorithms, electricity, and/or teleoperated robots that can autonomously do the kinds of stuff that ambitious human adults can do—founding and running new companies, R&D, learning new skills, using arbitrary teleoperated robots after very little practice, etc.”
Yes I know, this does not exist yet! (Despite hype to the contrary.) Try asking an LLM to autonomously write a business plan, found a company, then run and grow it for years as CEO. Lol! It will crash and burn! But that’s a limitation of today’s LLMs, not of “all AI forever”. AI that could nail that task, and much more beyond, is obviously possible—human brains and bodies and societies are not powered by some magical sorcery forever beyond the reach of science. I for one expect such AI in my lifetime, for better or worse. (Probably “worse”, see below.)
So…
“The kinds of stuff that ambitious human adults can do” includes handling what you call “friction”, so “AGI” as defined above would be able to do that too.
“The kinds of stuff that ambitious human adults can do” includes manipulating the physical world, so “AGI” as defined above would be able to do that too. (As a more concrete example, adult humans, after just a few hours’ practice, can get all sorts of things done in the physical world using even quite inexpensive makeshift teleoperated robots, therefore AGI would be able to do that too.)
I am >99% confident that “AGI” as defined above is physically possible, and will be invented eventually.
I am like 90% confident that it will be invented in my lifetime.
This post is agnostic on the question of whether such AGI will or won’t have anything to do with “current LLM-based architectures”. I’m not sure why you brought that up. But since you asked, I think it won’t; I think it will be a different, yet-to-be-developed, AI paradigm.
As for the rest of your comment, I find it rather confusing, but maybe that’s downstream of what I wrote here.
Understood & absolutely: in that frame, the rest of my comment falls apart & your piece coheres. I was making the same error this piece is about: treating “AGI” & “AI” as terms that are lazy approximations of each other. My apologies for a lazy comment.