The comparison to more labour competition by way of things like population growth doesn’t work, because THAT competition consisted of human beings like you and me, and thus couldn’t fundamentally outcompete you.
A hypothetical AGI can work 24/7/365 at a cost so low that, were you paid the same, you couldn’t afford the necessities you need to sustain life. It’d be like having your labour compete with that of someone who never gets tired and can sustain themselves on a salary of a dollar a day.
Why would ANYONE hire you at a cost of (say) $25/hour if there’s no task you can do better than a machine that costs $0.25/hour to operate?
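To put rough numbers on that gap, here’s a back-of-the-envelope sketch. The rates and hours are the illustrative figures from above plus assumed typical working hours, not real data:

```python
# Illustrative annual cost comparison: a human worker vs. a hypothetical
# machine that runs around the clock. All figures are assumptions.
human_rate = 25.00        # $/hour, the example wage above
machine_rate = 0.25       # $/hour to operate, the example machine cost above

human_hours = 40 * 50     # assumed ~2,000 working hours per year
machine_hours = 24 * 365  # 8,760 hours per year, no downtime

human_cost = human_rate * human_hours        # $50,000 per year
machine_cost = machine_rate * machine_hours  # $2,190 per year

print(human_cost / machine_cost)  # ≈ 22.8: over 20x cheaper, with >4x the hours
```

Even granting the machine zero downtime at full price, the employer pays more than twenty times as much for the human, who also supplies less than a quarter of the available hours.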
A hypothetical GAI remains hypothetical. We’ve been working on self-managing systems for 25+ years and still have very little idea how to build them. IF we can solve hallucinations (which recent work has compared to the errors of patients at various stages of dementia), and IF research gives us new self-* technologies capable of configuring, healing, protecting, and optimizing both logical (software) and physical infrastructure (because what does a GAI do about a cut cable?), then we might see a quick devaluation of human labor. Also, everyone is speaking as if GAI is imminent. It’s not. LLMs are impressive as a nascent technology, primarily because they interact with us in natural language, which is a major necessary, but wholly insufficient, precondition for strong AI. But we are very far from something that can replace all, or even most, specialties of human labor.
If incomes essentially go to zero as opportunities for human labor disappear, who are these machines producing for? I assume the exponential drop in COGS would need a corresponding drop in prices to account for consumers’ reduced incomes?