I think you’re right that standard assumptions and models will not hold when evaluating how automated systems will affect the economy going forward. I’d argue the most important factor is agency retention — for humans, companies, and other entities. Because agency is an upstream assumption, you get drastically different results once humans lose it. To keep human economics meaningful, we need to ask: who takes responsibility for an agent’s actions?
Douglas Doane
I think you’re right that “confused” is the wrong word, but the problem comes from one specific mistake: attributing agentic qualities to systems that don’t actually exhibit them.
Agency implies self-sufficiency: a system that persists, maintains itself, sets its own goals, and acts without coercion. Thermostats act independently, but we don’t call them agents. They don’t set their own goals; they respond when prompted, within infrastructure that humans built, maintain, and direct.
When the discourse treats these systems as if they’re approaching genuine agency, it imports assumptions about autonomy, self-sufficiency, and the need for corrigibility. But those assumptions may not apply to something that was never self-sufficient in the first place. You don’t make a hammer corrigible.
We don’t lack a theory of agency; we’re pattern-matching systems into an agent frame whose criteria they don’t meet. Your post ends with the line “The overall discourse should be improved.” I agree. That’s where we should start, but not end. If we stop projecting our own qualities onto anything that generates text, much of the confusion you’re describing would dissolve.