Thanks for the reply, it was helpful. I've elaborated on my perspective below and pointed out some concrete disagreements about how labor automation would play out; I wonder if you can identify the cruxes in my model of how the economy and automated labor interact.
I’d frame my perspective as: “We should not aim to put society in a position where >90% of humans need government welfare programs or charity to survive while vast numbers of automated agents perform the labor that humans currently depend on to survive.” I don’t believe we have the political wisdom or resilience to steer our world in that direction while preserving good outcomes for existing humans.
We live in something like a unique balance: through companies, the economy gives individuals the opportunity to sustain themselves and specialize while contributing to a larger whole that typically provides goods and services benefiting other humans. If we create digital minds and robots that naively accelerate these emergent corporate entities’ ability to generate profit, we lose an important ingredient in this balance: human bargaining power. Further, even if we had the ability to create and steer powerful digital minds (which is itself contentious), it doesn’t seem obvious that labor automation is a framing that would lead to positive experiences for humans or for the minds themselves.
> I anticipate that AGI-driven automation will create so much economic abundance in the future that it will likely be very easy to provide for the material needs of all biological humans.
I’m skeptical that economic abundance driven by automated agents will, by default, manifest as increased quality and quantity of goods and services enjoyed by humans, or that humans will continue to have the economic leverage to incentivize these human-specific goods.
> working human-specific service jobs where consumers intrinsically prefer hiring human labor
I expect the number of roles/tasks where consumers prefer hiring humans is a rounding error compared to the number of humans who depend on work.
I sort of see your argument here, but likewise, just based on vibes, associating AI-risk concepts with other doom predictions feels like it does more harm than good to me. The vibe that “doomers are always wrong” isn’t countered by cherry-picking examples of smaller predicted harms, because (as illustrated in the comment) the body of doom predictions is much larger than the subset that contained nuggets of foresight.