Indeed, and that’s where the “more or less, or completely insulated” frame comes into play.
You would rightly expect someone with a diverse asset portfolio that already lets them live off dividends/rent/interest — someone who holds shares in all the major AI companies and has some ability to hedge against disruption (gold, crypto, long-dated put options, residences in different jurisdictions) — to worry less about their labour value going to zero than someone who “just” owns a profitable restaurant serving high-rise office workers who themselves face obsolescence.
In both cases, concern follows from thinking about how one is affected by higher-order consequences of AI bankrupting labour, some of which are closer to the first-order effects (e.g. “can I still run my business if everyone in my area loses their job?”) and some of which are further away (e.g. questions related to social cohesion and the stability of the financial system).
Higher-order thinking of this type is more cognitively demanding and somewhat self-limiting, since uncertainty compounds at each step. People react differently to a tiger standing in front of them than to watching, through a window in their fortified position, tigers appear in front of other people — and their self-referential anxiety is inversely correlated with the degree of (perceived) fortification.
It seems to me that the “it’s all going to be ok”-type narratives regarding the coming technological obsolescence of labour tend to originate from those who are basically insulated from its first-order effects (because they genuinely believe that they’re going to be ok), and then take on a memetic quality, spread by those who want to signal affiliation with elite ideology and by those for whom it is psychologically soothing.