If you can plausibly live off your capital (especially due to stock/options at AI companies), unless you consider higher-order social and economic risks (which are uncertain), the impact of AI on the job market is probably not as concerning to you as it is to the majority population.
Most people have exactly one economic value-generating asset, which is their ability to work. To the extent that you own capital (especially in AI companies), you are more or less, or completely insulated from having to reckon with the consequences of personally being forced into a permabroke underclass because of your labour value going to zero soon.
Alas, I also expect the transformation to undermine the role of capital whose position in the economic network is far from resource-possessing coalitions. Imagine, for example, that the capital in question is located in Detroit and consists of a car factory plus assets useful to the factory’s workers, and that consumers choose cars from a different country. Detroit’s factory then becomes useless, which undermines the workers’ salaries and, with them, the capital whose utility rested on serving those workers. If we substitute any factory not run by AI for Detroit’s car factory, and the AI-run economy for the different country, we get a similar result: capital stuck in the niche of serving the underclass, or disappearing outright along with it.
If you’re invested in AI companies and broad index funds, I feel like you’ll be fairly immune to a parallel economy developing that you can’t invest in. Barring things like AI takeover, AI-assisted human takeover, and the end of property rights (out of scope here as “higher-order social and economic risks”), there will probably still be economies of scale that incentivize large firms, and those firms will still need capital, so you can invest in them.
Indeed, and that’s where the “more or less, or completely insulated” frame comes into play.
You would rightly expect someone who has a diverse asset portfolio that already allows them to live off of dividends/rent/interest, has shares in all the major AI companies and some ability to hedge against disruption (gold, crypto, long-dated put options, residences in different jurisdictions) to worry less about their labour value going to zero than someone who “just” owns a profitable restaurant serving high-rise office workers who themselves face obsolescence.
In both cases, concern follows from thinking about how one is affected by higher-order consequences of AI bankrupting labour, some of which are closer to the first-order effects (e.g. “can I still run my business if everyone in my area loses their job?”) and some of which are further away (e.g. questions related to social cohesion and the stability of the financial system).
Higher-order thinking of this type is more cognitively demanding and is somewhat self-limiting due to compounding uncertainty at each step. People react differently to a tiger in front of them than to watching tigers appear in front of other people through the window of a fortified position, and their self-referential anxiety is anti-correlated with their degree of (perceived) fortification.
It seems to me that the “it’s all going to be ok”-type narratives regarding the coming technological obsolescence of labour tend to originate from those who are basically insulated from its first-order effects (because they genuinely believe that they’re going to be ok), and then take on a memetic quality, spread by those who want to signal affiliation with elite ideology and by those for whom it is psychologically soothing.
First order effects: yes, agreed.
Second order effects: history does not indicate the effects of an unemployable population are favorable for the owner class.
This is true, but the dynamics seem likely to change when the unemployable population basically can’t exert any military force, and the military will categorically not side with the unemployable population.
I also wrote the following, which speaks to your second point.
I agree, though higher-order effects become more difficult to conceptualize the further removed you are from the proverbial impact crater, and the uncertainty appears to be short-circuited by a normalcy bias. See my reply to StanislavKrym’s comment for a more elaborate explanation.