Thanks for this comment! I think this is one of the main concerns I am pointing at.
I think something like fiscal aid could work, but have people tried making models for responses to things like this? It feels like with Covid the relatively decent response came about because the government was both enforcing a temporary lockdown policy and sending checks to keep things "back to normal" despite it. If job automation is more gradual, on the scale of months to years, and specific only to certain jobs at a time, the response could be quite different, and it might be more likely that things end up poorly.
Agreed. I think a big part of the reason why we saw a large fiscal response during Covid but not in, e.g., 2008 was that it was widely agreed to be "nobody's fault".
In this sense, the faster AI produces unemployment, the more likely we are to see a policy response. If tens of millions of middle-class, educated workers suddenly wake up one day without a job, politicians will respond. If, on the other hand, AI slowly squeezes the lowest-productivity workers out of a job over the course of 1-2 decades, there will be calls for "reeducation" or "tough love" or some such nonsense as the economy slowly spirals downward, Japan-style.
Ironically, this makes for one of the few cases where "going faster" renders the transition to AGI less harmful, whereas most AI safety issues get worse the faster the transition is.