Ideally that would be the case. However, if I had to guess, this roiling mass of Luddites would likely have chosen to boycott anything to do with AI as a result of their job and career losses. We’d like to believe that we could easily be talked out of violence. However, when humans get stuck in a certain way of thinking, we become stubborn and cling to our own facts regardless of what an expert, or expert system, tells us. This future ChatGPT could use this to its advantage, but I don’t see how it prevents violence once people’s minds are set on it. Telling them “Don’t worry, be happy, this will all pass as long as you trust the government, the leaders, and the rising AGI” seems profoundly unlikely to work, especially in America, where telling anyone to trust the government just makes them distrust the messenger even more. And saying “market forces will allow new jobs to be created” seems unlikely to convince anyone who has been thrown out of work by AI.
And increasing crackdowns on any one particular group would only be tolerated if there were a controlled burn of unemployment through society. When it’s just about everyone you have to crack down on, you have a revolution on your hands. All it takes is one group suffering brutality for it to cascade.
The way to stop this is total information control and deception, which, again, we’ve decided is totally undesirable and dystopian behavior. Justifying it with “for the greater good” and “the ends justify the means” becomes the same sort of crypto-Leninist talk that technoprogressives tend to so furiously hate.
This thought experiment requires the belief that automation will happen rapidly, without any care, foresight, or planning, and that there are no serious proposals to allow for a soft landing. The cold fact is that this is not an unrealistic expectation. I’d put the p(doom) as high as 90% that I’m actually underestimating the amount of reaction by failing to account for racial radicalization, religious radicalization, third-worldism, progressivism flirting with Ludditism, conservatism collapsing into widespread paleoconservative primitivism, and so on.
If there is a more controlled burn, one where we don’t simply throw everyone out of their jobs with only a basic welfare scheme to cover them, then that number drops dramatically, because we are easily amused and distracted by tech toys and entertainment. It is entirely possible for a single variable to drastically alter outcomes, and right now, we seem to be speedrunning the outcome with all the worst possible variables working against us.
Indeed, I don’t have answers, but only because this is a sort of “AI mid” future, one that assumes some remnant of the status quo remains intact: either because AI does not advance as far or as fast as anticipated (a position I no longer hold), or because a future AI model deliberately chooses to maintain an artificial status quo bubble, a “human reserve” relatively indistinguishable from a more gradually progressing future, for psychosocial reasons (which is plausible, but not certain).
Generally, though, it’s the epistemological barrier at play, since the intent was to provide a more grounded, economics-focused look at the effects of universal task automation. I have mulled over technism a great deal recently, but since I have no economics background, the intention on my end wasn’t to ask o1 but to use this as a launchpad, something that an even more impressive AI system could handle. Deep Research is probably that model, so I intend to return to this with a follow-up to see whether any of this is coherent, or whether it really is as schizophrenic and sophistic as “the means of production owns the means of production” would historically have sounded.