So perhaps “proto-AGI” is a better term for it: not quite the full thing just yet, but showing clear generality across a wide range of domains. If it can spread out further and become much larger, and gain recursive self-improvement (which might require an entirely different architecture), it could become what we’ve all been waiting for.
Yuli_Ban
On a fundamental level, I agree. However, there are some aspects of this technology that make me wonder whether things might be a tad different this time, and whether past experience can accurately predict the future. Artificial intelligence is a different beast from what we are used to, which is to say “mechanical effort.”
When it comes to multimedia deepfakes, the threat is less “people believe everything they see” and more “people no longer trust anything they see.” The reason we trust written text and photographs is that most of us have never dealt with forged letters, most altered photos are obviously altered, and there are consequences for faking them. When I was a child, I sometimes had my senile grandmother write letters detailing why I was “sick” and couldn’t come to school, or had her sign homework under my father’s name. Eventually, the teachers found out and stopped trusting any letter I brought in, even the ones legitimately from my father.
I wonder if there is any major plan to greatly expand the context window? Or perhaps to add a sort of “inner voice”/chain of thought that lets the model write down its intermediate computational steps and refer to them later? I’m aware the context window tends to increase with parameter count.
Correct me if I’m wrong, but a context window of even 20,000 tokens could be enough for a model to reliably imitate human short-term memory and consistently pass a limited Turing Test (the way Eugene Goostman barely scraped by in 2014), as opposed to the constant forgetting of LSTMs and Markov chains. Sure, the Turing Test isn’t particularly useful as an AI benchmark, but the market for advanced conversational agents could be a trillion-dollar business, and the average Joe is far more susceptible to the ELIZA Effect than we commonly assume.
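To make the “short-term memory” framing concrete, here’s a minimal toy sketch (pure Python, no real model involved; counting whitespace-separated words as a stand-in for a real tokenizer) of how a fixed context window bounds what a conversational agent can “remember”:

```python
from collections import deque

class ContextWindow:
    """Toy sliding context window: keeps only the most recent
    messages that fit within a fixed token budget. 'Tokens' here
    are just whitespace-separated words, a stand-in for a real
    tokenizer."""

    def __init__(self, max_tokens=20_000):
        self.max_tokens = max_tokens
        self.messages = deque()
        self.token_count = 0

    def add(self, message):
        tokens = len(message.split())
        self.messages.append((message, tokens))
        self.token_count += tokens
        # Evict the oldest messages once the budget is exceeded --
        # this eviction is the "forgetting" being discussed.
        while self.token_count > self.max_tokens:
            _, evicted = self.messages.popleft()
            self.token_count -= evicted

    def visible_history(self):
        return [msg for msg, _ in self.messages]

# With a tiny 8-token budget, the earliest turn falls out of view:
ctx = ContextWindow(max_tokens=8)
ctx.add("my name is Alice")   # 4 tokens
ctx.add("I live in Boston")   # 4 tokens (budget now full)
ctx.add("what is my name")    # 4 tokens -> first message evicted
print(ctx.visible_history())
```

The point of the toy: with a 20,000-token budget instead of 8, an entire conversation of ordinary length fits in the window, so nothing gets evicted and the agent never “forgets” within that session.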
In my rambling, I intended to address some of these issues but chose to cap it off at a point I found satisfying.
The first point: simply put, I do not see an AGI needing more than 10% of the labor force to bring about the full potential of its capabilities. Admittedly, that is an arbitrary number with no firm basis in reality.
On the second point, I do not believe we would need to see more than 30% unemployment before severe societal pressure is put on the tech companies and the government to do something. This isn’t quite as arbitrary: unemployment rates as “low” as 15% have historically been triggers for severe social unrest.
As it stands, roughly 60% of the American economy is wrapped up in professional work: https://www.dpeaflcio.org/factsheets/the-professional-and-technical-workforce-by-the-numbers
Assuming only half of that is automated within five years (since a good portion of it still requires physical robots), you have already caused enough pain to get the government involved.
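The arithmetic behind that estimate, made explicit (the 60% share and the one-half automatable fraction are the post’s own assumptions, not data):

```python
# Back-of-the-envelope version of the estimate above.
# Both inputs are assumptions from the post, not measured data:
professional_share = 0.60    # share of the economy in professional work
automatable_fraction = 0.50  # portion automatable without physical robots

displaced_share = professional_share * automatable_fraction
print(f"Displaced share of the workforce: {displaced_share:.0%}")
# i.e. roughly the 30% unemployment threshold mentioned earlier
```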
However, I do predict there will be SOME material capability in the physical world. My point is more that the potential for a rebellion to be crushed through robotics alone will not be there, as most robotic capabilities will indeed be deployed for labor.
I suppose the point is that there will be a “superchilled” window of robotics capability at around the same time AGI is likely to arrive, in the latter half of the 2020s: robots advanced enough to do labor and deployed at a large enough scale to do so, but not so overwhelmingly capable that literally every physical job is automated. Hence I kept the estimates to around 50% unemployment at most, though possibly as high as 70% if companies aggressively try to futureproof themselves for whatever reason.
Furthermore, I’m going by the news that companies are beginning to use generative AI (ChatGPT, Stable Diffusion/Midjourney, and the like) to automate their workforces—mostly automating tasks at this point, but this will inevitably generalize to whole positions—despite the technology not yet being fully mature for deployment.
https://finance.yahoo.com/news/companies-already-replacing-workers-chatgpt-140000856.html
If it’s feasible for companies to save money via automation, they are apt to do it. Likewise, I expect plenty of businesses to automate ahead of time in the near future as a result of AI hype.

The third point is one I intended to address more directly: the prospect of losing material comfort and stability is exactly the sort of emotional and psychological shock that can drive unrest and, given enough uncertainty, a revolution. We saw this as recently as the COVID lockdowns in 2020 and the protests that arose following that March (for various reasons). We’ve seen similarly violent reactions to job loss at earlier points in history. Some of this was buffered by the prevalence of unions, but we’ve since deunionized en masse.
It should also be stressed that we in the West have not had to deal with the prospect of such intense, permanent unemployment. In America and the UK, the last time the numbers were anywhere near 30% was during the Great Depression, and few people then expected them to remain that high indefinitely. In our current situation, by contrast, we’re not just expecting 30% to be the ceiling; we’re expecting it to be the floor, with unemployment eventually reaching 100% (or at least 99.99%).
I feel most people wouldn’t mind losing their jobs if they were paid for it. I feel most people wouldn’t mind comfortable stability through robot-created abundance. I merely present a theory that all of this change arriving too fast, before we’re properly equipped to handle it, in a culture that does not at all value or prepare us for anything like the lifestyle being promised, is going to end very badly.
There are any number of other things that might already have caused a society-wide Luddite revolt (nuclear weapons, climate change, Internet surveillance), but it hasn’t happened.
The fundamental issue is that none of these has had a direct negative impact on the financial, emotional, and physical wellbeing of hundreds of millions of people all at once. Internet surveillance is the closest, but even then it’s a somewhat abstract privacy concern; climate change eventually will, but not soon enough for most people to care. This scenario, however, would be actively, tangibly happening, and at accelerando speeds. I’d also go so far as to say these issues have merely built up like a supervolcanic caldera over the decades: many people do care about them, but there has never been a major trigger for mass protest, Luddite-revolt style, over them.
The situation I’m referring to is precisely the long-predicted “mass unemployment from automation,” and current trends suggest it is going to happen very quickly rather than over long timeframes. If there has ever been a reason for a revolt, taking away people’s ability to earn income and put food on the table is it.
I expect there will be a token effort to feed people to prevent revolt, but the expectation that things are not going to change, suddenly confronted with the prospect of wild, uncontrollable change, will be the final trigger. The promise that “robots are coming to give you abundance” is inevitably going to go down badly. It will inevitably be a major culture war topic, and one that I don’t think enough people will believe even in the face of actual AI and robotic deployment. And that’s before the psychosocial response, where you have millions upon millions who would feel horribly betrayed by their expected future immediately going up in smoke, their incomes vastly reduced, and the prospect of death looming (whether by super-virus, disassembly, or mind-uploading, the last of which is indistinguishable from death for the layman). And good lord, that’s not even bringing up cultural expectations, religious beliefs, and entrenched collective dogma.
The only possible way to avoid this is to time it perfectly. Don’t automate much right up until AGI’s unveiling; then, while people are still in shock, automate as much as possible and deploy machines to increase abundance.
Of course, the AGI likely kills everyone instead, but if it works, you might be able to stave off a Luddite rebellion, provided there is enough abundance to satisfy material comforts. But this is an almost absurd trickshot that requires capitalists to stop acting like capitalists for several years, then discard capitalism entirely afterwards.
Ideally, that would be the case. However, if I had to guess, this roiling mass of Luddites would likely boycott anything to do with AI as a result of their job and career losses. We’d like to believe we could easily be talked out of violence, but when humans get stuck in a certain way of thinking, we become stubborn and accept our own facts regardless of what any expert, or expert system, says to us. A future ChatGPT could use this to its advantage, but I don’t see how it prevents violence once people’s minds are set on it. Telling them “Don’t worry, be happy, this will all pass as long as you trust the government, the leaders, and the rising AGI” seems profoundly unlikely to work, especially in America, where telling anyone to trust the government just makes them distrust the messenger even more. And saying “market forces will allow new jobs to be created” seems unlikely to convince anyone who has been thrown out of work by AI.
And increasing crackdowns on any one particular group would only be tolerated if unemployment spread through society as a controlled burn. When it’s just about everyone you have to crack down on, you have a revolution on your hands. All it takes is one group suffering brutality for it to cascade.
The way to stop this is total information control and deception, which, again, we’ve decided is thoroughly undesirable and dystopian behavior. Justifying it with “for the greater good” and “the ends justify the means” becomes the same sort of crypto-Leninist talk that technoprogressives so furiously hate.
This thought experiment requires the belief that automation will happen rapidly, without care or foresight or planning, and that there are no serious proposals for a soft landing. The cold fact is that this is not an unrealistic expectation. I’d put it at perhaps as high as 90% likely that I’m actually underestimating the reaction, failing to account for racial radicalization, religious radicalization, third-worldism, progressivism flirting with Ludditism, conservatism collapsing into widespread paleoconservative primitivism, and so on.
If there is a more controlled burn, if we don’t simply throw everyone out of their jobs with only a basic welfare scheme to cover for them, then that number drops dramatically, because we are easily amused and distracted by tech toys and entertainment. A single variable can drastically alter outcomes, and right now we seem to be speedrunning toward the outcome with all the worst possible variables working against us.
It appears this hypothesis has come true with GPT-3 and the new API.
Great takes on all this, better than a typical reply.
I certainly hope someone can reasonably prove me wrong as well. The best retort I’ve gotten is that “this is no different than when a young child is forced to go to school for the first time. They have to deal with an extreme overwhelming change all at once that they’ve never been equipped to deal with before. They cry and throw a tantrum and that’s it; they learn to deal with it.”
My counter-retort to that was: “You do realize that just proves my point, right? Because now imagine that, all at once, tens of millions of 4-to-5-year-olds threw a tantrum, except they also knew how to use guns and bombs and had good reason to fear they were never going to see their parents again unless they used them. Nothing about that ends remotely well.”