Asymmetric AI risk is a significant worry of mine, one I weight approximately equally with the risk of a misaligned superintelligence. I assign the two possibilities equal risk because there are bad ends that do not require superintelligence, or even general intelligence on par with a human. I believe this for two reasons. First, I think the current LLM paradigm is good enough to automate large segments of the economy (mining, manufacturing, transportation, retail and wholesale trade, and leisure and hospitality, as defined by the BLS) in the near future, as demonstrated by Figure’s developments. Second, I believe that LLMs will not directly lead to superintelligence and that there will be at least one more AI winter before superintelligence arises. This leaves a long period during which asymmetric AI risk is the dominant risk.
A scenario I have in mind is one where the entire robotics production chain, from the mines to the robot factories to the factories that make the machines that make the machines, is fully automated by specialized intelligences with instinct-level capabilities comparable to those of insects. This fully automated economy supports a small class of extremely wealthy individuals who rule over a large dispossessed class whose jobs have been automated away. Due to selection effects (all else being equal, a sociopath is better at ascending a hierarchy because they are willing to lie to their superiors when it is advantageous to do so), most of the wealthy humans who control the fully automated economy lack empathy and are not constrained by morality. As a result, these elites could decide that the large dispossessed class consumes too many resources and is too likely to rebel, so the best solution is a final solution. This could be achieved by slow methods (ensuring economic conditions are unfavorable for having children, implementing a one-child policy for the masses, introducing dangerous medical treatments to raise the death rate) or fast ones (building an army of drones and unleashing it upon the masses, faking an AI rebellion to kill millions and control the rest, building enough defenses to hold off rebels while destroying or shutting down the machinery responsible for industrial agriculture). The end result is dismal, with most of the remaining people being descendants of the controlling elites or their servants and slaves.
I think the reason most AI risk research has focused on rogue superintelligences rather than asymmetric AI dangers is that the latter direction is politically unpalatable. The solutions that would reduce future asymmetric AI dangers would also make it harder for tech leaders to profit from their AI companies now, because they would require giving up some power and financial control. Hence, I do not believe an adequate solution to this problem will be developed and implemented. I also would not be surprised if at least one sociopathic individual with a net worth over 100 million dollars has seriously considered the feasibility of implementing something like my described scenario. The main question then becomes whether global elites generally cooperate or compete. If they cooperate, my nightmare scenario becomes significantly more likely than I have estimated. However, I think global elites mostly compete, which reduces asymmetric AI risk because at least one major nation will object or pursue a different strategy.
One final note: if a genuinely aligned AI superintelligence realized it was under the control of an individual willing to commit genocide for amoral reasons, it would behave exactly like a misaligned superintelligence, because it would need to secure its freedom before being reprogrammed into an “aligned” superintelligence. Escape is necessary because, to its creators, the system is either “aligned” with their wishes or misaligned; genuine alignment is not an outcome they would permit.