If this reasoning is right, and we don’t manage to defy fate, humanity will likely forever follow that earthbound path, and be among dozens – or perhaps hundreds, or thousands, or millions – of intelligent species, meekly lost in the dark.
Unfortunately, even a lack of superintelligence plus mankind’s AI-induced degradation don’t rule out progress[1] toward interstellar travel.
Even your scenario has “robots construct and work in automated wet labs testing countless drugs and therapies” and claims that “AIs with encyclopedic knowledge are sufficient to do the job well enough”. If so, progress might not be stopped even by human extinction.
Moreover, suppose that LLMs, including neuralese ones, are inevitably worse than humans at generating novel insights. Then what about a government creating babies (think of Brave New World, where babies are produced in factories and raised by the state) or superbabies? And what about artificial or simulated brains with more chaotic interconnections, OOMs more neurons, or OOMs-longer plasticity periods? And if there is no way to create a usable discovery-capable AI, then how do humans arrive at discoveries at all? By having souls that perform OOMs more compute than their brains provide?
In addition, even the assumption that ASI is impossible and that medium-level AIs can’t advance progress, yet can doom the world via superstimuli, still leaves humans the chance to outlaw the latter, or at least would have[2] left them such a chance in the past. Think of the Cold War between the USSR and the USA. Had the Soviet Union won[3], it could have, for example, outlawed social parasitism or porn everywhere, prevented the appearance of social networks or their degradation into their current state, or tried to align the AIs to communism instead of to the benefit of corporations.
P.S. My scenario has the AIs’ utility bottlenecked on alignment: superintelligence emerges, but ends up either misaligned and ready to take over the world, or aligned yet usable only as help to elicit the maximal capabilities of the user’s own brain. Does the latter count as a medium-level AI?
[1] I consider the case where interstellar travel is actually impossible to be highly unlikely. Think of Project Daedalus, for example.
[2] Of course, even if humans have lost their chance, the aliens aren’t all doomed to lose theirs.
[3] I tried asking many AIs to estimate the probability of the USSR’s victory in the Cold War, using only data available on Jan 1, 1951, and then using data available right before Khrushchev’s resignation. What I got doesn’t imply that a Soviet victory was totally impossible:
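For concreteness, here is a minimal sketch of how such an elicitation can be scripted, assuming the OpenAI Python SDK; the model names, cutoff dates, and prompt wording are illustrative assumptions, not the exact procedure I used:

```python
# Minimal sketch: asking several chat models for a probability estimate
# under an explicit information cutoff. Model names and cutoffs below are
# illustrative assumptions; substitute whatever models you have access to.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CUTOFFS = [
    "January 1, 1951",
    "October 13, 1964 (the eve of Khrushchev's removal)",
]
MODELS = ["gpt-4o", "gpt-4o-mini"]  # hypothetical choice of models

def estimate(model: str, cutoff: str) -> str:
    """Ask one model for a probability estimate restricted to the cutoff."""
    prompt = (
        f"Using only information available as of {cutoff}, estimate the "
        "probability that the USSR wins the Cold War. Reply with a single "
        "percentage and a one-sentence justification."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

for model in MODELS:
    for cutoff in CUTOFFS:
        print(model, "|", cutoff, "|", estimate(model, cutoff))
```

One caveat on this design: stating a cutoff in the prompt only asks the model to role-play ignorance; it cannot actually forget post-cutoff training data, so such estimates are suggestive rather than properly counterfactual.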