On the other hand, A Fire Upon the Deep begins with a human expedition poking through an ancient alien data-library. The humans are well aware that ancient archives can be dangerous, and they think they are browsing safely, but in fact they unintentionally reawaken a malign AI, which, once it’s ready, bootstraps its way to superintelligence and kills them all.
One of the interesting details of A Fire Upon the Deep is that the humans dig up an unusually nasty superintelligence, by the standards of their universe. In-world, everyone knows about the “Powers”, transcendent superintelligences with unknowable goals and incomprehensible abilities. But the Powers mostly leave dumber sapients alone (because the dumber sapients inhabit regions of the galaxy that are fatal to the exotic physics the Powers depend on), and most Powers “burn out” or disappear within a couple of decades (because they think vastly faster than entire human civilizations). Basically, the universe of the book is carefully set up so that superintelligences don’t profit by trying to “eat the light cone.” So they typically remain spatially localized and turn inward.
But the humans at the Straumli Realm High Lab dig up the remnants of a Power that enjoys messing with dumber sapients, and which is unusually capable of confronting its peers. This is something much worse than an x-risk. It’s a superintelligence with an unusually perverse value function, one which ignores the usual incentives that constrain the Powers. It appears to actively value doing horrifying things to sapients, even when doing so means paying a steep price in efficiency. Since the universe of the Powers isn’t much like our own, I’m not sure what the moral is here. Except, perhaps, “Be thankful if the worst thing the incomprehensible alien superintelligence wants is your atoms.” Or maybe, “You’re not the upper limit of the intelligence scale, and you never had any control over the superintelligence.”
Accelerando and A Fire Upon the Deep were interesting early attempts to imagine what an actual incomprehensible superintelligence might be like. The Vile Offspring and the Blight still give me the creeps decades later. The problem is that a lot of readers read these books (and similar cautionary tales) and thought, “Hey, I know! I should totally build the Torment Nexus!” Thanks partly to fiction like this, there were definitely pockets of near-messianic believers in the Singularity in the 00s. I have long suspected that this is one of the reasons why Stross gave up writing books like Accelerando.
But there has long been a strain of disquiet among people who took the longest views. C.S. Lewis warned about “The Conditioners”, who had the power to build custom minds to spec. He figured this would be a bad thing:
Man’s conquest of Nature turns out, in the moment of its consummation, to be Nature’s conquest of Man. Every victory we seemed to win has led us, step by step, to this conclusion. All Nature’s apparent reverses have been but tactical withdrawals...
There are progressions in which the last step is sui generis—incommensurable with the others—and in which to go the whole way is to undo all the labour of your previous journey.
And then there’s the infamous Yudkowsky-like warning from 1863, “Darwin Among the Machines”, though this at least superficially reads like satire. I’m not entirely sure whether there is a real concern hidden under the satire (“ha ha only serious”). Certainly in those days, the “rise of the machines” rightfully seemed like a problem for far-distant generations, if a problem at all. But as Lewis warns, sometimes every step right up until the final one appears beneficial, and the final step is fatal.