“In space, the smallest mistake will kill you”
For organic life, not for machines. We have machines crawling all over space, and some have already exited the solar system and are still going.
TBH this analysis seems quite far removed from the capabilities usually imagined for superintelligence. If a machine intelligence can nano-bot humanity out of existence in one second, then it can definitely get to the moon more easily than we did (and we managed that with relative ease). If AI can't colonise space, then I'm no longer afraid of it at all.
Fair point. Cosmic radiation is hostile to machines too, and probably deadlier to the more sensitive components, but I'd guess a combination of shielding, self-checking, and redundancy could solve it.
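For what it's worth, the redundancy idea has a standard form in real rad-tolerant systems: triple modular redundancy, where three copies of a computation run and a majority vote masks a radiation-induced fault in any one copy. Here's a minimal sketch of the voting logic; all the names are illustrative, not from any actual flight software library:

```python
# Minimal sketch of triple modular redundancy (TMR): run three
# redundant copies of a computation and take a majority vote, so a
# radiation-induced fault in any single copy is masked.
# Real rad-hard systems typically do this in hardware; this is just
# the idea in Python.

from collections import Counter

def tmr_vote(replicas):
    """Return the majority result from three redundant computations."""
    results = [f() for f in replicas]
    winner, count = Counter(results).most_common(1)[0]
    if count < 2:
        # Two or more replicas disagree with each other: unrecoverable.
        raise RuntimeError("no majority: more than one replica faulted")
    return winner

def compute():
    return 2 + 2

if __name__ == "__main__":
    # Simulate one replica suffering a hypothetical single-event upset
    # (a bit flip in its result).
    faulty = lambda: (2 + 2) ^ 0b100
    print(tmr_vote([compute, compute, faulty]))  # prints 4
```

Of course this only masks single faults; enough simultaneous hits and the vote fails, which is why shielding and error-correcting memory get layered on top.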
We don’t have data on what a typical AI might look like (I mean an AI developed by a random space civilization). Do they all get some variant of LLMs first? Something that can copy their skills and become smart enough to destroy them, but also has a nonzero rate of hallucination, especially in unfamiliar scenarios, which at some point destroys the AI itself before it can conquer the universe? But this is pure speculation with no data; the imagination could go in any direction...
Haha, I have no idea! I agree the possibility space is huge. All I do know is that we don’t see any evidence of alien AIs around us, so they make a poor candidate for the great filter for other alien races (unless they kill those races and then for some reason kill themselves too, or decide to be non-expansionist every single time).