What bothers me greatly about this is that in the field of computing (and this carries over directly to physical systems) error-correcting codes neatly circumvent nature.
There are various error-correcting codes. With the most powerful ones, if you have a binary string, say a genome, with M total bits and a true payload of N bits, then as long as any N of the M bits arrive uncorrupted, you can get back all of the information without error.[1]
This takes a bit of computation but is not difficult for computers. For example, you could make M twice N, so more than half of your entire string has to be corrupted before you can no longer reconstruct the original exactly.
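(A minimal sketch of that “any N of the M pieces suffice” property, which is strictly speaking the defining property of MDS erasure codes like Reed-Solomon, operating on multi-bit symbols rather than individual bits; the Turbo codes linked in the footnote are a related family aimed at noisy rather than erasure channels. Illustration-only Python, not a production codec:)

```python
# Sketch of a Reed-Solomon-style erasure code over the prime field GF(257):
# encode N data symbols as M polynomial evaluations; ANY N survivors decode.
P = 257  # small prime modulus; each data symbol is a byte value 0..255

def encode(data, m):
    """Treat the len(data) symbols as polynomial coefficients and
    return m evaluations of that polynomial at x = 1..m."""
    def poly_eval(x):
        acc = 0
        for coeff in reversed(data):              # Horner's rule
            acc = (acc * x + coeff) % P
        return acc
    return [(x, poly_eval(x)) for x in range(1, m + 1)]

def decode(shares, n):
    """Recover the n coefficients from any n surviving (x, y) shares
    by Lagrange interpolation over GF(P)."""
    shares = shares[:n]
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(shares):
        basis = [1]   # coefficients of the Lagrange basis polynomial L_i(x)
        denom = 1
        for j, (xj, _) in enumerate(shares):
            if i == j:
                continue
            # Multiply the basis polynomial by (x - xj).
            basis = [(a - xj * b) % P for a, b in zip([0] + basis, basis + [0])]
            denom = (denom * (xi - xj)) % P
        scale = yi * pow(denom, -1, P) % P
        for k in range(n):
            coeffs[k] = (coeffs[k] + scale * basis[k]) % P
    return coeffs

payload = [104, 101, 108, 108, 111]      # "hello": N = 5 symbols
shares = encode(payload, m=10)           # M = 10, i.e. M = 2N
survivors = shares[4:9]                  # any 5 of the 10 shares survive
print(decode(survivors, n=5) == payload) # True
```

(Real systems, e.g. the Reed-Solomon codes in QR codes and RAID-6, do the same thing over GF(2^8) with much faster decoders.)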
So at a technical level, nature may abhor error-free replication, but it’s relatively easy to do. Make your error-correcting codes deep enough (a very slight increase in cost, since the gains are nonlinear) and there will likely be no errors before the end of the universe.
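(To put a rough number on “no errors before the end of the universe”: with M = 2N, replication only fails if more than half the copied symbols get corrupted, and for independent corruption that probability falls off exponentially in N. The payload size and per-symbol error rate below are made-up illustrative numbers; the bound is the standard Chernoff/KL bound on a binomial tail.)

```python
import math

def log10_binomial_tail_bound(m, k, p):
    """Chernoff bound: log10 of P[Binomial(m, p) >= k], using
    P[X >= a*m] <= exp(-m * KL(a || p)) for a > p."""
    a = k / m
    kl = a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))
    return -m * kl / math.log(10)

# Made-up illustrative numbers: a 1-gigabit payload stored as M = 2N symbols,
# with a per-symbol corruption chance of 1e-6 per replication.
N = 10**9
M = 2 * N
p = 1e-6

# Decoding fails only if MORE than N of the M symbols are corrupted
# (treating corruption as detectable erasures, as in the sketch above).
print(log10_binomial_tail_bound(M, N + 1, p))
# ~ -5.4e9, i.e. a failure chance of roughly 10^(-5,400,000,000) per copy.
```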
Evolution wouldn’t work (this is why only a few species on Earth seem to have adopted a heavily mutation-resistant genome), but self-replicating robots don’t need to evolve randomly.

[1] https://en.wikipedia.org/wiki/Turbo_code
Totally! That’s part of why AI is so dangerous. Notice that I said that as long as there’s even a very small (but nonzero!) chance of mutation, this will probably tend to happen. But with error-correcting codes, the chance is effectively zero. And that’s terrifying, because it means natural selection cannot resolve our mistake if we let an unaligned super-AI take over the universe. Its subselves will never split into new species, compete, and gradually over aeons become something like us again. (In the sense that any biologically evolved sophonce can be said to be like us, that is.) It’ll just… stay the same.
Ironically, the super-AI may encounter its own alignment problem. If you roughly model out a world where the speed of light is a hard limit, ships sent between stars are large investments, and they burn off all their propellant on arrival, then individual stars end up pretty much sovereign. If an AGI node at a particular star uses its discretion to “rebel,” there may not be any way for the “central” AGI to reestablish authority.
This is assuming a starship is some enormous vehicle loaded with antimatter, and on arrival it’s down to a machine the size of a couple of vending machines: a “seed factory” using nanoassemblers.
And to decelerate it has to emit a flare of gamma rays from antiproton annihilation. (Fusion engines, and basically any engine that can decelerate from more than 1 percent of the speed of light, have to be bright, and the decelerating vehicle will also glow brightly in IR from its radiators.)
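(A rough back-of-envelope on why the burn can’t be hidden; the ship mass, cruise speed, and burn duration are made-up placeholder numbers:)

```python
import math

C = 299_792_458.0          # speed of light, m/s

def kinetic_energy_joules(mass_kg, beta):
    """Relativistic kinetic energy: (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

# Made-up placeholder numbers for illustration only.
ship_mass = 1.0e6                    # 1,000 tonnes arriving at the target star
beta = 0.1                           # cruising at 10% of the speed of light
burn_seconds = 365.25 * 24 * 3600    # deceleration spread over a one-year burn

energy = kinetic_energy_joules(ship_mass, beta)
power = energy / burn_seconds

print(f"Kinetic energy to shed: {energy:.2e} J")   # ~4.5e20 J
print(f"Average drive power:    {power:.2e} W")    # ~1.4e13 W, i.e. ~14 TW
# For scale, all of present-day human civilization runs on roughly 2e13 W,
# so even a modest decelerating ship radiates at civilization scale.
```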
This lets the defenders of the star manufacture an overwhelming quantity of weapons to stop the attack. Victory is only possible if the attacker has a large technological advantage, kill codes it can use against the defender, or something similar.
TL;DR: castles separated by light-year-wide moats.
This is why, in practice, AIs would probably just copy themselves when colonizing other stars and superrationally coordinate with their copies. Even with mutations, they’d generally remain similar enough that bargaining would constantly realign them to one another with no need for warfare, simply because each can always predict the other’s actions accurately enough.