How would any of this work? How do you go from a sequence of unknown symbols with no key to even understanding the definitions of the alien bytecode or programming language? Everyone posits things like “they will use universal principles of math in their volume 0 messages” but how do you bootstrap from that to meaning when you have nothing but their signal? Are we even vaguely sure this is possible?
When we do language pairing in the real world it’s between beings that share almost identical compute and sensor topologies, and we can ‘point’ to shared objects that we can both observe with our sensors. This creates a shared context that you don’t get from a series of amplitude/phase changes from a transmitter many lightyears away.
And then ok, say you get this far. Who will run alien code shared only as ‘bytecode’ without a VM? Who would be dumb enough not to try to translate or analyze it in a high level form that humans can understand?
In fact it seems kinda obvious that any ‘news’ aliens blast to us from that distance has to be a self-replicating parasite. Why else would they invest the resources? We should know that it’s highly dangerous data and treat it accordingly.
How would any of this work? How do you go from a sequence of unknown symbols with no key to even understanding the definitions of the alien bytecode or programming language?
Some people have already proposed ways of doing this. For example, in 1960 Hans Freudenthal described Lincos, which is intended to be readily understandable by intelligent aliens on the receiving end of interstellar contact. Maybe he succeeded, maybe he didn’t, but I don’t think the problem is very hard in an absolute sense. Extremely technologically advanced aliens should be able to solve this problem.
Since aliens inhabit the same physical universe we do, and likely evolved via natural selection, it’s very likely they will share a few key cognitive concepts with us. Of course, maybe you think I’m making unjustified assumptions here, but I’d consider these to be among the least objectionable assumptions in this whole framework.
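The bootstrapping trick in Lincos-style languages is roughly this: send many examples of true statements that use unknown symbols, and let the receiver infer meanings as the assignment that makes every statement come out true. A toy sketch in Python, where the "lessons," pulse encoding, and glyph names `A` and `B` are all invented for illustration:

```python
from itertools import permutations

# Hypothetical "lesson" stream: numbers are unary pulse groups, and the
# glyphs A and B are unknown symbols the sender uses consistently.
# e.g. ("..", "A", ".", "B", "...") could mean 2 + 1 = 3 if A='+' and B='='.
lessons = [
    ("..", "A", ".", "B", "..."),
    (".", "A", ".", "B", ".."),
    ("...", "A", "..", "B", "....."),
]

def consistent(meaning):
    """Check whether an assignment of glyphs to operators makes
    every lesson a true arithmetic statement."""
    for left, g1, right, g2, result in lessons:
        if meaning[g1] != "+" or meaning[g2] != "=":
            return False
        if len(left) + len(right) != len(result):
            return False
    return True

# Try every assignment of the unknown glyphs to {+, =}; only one survives.
for ops in permutations(["+", "="]):
    meaning = dict(zip("AB", ops))
    if consistent(meaning):
        print("inferred:", meaning)  # prints: inferred: {'A': '+', 'B': '='}
```

The real problem is of course much harder than this toy, but the principle scales: enough redundant true statements pin down symbol meanings without any shared pointing-at-objects context.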
Who will run alien code shared only as ‘bytecode’ without a VM? Who would be dumb enough not to try to translate or analyze it in a high level form that humans can understand? [...] We should know that it’s highly dangerous data and treat it accordingly.
As I pointed out in the post, the official protocol of the SETI Institute recommends that we immediately flood the internet with any alien messages we receive. It’s going to be pretty hard to prevent people from running code, once that happens.
The irony is that you immediately recognized this as a bad policy. But that’s exactly my point.
Don’t get me wrong, I’d love to be wrong because smart people have already thought about this, and instantly realized the flaw in running arbitrary computer programs sent to us by aliens without even a token effort to review the programs first. But, uhh, this is not an area our civilization has been particularly good at thinking about so far.
“likely evolved via natural selection”

My default expectation would be that it’s a civilization descended from an unaligned AGI, so I’m confused why you believe this is likely.
A guess: you said you’re optimistic about alignment by default—so do you expect aligned AGI acting in accordance with the interests of a natural-selection-evolved species?
In the context of this comment, I don’t think it really matters whether the alien AGIs are aligned or not. The point is whether they will share cognitive concepts with us. I think AIs will share at least a few cognitive concepts even if they’re very misaligned with us. It’s kind of hard for me to imagine this not being true (aren’t they living in the same universe?). That said, I admit that point about evolution wasn’t very strong; I mostly meant that aliens would descend from some precursor species that evolved via natural selection. The much stronger argument is that aliens will share some cognitive concepts because there’s a natural set of concepts in the universe, such as the concept of “atoms”.
One way of sending data is the following: aliens can send easily recognisable 2D images using the principles of TV, with a line-ending symbol every n bits. Using pictures, they can send blueprints of a simple computer, like a Turing machine, and then code for it. This computer would draw and adapt a blueprint of a more efficient computer, which could run more complex code, which would be an AI.
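The first step here is genuinely easy to reverse-engineer: if a fixed line-ending marker recurs every n bits (or the message length factors as width × height, as with the Arecibo message), a receiver can recover the image layout with a few lines of code. A minimal sketch in Python, where the 21-bit "message" and the `111` marker are made up for illustration:

```python
def find_line_marker(bits, marker):
    """Guess the line length from the spacing of a repeated
    line-ending marker (e.g. '111') in the bit stream."""
    positions = [i for i in range(len(bits) - len(marker) + 1)
                 if bits[i:i + len(marker)] == marker]
    gaps = {b - a for a, b in zip(positions, positions[1:])}
    return gaps  # a single gap value suggests a fixed line width

def to_image(bits, width):
    """Reshape a flat bit string into rows of the given width."""
    return [bits[i:i + width] for i in range(0, len(bits), width)]

# A made-up 21-bit message: 3 rows of 7 bits, each row ending in '111'.
message = "1000111" "0100111" "0010111"
print(find_line_marker(message, "111"))  # consistent gap of 7 -> width 7
for row in to_image(message, 7):
    print(row.replace("0", ".").replace("1", "#"))
```

The hard part isn't recovering the raster; it's choosing image conventions (and picture content) that an alien receiver will interpret the way the sender intends.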
The AI would still need to be relatively simple, or trained after being built, right, given that they wouldn’t be able to encode billions of parameter values in the images?
The complexity of the human genome puts a rough upper bound on how many parameters would be required to specify an AGI (it will have more learned parameters, once deployed). Of course, a superintelligence capable of taking over the world is harder to bound.
One further thing to note is that an alien AI might require a lot of memory and processing power to perform its intended task. As I wrote in the post, this is one reason to suppose that aliens might want to target civilizations only after they have achieved a certain level of technological development; otherwise their scheme might fail.
A proposal for how to initiate communication with aliens, starting from just mathematics, is Hans Freudenthal’s “Lincos: Design of a Language for Cosmic Intercourse, Part 1”. (No part 2 ever appeared.)
A fictional example of receiving a dangerous message from the stars is Piers Anthony’s “Macroscope”.