How would any of this work? How do you go from a sequence of unknown symbols with no key to even understanding the definitions of the alien bytecode or programming language?
Some people have already proposed ways of doing this. For example, in 1960 Hans Freudenthal described Lincos, a constructed language intended to be readily understandable by intelligent aliens on the receiving end of interstellar contact. Maybe he succeeded, maybe he didn’t, but I don’t think the problem is very hard in an absolute sense. Extremely technologically advanced aliens should be able to solve it.
Since aliens inhabit the same physical universe we do, and likely evolved via natural selection, it’s very likely they will share a few key cognitive concepts with us. Of course, maybe you think I’m making unjustified assumptions here, but I’d consider these to be among the least objectionable assumptions in this whole framework.
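To make the Lincos idea concrete, here’s a toy sketch in Python (my own illustration, not Freudenthal’s actual notation) of how such a message could bootstrap shared concepts from a bare pulse train: unary numerals first, then an equality sign, then addition, with each new symbol inferable from consistent examples alone.

```python
# Toy sketch (not Freudenthal's actual notation) of a Lincos-style
# lesson: the message teaches its own primitives, starting from
# nothing but repeated pulses.

def pulses(n: int) -> str:
    """Unary numeral: the quantity n is encoded as n pulses."""
    return "." * n

def lesson() -> list[str]:
    lines = []
    # Stage 1: counting. Establish that groups of '.' denote quantities.
    for n in range(1, 5):
        lines.append(pulses(n))
    # Stage 2: equality. '=' always separates two identical quantities,
    # so its meaning is recoverable from the examples alone.
    for n in range(1, 5):
        lines.append(f"{pulses(n)} = {pulses(n)}")
    # Stage 3: addition. '+' is inferable because 'a + b = c' only ever
    # appears when c is the concatenation of a and b.
    for a in range(1, 4):
        for b in range(1, 4):
            lines.append(f"{pulses(a)} + {pulses(b)} = {pulses(a + b)}")
    return lines

if __name__ == "__main__":
    print("\n".join(lesson()))
```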
Who will run alien code shared only as ‘bytecode’ without a VM? Who would be dumb enough not to try to translate or analyze it in a high level form that humans can understand? [...] We should know that it’s highly dangerous data and treat it accordingly.
As I pointed out in the post, the official protocol of the SETI Institute recommends that we immediately flood the internet with any alien messages we receive. Once that happens, it’s going to be pretty hard to prevent people from running the code.
The irony is that you immediately recognized this as a bad policy. But that’s exactly my point.
Don’t get me wrong, I’d love to be wrong because smart people have already thought about this and instantly realized the flaw in running arbitrary computer programs sent to us by aliens without even a token effort to review them first. But, uhh, this is not an area our civilization has been particularly good at thinking about so far.
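For what that “token effort” might look like, here’s a minimal sketch using CPython bytecode as a stand-in for the hypothetical alien payload: the standard-library dis module lets you inspect what untrusted code would do without ever executing it.

```python
# A minimal sketch of "analyze, don't run", using CPython bytecode as a
# stand-in for the hypothetical alien payload. compile() parses and
# assembles a code object but executes nothing; dis lets us read it.

import dis

UNTRUSTED_SOURCE = "__import__('os').system('echo not sandboxed')"

# Parsing/assembling is safe: no untrusted code runs here.
code = compile(UNTRUSTED_SOURCE, "<alien>", "eval")

# Static inspection: list every instruction, flagging dangerous names.
SUSPICIOUS = {"__import__", "exec", "eval", "open", "system"}
for instr in dis.get_instructions(code):
    flag = "  <-- suspicious" if instr.argval in SUSPICIOUS else ""
    print(f"{instr.opname:<20} {instr.argval!r}{flag}")

# The step we deliberately never take is eval(code): reviewing the
# disassembly first is exactly the token effort argued for above.
```

Reviewing an unknown alien instruction set would obviously be far harder than reading CPython disassembly, but the asymmetry is the same: inspection is safe, execution is not.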
“likely evolved via natural selection”

My default expectation would be that it’s a civilization descended from an unaligned AGI, so I’m confused why you believe this is likely.
A guess: you said you’re optimistic about alignment by default—so do you expect aligned AGI acting in accordance with the interests of a natural-selection-evolved species?
In the context of this comment, I don’t think it really matters whether the alien AGIs are aligned or not. The point is whether they will share cognitive concepts with us. I think AIs will share at least a few cognitive concepts even if they’re very misaligned with us. It’s kind of hard for me to imagine this not being true (aren’t they living in the same universe?). That said, I admit that point about evolution wasn’t very strong; I mostly meant that aliens would descend from some precursor species that evolved via natural selection. The much stronger argument is that aliens will share some cognitive concepts because there’s a natural set of concepts in the universe, such as the concept of “atoms”.