As I remember, even small radio telescopes (used as transmitters) can be used for METI if focused on an exact star. The distance can always be compensated for by the assumption that an advanced civilization has larger receivers, so sending to Andromeda is still feasible. I don’t know of any test of CosmicOS, but human hackers have been able to break much more complex codes.
The whole thing can be done unilaterally, maybe even by a privately owned radio transmitter. Previously, METI enthusiasts tried to use small transmitters. But the fact of the transmission has to be public so that it will have a restraining effect on the future AI.
If we start discussing it, we will not reach consensus for many years.
Yeah, but have you done a back of the envelope calculation here, or has anyone else? What size target could we hit in the Andromeda galaxy using, e.g., $50 million at our current tech levels, and how long could we transmit for? How large of a receiver would that target need to have pointing toward us in order to receive the message with anything like reasonable fidelity? If our message is focused no more tightly than on “a star,” then would the receivers need an antenna the size of a solar system to pick it up? If not, why not?
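For concreteness, here is the kind of back-of-the-envelope link budget I have in mind, as a rough Python sketch. Every number in it is an assumption chosen only for illustration (an Arecibo-class transmitter, a 21 cm carrier, a 1 Hz bandwidth, a 20 K receiver), not anyone’s actual proposal:

```python
# Back-of-the-envelope link budget for a transmission to Andromeda.
# All numbers below are illustrative assumptions, not METI proposals.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
LY = 9.4607e15              # metres per light year

P_tx = 1e6                  # transmitter power, W (assumed)
D_tx = 300.0                # transmitter dish diameter, m (assumed)
wavelength = 0.21           # carrier wavelength, m (assumed, 21 cm line)
eta = 0.7                   # aperture efficiency (assumed)
d = 2.5e6 * LY              # distance to Andromeda, m
T_sys = 20.0                # receiver system temperature, K (assumed)
B = 1.0                     # signal bandwidth, Hz (assumed)

A_tx = eta * math.pi * (D_tx / 2) ** 2        # effective transmit aperture, m^2
G_tx = 4 * math.pi * A_tx / wavelength ** 2   # transmit antenna gain
flux = P_tx * G_tx / (4 * math.pi * d ** 2)   # power flux at the receiver, W/m^2

noise = k_B * T_sys * B                       # receiver noise power, W
A_rx = noise / flux                           # receiver area needed for SNR = 1, m^2
D_rx = 2 * math.sqrt(A_rx / (eta * math.pi))  # equivalent dish diameter, m

print(f"Flux at Andromeda: {flux:.2e} W/m^2")
print(f"Receiver area for SNR = 1: {A_rx:.2e} m^2 (~{D_rx / 1000:.0f} km dish)")
```

Under these assumed numbers the receiver comes out on the order of hundreds of kilometres across — far smaller than a solar system, but far larger than anything we have built — and that is before we ask how long the 1 Hz message would take to send.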
I’m not sure codebreaking is a reasonable test of a supposedly universal language. A coded message has some content that you know would make sense if the code can be broken. By contrast, a CosmicOS message might or might not have any content that anyone else would be able to absorb. Consider the difference between, e.g., a Chinese transmission sent in the clear, and an English transmission sent using sophisticated encryption. If you’re an English speaker who’s never been exposed to even the concept of a logographic writing system, then it’s not obvious to me that it will be easier to make sense of the plaintext Chinese message than the encrypted English message. I think we should test that hypothesis before we invest in an enormous transmitter.
I’m not sure what your comment “if we will start discussing it, we will not reach consensus for many years” implies about your interest in this conversation. If you don’t see a discussion on this topic as valuable, that’s fine, and I won’t take up any more of your time.
I think that METI theorists have such calculations.
I analyzed (in relation to SETI risk) ways to send self-evident data and concluded that the best starting point is to send two-dimensional images, encoded the way an old-school TV signal is.
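To illustrate what I mean by encoding like an old-school TV signal, here is a toy Python sketch; the 23 × 17 raster and the test pattern are just illustrative assumptions, using the Arecibo-message trick of making the total length a product of two primes so a receiver can guess the line width:

```python
# Toy sketch: a two-dimensional image sent as a raster-scanned bit stream.
# The stream length is a product of two primes, so a receiver that factors
# it can guess the raster dimensions.  The image is an illustrative pattern,
# not a real METI payload.

WIDTH, HEIGHT = 23, 17   # two primes: 23 * 17 = 391 bits total

def encode(image_rows):
    """Flatten a HEIGHT x WIDTH binary image into one bit stream, row by row."""
    assert len(image_rows) == HEIGHT and all(len(r) == WIDTH for r in image_rows)
    return [bit for row in image_rows for bit in row]

def decode(bits):
    """Recover the image by cutting the stream into WIDTH-bit scan lines."""
    assert len(bits) == WIDTH * HEIGHT
    return [bits[i:i + WIDTH] for i in range(0, len(bits), WIDTH)]

# A tiny test pattern: a hollow rectangle, the kind of self-evident shape
# a receiver could recognise once it guesses the raster dimensions.
image = [[1 if x in (0, WIDTH - 1) or y in (0, HEIGHT - 1) else 0
          for x in range(WIDTH)] for y in range(HEIGHT)]

stream = encode(image)
assert decode(stream) == image
print("".join(map(str, stream)))
```

The point is only that a raster scan plus a recognisable geometric shape gives the receiver a foothold that a purely symbolic encoding does not.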
No, I was just stating a fact: the discussion of METI risks has gone on for decades, and the positions of opponents and proponents are entrenched.
If I remember right, the present received wisdom is that if you succeed in sending a message like that, you’re inviting somebody to wipe you out. So you may get active opposition.
Yes. So here is the choice between two theories of existential risk. One is that no dangerous AI is possible and aliens are near and slow; in that case, METI is dangerous. The other is that superintelligent AI is possible soon and presents the main risk, while aliens are far away. Such a choice boils down to the discussion about AI risk in general.
> assumption that an advanced civilization has larger receivers
If they are more advanced than us, wouldn’t they either have aligned AI or be AI? In that case, I’m not sure what warning them about our possible AI would do for them?
It is a request for help, and it also exposes our future AI as potentially misaligned, so it would have to double down on pretending to be aligned.