I think this is a promising strategy that deserves more investigation. Your game theory analysis of dark forest-type situations is particularly compelling; thank you for sharing it. I have two main questions: (1) to what extent is this technically feasible, and (2) how politically costly would the weirdness of the proposal be?
For technical feasibility, I was very surprised to hear you suggest targeting the Andromeda Galaxy. I agree that in principle the nearest stars are more likely to already have whatever data they might want about Earth, but I think of “the nearest stars” as being within 50 light-years or so, not as including the entire Milky Way. Can you explain why you think we’d be able to send any message at all to the Andromeda Galaxy in the next few years, or why an alien civilization 1,000 light-years away in a different part of the Milky Way would most likely be able to passively gather enough data on Earth to draw their own conclusions about us without the need for a warning?
The other part of the technical feasibility question is whether constructed languages like CosmicOS actually work. Has anyone done testing to see whether, e.g., physicists with no prior exposure to the language and no reference guides are able to successfully decipher messages in CosmicOS?
Politically, I’d like to see focus groups and polling on the proposal. Does the general American public approve or disapprove of such warnings? Do they think it’s important or unimportant? What about astronomers, or Congressional staffers, or NASA employees? Yes, this is a weird idea, but the details could turn out to matter in terms of whether it’s so weird that there’s a high risk of burning significant amounts of credibility for the AI safety movement as a whole.
As I remember, even small radio telescopes (used as senders) can be used for METI if focused on an exact star. The distance can always be compensated for by the assumption that an advanced civilization has larger receivers, so sending to Andromeda is still feasible. I don't know of any test of CosmicOS, but human hackers have been able to break much more complex codes.
The whole thing can be done unilaterally, maybe even with a privately owned radio transmitter. Previously, METI enthusiasts tried to use small transmitters. But the fact of the transmission has to be public so that it will have a restraining effect on the future AI.
If we start discussing it, we will not reach consensus for many years.
Yeah, but have you done a back of the envelope calculation here, or has anyone else? What size target could we hit in the Andromeda galaxy using, e.g., $50 million at our current tech levels, and how long could we transmit for? How large of a receiver would that target need to have pointing toward us in order to receive the message with anything like reasonable fidelity? If our message is focused no more tightly than on “a star,” then would the receivers need an antenna the size of a solar system to pick it up? If not, why not?
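For concreteness, here is the kind of estimate I have in mind, a rough Python sketch assuming an Arecibo-class transmitter (305 m dish, roughly 1 MW at S-band) and standard diffraction and Friis-style formulas; every number is an illustrative assumption rather than a claim about any actual proposal:

```python
# Rough back-of-envelope for an Arecibo-class transmitter aimed at Andromeda.
# All numbers are illustrative assumptions, not a real system design.
import math

c = 3.0e8                      # speed of light, m/s
f = 2.38e9                     # assumed transmit frequency, Hz (S-band)
lam = c / f                    # wavelength, m
P_tx = 1.0e6                   # assumed transmitter power, W
D_tx = 305.0                   # assumed dish diameter, m
d = 2.5e6 * 9.46e15            # ~2.5 million light-years to Andromeda, in m

# Order-of-magnitude beamwidth: how big a patch of Andromeda does the beam cover?
theta = 1.22 * lam / D_tx                    # beam angle, radians
spot_ly = theta * d / 9.46e15                # beam footprint in light-years
print(f"beam footprint at Andromeda: ~{spot_ly:.0f} light-years across")

# Flux at the receiver (Friis-style estimate).
A_tx = 0.7 * math.pi * (D_tx / 2) ** 2       # effective aperture, 70% efficiency assumed
G_tx = 4 * math.pi * A_tx / lam ** 2         # transmit gain
flux = P_tx * G_tx / (4 * math.pi * d ** 2)  # W per m^2 at the receiver
print(f"flux at receiver: ~{flux:.1e} W/m^2")

# Receiver aperture needed for SNR ~ 1 in a 1 Hz bandwidth at a 10 K system temperature.
k_B = 1.38e-23
T_sys, B = 10.0, 1.0
A_rx = k_B * T_sys * B / flux                # m^2
print(f"receiver area for SNR~1 at 1 Hz: ~{A_rx:.1e} m^2 "
      f"(a dish roughly {math.sqrt(4 * A_rx / math.pi) / 1000:.0f} km across)")
```

On these assumptions the beam covers a patch of Andromeda on the order of a thousand light-years wide, and picking the signal out would take a receiving dish on the order of a hundred kilometers across even at a 1 Hz bandwidth, which is why I want to see someone actually run the numbers.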
I’m not sure codebreaking is a reasonable test of a supposedly universal language. A coded message has some content that you know would make sense if the code can be broken. By contrast, a CosmicOS message might or might not have any content that anyone else would be able to absorb. Consider the difference between, e.g., a Chinese transmission sent in the clear, and an English transmission sent using sophisticated encryption. If you’re an English speaker who’s never been exposed to even the concept of a logographic writing system, then it’s not obvious to me that it will be easier to make sense of the plaintext Chinese message than the encrypted English message. I think we should test that hypothesis before we invest in an enormous transmitter.
I’m not sure what your comment “if we will start discussing it, we will not reach consensus for many years” implies about your interest in this conversation. If you don’t see a discussion on this topic as valuable, that’s fine, and I won’t take up any more of your time.
I think that METI theorists have such calculations.
I analyzed (in connection with SETI risk) ways to send self-evident data and concluded that the best starting point is to send two-dimensional images encoded in something like an old-school TV signal.
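A minimal sketch of the kind of encoding I mean, assuming the simplest version in which the image dimensions are two distinct primes, so the receiver can recover the raster just by factoring the message length (the same trick used in the Arecibo message); the dimensions and the toy image below are purely illustrative:

```python
# Minimal sketch: a 2-D image sent as a self-describing bit stream.
# The dimensions are two distinct primes, so a receiver who factors the
# message length can rebuild the raster unambiguously. This is an
# illustration, not a worked-out METI payload format.

WIDTH, HEIGHT = 23, 17          # distinct primes: 23 * 17 = 391 bits

def encode(image):
    """Flatten a HEIGHT x WIDTH bitmap row by row, like a TV raster scan."""
    assert len(image) == HEIGHT and all(len(row) == WIDTH for row in image)
    return [bit for row in image for bit in row]

def decode(bits):
    """Receiver side: knowing the two prime factors, rebuild the rows."""
    assert len(bits) == WIDTH * HEIGHT
    return [bits[i * WIDTH:(i + 1) * WIDTH] for i in range(HEIGHT)]

if __name__ == "__main__":
    # Toy image: a hollow rectangle, just to show the round trip works.
    img = [[1 if (r in (0, HEIGHT - 1) or c in (0, WIDTH - 1)) else 0
            for c in range(WIDTH)] for r in range(HEIGHT)]
    assert decode(encode(img)) == img
    print("\n".join("".join("#" if b else "." for b in row)
                    for row in decode(encode(img))))
```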
No, I was just stating the facts: the discussion of METI risks has continued for decades, and the positions of opponents and proponents are entrenched.
If I remember right, the present received wisdom is that if you succeed in sending a message like that, you’re inviting somebody to wipe you out. So you may get active opposition.
Yes. So here is the choice between two theories of existential risk. One is that no dangerous AI is possible and aliens are near and slow. In that case, METI is dangerous. The other is that superintelligent AI is possible soon and presents the main risk, and aliens are far away. Such a choice boils down to the discussion about AI risk in general.
> assumption that an advanced civilization has larger receivers
If they are more advanced than us, wouldn’t they either have aligned AI or be AI? In that case, I’m not sure what warning them about our possible AI would do for them?
It would be a request for help, and it would also expose our future AI as potentially misaligned, so it would have to double down on pretending to be aligned.