If I remember right, the present received wisdom is that if you succeed in sending a message like that, you’re inviting somebody to wipe you out. So you may get active opposition.
Yes. So here is the choice between two theories of existential risk. One holds that no dangerous AI is possible and that aliens are near and slow; in that case, METI is dangerous. The other holds that superintelligent AI is possible soon and presents the main risk, while aliens are far away. The choice between them boils down to the debate about AI risk in general.