Yes. So here is the choice between two theories of existential risk. One is that no dangerous AI is possible and aliens are near and slow; in that case, METI is dangerous. The other is that superintelligent AI is possible soon and presents the main risk, and aliens are far away. This choice boils down to the discussion of AI risk in general.