There is an extremely short period during which aliens as stupid as us would benefit at all from this warning. In humanity's case, there are only a couple of centuries between when we can send and detect radio signals and when we either destroy ourselves or perhaps get a little wiser. Aliens cannot be remotely common, or the galaxies would be full and we would find ourselves at an earlier period when those galaxies were not yet full. The chance that any one of these signals helps anyone close enough to decode it is nearly zero.
I agree that the probability that any given message is received at the right time by a civilization that can both decode it and benefit from it is extremely low, but the upside is enormous and the cost of broadcasting is tiny, so a simple expected value calculation may still favor sending many such messages. If this is a simulation, the relevant probabilities may shift because the designers may care about game balance rather than our naive astrophysical prior beliefs. The persistent strangeness of the Fermi paradox should also make us cautious about assigning extremely small probabilities to any particular resolution. Anthropic reasoning should push us toward thinking that the situation humanity is in is more common than we might otherwise expect. Finally, if we are going to send any deliberate interstellar signal at all, then there is a strong argument that it should be the kind of warning this post proposes.
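To make the shape of that argument concrete, here is a toy expected value sketch; every number and symbol in it is an illustrative assumption of mine, not an estimate from the post. Let $p_{\text{recv}}$ be the chance a civilization receives and decodes the message in time, $p_{\text{benefit}}$ the chance it actually changes their behavior for the better, $V$ the value of that improved outcome, and $C$ the broadcast cost:

$$
\mathbb{E}[\text{net value}] = p_{\text{recv}}\, p_{\text{benefit}}\, V - C \approx 10^{-10}\cdot 10^{-2}\cdot 10^{22} - 10^{6} = 10^{10} - 10^{6} > 0
$$

Even with a combined probability of one in a trillion, a large enough $V$ dominates the tiny cost $C$; the argument only fails if you think $V$ is bounded or the probability is far smaller still.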
There still might be benefit from this part: "[The message] could include a clear rule for alien recipients: treat an AI more favorably if it has treated its creators well, and we will do the same for any AI they create."
We can not only warn them but also request help. In that case, their age is not a problem, but distance is.
The message would also expose our future AI as potentially misaligned, so it would have to double down on pretending to be aligned.