Thank you all for your comments and feedback! However pleased I am with the lively reception of my idea, the same fact also makes me sad, for obvious reasons.
I agree that sending transmissions of sufficient intensity can be challenging and may be a dealbreaker. It would be great if someone did proper calculations; perhaps I will do them.
However, I want to emphasise one thing that I probably did not emphasise enough in the article itself: for me, at this point, this is more about acknowledging that something useful can be done even if AI doom is imminent, and about building a list of ideas, rather than about discussing and implementing a select few of them. I gave specific ideas mostly for illustration, although it would of course be good if they panned out.
It may just be that suggesting additional ideas for plan E is genuinely hard, so no one did it; but maybe I simply did not make a proper call to action, so I am making one now.
I asked an LLM to do the math explicitly, and I think it shows that it's pretty infeasible: you need a large portion of total global power output, and even then you need to know who's receiving the message; a broad transmission is out.
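For what it's worth, here is a rough sketch of the kind of calculation involved, with all the numbers (detection threshold, distance, dish size, global power figure) being my own illustrative assumptions, not anything from the original post:

```python
import math

LY = 9.4607e15  # metres per light year

def isotropic_power(d_ly, flux_wm2):
    """Transmitter power needed for a given flux at distance d,
    radiating equally in all directions (the 'broad transmission' case)."""
    d = d_ly * LY
    return flux_wm2 * 4 * math.pi * d**2

def dish_gain(diameter_m, wavelength_m, efficiency=0.5):
    """Approximate gain of a parabolic dish (standard aperture formula).
    Dividing the isotropic power by this gives the aimed-beam requirement."""
    return efficiency * (math.pi * diameter_m / wavelength_m) ** 2

# Assumed figures (illustrative): receiver needs ~1 Jy over a ~1 Hz band,
# i.e. roughly 1e-26 W/m^2, and total global power output is ~18 TW.
FLUX = 1e-26
GLOBAL_POWER = 1.8e13

p_broad = isotropic_power(1000, FLUX)      # broadcast detectable at 1000 ly
p_aimed = p_broad / dish_gain(100, 0.21)   # same signal via a 100 m dish at 21 cm

print(f"broadcast: {p_broad:.1e} W ({p_broad / GLOBAL_POWER:.0%} of global power)")
print(f"aimed:     {p_aimed:.1e} W")
```

Under these assumptions, an omnidirectional broadcast detectable at 1000 light years takes on the order of 10 TW, a majority of global power output, while a beam from a large dish needs only around 10 MW, which is exactly why you have to know where the receiver is.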
I also think this plan preserves almost nothing I care about. At the same time, it is at least realistic about our current trajectory, so I think planning along these lines, and making the case for it clearly and publicly, is on net good, even though I'm skeptical of the specific details you suggested and don't think the outcome is particularly great even if we succeed.