Security thinking:
We have a communication channel to a dangerously expansive and militaristic alien civilization, and excellent surveillance of them (we managed to obtain a copy of their Internet archive), so we know a lot about the current state of their culture. We can send them messages, but since they are paranoid they will basically always disregard these, unless they’re valid checkable mathematical proofs. We’re pretty sure that if we let them expand they will start an interstellar war and destroy us, so we need to crash their civilization by sending them mathematical proofs. What do we send them? Assume our math is a millennium ahead of theirs, and theirs is about current-day.
There are way more bits available to the aliens/AI if they are allowed to choose what mathematical proofs to send. In my hypothetical, the only choice they can make is whether to fail to produce a valid proof. We don’t even see the proof itself, since we just run it through the proof assistant and discard it.
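For concreteness, the check-and-discard step could look something like this minimal sketch. It assumes a Lean toolchain on the PATH as the checker; the tool invocation and the timeout are illustrative, and any proof assistant with a batch-mode checker would work the same way:

```python
import os
import subprocess
import tempfile

def proof_validates(proof_source: str) -> bool:
    """Check an untrusted proof and report only whether it validated.

    The proof text is never inspected or stored by us, and the
    checker's output is swallowed: the single returned bit is the
    only information that leaves this function.
    """
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".lean", delete=False
    ) as f:
        f.write(proof_source)
        path = f.name
    try:
        result = subprocess.run(
            ["lean", path],       # batch-mode check; hypothetical setup
            capture_output=True,  # discard the checker's output, too
            timeout=60,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False              # a proof that doesn't check in time fails
    finally:
        os.remove(path)           # the proof is discarded either way
```

The point of the design is that the sender's output channel is one bit wide: validate or fail.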
I’d missed that, and I agree it makes a huge difference.
However, I don’t think a culture that isn’t willing to pause AI development entirely would accept your proposal.
Yeah, I made the most conservative possible proposal to make a point, but there’s probably some politically viable middle ground somewhere.
Yes, I thought I had specified this in the post, but maybe it is not clear.