I have never done cryptography, but the way I imagine the work is that it takes place in a context of extremely resourceful adversarial agents, so you have to give up the kind of casual, barely-noticed neglect of weird, artificial-sounding edge cases and seemingly unlikely scenarios, because that is where the danger lives: your adversaries can force those weird edge cases to happen, and they are exactly the part of the system’s behavior you haven’t thought through.
Maybe one analogy with AI alignment is that we’re also dealing with potentially extremely resourceful agents, agents that are effectively adversarial until we’ve actually solved alignment, so we’re not allowed to dismiss weird hypothetical scenarios as unlikely edge cases and say “Come on, that’s way too far-fetched, how would it even do that?” That’s like pointing to a hole in a ship’s hull and asking, “What are the odds the water molecules would decide to go through this hole? The ship is so big!”