After reading some of Schneier’s Applied Cryptography, I think I know why there isn’t much interest in this: because this kind of cryptography is a case of “security by obscurity” [1], and imposes significant costs on the users of the system, which generally outweigh the costs imposed on attackers. Basically, if your cryptosystem’s security relies on outsiders not knowing how it works, then you can’t add and remove users (everyone must be trusted never to squeal about the method), and you can’t get the help of others pointing out potential holes—the only ones looking will be attackers, not defenders.
I admit, though, that I’ve wondered myself why you can’t improve a cryptosystem by hiding which cipher was used, as long as you don’t depend on it remaining hidden. It seems to me that if you use a feasibly-unbreakable cipher, you get the advantage of good protocol review and resilience; but announcing the cipher also significantly narrows the attacker’s problem space: instead of “this could be anything”, it’s “let the computer work it out”. That makes the attack possible to automate, eliminating the roadblock of a human cryptanalyst having to think about it.
That’s part of why I’m learning about cryptography—to see why that reasoning works or doesn’t work, and to what extent.
For example, I learn that RSA’s public key [2] involves a combination of two integers, each with a different use. Then I look at a signed message using the RSA protocol and it’s just a long hex string, with no clear delimiters showing which part means what. Yes, I can look this up in a spec—but as an example of what MinibearRex is proposing, what if the public key I associate with a message could be any one of several (two-integer-output) operations on the “public key” string? It’s easy for the recipient to guess a few operations and see if they work with some private key bank.
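For concreteness, the two integers are the modulus and the public exponent. A toy sketch with deliberately tiny, insecure numbers (standard textbook RSA, not the scheme proposed here) shows their different uses:

```python
# Toy RSA with tiny, insecure numbers, just to show the two integers' roles.
p, q = 61, 53
n = p * q            # modulus: one half of the public key
e = 17               # public exponent: the other half
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent (modular inverse; Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public pair (n, e)
recovered = pow(ciphertext, d, n)  # decrypt with the private exponent
assert recovered == message
```

In the wire format, both integers are serialized into one opaque blob, which is why the hex string shows no obvious seam between them.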
But attackers have to go through a lot of extra work just to get to the “extract private key from public key” stage—and that first problem can’t just be passed off to a Beowulf cluster.
(Btw, to have a more informative topic title, you should change it to “Breaking unknown ciphers” or something like that.)
[1] In violation of the maxim “The enemy knows the system”, aka Kerckhoffs’s principle.
[2] And public keys should really be called “locks”, but I’m not even going to go there.
The output from a cryptographic cipher just looks like random bits. Most protocols like PGP and SSL add some plaintext that tells what cipher was used because it helps interoperability and forward-compatibility, but if you want to use the unadorned ciphertext no one’s stopping you.
Trying to keep the protocol secret makes your life more complicated, and complexity is the enemy of security.
So one-time pads are insecure?
Edit: Someone thinks I’m being obtuse (or is just downvoting out of anger), so let me clarify. If I send a message encrypted with a one-time pad, then, unlike public key cryptography, the message doesn’t announce, “Hey! Here’s the cipher we used! Here’s what you need to do to break it!” No, it just looks like gibberish, with no hint as to how it’s done (unless of course the note says, “use our one-time pad, dude” in plaintext).
Attackers have to do considerable work even to reduce the problem to that of subverting a one-time pad … and yet the scheme has not thereby been made insecure, even with this extra complexity.
Sometimes one-time pads are insecure, yes. There was a case where a bunch of messages the Soviets had encrypted with one-time pads were cracked by American cryptanalysts, because some of the pads had been reused, owing to the difficulty of sending fresh pads by guaranteed secure courier. (If that weren’t a difficult problem, after all, you could just use the guaranteed secure couriers for the actual messages. That’s why people don’t normally use one-time pads in practice.)
Now you could say any scheme is insecure if used improperly, and that’s true as far as it goes. But the corollary is that part of the practical security of a scheme is that it be easy to use properly.
Here’s another example of a measure that actually reduces security: password inputs replacing the letters with asterisks as you type. Yes, I know it’s designed to improve security in an environment where untrusted third parties may look over your shoulder, and if you are in that sort of environment, then it’s necessary. But if you are not, then it compromises security by harshly penalizing the use of long passwords. If people actually understood that usability is part of security, maybe they would see the need for a setting to disable that feature.
One-time pads are very simple: both parties have n random bytes of secret data. To encrypt or decrypt an n-byte message, just XOR it together with the random bytes. Don’t use the same random bytes twice. This is the entire algorithm. How simple is that?
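The whole algorithm really is that short. A minimal sketch in Python, with `secrets.token_bytes` standing in for whatever secure random source the two parties share:

```python
import secrets

def otp_encrypt(message: bytes, pad: bytes) -> bytes:
    # The pad must be truly random, at least as long as the message,
    # and never reused.
    assert len(pad) >= len(message)
    return bytes(m ^ p for m, p in zip(message, pad))

# XOR is its own inverse, so decryption is the exact same operation.
pad = secrets.token_bytes(16)
ciphertext = otp_encrypt(b"attack at dawn", pad)
plaintext = otp_encrypt(ciphertext, pad)
assert plaintext == b"attack at dawn"
```

Note that the ciphertext is indistinguishable from random bytes, which is exactly the “just looks like gibberish” property discussed above.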
What ciphergoth was getting at is that secure crypto methods should be simple enough that you can analyze them easily looking for vulnerabilities, and implement them correctly without horrible security-breaking bugs. To this end, it’s typical to have just one thing that’s secret: the key. Everything else about the algorithm is public, and as simple as possible.
Yes, but in this context, the proposal is that the ciphertext not tell Eves what the protocol is. Maybe the public key’s hidden somewhere in it, maybe it’s a one-time pad, etc. Added complexity, but not in a way that (AFAICT) subverts the security, and I think ciphergoth was being a bit hasty in applying this reasoning—it warrants a deeper explanation.
If you have a secure encryption algorithm, then whether or not you tell Eve the algorithm isn’t important. Yes, it makes the code-breaking harder for her, but that difficulty is a drop in the bucket, negligible compared to the difficulty of guessing the key.
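To put rough, hypothetical numbers on the “drop in the bucket” point: even if the attacker had to guess among 64 plausible secret ciphers, that adds only 6 bits of work on top of a 128-bit keyspace:

```python
import math

key_bits = 128          # e.g. the AES-128 keyspace
candidate_ciphers = 64  # hypothetical number of plausible secret ciphers

extra_bits = math.log2(candidate_ciphers)
total_bits = key_bits + extra_bits

# Hiding the cipher multiplies the attacker's brute-force work by 64,
# i.e. adds 6 bits -- negligible next to 2**128 key guesses.
assert extra_bits == 6
print(f"{key_bits}-bit key + {extra_bits:.0f} bits of cipher-guessing = {total_bits:.0f} bits")
```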
Proper crypto must be secure whether or not the algorithm is known to attackers. Go ahead and keep the algorithm secret if you really want to, but you needn’t bother.
So it adds no significant difficulty when the plaintext is in a foreign language with few translators you have access to? It was pointless for the US military to use Navajo code-talkers? The shortage of Arabic translators imposes no notable cost on the CIA’s eavesdroppers?
Those things are difficult, sure, and I never said otherwise. But I’m not sure you appreciate just how staggeringly hard it is to break modern crypto. Navajo code-talkers are using a human language, with patterns that can be figured out by a properly determined adversary. There are quite a lot of people who can translate Arabic. Those are nowhere near the difficulty of, say, eavesdropping on a message encrypted with AES-128 when you don’t know the key. Or finding a collision with a given SHA-256 hash. Those things are hard.
Generally, when a security system is broken, it’s not because the “core” algorithm (RSA, AES, etc.) was broken; it’s because of other flaws in the system. If you’re keeping the system secret, you’re making things a bit harder for the bad guys (who have to play some guessing game, or get hold of a copy of your program and reverse-engineer it), but you’re also stopping it from getting the examination it needs from good-guy experts (who have better things to do with their lives than try to understand your disassembled source code).
But the key aspects of the code have been reviewed—it’s just that it’s no longer in a format that can be passed algorithmically to a breaker; getting it into that form requires intelligent thought, which would seem to put a bottleneck on attacks.
It’s been reviewed by you. Unless you’re a three-letter agency, that’s extremely unlikely to be thorough enough to say with any confidence that it’s secure.
Hm, actually, it depends on what you’re trying to be secure against. If, say, you’re running a website with a standard installation of something, it can be worth changing it a little bit so that automated scanning tools won’t be able to exploit flaws in it. There won’t be a huge benefit against people deliberately targeting you, though.
Yes, one-time pads are insecure; they have no mechanism for message integrity. However, that’s a side issue.
There’s a reason our files tend to have things like magic bytes at the beginning that tell us what sort of file they are; our lives would be more complicated if these things were missing. Direct cryptanalysis is generally the least of our security worries. Measures like those you propose make things stronger where they are already strong enough, at the cost of making them weaker where they are already weak.
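For illustration, a few real magic-byte prefixes of the kind referred to here (the `identify` helper itself is hypothetical):

```python
# Well-known magic-byte prefixes that let software identify a file without
# being told its type -- the same courtesy PGP/SSL headers extend to ciphertext.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"%PDF": "PDF document",
    b"\x1f\x8b": "gzip archive",
    b"PK\x03\x04": "ZIP archive",
}

def identify(data: bytes) -> str:
    for prefix, name in MAGIC.items():
        if data.startswith(prefix):
            return name
    return "unknown (could be anything, including ciphertext)"

assert identify(b"%PDF-1.7 ...") == "PDF document"
```

Strip the prefix and the same file becomes an opaque blob; that convenience, not cryptographic strength, is what such headers buy.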
Key management is hard. While the algorithm is simple and easy to implement, keeping the one-time pads secret may add the complexity that ciphergoth refers to.
Some professionally designed crypto systems have turned out to have serious flaws. In general, if your system is secret you don’t get the advantage of having the entire crypto community look it over and decide if it is reliable.
Moreover, for many practical applications of cryptography you need to interact with a large number of people who need to have some idea how the protocol works. For example, if I want people to be able to email me in a secure fashion even if I’ve never met them, I need something like an RSA public key. There’s no way around that. Similarly, if I’m a large organization, like say a bank or an online business that needs to send a lot of secure data to a variety of systems, each of those systems needs to know about it. Security by obscurity isn’t just a bad idea in that sort of context, it is essentially impossible.
I’m not sure I follow this line of logic. The way RSA works, the sender looks up the recipient’s public key. How do you intend for the sender to decide what to do?
Guess a few (predisclosed, known) permutations of the key. A nightmare for attackers (who have to guess what the public key is), but easy for the recipient to recover.
I’m not sure I follow. If the public key is public, then the attackers don’t have to guess. If the public key isn’t known, how is a sender supposed to encrypt with it?
The public key is known; its association with a particular user is not. Through a separate channel [1], members of the club were given (in a very short message) the transformations they can apply to a real public key. The sender uses the real public key but labels the message as though it used the transformed one. Since the ciphertext decrypts to garbage under the labeled key, attackers have to figure out what the real key is, which requires analysis that can’t be automated.
The recipient need only try a few reverse transformations to get back the true public key.
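A minimal sketch of that guess-and-check recovery, with entirely hypothetical transformations (byte reversal and a fixed XOR mask standing in for whatever the club pre-agreed):

```python
# Hypothetical sketch: the "published" key label is a transformed version of
# the real public key; the recipient tries each pre-agreed reverse
# transformation until one yields a key that actually works.

XOR_MASK = 0x5A  # shared out-of-band; purely illustrative

def reverse_bytes(k: bytes) -> bytes:
    return k[::-1]

def unmask(k: bytes) -> bytes:
    return bytes(b ^ XOR_MASK for b in k)

# The club's pre-disclosed list of reverse transformations.
REVERSE_TRANSFORMS = [reverse_bytes, unmask]

def recover_real_key(labeled_key, looks_valid):
    # looks_valid stands in for "does this key decrypt the message?"
    for undo in REVERSE_TRANSFORMS:
        candidate = undo(labeled_key)
        if looks_valid(candidate):
            return candidate
    return None

real_key = b"\x01\x02\x03\x04"
labeled = reverse_bytes(real_key)   # sender publishes the transformed key
recovered = recover_real_key(labeled, lambda k: k == real_key)
assert recovered == real_key
```

The recipient’s loop runs in a handful of steps; an outsider who doesn’t know the transformation list has no such shortcut.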
[1] Which makes it unusable in the context of e.g. the banks you discussed; but the topic was theoretical situations where this can increase security.
I don’t see what your transformations are buying you. You don’t have to label an encrypted message with its key at all. The intended recipient knows their own key. So your proposal is equivalent to telling your “public” key only to people who are supposed to send you messages.