
Cryptography: from Theory to Practice (a personal perspective)


Presentation Transcript


  1. Cryptography: from Theory to Practice (a personal perspective) Hugo Krawczyk IBM Research Asiacrypt’2010

  2. An exciting journey… • This talk reflects my personal journey between theory and practice • I will focus on the “influencing practice” part • hence the title “from theory to practice”, though it is always a two-way journey (as we’ll see) • It is laborious but ultimately quite fun and rewarding (most of the time  ) • I will try to offer some (hopefully useful) lessons for those who want to join the journey • …and you are very welcome and encouraged to do so!

  3. "In theory, theory and practice are the same. In practice, they are not."

  4. In cryptography, theory IS practical • We don’t have the luxury of most engineering fields: • Can’t resort to experimental evidence: no testing, no simulations • Think of safety systems (airplanes, cars): one can simulate natural forces experimentally or by computer • But you cannot model “human force” – especially a malicious one (think of designing the WTC to withstand an airplane accident vs. a 9/11 attack) • And you cannot computer-simulate crypto attackers, but you can “simulate” them via mathematical models and proofs • Math models & proofs are our main source of confidence (even when keeping in mind the imperfection of math models…) • The alternative is to spend “1000 PY of cryptanalysis” or pray (or both!)

  5. Can you sell this view to practitioners? • Not easy… but it’s getting much easier • Today if you come with a proposal even the engineers may ask for (mathematical) analyses – a far-fetched dream 15 years ago • It has one condition though: You need to respect the “rules of the engineering game” • Simplicity, efficiency, low cost of deployment, … • And it has to solve a problem the engineers want to solve • Selling a solution is not easy – selling a problem is much harder • As we’ll see it is all about a challenging balancing act

  6. A tough balancing act… [Figure: a scale balancing Engineering and Theory]

  7. HMAC Story: A Balancing Act

  8. 1994: IPsec design underway • Goal: Secure the Internet Protocol • Authenticate and encrypt IP packets • Authentication method: key-prepend MAC • MAC_K(P) = Hash(K || P) • Can you see the problem? • Next: A not too friendly “dialogue” with some IPsec leading engineers at the time

  9. A 1994 IPsec dialogue • Me: This mode is open to extension attacks! • They: Aha! But not if P’s length is prepended… and IP packets always carry the length in the header! • Me: I do not like to use non-cryptographic elements as essential cryptographic elements. What if tomorrow the length is omitted? • They: Are you cuckoo? IP packets will always carry their length • I insisted, though I did not have a truly convincing answer beyond: “prudent engineering, sound principles” • But guess what? The length indeed “disappeared”
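To make the extension attack concrete, here is a hedged sketch over a toy Merkle-Damgård hash. The real attack works the same way against MD5/SHA-1-style hashes, whose internals standard libraries do not expose; the toy compression function, block size, and helper names below are illustrative assumptions, not anyone's actual design.

```python
import struct

BLOCK = 16                      # toy block size in bytes
IV = 0xCBF29CE484222325         # toy 64-bit initial state
MASK = 0xFFFFFFFFFFFFFFFF

def compress(state: int, block: bytes) -> int:
    # Toy (non-cryptographic) compression function; only the Merkle-Damgard
    # iteration structure matters for the attack, not this function's strength.
    for b in block:
        state = ((state ^ b) * 0x100000001B3) & MASK
    return state

def md_pad(total_len: int) -> bytes:
    # Standard MD-strengthening: 0x80, zero bytes, then the message length.
    p = b"\x80"
    while (total_len + len(p) + 8) % BLOCK:
        p += b"\x00"
    return p + struct.pack(">Q", total_len)

def md_hash(msg: bytes, state: int = IV, already: int = 0) -> int:
    # `already` = bytes previously absorbed into `state` (a multiple of BLOCK);
    # only the attacker below needs this resume capability.
    data = msg + md_pad(already + len(msg))
    for i in range(0, len(data), BLOCK):
        state = compress(state, data[i:i + BLOCK])
    return state

def prepend_mac(key: bytes, msg: bytes) -> int:
    # The key-prepend proposal: MAC_K(P) = Hash(K || P)
    return md_hash(key + msg)

# Length-extension forgery: the attacker sees (msg, tag) and knows only the
# key length, never the key itself.
key, msg = b"secret-key-bytes", b"original packet payload"
tag = prepend_mac(key, msg)

glue = md_pad(len(key) + len(msg))       # the padding the verifier re-creates
suffix = b";attacker-chosen extension"
forged_msg = msg + glue + suffix
forged_tag = md_hash(suffix, state=tag,
                     already=len(key) + len(msg) + len(glue))

assert prepend_mac(key, forged_msg) == forged_tag   # the forgery verifies
```

The attacker simply resumes the hash computation from the published tag; note that prepending the packet length inside P is exactly what blocks the forged suffix, which is why the "length in the header" argument mattered so much.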

  10. IP Packet Authentication in IPsec • Two modes • AH (authentication header): Authentication only • ESP (encapsulating security payload): Authentication plus optional encryption

  11. Authentication Header (AH*) [~RFC 2402] • *approximate description • [Packet diagram: IP header (incl. payload length), SPI, sequence number (replay prevention), MAC value, payload, padding, pad length, protocol — the MAC covers the IP header]

  12. ESP Format [~RFC 2406] • The MAC does not cover the IP header! • [Packet diagram: IP header (incl. payload length), SPI, sequence number (replay prevention), initialization vector, payload (optionally encrypted), padding, pad length, protocol, MAC value]

  13. The end of the story? • ESP is the most common IPsec mode, and its MAC does not cover the header (for some good reasons) • So what about MAC security without the length? • Eventually, HMAC was (invented and) adopted • The prepend-only proposal is a clear example of too much focus on engineering-only considerations • It was simple and secure under the specific conditions at the time, but too fragile to withstand future changes • It required a balancing act…

  14. Theory: NMAC • NMAC_{K1,K2}(M) = Hash_{K1}(Hash_{K2}(M)) (the keys replace the hash IV) • The “right thing”: conceptually a PRF applied to a universal hash, with a nice proof • but requires a variable IV (replaced with the keys) • … and two keys • Engineers wanted a black-box call to the hash (e.g., h/w implementations) and a single short key

  15. Bridging between Theory and Practice • Proof used NMAC_{K1,K2}(M) = Hash_{K1}(Hash_{K2}(M)) • Here the subscript means replacing the hash IV with the key • The “compromise” (black-box call to Hash, single key): • HMAC(K, M) = Hash((K ⊕ opad) || Hash((K ⊕ ipad) || M)) • As NMAC but with “built-in” key derivation: K1 = Hash(K ⊕ opad), K2 = Hash(K ⊕ ipad) • and most important: no change of IV • balance regained, and the rest is history…
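As a concreteness check, the HMAC equation above can be written in a few lines against Python's standard library; SHA-256 (block size 64 bytes) is assumed here as the underlying hash, and the function name is ours.

```python
import hashlib, hmac

def my_hmac_sha256(key: bytes, msg: bytes) -> bytes:
    # HMAC(K, M) = Hash((K XOR opad) || Hash((K XOR ipad) || M))
    block = 64                                  # SHA-256 block size
    if len(key) > block:
        key = hashlib.sha256(key).digest()      # long keys are hashed first
    key = key.ljust(block, b"\x00")             # then zero-padded to a full block
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

# Sanity check against the standard library implementation
assert my_hmac_sha256(b"key", b"message") == hmac.new(b"key", b"message",
                                                      hashlib.sha256).digest()
```

Note how the nesting also blocks the extension attack from the earlier sketch: extending the inner hash is useless without the keyed outer call.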

  16. Randomized Hashing Story: A Balancing Act [HK’06]

  17. The Collisions Crisis • Background: The “hash confidence” crisis • Obvious answer: design, standardize, implement new stronger hash functions (SHA3 competition) • But what if new functions broken in 5-10 years? • Can we buy insurance against future collisions?? • Specifically: Can we build digital signatures without collision resistance? (remain secure even if collisions found) • In theory: Yes [NY’89]. • Then why are we building signatures that depend on coll. resistance?

  18. Randomized Hashing • From: Sign(H(M)) (hash-then-sign) To: Sign(H_r(M)) where H_r = randomized version of H • Idea: r chosen by the signer at the time of signature; r sent with message and signature • Thus, an attack on the signature starts only after learning r • Fundamental shift in attack scenario: off-line vs. on-line • In particular, no use for off-line collisions • But can it be built? Can it be proven? Can it be practical?

  19. Initial “Randomized Hashing” Proposal [HK06] • First proposal: Sign(r || H(M ⊕ r)) • || concatenation, ⊕ blockwise XOR, r = repeated random block • We proved that off-line collisions on H do not break the signature • The attacker has a much harder problem to solve • An implementation of the “target collision resistance” notion • But, would the engineers buy it? • No! We broke the hash-then-sign paradigm and with it 1000s of implementations • Signing r in addition to the hash does not fit hash-then-sign

  20. Back to the drawing board • Find a randomized hashing scheme that provides freedom from collisions but preserves hash-then-sign • Enter RMX: Given msg M to sign • Signer chooses fresh r • Computes σ = Sign( H (RMX(r,M)) ) • Sign is any “hash-then-sign signature scheme” (RSA, DSS,…) • Note that only the output of H is signed, not r • Signature = (σ, r)

  21. RMX: Preserving Hash-then-Sign • [Diagram comparing the two schemes. 1st proposal: M = (m1, …, mL) → HASH → SIGN, with r signed alongside the hash. RMX: M = (m1, …, mL) and fresh r → RMX(r, M) = (r, m1⊕r, …, mL⊕r) → HASH → SIGN; output = (signature, r)]
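A rough sketch of the RMX preprocessing shown in the diagram. The block size, zero-padding of the last block, and encoding below are illustrative assumptions; the standardized version specifies these details differently.

```python
import hashlib, os

def rmx(r: bytes, msg: bytes) -> bytes:
    # RMX(r, M) = (r, m1 XOR r, ..., mL XOR r): prepend r and XOR it into
    # every message block. Block size = len(r).
    block = len(r)
    msg += b"\x00" * (-len(msg) % block)      # illustrative padding only
    mixed = b"".join(
        bytes(a ^ b for a, b in zip(msg[i:i + block], r))
        for i in range(0, len(msg), block)
    )
    return r + mixed

r = os.urandom(64)                            # fresh randomness per signature
digest = hashlib.sha256(rmx(r, b"message to be signed")).digest()
# signature = (Sign(digest), r) for any hash-then-sign scheme (RSA, DSS, ...);
# only H(RMX(r, M)) is signed, and r travels alongside the signature.
```

The point of the re-design is visible here: the signing step still receives a single hash value, so existing hash-then-sign code paths are untouched.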

  22. Randomized Hashing Lesson • To meet the engineers’ requirement to preserve hash-then-sign we had to re-design RMX • We had to change the proof and introduce a new notion: “enhanced target collision resistance” (eTCR) • Luckily, we could prove eTCR under the same assumptions on the underlying compression function • So, were the engineers impressed? Mixed results. • NIST standardized the RMX technique, and eTCR is an explicit requirement for SHA3 • Yet, no real implementations • Why? Partly, because the following principle is missing

  23. Randomized Hashing Lesson • Current focus is on changing the hash functions -- not on the way we use them • Need to convey the message: It’s not just about WHAT to use but HOW to use it • Many times, the “how to use” is more critical than the basic design (using it right can survive a weaker function) • I love to use the following ECB vs CBC analogy • The “what”: Block cipher • The “how”: ECB vs CBC

  24. Analogy from block ciphers: CBC vs ECB • ECB encrypts each input block independently • CBC randomizes encryption to hide repeated blocks • But does CBC add real security? • “Come on, how bad is it to leak repeated blocks?” • Certainly not too bad for “random” (incompressible) plaintext • Well…

  25. ECB vs CBC • [Figure: the Linux penguin image encrypted with ECB (the penguin is still clearly visible) vs. encrypted with CBC (random-looking noise); courtesy of Wikipedia, “Block cipher modes of operation”] • Indeed, it’s about how to use it! • Note that in this case a strong block cipher (e.g., AES) with ECB is weaker than a weak block cipher (e.g., DES) with CBC
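The same point in a few lines of code, assuming the third-party cryptography package (any recent version) is installed; the key, plaintext, and IV are placeholders.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
plaintext = b"SIXTEEN BYTE BLK" * 4       # four identical 16-byte blocks

ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
cbc = Cipher(algorithms.AES(key), modes.CBC(os.urandom(16))).encryptor()
ecb_ct = ecb.update(plaintext) + ecb.finalize()
cbc_ct = cbc.update(plaintext) + cbc.finalize()

# ECB: identical plaintext blocks map to identical ciphertext blocks,
# so the repetition (the "penguin outline") survives encryption.
print(len({ecb_ct[i:i + 16] for i in range(0, 64, 16)}))   # -> 1 distinct block
# CBC: chaining with a random IV hides the repetition.
print(len({cbc_ct[i:i + 16] for i in range(0, 64, 16)}))   # -> 4 distinct blocks
```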

  26. Back to Randomized Hashing • I don’t have an eloquent picture to show the benefits of randomized hashing as with CBC vs ECB • But the safety net provided to digital signatures by an RMX-like technique is huge • If we were using RMX, today’s signatures would be much less at risk from collision attacks • Obviously, we need to design the strongest possible hash functions but also use them more prudently

  27. Randomized Hashing: Work in Progress • The randomized hashing story is not over… we need to make it happen in the real world (and you can help!) • Need to update standards to compute RMX and transport r • Note: can re-use randomness in randomized signatures (DSA, PSS) • Should we rely on random serial #’s in certificates? • No! Remember the reliance on pre-pended length in IPsec? • Today s/n may be random, tomorrow just a sequence number (why not?)

  28. HKDF Story: A Balancing Act

  29. KDF in Practice • Key derivation function (KDF): from an imperfect source of keying material to strong crypto keys • Imperfect: non-uniform, side information (partial secrecy) • Random number generators, system entropy sources, Diffie-Hellman (KE), … • Output: one or more keys (e.g., encryption, MAC, etc.) • A fundamental crypto primitive: yet the common design is very ad-hoc-ish • Common practice (skm = source key material, info = context info): • Hash(skm || “1” || info) || … || Hash(skm || “t” || info) • Can it be proven? Only for a “perfect” hash (random oracle) • Can we do better?
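Spelled out in code, the "common practice" above looks as follows (SHA-256 and ASCII counters are assumptions of this sketch; it is shown only to make the construction concrete, not as a recommendation):

```python
import hashlib

def adhoc_kdf(skm: bytes, info: bytes, t: int) -> bytes:
    # The ad-hoc KDF criticized in the next slide:
    #   Hash(skm || "1" || info) || ... || Hash(skm || "t" || info)
    return b"".join(
        hashlib.sha256(skm + str(i).encode() + info).digest()
        for i in range(1, t + 1)
    )

keys = adhoc_kdf(b"dh-shared-secret", b"my-protocol v1", 2)   # 64 bytes of "keys"
```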

  30. Hash(skm || “1” || info) || … || Hash(skm || “t” || info) • To illustrate the weakness, consider the “easy” case where skm is a fully random key • In this case we only need a PRF to convert skm into multiple keys • But the above design is not even a good PRF • This is the old “prepend-key” PRF: PRF_K(x) = Hash(K || x) • which was deprecated long ago in favor of better modes • KDF is a much more demanding primitive than a PRF (it needs to work with non-random keys), and yet we are using a design that is weak even as a PRF • And don’t rely on “info” being of fixed length (multi-purpose KDF)

  31. Can we balance the KDF act? HKDF [K’10] • Follow the “extract-then-expand” paradigm • Key extraction: derive a first cryptographically strong key from an “imperfect source of randomness” • Key expansion: given a first cryptographically strong key, derive more keys (basically a PRF with variable-length output) • Generic extract-then-expand KDF: • Kprf = Extract(salt, skm), where salt is optional (random but non-secret) and skm = source key material • Keys = Expand(Kprf, keys-length, ctxt_info), where ctxt_info binds the keys to the application “context”

  32. Can we balance the KDF act? HKDF [K’10] • How to implement “extraction”? (from an imperfect source to a single strong PRF key) • Can use the theory of randomness extractors (complexity theory) • Efficient unconditional constructions exist (e.g., strong universal hash families) • But they require a large salt, do not fulfill all crypto needs (e.g., RO uses, tight gaps, etc.), and are not available in crypto libraries • Unconditional extractors are not likely to be used in practice • balance tilted to the theoretical side

  33. Regaining Balance: HKDF (HMAC as extractor and PRF) • Kprf = HMAC(salt, skm), skm = source key material (HMAC as extractor) • Keys = Expand(Kprf, keys-length, ctxt_info), computed as Keys = K1 || K2 || … where K_{i+1} = HMAC(Kprf, K_i || ctxt_info || i) (HMAC as PRF in “feedback mode”)
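A minimal sketch of the extract-then-expand flow above, instantiated with HMAC-SHA256 via the standard library. It follows the feedback-mode expansion on the slide (which is also how RFC 5869 specifies HKDF); the zero-salt default and the final truncation are assumptions of this sketch.

```python
import hashlib, hmac

def hkdf_extract(salt: bytes, skm: bytes) -> bytes:
    # Kprf = HMAC(salt, skm); salt is optional (random but non-secret)
    return hmac.new(salt or b"\x00" * 32, skm, hashlib.sha256).digest()

def hkdf_expand(kprf: bytes, ctxt_info: bytes, length: int) -> bytes:
    # Keys = K1 || K2 || ...  with  K_{i+1} = HMAC(Kprf, K_i || ctxt_info || i)
    out, prev, i = b"", b"", 1
    while len(out) < length:
        prev = hmac.new(kprf, prev + ctxt_info + bytes([i]),
                        hashlib.sha256).digest()
        out += prev
        i += 1
    return out[:length]

# Example: two 16-byte keys from (imperfect) Diffie-Hellman source key material
skm = b"\x13" * 32                                 # stand-in for g^xy bytes
material = hkdf_expand(hkdf_extract(b"public-salt", skm), b"app context", 32)
enc_key, mac_key = material[:16], material[16:]
```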

  34. HKDF: Balance regained • Practice: HMAC already used as a PRF, here re-used as an extractor • Theory: detailed analysis [DGHKR’04, CDMP’05, K’10] shows HMAC is a good computational extractor under various models of the compression function • Wide range of assumptions depending on use: from purely combinatorial to random-oracle-type modeling • Not satisfied by a plain Merkle-Damgård hash: HMAC is much stronger as a property-preserving mode (pseudorandomness and extraction)

  35. The Power of Proofs and Proof-Driven Design in Cryptographic Protocols • Simple example: Asymmetric Key Wrapping and One-Pass Key Exchange

  36. Key Wrapping • Key-wrapping or key encapsulation: server transmits a key (and possibly associated data) to a client • Major key management tool: storage, hardware security modules, secure co-processors, ATM machines, etc. • Symmetric vs Asymmetric: AES-based, RSA-based • Industry is searching for an ECC solution • Main candidate: DHIES [ABR’01] • CCA-secure encryption, can transmit key and associated data • Implicitly authenticates the intended receiver (the only one that can read the key/data)

  37. Authenticated Key Wrapping • DHIES implicitly authenticates the receiver • Only intended receiver can read the key/data • But how about sender’s authentication? • Interestingly: Just adding sender’s signature on top of DHIES is weaker than it may look • Good news: Can solve the problem even more efficiently than with signatures and with better security • Thanks to well-defined (reusable) primitives and models • Thanks to designs that can get rid of safety margins • Thanks to the power of proofs in well defined models.

  38. Mutually Authenticated DHIES and One-Pass KE • DHIES is an instantiation of the KEM/DEM paradigm: (Y, C, T) • Key encapsulation: Y = g^y encapsulates K = H(A^y) (A is the receiver’s PK) • Data encapsulation: (C, T) CCA-encrypts the data under K • KEM ⇔ receiver-authenticated one-pass KE ⇒ Authenticated DHIES = “authenticated KEM” + DEM • Authenticated KEM ⇔ authenticated one-pass KE • ECC-friendly AKEM implementation [HK’11]: One-Pass HMQV [K, MQV] • Reduction to the one-pass KE case avoids the need for new models and protocols • Efficiency: same as DHIES for the sender and just ½ exponentiation more for the receiver • Sender: from A^y to A^{y+be}; Receiver: from Y^a to (Y·B^e)^a • Relation to signcryption [Gorantla et al., Dent]
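To make the KEM/DEM structure of DHIES concrete, here is a toy sketch, not the real scheme: the modular group, the hash-derived stream "encryption" in the DEM (limited to short payloads), and all parameters below are placeholders for illustration; real DHIES uses standardized (typically elliptic-curve) groups and a proper symmetric cipher.

```python
import hashlib, hmac, secrets

# Toy group for illustration only (real deployments use elliptic curves).
p = 2**255 - 19
g = 2

def keygen():
    a = secrets.randbelow(p - 2) + 1           # receiver's secret key
    return a, pow(g, a, p)                     # (a, A = g^a)

def wrap(A: int, key_to_send: bytes, aad: bytes):
    # KEM: Y = g^y encapsulates K = H(A^y); DEM: (C, T) = encrypt-then-MAC under K
    y = secrets.randbelow(p - 2) + 1
    Y = pow(g, y, p)
    K = hashlib.sha256(pow(A, y, p).to_bytes(32, "big")).digest()
    enc_key, mac_key = K[:16], K[16:]
    pad = hashlib.sha256(enc_key).digest()[:len(key_to_send)]   # toy stream cipher
    C = bytes(m ^ s for m, s in zip(key_to_send, pad))
    T = hmac.new(mac_key, C + aad, hashlib.sha256).digest()
    return Y, C, T

def unwrap(a: int, Y: int, C: bytes, T: bytes, aad: bytes) -> bytes:
    K = hashlib.sha256(pow(Y, a, p).to_bytes(32, "big")).digest()
    enc_key, mac_key = K[:16], K[16:]
    if not hmac.compare_digest(T, hmac.new(mac_key, C + aad,
                                           hashlib.sha256).digest()):
        raise ValueError("authentication failure: wrong receiver or tampered data")
    pad = hashlib.sha256(enc_key).digest()[:len(C)]
    return bytes(c ^ s for c, s in zip(C, pad))

a, A = keygen()
Y, C, T = wrap(A, b"0123456789abcdef", b"key-id: 42")     # wrap a 16-byte key
assert unwrap(a, Y, C, T, b"key-id: 42") == b"0123456789abcdef"
```

Nothing here authenticates the sender; that is exactly the gap One-Pass HMQV closes by replacing A^y with A^{y+be} on the sender side.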

  39. See what ½ exponentiation buys us • Sender authentication (plus all properties of basic KE security) • Sender forward security (disclosure of b does not compromise past keys and messages) • y-security: the disclosure of the ephemeral y does not compromise any keys or messages • Moreover: the disclosure of both y and b reveals the message sent using y but no other messages sent by b’s owner • Note: just adding a signature (say, DSA) on top of DHIES is significantly more expensive, provides weaker authentication (the attacker can learn someone else’s M), and does not provide y-security

  40. The Power of Proofs (not only in theory but in practice too) • Proof maps exact functionality of each element in the protocol • What security properties require that element and which do not • What the effect of leakage of each secret value is • A precise guide to protocol design and to getting rid of safety margins  simpler and more efficient protocols • Proofs are even excellent cryptanalysis tools for debugging protocols and finding attacks • I called this “Proof-Driven Design” • but this is a topic for another full talk…

  41. Concluding Remarks

  42. Crypto: a truly cool field! • Amazing ideas, beautiful math, great theory, practical and social relevance. Unusual mix of theory and practice. • In crypto, theory IS practical! • No experimental evidence, no computer simulations, no testing • Only well-defined notions and models* to develop sound analysis (*it’s ok if the models are not perfect; perfect models don’t exist; but w/o models and proofs we are left with “words of gentlemen” and broken designs) • It’s amazing how well the theoretical notions and techniques work in practice (pseudo-randomness, simulations, zero knowledge, semantic security, even idealized random oracles and “ugly assumptions” such as Gap-DH) • Proof-Driven Design: sound and practical for the same price

  43. Lessons from the field… • The only way to get sound crypto used is to interact with the engineers: listen, respect, argue, be persistent • Practice-Efficiency-Simplicity-Beauty • Timing: not easy to sell solutions, but much harder to sell problems (see what keeps the engineers awake at night) • Sound principles, robustness of design • Design by functionality, not by implementation (PRF/MAC/extractor vs. HMAC/SHA1); do not piggyback on non-crypto elements*; design to last (*examples: pre-pended length, relying on random s/n in certs, assuming “info” to be of fixed length (actually, adversarially chosen)) • Remember: it’s not just about what to use but how to use it

  44. and most importantly…it requires…

  45. [Closing figure: YOU balancing Theory and Engineering] A challenging (and exhilarating) balancing act… THANK YOU
