
Secure Remote Authentication Using Biometrics


Presentation Transcript


  1. Secure Remote Authentication Using Biometrics Jonathan Katz* Portions of this work done with Xavier Boyen, Yevgeniy Dodis, Rafail Ostrovsky, Adam Smith *Work supported by NSF Trusted Computing grant #0310751

  2. Motivation “Humans are incapable of securely storing high-quality cryptographic secrets, and they have unacceptable speed and accuracy…. (They are also large [and] expensive to maintain…. But they are sufficiently pervasive that we must design our protocols around their limitations.)” From: “Network Security: Private Communication in a Public World,” by Kaufman, Perlman, and Speciner

  3. Possible solutions? • (Short) passwords? • (Hardware) tokens? • Biometrics • Storage of high-entropy data “for free”

  4. Problems with biometrics • At least two important issues: • Biometrics are not uniformly random • Biometrics are not exactly reproducible • Outside the scope of this talk • Are biometrics private? • Sufficiently-high entropy? • Revocation?

  5. Previous work I • Davida-Frankel-Matt ’98; Monrose-Reiter-(Li)-Wetzel ’99, ’01 • Juels-Wattenberg ’99; Frykholm-Juels ’01; Juels-Sudan ’02

  6. Previous work II • Dodis-Reyzin-Smith ’04 • Their framework and terminology adopted here • Boyen ’04 • Two main results • One result information-theoretic; second in RO model

  7. Question: • Can we use biometric data (coupled with these techniques…) for remote user authentication? • E.g., authentication over an insecure, adversarially-controlled network? • Without requiring users to remember additional info, or the use of hardware tokens?

  8. Does previous work, work? • [DRS04] No! • Assumes a "secure channel" between user and server • Security vs. passive eavesdropping only • [Boyen04] • Focus is on general solutions to different problems • In general, techniques only seem to achieve unidirectional authentication • By focusing on the specific problem of interest, can we do better?

  9. Main results • Short answer: Yes! • By focusing specifically on remote authentication, we can do better • Two solutions… • Compared to [Boyen04]: • Solution in standard model • Solutions tolerating more general errors • Achieve mutual authentication • Improved bounds on the entropy loss

  10. First solution • Generic, “plug-in” solution whenever data from server may be tampered • In particular, applies to remote authentication • Proven secure in RO model… • Tolerates more general class of errors than [Boyen04] • Mutual authentication

  11. Second solution • Specific to the case of remote authentication/key exchange • Provably secure in standard model • Lower entropy loss compared to [Boyen04] and previous solution • Can potentially be used for lower-entropy biometrics and/or secrets (passwords?) • Still tolerates more general errors and allows mutual authentication (as before)

  12. Some background…

  13. Security model I • Standard model for (key exchange) + mutual authentication [BR93] • Parties have associated set of instances • Adversary can passively eavesdrop on protocol executions • Adversary can actively interfere with messages sent between parties; can also initiate messages of its own

  14. Security model II • Notion of “partnering” • Informally, two instances are partnered if they execute the protocol with no interference from the adversary • More formally (but omitting some details), instances are partnered if they have identical transcripts

  15. Security model III • (Mutual) authentication • Instances accept if they are satisfied they are speaking to the corresponding partner (determined by the protocol) • Adversary succeeds if there is an accepting instance which is not partnered with any other instance

  16. Security model IV • Quantify adversary’s success in terms of its resources • E.g., as a function of the number of sessions initiated by the adversary • “On-line” vs. “off-line” attacks • This can give a measure of the “effective key-length” of a solution

  17. Recap of [DRS04] • Use Hamming distance for simplicity… • (m, m', t)-secure sketch (SS, Rec): • For all w, w' with d(w, w') ≤ t: Rec(w', SS(w)) = w (i.e., "recovery from error") • If W has min-entropy m, the average min-entropy of W | SS(W) is m' (i.e., "w still hard to guess")
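For concreteness, here is a toy Python sketch of the code-offset construction (Juels-Wattenberg '99, analyzed as a secure sketch in [DRS04]) for Hamming distance. The 3-fold repetition code and the tiny parameters are illustrative choices only, not anything from the talk.

```python
import secrets

# Toy parameters: 15-bit "biometric", 3-fold repetition code per block.
N_BLOCKS, REP = 5, 3

def encode(msg_bits):
    """Repetition-encode N_BLOCKS bits into n = N_BLOCKS*REP bits."""
    return [b for b in msg_bits for _ in range(REP)]

def decode(code_bits):
    """Majority-decode each block of REP bits."""
    return [int(sum(code_bits[i*REP:(i+1)*REP]) > REP // 2) for i in range(N_BLOCKS)]

def SS(w):
    """Secure sketch: s = w XOR c for a fresh random codeword c (code-offset)."""
    c = encode([secrets.randbelow(2) for _ in range(N_BLOCKS)])
    return [wi ^ ci for wi, ci in zip(w, c)]

def Rec(w_prime, s):
    """Recover w from a noisy reading w' and the sketch s."""
    c_noisy = [wi ^ si for wi, si in zip(w_prime, s)]  # = c XOR (w XOR w')
    c = encode(decode(c_noisy))                        # correct up to t errors
    return [ci ^ si for ci, si in zip(c, s)]           # w = c XOR s

# Example: a single flipped bit in the reading is corrected.
w  = [secrets.randbelow(2) for _ in range(N_BLOCKS * REP)]
s  = SS(w)
w2 = list(w); w2[0] ^= 1                               # noisy reading w'
assert Rec(w2, s) == w
```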

  18. Recap of [DRS04] • (m, l, t, ε)-fuzzy extractor (Ext, Rec): • Ext(w) -> (R, P) s.t. 1. SD((R, P), (Ul, P)) ≤ ε (i.e., R is "close to uniform") 2. For all w' s.t. d(w, w') ≤ t, Rec(w', P) = R (i.e., "recovery from error")
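To make this concrete, the following snippet layers an extractor on top of the secure sketch from the previous snippet, in the spirit of the generic sketch-plus-extractor construction of [DRS04]. The salted SHA-256 is only a heuristic stand-in for a strong extractor, and the recovery algorithm is named Rep here solely to avoid clashing with the sketch's Rec; both choices are illustrative assumptions.

```python
import hashlib, secrets

def Ext(w, out_len=32):
    """Generate (R, P): P = (sketch of w, extractor seed), R extracted from w.
    Salted SHA-256 is a heuristic stand-in for a strong extractor."""
    s = SS(w)                                   # SS from the previous snippet
    seed = secrets.token_bytes(16)
    R = hashlib.sha256(seed + bytes(w)).digest()[:out_len]
    return R, (s, seed)

def Rep(w_prime, P, out_len=32):
    """Recover the same R from any reading w' within distance t of w."""
    s, seed = P
    w_rec = Rec(w_prime, s)                     # Rec from the previous snippet
    return hashlib.sha256(seed + bytes(w_rec)).digest()[:out_len]

R, P = Ext(w)                                   # w, w2 reused from the snippet above
assert Rep(w2, P) == R
```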

  19. Applications… • [DRS04] assumes that P is reliably transmitted to the user • E.g., “in-person” authentication to your laptop computer • No guarantees if P is corrupted

  20. [Boyen04] • Main focus is reusability of biometric data (e.g., with multiple servers) • Somewhat tangential to our concern here • Also defines a notion of security for fuzzy extractors when P may be corrupted…

  21. [Boyen04] • (Ignoring reusability aspect…) • w* chosen; (R, P) = Ext(δ(w*)) for some perturbation δ; adversary gets P • Adversary submits P1, … ≠ P and perturbations δ1, …; gets back R1 = Rec(δ1(w*), P1), … • "Secure" if adv. can't distinguish R from random (except w/ small prob.)

  22. Error model • We assume here that d(w*, δi(w*)) ≤ t • I.e., errors occurring in practice are always at most the error-correcting capability of the scheme • Under this assumption, [Boyen04] disallows Pi = P in adversary's queries

  23. Construction • Construction in [Boyen04] achieves security assuming errors are “data-independent” • I.e., constant shifts • Construction analyzed in RO model

  24. Application to remote authentication • Essentially as suggested in [Boyen04]: • Enrollment: (R, P) = Ext(w*); R -> (SK, PK); user keeps w, server stores (P, PK) • Server -> User: P, nonce • User: R = Rec(P, w); R -> (SK, PK); σ = SignSK(nonce) • User -> Server: σ; server verifies…
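A minimal sketch of this message flow, reusing Ext/Rep and the sample readings w, w2 from the snippets above. Ed25519 (from the third-party `cryptography` package) is only a stand-in for the signature scheme, and using R directly as the signing-key seed is an illustrative assumption, not the construction from [Boyen04].

```python
# pip install cryptography   (Ed25519 stands in for the signature scheme)
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: (R, P) = Ext(w*); R -> (SK, PK); server stores (P, PK)
R, P = Ext(w)
sk = Ed25519PrivateKey.from_private_bytes(R)   # R (32 bytes) used as the key seed
PK = sk.public_key()

# Authentication: server -> user: P, nonce
nonce = os.urandom(16)
R_user = Rep(w2, P)                            # user recovers R from the noisy reading
sigma = Ed25519PrivateKey.from_private_bytes(R_user).sign(nonce)
PK.verify(sigma, nonce)                        # server checks; raises InvalidSignature on forgery
```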

  25. Security? • Intuition: • If adversary forwards P, then user is signing using his "real" secret key • Using a secure signature scheme • If adversary forwards P' ≠ P: • User computes some R' and a signature w.r.t. (key derived from) R' • But even R' itself would not help adversary learn R!

  26. But… • Unidirectional authentication only • No authentication of server to user • The definition of [Boyen04] (seemingly) cannot be used to achieve mutual authentication • Nothing in the definition guarantees that adversary can’t send some P’ and thereby guess R’

  27. New constructions

  28. Construction I • Modular replacement for any protocol based on fuzzy extractors, when P may be corrupted • Idea: ensure that for any P' ≠ P, the user will reject • Adversary "forced" to forward real P • Sealed (fuzzy) extractor • Allow Rec to return "reject"

  29. Error model • Defined by a sequence of random variables (W0, W1, …) over some probability space Ω such that for all ω, i we have d(W0(ω), Wi(ω)) ≤ t • More general model than [Boyen04] • Allows data-dependent errors • May be too strong…

  30. Security definition • User has w0; computes (R, P) <- Ext(w0); adversary given P • Adversary submits P1, …, Pn ≠ P • Adversary succeeds if ∃ i s.t. Rec(wi, Pi) ≠ "reject"

  31. Application to remote authentication • Enrollment: (R, P) = Ext(w*); user keeps w, server stores (P, R) • Server -> User: P, n1 • User: R = Rec(P, w); c1 = FR(n1) • User -> Server: c1, n2; server verifies… • Server -> User: c2 = FR(n2); user verifies… • (Or run authenticated Diffie-Hellman)
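A sketch of this challenge-response flow, again reusing Ext/Rep from the earlier snippets; HMAC-SHA256 stands in for the PRF FR, and the constant-time comparisons are an added implementation detail. Note that with the plain (non-sealed) Rep above, a modified P would not be rejected; that is exactly what the sealed construction on the following slides adds.

```python
import hmac, hashlib, os

def F(R, x):
    """PRF F_R(x), instantiated with HMAC-SHA256 (a standard PRF assumption)."""
    return hmac.new(R, x, hashlib.sha256).digest()

R, P = Ext(w)                                  # enrollment: server stores (P, R)

n1 = os.urandom(16)                            # server -> user: P, n1
R_user = Rep(w2, P)                            # user recovers R from the noisy reading
n2 = os.urandom(16)
c1 = F(R_user, n1)                             # user -> server: c1, n2
assert hmac.compare_digest(c1, F(R, n1))       # server authenticates user
c2 = F(R, n2)                                  # server -> user: c2
assert hmac.compare_digest(c2, F(R_user, n2))  # user authenticates server
```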

  32. Security? • If adversary forwards P' ≠ P, user simply rejects • If adversary forwards P, then user and server are simply running auth. protocol of [BR93]

  33. Constructing sealed extractor • First construct a sealed secure sketch • Definition similar to that of a sealed extractor • Construction is in the RO model • Then apply standard extractors (as in [DRS04]) • This conversion is unconditional

  34. Constructing sealed sketch • Let (SS', Rec') be any secure sketch • Define (SS, Rec) as follows:
  SS(w): s' <- SS'(w); h = H(w, s'); output (s', h)
  Rec(w', (s', h)): w <- Rec'(w', s'); if (h = H(w, s') and d(w, w') ≤ t) output w, else output "reject"
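A direct transcription of this construction, using the repetition-code sketch from the first snippet as (SS', Rec') and SHA-256 in place of the random oracle H; both instantiations are of course only heuristic stand-ins.

```python
import hashlib

T = 1  # guaranteed correction radius t of the toy sketch above (one flipped bit)

def H(w, s):
    """SHA-256 stands in for the random oracle H."""
    return hashlib.sha256(bytes(w) + bytes(s)).digest()

def sealed_SS(w):
    s_prime = SS(w)                            # underlying sketch SS' from the first snippet
    return (s_prime, H(w, s_prime))

def sealed_Rec(w_prime, sketch):
    s_prime, h = sketch
    w_rec = Rec(w_prime, s_prime)              # candidate w from Rec'
    dist = sum(a != b for a, b in zip(w_rec, w_prime))
    if h == H(w_rec, s_prime) and dist <= T:   # h "certifies" w; distance check as on the slide
        return w_rec
    return "reject"

sealed = sealed_SS(w)
assert sealed_Rec(w2, sealed) == w                          # honest noisy reading accepted
assert sealed_Rec(w2, (sealed[0], b"forged")) == "reject"   # tampered sketch rejected
```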

  35. Intuition? • h “certifies” the recovered value w • But because of the RO model, it does not leak (much) information about w • Also, because of RO model, impossible to generate “forged” h without making (explicitly) a certain query to the RO • Adversary doesn’t make this query (except with small probability) since min-entropy of recovered w is still “high enough”

  36. Performance? • "Entropy loss" of w occurs in essentially three ways • From public part s' of underlying sketch, and application of (standard) extractor • Bounded in [DRS04] • Due to the error model itself • Inherent if we are using this strong model • From the sealed extractor construction • Roughly a loss of log Volt,n bits, where Volt,n is the number of n-bit strings within Hamming distance t of a given string
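For a rough sense of the last term, the snippet below computes log2 of Volt,n, the volume of a Hamming ball of radius t in {0,1}^n, for a few illustrative parameter choices (the numbers are examples, not figures from the talk).

```python
from math import comb, log2

def log_ball_volume(n, t):
    """log2 of the number of n-bit strings within Hamming distance t of a fixed string."""
    return log2(sum(comb(n, i) for i in range(t + 1)))

for n, t in [(128, 5), (256, 10), (512, 20)]:
    print(f"n={n:4d}, t={t:3d}: extra entropy loss ~ {log_ball_volume(n, t):.1f} bits")
```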

  37. Construction II • Specific to remote authentication • Idea: “bootstrap” using auth. protocol that can handle non-uniform shared secrets • “Problem” of non-uniformity goes away • All we are left with is the issue of error-correction

  38. Specifics… • Use a password-only authentication (and key exchange) protocol (PAK)! • These were designed for use with “short” passwords… • …But no reason to limit their use to this application

  39. Brief introduction/review • Problem: • Two parties share a password from a (constant-size) dictionary D • If D is “small” (or has low min-entropy), an adversary can always use an on-line attack to “break” the protocol • Can we construct a protocol where this is the best an adversary can do?

  40. Introduction/review • Specifically, let Q denote the number of "on-line" attacks • Arbitrarily-many "off-line" attacks are allowed • Then adversary's probability of success should be at most Q/|D| • Or Q/2^(min-entropy of D)
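As an illustrative calculation (numbers not from the talk): with a shared secret of 20 bits of min-entropy and Q = 1,000 on-line attempts, the adversary's success probability is bounded by 1000/2^20 ≈ 0.001, regardless of how much off-line computation it performs.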

  41. Introduction/review • Can view PAK protocols in the following, intuitive way: • Each on-line attack by the adversary represents a single “guess” of the actual password • This is the best an adversary can do!

  42. Constructions? • [Bellovin-Merritt]… • [BPR, BMP] – definitions, constructions in random oracle/ideal cipher models • [GL] – construction in standard model • [KOY] – efficient construction in standard model, assuming public parameters

  43. Application to remote authentication • Enrollment: s = SS(w*); user keeps w, server stores (s, w*) • Server -> User: s • User: w* = Rec(s, w) • Both sides run PAK using "password" (s, w*)
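A skeletal sketch of this flow, reusing SS/Rec and the sample readings from the first snippet; run_pak is a hypothetical placeholder for any PAK protocol (e.g., [GL] or [KOY]), since a full PAK implementation is far beyond a slide-sized example.

```python
def run_pak(role, password):
    """Hypothetical stand-in for one side of a PAK run; a real protocol would
    exchange messages and output a session key only if both sides hold the
    same password."""
    raise NotImplementedError

# Enrollment: server stores (s, w*) for the user's biometric w*
w_star = w
s = SS(w_star)

# Authentication: server sends s; user recovers w*; both run PAK on "password" (s, w*)
recovered = Rec(w2, s)
assert recovered == w_star
# run_pak("server", (bytes(s), bytes(w_star)))
# run_pak("user",   (bytes(s), bytes(recovered)))
```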

  44. Intuition • Even if adversary changes s, the value w’ recovered by the user still has “high enough” min-entropy • By security of PAK protocol, adversary reduced to guessing this w’

  45. Performance? • Using a secure sketch is enough • Do not need fuzzy extractor • PAK protocol doesn't need uniform secrets! • Save 2 log(1/ε) bits of entropy • This approach works even when residual min-entropy is small • Can potentially apply even to mis-typed passwords

  46. Summary • Two approaches for using biometric data for remote authentication • “Drop-in” solution in RO model • Solution specific to remote authentication in standard model • Compared to previous work: • Solutions tolerating more general errors • Achieve mutual authentication • Improved bounds on the entropy loss • Solution in standard model
