
Rational Cryptography: Some Recent Results


Presentation Transcript


  1. Rational Cryptography: Some Recent Results. Jonathan Katz, University of Maryland

  2. Rational cryptography • “Applying cryptography to game theory” • When can a cryptographic protocol be used to implement a game involving a trusted party? [B92, DHR00, LMS05, ILM05, ADGH06, …] • “Applying game theory to cryptography” • How to deal with rational, computationally bounded parties in cryptographic protocols? [HT04, GK06, LT06, KN08, ACH11, …]

  3. Can we get “better” cryptographic protocols by focusing on rational adversaries rather than arbitrary adversaries? • We believe that (most) parties act rationally, i.e., in their own self-interest • We want protocols that are resilient to malicious behavior • The dream?

  4. The dream? • Can we construct more efficient protocols if we assume a rational adversary (with known utilities)? • Can we circumvent impossibility results if we assume a rational adversary (with known utilities)? YES!

  5. Two examples • Fairness • The two-party setting [Groce-K (Eurocrypt ’12)] • The multi-party setting, and other extensions [Beimel-Groce-K-Orlov ’12] • Byzantine agreement / broadcast [Groce-Thiruvengadam-K-Zikas (ICALP ’12)]

  6. Fairness

  7. Fairness • Two parties computing a function f using some protocol • (Intuitively) the protocol is fair if either both parties learn the output, or neither party does • Note: fairness is non-trivial even without privacy, and even in the fail-stop setting

  8. The challenge? [Figure: two parties hold inputs x and y; one party walks away with f(x, y) while the other is left with nothing]

  9. Impossibility of fairness • [Cleve ’86]: Fair computation of boolean XOR is impossible

  10. Dealing with impossibility • Fairness for specific functions [GHKL08] • Limited positive results known • Partial fairness [BG89, GL90, GMPY06, MNS09, GK10, …] • Physical assumptions [LMPS04, LMS05, IML05, ILM08] • Here: what can be done if we assume rational behavior?

  11. Rational fairness • Fairness in a rational setting [ACH11] • Look at a specific function/utilities/setting • Main goal is to explore and compare various definitions of rational fairness • Main result is “pessimistic”: boolean XOR can be computed in a rationally fair way, but only with probability of correctness at most ½

  12. Consider the following game… • Parties run a protocol to compute some function f • Receive inputs x0, x1 from a known distribution D • Run the protocol… • Output an answer • Utilities depend on both parties’ outputs, and the true answer f(x0, x1)

  13. Utilities • Each party prefers to learn the correct answer, and otherwise prefers that the other party output an incorrect answer • This generalizes the setting of rational secret sharing [HT04, GK06, LT06, ADGH06, KN08, FKN10, …] • Player 0: b0 > a0 ≥ d0 ≥ c0 • Player 1: b1 > a1 ≥ d1 ≥ c1
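
A minimal, illustrative sketch of such a utility function in Python: the numeric values, and the mapping of b/a/d/c to the four outcome combinations, are assumptions consistent with the stated preference ordering, not values taken from the talk.

```python
def utility(my_output_correct: bool, other_output_correct: bool) -> float:
    """Hypothetical utility for one party; only the ordering
    b > a >= d >= c matters, the numbers are placeholders."""
    if my_output_correct and not other_output_correct:
        return 2.0   # b: only I output the correct answer (best case)
    if my_output_correct and other_output_correct:
        return 1.0   # a: both parties output the correct answer
    if not my_output_correct and not other_output_correct:
        return 0.5   # d: both parties output an incorrect answer
    return 0.0       # c: only the other party is correct (worst case)
```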

  14. Deviations? • Two settings: • Fail-stop: parties can (only) choose to abort the protocol, at any point • Byzantine: parties can arbitrarily deviate from the protocol (including changing their input) • Parties are computationally bounded

  15. Definition • Fix f, a distribution D, and utilities for the parties. A protocol π computing f is rationally fair (for f, D, and these utilities) if running π is a (Bayesian) computational Nash equilibrium • Note: stronger equilibrium notions have been considered in other work • We leave this for future work
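
Spelled out in symbols (the notation below is mine, not from the slides), this is the standard computational Nash condition: no efficient unilateral deviation improves a party's expected utility by more than a negligible amount.

```latex
\forall i \in \{0,1\},\ \forall \text{ efficient } \sigma_i:\qquad
u_i\bigl(\sigma_i,\ \pi_{1-i}\bigr) \;\le\; u_i\bigl(\pi_0, \pi_1\bigr) + \mathrm{negl}(k)
```

Here u_i is party i's expected utility (over the input distribution D and all parties' coins) and k is the security parameter.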

  16. Question • For which settings of f, D, and the parties’ utilities do rationally fair protocols exist?

  17. Consider the following game… • Parties have access to a trusted party computing f • Receive inputs x0, x1 from a known distribution D • Send input or ⊥ to the trusted party; get back the result or ⊥ • Output an answer • Utilities depend on both parties’ outputs, and the true answer f(x0, x1)

  18. Revisiting [ACH11] • The setting of [ACH11]: • f = boolean XOR • D = independent, uniform inputs • utilities: • Evaluating f with a trusted party gives both parties utility 0 • They can get the same expected utility by random guessing! • The parties have no incentive to run any protocol computing f • Running the ideal-world protocol is a Nash equilibrium, but not strict Nash

  19. Back to the ideal world • To fully define a protocol for the ideal world, need to define what a party should output when it receives ⊥ from the trusted party • (cooperate, W0): if receive ⊥, then generate output according to the distribution W0(x0)
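
A minimal Python sketch of the ideal world and the (cooperate, W) strategy; the trusted-party interface and all names here are illustrative assumptions rather than the paper's formalism.

```python
BOT = None  # stands in for the abort symbol ⊥

def trusted_party(f, x0, x1):
    """Ideal functionality: if either party sends ⊥, both receive ⊥;
    otherwise both parties receive f(x0, x1)."""
    if x0 is BOT or x1 is BOT:
        return BOT, BOT
    y = f(x0, x1)
    return y, y

def cooperate(x, W, result):
    """The (cooperate, W) strategy for a party holding input x: send the
    real input; if the trusted party returns ⊥, output a sample from W(x)."""
    return W(x) if result is BOT else result
```

For instance, with f = XOR and W a uniform coin flip, the strategy reduces to "output the result if you received one, otherwise guess uniformly".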

  20. Definition • Fix f, a distribution D, and utilities for the parties. These are incentive compatible if there exist W0, W1 such that ((cooperate, W0), (cooperate, W1)) is a Bayesian strict Nash equilibrium • (Actually only need strictness for one party)

  21. Main result • If computing f in the ideal world is a strict Nash equilibrium, then there is a real-world protocol π computing f such that following the protocol is a computational Nash equilibrium • If f, a distribution D, and the utilities are incentive compatible, then there is a protocol π computing f that is rationally fair (for f, D, and the same utilities)

  22. The protocol I (uses ideas from [GHKL08, MNS09, GK10]) • ShareGen • Choose i* from a geometric distribution with parameter p • For each i ≤ n, create values ri,0 and ri,1 • If i ≥ i*, ri,0 = ri,1 = f(x0, x1) • If i < i*, ri,0 and ri,1 are chosen according to distributions W0(x0) and W1(x1), respectively • Secret share each ri,j value between P0 and P1
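
A Python sketch of ShareGen under simplifying assumptions: bit-valued outputs, plain XOR secret sharing, and function names of my own choosing; it illustrates the structure above rather than the actual construction.

```python
import random

def share_gen(f, x0, x1, W0, W1, p, n):
    """Returns (P0's shares, P1's shares); party j's entry for round i is
    (its share of r_{i,0}, its share of r_{i,1})."""
    # Sample i* from a geometric distribution with parameter p.
    i_star = 1
    while random.random() >= p:
        i_star += 1

    y = f(x0, x1)
    shares_p0, shares_p1 = [], []
    for i in range(1, n + 1):
        if i >= i_star:
            r_i0 = r_i1 = y              # the real output, from round i* onward
        else:
            r_i0, r_i1 = W0(x0), W1(x1)  # independent "fake" outputs before i*
        # XOR-share each value between P0 and P1.
        s0 = random.randint(0, 1)        # P0's share of r_{i,0}
        s1 = random.randint(0, 1)        # P1's share of r_{i,1}
        shares_p0.append((s0, r_i1 ^ s1))
        shares_p1.append((r_i0 ^ s0, s1))
    return shares_p0, shares_p1
```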

  23. The protocol II • Compute ShareGen (unfairly) • In each round i, parties exchange shares • P0 learns ri,0 and then P1 learns ri,1 • If the other party aborts, output the last value learned • If the protocol finishes, output rn,0 and rn,1 • Note: correctness holds with all but negligible probability; can modify the protocol so it holds with probability 1
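
Continuing the same sketch, the exchange phase can be pictured as a single loop (in the real protocol the parties run concurrently and react to aborts; out0/out1 track what each party would output if the other stopped at that point).

```python
def exchange_phase(shares_p0, shares_p1, default0, default1, n):
    """default_j is party j's output if the other party aborts before
    anything is learned (e.g. a sample from W_j)."""
    out0, out1 = default0, default1
    for i in range(n):
        # P1 sends its share of r_{i,0}; P0 reconstructs r_{i,0} first.
        out0 = shares_p0[i][0] ^ shares_p1[i][0]
        # If P0 aborted here, P1 would output out1 (the last value it learned).
        # Otherwise P0 sends its share of r_{i,1}; P1 reconstructs r_{i,1}.
        out1 = shares_p0[i][1] ^ shares_p1[i][1]
    # With no aborts, both values equal f(x0, x1) whenever i* <= n,
    # which fails only with negligible probability.
    return out0, out1
```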

  24. Will P0 abort early? • Assume P0 is told when i* has passed • Aborting afterward cannot increase its utility • Consider round i ≤ i*: • If P0 does not abort → utility a0 • If P0 aborts: • i = i* → utility b0 • i < i* → utility strictly less than a0 (because running the ideal-world protocol is a strict Nash equilibrium)

  25. Will P0 abort early? • Use W0, W1 with full support • Always possible • Expected utility if abort = Pr[i = i*]·b0 + Pr[i < i*]·(a0 − ε) • Set p to a small enough constant so that the above is strictly less than a0
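
Writing the bound out, with ε > 0 denoting the utility gap guaranteed by strictness in the ideal world (the symbol is mine) and using the memorylessness of the geometric distribution, so that conditioned on reaching a round i ≤ i* we have Pr[i = i*] = p:

```latex
\mathbb{E}[\text{utility of aborting in round } i \le i^*]
  \;\le\; p \cdot b_0 \;+\; (1 - p)\,(a_0 - \varepsilon)
```

Requiring the right-hand side to be strictly less than a0 is equivalent to p < ε / (b0 − a0 + ε), which a small enough constant p satisfies because b0 − a0 and ε are positive constants.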

  26. Summary • By setting p = O(1) small enough, we get a protocol π computing f for which following π is a computational Nash equilibrium • Everything extends to the Byzantine setting as well, with suitable changes to the protocol

  27. Recent extensions [BGKO] • More general classes of utility functions • Arbitrary functions over the parties’ inputs and outputs • Randomized functions • Extension to the multi-party setting, with coalitions of arbitrary size

  28. Open questions • Does a converse hold? • I.e., in any non-trivial setting*, does existence of a rationally fair protocol imply that the ideal-world computation is strict Nash for one party? • Stronger equilibrium notions • More efficient protocols • Handling f with exponential-size range * You get to define “non-trivial”

  29. Byzantine agreement / broadcast

  30. Definitions • Byzantine agreement: n parties with inputs x1, …, xn run a protocol giving outputs y1, …, yn. • Agreement: All honest parties output the same value y • Correctness: If all honest parties hold the same input, then that will be the honest parties’ output • Broadcast: A dealer holds input x; parties run a protocol giving outputs y1, …, yn. • Agreement: All honest parties output the same value • Correctness: If the dealer is honest, all honest parties output x
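
To make these properties concrete, here is a small Python sketch (names are mine) checking them against the inputs and outputs of a single run:

```python
def agreement(outputs, honest):
    """Agreement: all honest parties output the same value."""
    return len({outputs[i] for i in honest}) == 1

def ba_correctness(inputs, outputs, honest):
    """BA correctness: if all honest parties hold the same input x,
    then every honest party outputs x."""
    honest_inputs = {inputs[i] for i in honest}
    if len(honest_inputs) != 1:
        return True   # no requirement when honest inputs disagree
    (x,) = honest_inputs
    return all(outputs[i] == x for i in honest)

def broadcast_correctness(dealer_input, dealer_honest, outputs, honest):
    """Broadcast correctness: if the dealer is honest, every honest
    party outputs the dealer's input."""
    return (not dealer_honest) or all(outputs[i] == dealer_input for i in honest)
```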

  31. Rational BA/broadcast? • Definitions require the security properties to hold against arbitrary actions of an adversary controlling up to t parties • What if the adversary has some (known) preference on outcomes? • E.g., Byzantine generals: • Adversary prefers that only some parties attack (disagreement) • Else prefers that no parties attack (agree on 0) • Least prefers that they all attack (agree on 1)

  32. Rational BA/broadcast • Consider preferences over {agree on 0, agree on 1, disagreement} • (Informally:) A protocol achieves rational BA / broadcast (for a given ordering of the adversary’s preferences) if: • When all parties (including the adversary) follow the protocol, agreement and correctness hold • The adversary never has any incentive to deviate from the protocol

  33. Note • A different “rational” setting from what we have seen before • Previously: each party is rational • Here: some parties honest; adversary rational • Though could also model honest parties as rational parties with a specific utility function

  34. A surprise(?) • Assuming the adversary’s complete preference order is known, rational BA is possible for any t < n(!) with no setup • Classical BA impossible for t ≥ n/3 w/o setup • (Classical BA undefined for t ≥ n/2)

  35. Protocol 1 • Assume the adversary’s preferences are agree on b > agree on 1-b > disagreement • Protocol • Every party sends its input to all other parties • If a party receives the same value from everyone, output it; otherwise output 1-b • Analysis: • If honest parties all hold b, no reason to deviate • In any other case, deviation doesn’t change outcome
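
A minimal Python sketch of a party's local rule in Protocol 1 (message delivery is abstracted away; the interface is my own):

```python
def protocol1_decide(received, b):
    """received: the values this party got from all n parties (including
    its own input); b: the value the adversary prefers to agree on.
    Output the common value if everyone sent the same thing, else 1 - b."""
    return received[0] if len(set(received)) == 1 else 1 - b

# Example: one corrupted party deviates while honest parties all hold b = 1;
# the receiving party then outputs 1 - b = 0.
assert protocol1_decide([1, 1, 0, 1], b=1) == 0
```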

  36. Protocol 2 • Assume the adversary’s preferences are disagreement > agree on b > agree on 1-b • Protocol • All parties broadcast their input using detectable broadcast • If a party receives the same value from everyone, output it; otherwise output 1-b • Analysis: • Adversary has no incentive to interfere with any of the detectable broadcasts • Agreement/correctness hold in every case

  37. Other results • We also show certain conditions where partial knowledge of the adversary’s preferences is sufficient for achieving BA/broadcast for t < n • See paper for details

  38. Other surprises(?) • (Sequential) composition is tricky in the rational setting • E.g., classical reduction of BA to broadcast fails • Main problem: incentives in the sub-protocol may not match incentives in the larger protocol • Some ideas for handling this via different modeling of rational protocols

  39. Summary • Two settings where game-theoretic analysis allows us to circumvent cryptographic impossibility results • Fairness • Byzantine agreement/broadcast • Other examples? • Realistic settings where such game-theoretic modeling makes sense? • Auctions? (cf. [MNT09])

  40. Thank you!
