Toward Strong Proofs of Interesting Properties for Cryptographic Protocols Stephen W. Nuchia Center for Information Security The University of Tulsa
Introduction • The software crisis of the late 20th century was caused by poor programming practices. Commercial demand for ever-increasing functionality created a shortage of qualified programmers and especially programming leaders. • But ... when best practices are employed, projects succeed, at least as often as in other engineering disciplines.
Introduction • Meanwhile, another software crisis was brewing. By the late '90s it had become clear that security was at least as big a problem as Y2K and would be with us longer. • Unfortunately, there is no indication that best practices in security engineering actually lead to secure systems. Certainly they help, but over and over again fundamental flaws have been discovered in carefully designed protocols.
Introduction • Formal methods developed specifically to address this problem have so far proved too weak or too unwieldy to provide a solution, even when they are employed by research-level engineers. Tools for the working security engineer are not yet in sight. • We aim to fix that.
The Authentication Challenge • The authentication problem is simple enough: • Each of two principals wishes to satisfy itself as to the identity of the other, • Either by re-activating a pre-existing relationship or with the help of a mutually trusted third party. • But ... this must be accomplished across a gap of space and/or time, a gap occupied by the enemy.
The Terminal Login Problem • The classic context: • Terminals attached by dedicated wiring to a host processor serve users. • Users establish accounts with the host, resulting in a shared secret, the password. • A user transmits his identification together with the password to initiate a session. • Both parties assume the terminal is a reliable proxy for the host.
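A minimal sketch of the host's side of this exchange (illustrative only; the account table and names here are hypothetical): the host simply compares the transmitted password against the stored shared secret, so the protocol's security rests entirely on the wiring being private.

# Toy sketch of the classic terminal login check (illustrative only).
# ACCOUNTS stands in for the host's table of shared secrets established at enrollment.
ACCOUNTS = {"alice": "correct horse battery staple"}

def login(user_id: str, password: str) -> bool:
    # The host compares the transmitted password with the stored secret;
    # nothing protects it in transit on the dedicated wire.
    return ACCOUNTS.get(user_id) == password

print(login("alice", "correct horse battery staple"))  # True
print(login("alice", "guess"))                         # False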
The Terminal Login Problem • Two attacks are known against this protocol, in addition to the cryptanalytic, indirect (social engineering) and bypass (escalation) attacks: • Wiretapping • Trojan horsing • The first is often assumed to be unlikely, and the second is detectable forensically. • This network topology with this protocol is generally considered to operate at an acceptable level of risk.
The Telnet Login Problem • From a cost/benefit viewpoint it is very efficient to replace a terminal-and-host network with a virtualized realization of the same topology. • Workstations run terminal emulation programs • Telnet protocol emulates the dedicated wiring. • Telnet was long known to be snoopable. Known, that is, to the Arpanauts who all knew and trusted each other. • Public terminal rooms, especially at universities, quickly became severe security problems. • Changing the implementation of an acceptable protocol made it totally unacceptable.
Protocol Boundary Problems • C:> FORMAT C: • Were you sure? Y/N • (Seen on a T-shirt at a Unix conference many years ago.)
Example: SSL Certificate Spoofing • SSL (Secure Socket Layer) is intended to provide both mutual authentication and content privacy for connection-oriented Internet services. It is primarily used for Web applications. • The user trusts the browser and some service, which may be distributed across many servers, identified by a DNS name (FQDN). • The browser incorporates a short list of trusted Certifying Authorities (CA). • Authentication of the user to the service is done by a password, communicated over the SSL secure session.
Example: SSL Certificate Spoofing • Authentication of the service to the user is provided using public-key cryptography and cryptographically signed certificates, tracing their authenticity back to a CA on the short list. • The user relies on the browser application to raise an alarm if the certificate offered by the server cannot be validated. • In practice, the alert happens with some frequency. For instance, when a new machine is added to a server farm it may be initialized with the wrong certificate. Or expired certificates may be restored from a backup tape after a virus infects a server.
Example: SSL Certificate Spoofing • The user, then, is required to distinguish whether the alert message indicates an attack in progress or simply a minor configuration mixup at the server. • Given that the user sees administrative errors occasionally and actual attacks never, it is not surprising that the user will eventually hit the Continue button. • This breakdown in the security chain allows a very simple man-in-the-middle attack: • Falsify the DNS response so the client connects to you • Provide a forged but reasonable-looking certificate • Pass requests through to the real server to provide convincing interaction • Capture the password and anything else you want.
Who Do You Want to Spoof Today? • An implementation oversight in Microsoft Internet Explorer makes SSL certificate forgery trivially easy. • The SSL PKI provides for delegation of certifying authority. For instance, MyBroker, Inc. may be authorized to create certificates for *.mybroker.com. • In addition to providing administrative flexibility for the company, this helps minimize the volume of known-plaintext material available for cryptanalytic attacks against the master CA.
Direct Certification • CA certificate: Subject: Verisign; Rights: CA(*); Public Key; Signature: Verisign • Site certificate: Subject: www.mybroker.com; Rights: Site; Public Key; Signature: Verisign • The Verisign self-certificate is on the short list. • The server provides a site certificate with subject matching the hostname portion of the URL requested by the user. • The browser accepts the certification.
Delegated Certification • CA certificate: Subject: Verisign; Rights: CA(*); Public Key; Signature: Verisign • Intermediate certificate: Subject: MyBroker, Inc.; Rights: CA(*.mybroker.com); Public Key; Signature: Verisign • Site certificate: Subject: www.mybroker.com; Rights: Site; Public Key; Signature: MyBroker, Inc. • The server provides the site certificate with matching subject plus the intermediate certificate. • The intermediate certificate is signed by a short-list CA. • The subject of the site certificate is within the scope of the delegation certificate, so the browser accepts the offered credentials.
Forged Certification • CA certificate: Subject: Verisign; Rights: CA(*); Public Key; Signature: Verisign • Intermediate certificate: Subject: Hacks-R-Us; Rights: CA(*.hack.org); Public Key; Signature: Verisign • Site certificate: Subject: www.mybroker.com; Rights: Site; Public Key; Signature: Hacks-R-Us • The server provides a site certificate with matching subject plus the intermediate certificate. • The intermediate certificate is signed by a short-list CA. • The subject of the site certificate is not within the scope of the delegation certificate, so the browser should reject the credentials and raise a very loud alarm.
It Gets Better! • CA certificate: Subject: Verisign; Rights: CA(*); Public Key; Signature: Verisign • Offered as the intermediate: Subject: www.hack.org; Rights: Site; Public Key; Signature: Verisign • Site certificate: Subject: www.mybroker.com; Rights: Site; Public Key; Signature: www.hack.org • Here, the server offers an ordinary site certificate as its authority for the forged site certificate! • Microsoft IE (through August, 2002) and Konqueror fail to check the rights (basic constraints fields) at all and therefore accept these credentials silently!
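A minimal sketch of the chain check the browser should perform, under simplifying assumptions (certificates as plain records; signature verification, expiry, and revocation all omitted). The two tests inside the loop are exactly the basic-constraints checks that the flawed browsers skipped.

# Toy certificate-chain check highlighting the missing basic-constraints test.
# Certificates are simplified dictionaries; cryptographic signature checking is omitted.
from fnmatch import fnmatch

TRUSTED_ROOTS = {"Verisign"}   # the browser's short list of CAs

def chain_is_valid(chain, hostname):
    # chain[0] is the site certificate; each certificate is vouched for by the next one.
    if chain[0]["subject"] != hostname:
        return False
    for cert, issuer in zip(chain, chain[1:]):
        # These two tests are the ones IE and Konqueror skipped: the issuer must be
        # marked as a CA, and the subject must fall within the delegated scope.
        if not issuer["is_ca"] or not fnmatch(cert["subject"], issuer["ca_scope"]):
            return False
    return chain[-1]["issuer"] in TRUSTED_ROOTS

forged_chain = [
    {"subject": "www.mybroker.com", "is_ca": False, "ca_scope": "", "issuer": "www.hack.org"},
    {"subject": "www.hack.org",     "is_ca": False, "ca_scope": "", "issuer": "Verisign"},
]
print(chain_is_valid(forged_chain, "www.mybroker.com"))  # False: a site cert cannot delegate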
Protocol Boundary vs Core • The previous two examples related to what Abadi calls the protocol boundaries. He argues, correctly, that verifying the core of the protocol without considering interface issues invites trouble. The SSH and SSL protocols provide for detection of attacks, but the applications either fail to report the anomalies or the user is desensitized to the alerts. • The following examples, in contrast, highlight the kinds of security flaws that are found in the cores of protocols. For these, at least, formal verification offers some hope.
Early ATM Hacking • From ``Why Cryptosystems Fail'' (1993) by Ross Anderson: Banks [in Italy] are basically suffering from two problems: • The first is a plague of bogus ATMs: devices that look like real ATMs, and may even be real ATMs, but which are programmed to capture customers' card and PIN data. • The second is that Italy's ATMs are generally offline. This means that anyone can open an account, get a card and PIN, make several dozen copies of the card, and get accomplices to draw cash from a number of different ATMs at the same time. This is also nothing new; it was a favorite modus operandi in Britain in the early 1980's.
Data Splicing ATM Attack • One large UK bank wrote the encrypted PIN to the card strip. It took the criminal fraternity fifteen years to figure out that you could change the account number on your own card's magnetic strip to that of your target, then use it with your own PIN to loot his account. • [A criminal] wrote a document about it which appears to have circulated in the UK prison system [...] • For this reason, VISA recommends that banks should combine the customer's account number and PIN before encryption. Not all of them do. -Anderson, 1993
The bank issues a card for Acct #1 carrying {1234}K and a card for Acct #2 carrying {4567}K. The holder of Acct #2 clones his own card, writes Acct #1's account number onto the strip next to his own {4567}K, types PIN 4567, and withdraws from Acct #1: jackpot.
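A toy model of the flaw and of VISA's fix (an HMAC stands in for "encrypt under the bank key K"; the key, account numbers, and PINs are made up). Because the weak scheme never mixes the account number into the encrypted PIN block, the stripe fields can be recombined freely.

# Toy model of the PIN-splicing flaw (illustrative only).
# HMAC-SHA256 stands in for "encrypt under the bank key K"; real systems used DES.
import hmac, hashlib

K = b"bank-pin-key"

def pin_block(data: str) -> bytes:
    return hmac.new(K, data.encode(), hashlib.sha256).digest()

# Weak scheme: the stripe holds the account number in clear plus {PIN}K.
def make_card_weak(account, pin):   return {"acct": account, "blob": pin_block(pin)}
def verify_weak(card, pin_entered): return card["blob"] == pin_block(pin_entered)

# VISA's recommendation: combine account number and PIN before encrypting.
def make_card_fixed(account, pin):   return {"acct": account, "blob": pin_block(account + pin)}
def verify_fixed(card, pin_entered): return card["blob"] == pin_block(card["acct"] + pin_entered)

victim = make_card_weak("Acct#1", "1234")
crook  = make_card_weak("Acct#2", "4567")
clone  = {"acct": victim["acct"], "blob": crook["blob"]}  # victim's account, crook's PIN block
print(verify_weak(clone, "4567"))    # True: the withdrawal is debited to Acct#1
spliced = {"acct": victim["acct"], "blob": make_card_fixed("Acct#2", "4567")["blob"]}
print(verify_fixed(spliced, "4567")) # False: the splice no longer verifies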
Protocol Verification: BAN Logic • Problems such as this one, and there are many more examples, lead us to look for ways to formally verify our protocols. • An early attempt was BAN Logic (Burrows, Abadi, Needham, 1989). • BAN Logic has been widely used and has helped identify weaknesses in many protocols. It has been especially useful as a tool in the development of new security protocols. • The central notion of BAN logic is belief. The goal of a BAN logic proof is to show that the protocol steps in fact justify the principals' beliefs about the security of their communications.
Black-Box Cryptography • Before we delve into BAN logic, let's agree on an abstract treatment of cryptographic mathematics as used by these protocols. As our description of Diffie-Hellman illustrates, the details can be gory. • We use K, K1, K2, etc. to denote arbitrary keys. • Keys shared by pairs of principals are subscripted by the principal identifiers: KAB. • A public key for principal A is KA and the corresponding private key is KA-1.
Black-Box Cryptography • The unencrypted message content is referred to as the plaintext and M is used to denote arbitrary plaintext. • Braces are used to denote the cyphertext resulting from application of an encryption function to some plaintext: {M}K is the cyphertext version of M encoded using key K. • The specific encryption method, or cryptosystem, should be apparent from the context.
Cryptosystem Properties • Conventional cryptography uses shared or symmetric keys. The best-known example is DES, with its successor AES coming on strong. • In a symmetric or shared key cryptosystem, a single key determines both the encryption and decryption algorithms. • As a special case, {{M}K}K = M in many, but not all, cryptosystems. • Determining M from {M}K without knowing K should be effectively impossible.
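As a concrete (and deliberately insecure) illustration of the black-box view, here is a toy symmetric cipher: XOR with a keystream derived from K. It has the {{M}K}K = M property, since applying the same operation twice cancels out.

# Toy symmetric cipher: XOR with a keystream derived from K.
# NOT secure in practice; it only illustrates the black-box property {{M}K}K = M.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same operation.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

M, K = b"attack at dawn", b"shared secret"
assert xor_crypt(K, xor_crypt(K, M)) == M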
Cryptosystem Properties • A public key or asymmetric cryptosystem employs pairs of keys K and K-1 with the essential property that K-1 is needed in order to determine M from {M}K. • Practical public-key cryptosystems have the key symmetry property {{M}K}K-1 = M. In other words, encryption and decryption use the same algorithm. • The key generation problem is to come up with distinct, mathematically sound keys as needed and in such a way that observers can't guess (entirely or partially) what keys will be or have been generated. • Key generation problems happen in practice. We assume reliable and secure key generation.
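Textbook RSA shows the key-symmetry property directly. The sketch below uses tiny, hard-coded primes (p = 61, q = 53) purely for illustration; real keys are thousands of bits long and must be generated randomly, which is exactly the key-generation problem just mentioned.

# Textbook RSA with toy parameters: n = 61 * 53 = 3233, e * d = 1 mod phi(n).
n, e, d = 3233, 17, 2753       # (n, e) is the public key, d the private exponent

def rsa(x: int, exponent: int) -> int:
    # One algorithm serves for both encryption and decryption.
    return pow(x, exponent, n)

m = 65
assert rsa(rsa(m, e), d) == m  # {{M}K}K-1 = M: encrypt with the public key, decrypt with the private
assert rsa(rsa(m, d), e) == m  # the reverse order is what digital signatures exploit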
Cryptosystem Properties • If one could guess K from {M}K the game would obviously be over. This is the classical cryptanalysis problem. • It is desirable that the cryptosystem resist derivation of information about K from the combination of M and {M}K (or a series of plaintext/cyphertext pairs). This is the known plaintext attack scenario.
Cryptosystem Properties • Many cryptosystems are vulnerable to chosen plaintext attacks where the adversary causes certain specific messages to be encrypted by the legitimate principal, exposing information about the key (or simply providing a marker in the cyphertext stream). • Cryptographic protocols must defend against any chosen-plaintext vulnerabilities in the underlying cryptosystem and should minimize the volume of known plaintext available to eavesdroppers.
Cryptosystem Properties • The message integrity and stream integrity properties are extremely important and very difficult to get right. • If your adversary has {a,b}K and {a',b'}K you don't want her to be able to construct {a,b'}K without also knowing K. • The class of message splicing attacks has not been systematically studied and new vulnerabilities in old protocols are being discovered.
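A toy illustration of the splicing problem, assuming each field of a message is enciphered independently under K (a deliberately naive, ECB-like construction): from {a,b}K and {a',b'}K the attacker assembles {a,b'}K by cutting and pasting cyphertext, without ever learning K.

# Per-field encryption: each field is enciphered on its own, with no binding to its
# neighbours, so cyphertext fields from different messages can be spliced together.
import hashlib

def keyed_pad(key: bytes, n: int) -> bytes:
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:n]

def enc_field(key: bytes, field: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(field, keyed_pad(key, len(field))))

def enc_msg(key: bytes, a: bytes, b: bytes) -> bytes:
    return enc_field(key, a) + enc_field(key, b)

K = b"k"
c1 = enc_msg(K, b"pay alice", b"$10")   # {a, b}K
c2 = enc_msg(K, b"pay carol", b"$99")   # {a', b'}K
spliced = c1[:9] + c2[9:]               # built from cyphertext alone, K never used
assert spliced == enc_msg(K, b"pay alice", b"$99")   # == {a, b'}K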
Cryptosystem Properties • We won't get into the technical details of message integrity mechanisms here. Unless otherwise specified, we assume message integrity is assured by the cryptosystem but stream integrity is not. • Public-key cryptography permits digital signature applications. Recall that {{M}K1}K2 = M characterizes public-key systems. For message secrecy applications K1 is public and K2 secret. If instead K1 is the secret key and K2 is public, anyone can verify that the cyphertext was produced by a principal in possession of K1.
Cryptosystem Properties • Another class of cryptographic tools are the trapdoor or hash functions. These use a non-secret algorithm to transform a plaintext into a characteristic fingerprint (or hash). • The key property of a trapdoor is that it is very difficult to find M' given M so that f(M') = f(M). • One common trick is to use {T}M with a fixed T. • Better algorithms exist, exemplified by MD5. • The clearsigning paradigm is to use a hash function and send {M, {f(M)}K-1}. • Digital signature applications demand a reliable way to associate public keys with principals. Solutions are called Public Key Infrastructure (PKI) architectures.
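A sketch of the clearsigning paradigm {M, {f(M)}K-1}, using MD5 as the hash f and the toy RSA parameters from the earlier sketch as the signature. Everything here is scaled down for illustration: a real digest does not even fit inside so small a modulus (so we reduce it mod n), and real signature schemes add padding and use far larger keys.

# Clearsigning sketch: send M in clear together with a signed digest {f(M)}K-1.
import hashlib

n, e, d = 3233, 17, 2753                 # toy RSA key pair (p = 61, q = 53)

def f(m: bytes) -> int:
    # MD5 digest, squeezed into the toy modulus so the signature arithmetic works.
    return int.from_bytes(hashlib.md5(m).digest(), "big") % n

def clearsign(m: bytes):
    return (m, pow(f(m), d, n))          # {M, {f(M)}K-1}

def verify(m: bytes, sig: int) -> bool:
    return pow(sig, e, n) == f(m)        # anyone holding the public key can check

msg, sig = clearsign(b"meet at noon")
print(verify(msg, sig))                  # True
print(verify(b"meet at one", sig))       # False: a tampered message fails the check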
Engineering and BAN-idealized representations of the Otway-Rees protocol.
The assumptions underlying the Otway-Rees protocol, above, and the conclusions below. Homework: prove that the conclusions follow from the assumptions using the inference rules of the BAN logic.
ATM Cards Revisited • Principals: Bank B, Customer A, and Machine T. Let P be the customer's key (PIN) and K be the PIN-encrypting key shared by B and T. • The card is treated as a message from the Bank to the ATM: B->T: A, {P}K. • The customer's instruction to the ATM is authenticated using the PIN: A->T: <X>P. • The ATM acts on the instruction locally and later informs the bank about it (via a secure channel): T->B: {X(A)}K.
ATM Cards Revisited • The ATM's job is to make sure that the instruction really came from the owner of the account. In idealized form, T believes A believes X. • To prove this after message 2 we need to idealize message 1 as {A/B share P}K, but a moment's inspection shows why we can't do that. • The proof goes through with the corrected protocol: B->T: {A,P}K. • You have to assume message 2 is fresh; this is reasonable, since it is typed on T's keyboard.
ATM Cards Revisited • Homework: • Complete the proof; • Show that the proof fails for the original protocol. • Use BAN and Engineering notation together to describe the thief's attack. • Hints: • T believes B controls T/* share * • Note that the A in T->B: X(A) comes from message 1, that is, the ATM card. • It is traditional to use C to represent the malicious party in a protocol run. Please do so, for the grader.
BAN Logic Limitations • We have seen that BAN logic can highlight design flaws in authentication protocols. It has also proved useful in helping protocol designers think more clearly about the assumptions and goals of their protocols. • We have also seen that idealization is not a rigorously defined process, and the ability of BAN logic to identify failures depends on proper idealization.
BAN Logic Limitations • We have also seen that BAN logic does not capture the credential-checking required of the principals in a formal way (ATM example). In other words, BAN logic treats the core of the protocol without considering the boundary issues. • To illustrate the point, let's look at an attack against Otway-Rees found by Boyd and Mao. • Recall that Otway-Rees was ``proved'' by BAN.
The remaining protocol steps lead A to incorrectly believe that the key KCS is shared with B when in fact it is shared with C. Since a shared key creates a logical secure channel, the effect is to allow C to impersonate B in a conversation with A, in violation of the purported authentication guaranteed by the protocol. Boyd and Mao go on to show another pitfall related to idealization in BAN's treatment of this protocol. Refer to their paper for detail.
BAN Logic Limitations • The Otway-Rees example shows that the protocol interface zone is not adequately verified by BAN logic. • It turns out that BAN logic is flawed in even more fundamental ways: • BAN analysis (1989) is able to detect potential replay attacks (1981, 1983) against the Needham-Schroeder (1978) protocol. • A different attack was discovered by Lowe in 1995, when the protocol was seventeen years old and six years after BAN ``proved'' NS secure.
Other Approaches to Protocol Verification • Model checking • Design rules (Anderson & Needham) • Process Algebras, inductive proofs. • CSP (Tony Hoare) • Casper (combines algebra & model checking) • The Papa/Magill/Nuchia Calculus • A process algebra based directly on protocol engineering notation. • Reasons directly about the knowledge available to an intruder. Proofs are fundamentally inductive.
The Dolev-Yao Assumptions • own, v. A hacker culture term that means to control completely. A machine broken into and under complete control of the hacker is "owned." The term has sexual overtones. - http://www.robertgraham.com/pubs/hacking-dict.html • The communications channel is owned by your opponent. • He can observe, divert, modify, fabricate and replay messages at will. • The intruder knows everything about your protocol, network, and cryptography except what you explicitly assume (and can prove) is secret.
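A sketch of how "what the intruder can know" is computed under these assumptions: starting from the messages he has observed (plus anything he owns outright), close the set of terms under decomposition, that is, projection of pairs and decryption with keys already known. The terms below are made-up symbolic tuples; real tools work symbolically and also track the synthesis rules (pairing and encrypting), which are omitted here for brevity.

# Dolev-Yao style intruder-knowledge closure (toy, symbolic terms only).
# Atoms are strings; ("pair", a, b) and ("enc", m, k) are compound terms.

def closure(knowledge):
    know = set(knowledge)
    changed = True
    while changed:
        changed = False
        derived = set()
        for t in know:
            if isinstance(t, tuple) and t[0] == "pair":
                derived |= {t[1], t[2]}                  # projection
            if isinstance(t, tuple) and t[0] == "enc" and t[2] in know:
                derived.add(t[1])                        # decrypt with a known key
        if not derived <= know:
            know |= derived
            changed = True
    return know

# The intruder has observed {(Na, Kab)}Kas and, in this scenario, also owns Kas.
observed = {("enc", ("pair", "Na", "Kab"), "Kas"), "Kas"}
print("Kab" in closure(observed))   # True: the session key is within his reach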