Measuring Anonymity: A Tale of Two Distributions Nikita Borisov UIUC PET 2006 Rump Session
Anonymity measures
• High-level problem: characterize a probability distribution
• D1: 1/128, 1/128, …, 1/128
• D2: 1/2, 1/8192, …, 1/8192
• Which is better?
Answers
• Reiter-Rubin: D1 is better
  • D1 is beyond suspicion; D2 has only probable innocence
• Anonymity sets: D2 is better
  • Anonymity set size is 4097 instead of 128
• Entropy metric (Shannon): D1 = D2
  • H(D1) = H(D2) = 7
• Entropy (min): D1 is better
  • H_min(D1) = 7, H_min(D2) = 1
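A minimal sketch in plain Python (not from the talk) that reproduces these numbers: D1 is uniform over 128 senders, and D2 puts 1/2 on one sender and spreads the rest over 4096 senders at 1/8192 each.

```python
import math

d1 = [1 / 128] * 128
d2 = [1 / 2] + [1 / 8192] * 4096

def shannon_entropy(p):
    """H(X) = -sum p_i log2 p_i, skipping zero-probability entries."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def min_entropy(p):
    """H_min(X) = -log2 max_i p_i."""
    return -math.log2(max(p))

for name, d in [("D1", d1), ("D2", d2)]:
    assert abs(sum(d) - 1) < 1e-9           # sanity check: valid distribution
    print(name,
          "set size =", sum(1 for q in d if q > 0),   # 128 vs 4097
          "H =", shannon_entropy(d),                  # 7.0 for both
          "H_min =", min_entropy(d))                  # 7.0 vs 1.0
```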
Single message case
• D1 is better
  • Imagine hiring a hit man to attack the k most likely people
  • With D1, your ROI is low for any k
  • With D2, k = 1 is pretty good
• Is min-entropy what we want?
  • H_min(D2) = H_min([1/2, 1/2]), so min-entropy cannot tell D2 apart from a 50/50 split between two suspects
  • Perhaps guessing entropy is better
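A sketch of the hit-man intuition and of guessing entropy, assuming the standard definition G(X) = Σ i·p(i) over probabilities sorted in descending order (the talk names the metric but does not spell out the formula); the distributions are the ones from the earlier slide.

```python
d1 = [1 / 128] * 128
d2 = [1 / 2] + [1 / 8192] * 4096

def top_k_success(p, k):
    """Probability the attacker wins by targeting the k most likely people."""
    return sum(sorted(p, reverse=True)[:k])

def guessing_entropy(p):
    """Expected number of guesses to find the true sender."""
    return sum(i * q for i, q in enumerate(sorted(p, reverse=True), start=1))

for k in (1, 4, 16):
    print(k, top_k_success(d1, k), top_k_success(d2, k))  # k/128 vs 1/2 + ...
print("G(D1) =", guessing_entropy(d1))  # (128 + 1) / 2 = 64.5
print("G(D2) =", guessing_entropy(d2))  # ~1025: unlike min-entropy, this
                                        # separates D2 from [1/2, 1/2] (G = 1.5)
```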
Multiple message case
• Suppose the attacker observes 2 independent messages
• Model the anonymity system as a noisy channel
  • Mutual information tells us about channel capacity
  • I(X; Y) = H(X) - H(X | Y)
• Two observations:
  • I(X; Y1, Y2) <= I(X; Y1) + I(X; Y2)
  • (equality when Y1, Y2 are independent)
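An illustrative check of the two-observation bound. The talk does not fix a concrete channel, so a binary symmetric channel with an assumed crossover probability e = 0.1 stands in for the anonymity system; the observations are conditionally independent given the true sender X.

```python
import math
from itertools import product

e = 0.1                                  # assumed crossover probability
px = {0: 0.5, 1: 0.5}                    # X: true sender, a fair bit
py_given_x = lambda y, x: 1 - e if y == x else e

def H(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def mutual_info(n):
    """I(X; Y1..Yn) for n conditionally independent observations of X."""
    joint = {}                           # P(x, y1..yn)
    for x in px:
        for ys in product((0, 1), repeat=n):
            p = px[x]
            for y in ys:
                p *= py_given_x(y, x)
            joint[(x, ys)] = p
    p_ys = {}                            # marginal P(y1..yn)
    for (x, ys), p in joint.items():
        p_ys[ys] = p_ys.get(ys, 0) + p
    return H(px) + H(p_ys) - H(joint)    # I(X; Y) = H(X) + H(Y) - H(X, Y)

i1, i2 = mutual_info(1), mutual_info(2)
print(i1, i2, i2 <= 2 * i1)             # True: the second observation of the
                                        # same X adds less than the first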
Anonymity degree
• It takes at least H(X) / I(X; Y) messages to learn X precisely
• This is in fact 1/(1 - d), where d is the Diaz et al. degree of anonymity
  • d = H(X|Y) / H(X)
• Normalizing makes sense!
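A small worked sketch of the normalization. The entropy values below are assumed for illustration (H(X) = 7 bits, as for 128 equally likely suspects; H(X|Y) = 5.6 is made up), not taken from the talk.

```python
H_X = 7.0           # prior uncertainty about the sender
H_X_given_Y = 5.6   # assumed residual uncertainty after one observation

I_XY = H_X - H_X_given_Y        # information leaked per message: 1.4 bits
d = H_X_given_Y / H_X           # Diaz et al. degree of anonymity: 0.8
print(H_X / I_XY)               # messages needed to learn X: 5.0
print(1 / (1 - d))              # the same bound via the normalized degree
```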