
Approximate List-Decoding and Uniform Hardness Amplification




  1. Approximate List-Decoding and Uniform Hardness Amplification Russell Impagliazzo (UCSD) Ragesh Jaiswal (UCSD) Valentine Kabanets (SFU)

  2. Hardness Amplification
  [Diagram: a hard function f is transformed into a harder function F]
  • Given a hard function, we can construct an even harder function

  3. Hardness
  [Diagram: f: {0,1}^n → {0,1}; every size-s circuit errs on at least δ·2^n of the inputs]
  • A function f is called δ-hard for circuits of size s (resp. algorithms with running time t) if every circuit of size s (resp. algorithm running in time t) makes a mistake in predicting the function on at least a δ fraction of the inputs
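For reference, here is our restatement of that bullet in symbols (not on the slide itself):

```latex
f \text{ is } \delta\text{-hard for size } s
\;\iff\;
\forall C \text{ with } |C| \le s:\;
\Pr_{x \sim \{0,1\}^n}\bigl[\,C(x) \ne f(x)\,\bigr] \;\ge\; \delta .
```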

  4. XOR Lemma
  [Diagram: f^k applies f to each of k independent n-bit inputs and XORs the k output bits]
  f^k: {0,1}^{nk} → {0,1}, f^k(x_1, …, x_k) = f(x_1) ⊕ … ⊕ f(x_k)
  • XOR Lemma: If f is δ-hard for size-s circuits, then f^k is (1/2 − ε)-hard for size-s' circuits (ε = e^{−Ω(δk)}, s' = s·poly(δ, ε))
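As a concrete illustration of the construction (the lemma itself is about hardness, which a snippet cannot exhibit), here is a minimal Python sketch; the stand-in function `f` is ours, purely illustrative:

```python
from functools import reduce
from operator import xor

def xor_power(f, xs):
    """f^k(x_1, ..., x_k) = f(x_1) XOR ... XOR f(x_k).

    f:  any Boolean function on n-bit inputs (stand-in for the hard function)
    xs: a sequence of k inputs to f
    """
    return reduce(xor, (f(x) for x in xs))

# Toy usage: f = parity of the bits of an input (illustrative only).
f = lambda x: sum(x) % 2
print(xor_power(f, [(0, 1, 1), (1, 0, 0), (1, 1, 1)]))  # 0 XOR 1 XOR 1 = 0
```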

  5. XOR Lemma Proof: Ideal Case
  [Diagram: C (computes f^k on at least a (½ + ε) fraction of inputs) → A → whp → C' (computes f on at least a (1 − δ) fraction of inputs)]

  6. XOR Lemma Proof: A "lesser" nonuniform reduction
  [Diagram: C (computes f^k on at least a (½ + ε) fraction of inputs) + Advice (|Advice| = poly(1/ε)) → A → whp → circuits C_1, …, C_l]
  • At least one of C_1, …, C_l computes f on at least a (1 − δ) fraction of inputs
  • l = 2^{|Advice|} = 2^{poly(1/ε)}

  7. Optimal List Size
  • Question: What reduction in the list size should we target?
  • Error-correcting codes give a good combinatorial answer
  [Diagram: C → A → whp → C_1, …, C_l]

  8. XOR-based Code [T03]
  • Think of a binary message msg of length M = 2^n as the truth table of a Boolean function f. The code of msg has length M^k, where code(x_1, …, x_k) = f(x_1) ⊕ … ⊕ f(x_k)
  [Diagram: position x (|x| = n) of msg holds f(x); position x = (x_1, …, x_k) of code holds f(x_1) ⊕ … ⊕ f(x_k)]
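A minimal sketch of this encoding, assuming the message is given as a list of 2^n bits and a codeword position is a k-tuple of indices into it (plain integers here, for simplicity); the name `xor_code` is ours:

```python
from functools import reduce
from operator import xor

def xor_code(msg, position):
    """Locally evaluate one position of the XOR-based code.

    msg:      list of 2^n bits, viewed as the truth table of f
    position: tuple (x_1, ..., x_k) of integer indices into msg
    Returns f(x_1) XOR ... XOR f(x_k).  Only k message bits are touched,
    which is the locality that local encoding/decoding relies on.
    """
    return reduce(xor, (msg[x] for x in position))

# Toy usage: n = 2, so msg has 4 bits; evaluate code position (0, 3, 2).
msg = [1, 0, 1, 1]
print(xor_code(msg, (0, 3, 2)))  # 1 XOR 1 XOR 1 = 1
```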

  9. List Decoder
  [Diagram: message m → XOR encoding → codeword c → channel → received word w, with w ≈ (½ + ε)-close to c; the decoder outputs a list m_1, …, m_l, some m_i ≈ (1 − δ)-close to m]
  • Decoder: local, approximate, list
  • Information-theoretically, l should be O(1/ε²)
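The O(1/ε²) figure is the standard Johnson-bound calculation; a hedged restatement of that bound (our addition, not from the slides) in its usual binary form:

```latex
% Johnson bound, binary case: if all pairs of codewords are far apart,
% then no Hamming ball of radius 1/2 - eps contains many codewords.
\text{If } \Delta(c_i, c_j) \ge \tfrac12 - \varepsilon^2 \text{ for all } i \ne j,
\text{ then for every word } w:\quad
\bigl|\{\, c : \Delta(w, c) \le \tfrac12 - \varepsilon \,\}\bigr| \;=\; O\!\left(1/\varepsilon^2\right).
```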

  10. The List Size
  • The proof of Yao's XOR Lemma yields an approximate local list-decoding algorithm for the XOR-code defined above
  • But the list size is 2^{poly(1/ε)} rather than the optimal poly(1/ε)
  • Goal: Match the information-theoretic bound on list-decoding, i.e., get advice of length log(1/ε)

  11. The Main Result

  12. The Main Result
  [Diagram: C ((½ + ε)-computes f^k) + Advice (|Advice| = log(1/ε)) → A → whp → C' ((1 − δ)-computes f)]
  • ε = poly(1/k), δ = O(k^{−0.1})
  • The running time of A and the size of the output circuit are at most poly(|C|, 1/ε)

  13. The Main Result
  [Diagram: C ((½ + ε)-computes f^k) → A → w.p. poly(ε) → C' ((1 − δ)-computes f), with no advice]
  • ε = poly(1/k), δ = O(k^{−0.1})
  • The running time of A and the size of the output circuit are at most poly(|C|, 1/ε)

  14. The Main Result
  [Diagram: C ((½ + ε)-computes f^k) → A, succeeding w.p. poly(ε); equivalently A' with advice of length log(1/ε) → whp → circuits C_1, …, C_l, l = poly(1/ε), at least one of which (1 − δ)-computes f: an advice-efficient XOR Lemma]
  • We get a list size of poly(1/ε)…
  • …which is optimal, but…
  • ε is large: ε = poly(1/k)

  15. Uniform Hardness Amplification

  16. Uniform Hardness Amplification
  • What we want: f hard wrt BPP → g harder wrt BPP
  • What we get (via the advice-efficient XOR Lemma): f hard wrt BPP/log → g harder wrt BPP

  17. Uniform Hardness Amplification
  • What we can do:
  f ∈ NP, 1/n^c-hard wrt BPP
  → simple average-case reduction [BDCGL92] → f' ∈ NP, hard wrt BPP/log
  → advice-efficient XOR Lemma → g ∈ P^{NP||}, (½ − 1/n^d)-hard wrt BPP
  • g is not necessarily in NP, but g ∈ P^{NP||}
  • P^{NP||}: a poly-time TM that can make polynomially many parallel oracle queries to an NP oracle
  • Trevisan gives a weaker reduction (from 1/n^c-hardness to (½ − 1/(log n)^α)-hardness) but stays within NP

  18. Techniques

  19. Techniques
  • Advice-efficient Direct Product Theorem
  • A Sampling Lemma
  • Learning without advice
  • Self-generated advice
  • Fault-tolerant learning using faulty advice

  20. Direct Product Theorem
  [Diagram: f^k applies f to each of k independent n-bit inputs and concatenates the k output bits]
  f^k: {0,1}^{nk} → {0,1}^k, f^k(x_1, …, x_k) = f(x_1) | … | f(x_k)
  • Direct Product Theorem: If f is δ-hard for size-s circuits, then f^k is (1 − ε)-hard for size-s' circuits (ε = e^{−Ω(δk)}, s' = s·poly(δ, ε))
  • Goldreich-Levin Theorem: the XOR Lemma and the Direct Product Theorem are essentially equivalent (each can be derived from the other)
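For contrast with the XOR sketch above, the direct product outputs all k bits instead of their parity; a one-liner sketch, again with an illustrative stand-in `f`:

```python
def direct_product(f, xs):
    """f^k(x_1, ..., x_k) = f(x_1) | ... | f(x_k): the tuple of all k
    output bits of f, rather than their XOR."""
    return tuple(f(x) for x in xs)

f = lambda x: sum(x) % 2  # illustrative stand-in for the hard function
print(direct_product(f, [(0, 1, 1), (1, 0, 0), (1, 1, 1)]))  # (0, 1, 1)
```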

  21. XOR Lemma from Direct Product Theorem
  [Diagram: C ((½ + ε)-computes f^k, the XOR version) → A_1, using the Goldreich-Levin Theorem → whp → C_DP (poly(ε)-computes f^k, the direct product) → A_2 → w.p. poly(ε) → C' ((1 − δ)-computes f)]
  • ε = poly(1/k), δ = O(k^{−0.1})
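The slides invoke Goldreich-Levin as a black box. For intuition only, here is a self-contained toy version of the classic GL list decoder (this is a generic sketch of the GL algorithm, not the paper's reduction A_1; the names and the noisy-oracle test harness are ours):

```python
import itertools
import random

def xor_vec(u, v):
    return [a ^ b for a, b in zip(u, v)]

def goldreich_levin(pred, n, t=4, seed=0):
    """Toy Goldreich-Levin list decoder.

    pred(r) agrees with the inner product <a, r> mod 2 on a 1/2 + eps
    fraction of r's, for some hidden a in {0,1}^n.  For each of the 2^t
    guesses of <a, r_1>, ..., <a, r_t>, bit a_i is recovered by a majority
    vote over the pairwise-independent points r_S = XOR_{j in S} r_j,
    using <a, r_S + e_i> = <a, r_S> + a_i.  Returns 2^t candidate vectors.
    """
    rng = random.Random(seed)
    rs = [[rng.randrange(2) for _ in range(n)] for _ in range(t)]
    subsets = [S for m in range(1, t + 1)
               for S in itertools.combinations(range(t), m)]
    candidates = []
    for guess in itertools.product((0, 1), repeat=t):
        a = []
        for i in range(n):
            votes = 0
            for S in subsets:
                r, b = [0] * n, 0
                for j in S:
                    r = xor_vec(r, rs[j])
                    b ^= guess[j]
                r[i] ^= 1  # query at r_S + e_i
                votes += pred(r) ^ b
            a.append(1 if 2 * votes > len(subsets) else 0)
        candidates.append(a)
    return candidates

# Toy usage: a noisy inner-product oracle for a hidden vector a.
n, a = 8, [1, 0, 1, 1, 0, 0, 1, 0]
rng = random.Random(1)
def pred(r):
    correct = sum(x * y for x, y in zip(a, r)) % 2
    return correct if rng.random() < 0.9 else 1 - correct  # 90% agreement
print(a in goldreich_levin(pred, n))  # True with high probability
```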

  22. LEARN from [IW97]
  [Diagram: C_DP (ε-computes f^k) + Advice: n/ε² pairs (x, f(x)) for independent uniform x's → LEARN [IW97] → whp → C ((1 − δ)-computes f)]
  • ε = e^{−Ω(δk)}
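The slides don't spell out LEARN's inner loop; the following is a heavily simplified sketch of the embedding idea behind it (plant x into a random tuple, use the advice pairs to sanity-check C_DP's answer, take a majority vote). The name `learn_bit` is ours, it assumes |advice| ≥ k − 1, and the actual algorithm of [IW97] is more involved:

```python
import random

def learn_bit(C_DP, advice, x, k, reps=1000, rng=random):
    """Simplified sketch of the [IW97] LEARN idea for a single input x.

    C_DP:   maps a k-tuple of inputs to k guessed bits of f
    advice: list of known pairs (y, f(y)) for independent uniform y's
    Plants x at a random coordinate of a k-tuple otherwise filled with
    advice inputs, keeps C_DP's vote for x only when the other k-1 output
    bits match the known labels, and returns the majority surviving vote.
    """
    votes = []
    for _ in range(reps):
        pos = rng.randrange(k)
        others = rng.sample(advice, k - 1)
        tup = [y for y, _ in others[:pos]] + [x] + [y for y, _ in others[pos:]]
        out = C_DP(tup)
        labels = [b for _, b in others[:pos]] + [None] + [b for _, b in others[pos:]]
        if all(out[i] == labels[i] for i in range(k) if labels[i] is not None):
            votes.append(out[pos])
    if not votes:
        return None  # no consistent vote survived
    return 1 if 2 * sum(votes) > len(votes) else 0
```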

  23. Goal
  • We want to eliminate the advice (the (x, f(x)) pairs)
  • In exchange, we are ready to compromise on the success probability of the randomized algorithm
  [Diagram: C_DP (ε-computes f^k) → LEARN [IW97], with advice: n/ε² pairs (x, f(x)) for independent uniform x's → whp → C ((1 − δ)-computes f); versus LEARN', with no advice!!! → w.p. poly(ε) → C ((1 − δ)-computes f)]
  • For LEARN: ε = e^{−Ω(δk)}; for LEARN': ε = poly(1/k), δ = O(k^{−0.1})

  24. Self-generated advice

  25. Imperfect samples
  • We want to use the circuit C_DP to generate n/ε² pairs (x, f(x)) for independent uniform x's
  • We will settle for n/ε² pairs (x, b_x) where:
  - the distribution on x's is statistically close to uniform, and
  - for most x's we have b_x = f(x)
  • Then run a fault-tolerant version of LEARN on C_DP and the generated pairs (x, b_x)

  26. How to generate imperfect samples

  27. A Sampling Lemma
  [Diagram: a k-tuple (x_1, x_2, x_3, …, x_k) drawn from the whole space {0,1}^{nk}, of size 2^{nk}; D denotes the induced distribution on a random coordinate]
  • When the tuple is uniform over {0,1}^{nk}, D is the uniform distribution

  28. A Sampling Lemma
  [Diagram: the k-tuple (x_1, x_2, x_3, …, x_k) is now drawn uniformly from a subset G ⊆ {0,1}^{nk}]
  • |G| ≥ ε·2^{nk}
  • Stat-Dist(D, U) ≤ ((log 1/ε)/k)^{1/2}
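A quick Monte Carlo illustration of this bound (ours, not from the talk), assuming the single-coordinate reading of D from the previous slide; the paper's lemma concerns a random subtuple, and checking one particular G is of course no proof:

```python
import math
import random
from collections import Counter

def sampling_lemma_demo(n=4, k=64, eps=0.1, trials=200000, seed=0):
    """Monte Carlo illustration of the Sampling Lemma for one choice of G.

    G = { k-tuples over {0,1}^n whose first coordinate is < eps * 2^n },
    a deliberately skewed set of density >= eps.  A random coordinate of a
    uniform G-tuple is simulated directly: coordinate 0 is uniform on the
    restricted range, every other coordinate is uniform on {0,1}^n.  The
    lemma bounds the statistical distance from uniform by sqrt(log(1/eps)/k).
    """
    rng = random.Random(seed)
    N = 2 ** n
    cutoff = max(1, int(eps * N))
    counts = Counter()
    for _ in range(trials):
        j = rng.randrange(k)
        counts[rng.randrange(cutoff) if j == 0 else rng.randrange(N)] += 1
    sd = 0.5 * sum(abs(counts[v] / trials - 1 / N) for v in range(N))
    bound = math.sqrt(math.log(1 / eps) / k)
    print(f"empirical stat-dist {sd:.4f} <= bound {bound:.4f}")

sampling_lemma_demo()
```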

  29. Getting Imperfect Samples
  • G: the subset of inputs on which C_DP(x) = f^k(x); |G| ≥ ε·2^{nk}
  • Pick a random k-tuple x, then pick a random subtuple x' of size k^{1/2}
  • With probability ε, x lands in the "good" set G
  • Conditioned on this, the Sampling Lemma says that x' is close to being uniformly distributed
  • If k^{1/2} > the number of samples required by LEARN (n/ε²), then we are done!
  • Else…
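A short sketch of this self-generated-advice step (the name `self_generated_pairs` is ours, and C_DP's interface is assumed as in the earlier sketches):

```python
import math
import random

def self_generated_pairs(C_DP, k, n, rng=random):
    """Sketch: generate imperfect (x, b_x) pairs from C_DP alone.

    Draw a uniform k-tuple, ask C_DP for its k guessed bits of f, and keep
    a random subtuple of size sqrt(k) with the corresponding guesses.  With
    probability >= eps the tuple lands in the good set G, and then the
    Sampling Lemma makes the kept x's close to uniform, with mostly correct
    labels: exactly the imperfect samples fed to fault-tolerant LEARN.
    """
    xs = [tuple(rng.randrange(2) for _ in range(n)) for _ in range(k)]
    out = C_DP(xs)
    m = int(math.isqrt(k))
    idx = rng.sample(range(k), m)
    return [(xs[i], out[i]) for i in idx]
```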

  30. Direct Product Amplification
  • What we would like: C_DP → C_DP' which poly(ε)-computes f^{k'}, where (k')^{1/2} > n/ε² ??
  • What we can achieve: C_DP → C_DP' such that, for at least a poly(ε) fraction of k'-tuples x, C_DP'(x) and f^{k'}(x) agree on most bits

  31. Putting Everything Together

  32. Putting Everything Together
  [Diagram: C_DP for f^k → DP amplification → C_DP' for f^{k'} → sampling → pairs (x, b_x) → fault-tolerant LEARN → with probability ≥ poly(ε), a circuit C that (1 − δ)-computes f]
  • Repeat poly(1/ε) times to get a list containing a good circuit for f, w.h.p.
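A structural sketch of this pipeline, with the three components passed in as parameters (all stand-ins for the pieces sketched above and in the paper, not a real implementation):

```python
def approximate_list_decode(C_DP, dp_amplify, make_pairs, ft_learn, reps):
    """Skeleton of the whole reduction.

    Each iteration succeeds with probability >= poly(eps), so taking
    reps = poly(1/eps) iterations makes the returned list contain a
    (1 - delta)-good circuit for f with high probability.
    """
    candidates = []
    for _ in range(reps):
        C_amp = dp_amplify(C_DP)                   # Direct Product amplification
        pairs = make_pairs(C_amp)                  # self-generated (imperfect) advice
        candidates.append(ft_learn(C_amp, pairs))  # fault-tolerant LEARN
    return candidates
```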

  33. Open Questions

  34. Open Questions
  • Advice-efficient XOR Lemma for smaller ε: for ε > exp(−k^α) we get a quasi-polynomial list size
  • Can we get an advice-efficient hardness amplification result using a monotone combining function m (instead of ⊕)?
  • Some results: [Buresh-Oppenheim, Kabanets, Santhanam] use monotone list-decodable codes to re-prove Trevisan's results for amplification within NP

  35. Thank You
