
Sparse Random Linear Codes are Locally Decodable and Testable


Presentation Transcript


  1. Sparse Random Linear Codes are Locally Decodable and Testable Tali Kaufman (MIT) Joint work with Madhu Sudan (MIT)

2. Error-Correcting Codes
• Code C ⊆ {0,1}^n - a collection of vectors (codewords) of length n.
• Linear code - the codewords form a linear subspace.
• Codeword weight - for c ∈ C, w(c) is the number of non-zero coordinates of c.
• C is: n^t-sparse if |C| = n^t; n^(-γ)-biased if n/2 − n^(1−γ) ≤ w(c) ≤ n/2 + n^(1−γ) for every non-zero c ∈ C; of distance d if w(c) ≥ d for every non-zero c ∈ C.
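As a concrete illustration (not part of the talk), these definitions can be checked by brute force on a toy code; the generator rows below are a hypothetical example:

```python
from itertools import product

def span(generators, n):
    """All GF(2) linear combinations of the generator rows (the code C)."""
    code = set()
    for coeffs in product([0, 1], repeat=len(generators)):
        word = tuple(sum(c * g[j] for c, g in zip(coeffs, generators)) % 2
                     for j in range(n))
        code.add(word)
    return code

def weight(c):
    """w(c): number of non-zero coordinates of the codeword c."""
    return sum(c)

def weight_range(code):
    """Min/max weight over non-zero codewords: C is unbiased when both
    stay close to n/2, and has distance d when the minimum is >= d."""
    ws = [weight(c) for c in code if any(c)]
    return min(ws), max(ws)

# Toy example: a length-4 code spanned by two rows; every non-zero
# codeword has weight exactly n/2 = 2, so the code is perfectly balanced.
C = span([(0, 0, 1, 1), (0, 1, 0, 1)], 4)
```

Here `weight_range(C)` returns `(2, 2)`, matching the bias condition with the slack term equal to zero.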

3. Local Testing / Correcting / Decoding
Given C ⊆ {0,1}^n and a vector v, make k queries into v:
• k-local testing - decide if v is in C or far from every c ∈ C.
• k-local correcting - if v is close to c ∈ C, recover c(i) w.h.p.
• k-local decoding - if v is close to c ∈ C and c encodes a message m, recover m(i) w.h.p. [C = {E(m) | m ∈ {0,1}^s}, E: {0,1}^s → {0,1}^n, s < n]
Example: Hadamard code (linear functions). For a ∈ {0,1}^(log n), f(x) = Σᵢ aᵢxᵢ:
• (k=3) testing: f(x) + f(y) + f(x+y) = 0? for random x, y.
• (k=2) correcting: correct f(x) by f(x+y) + f(y) for a random y.
• (k=2) decoding: recover a(i) by f(eᵢ + y) + f(y) for a random y.
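The three Hadamard procedures above can be sketched directly; this is my own hedged illustration (the coefficient vector `a` is a made-up example), not code from the talk:

```python
import random

def blr_test(f, n_bits, trials=200):
    """(k=3) local test: accept iff f(x) + f(y) = f(x+y) for random x, y."""
    for _ in range(trials):
        x = [random.randrange(2) for _ in range(n_bits)]
        y = [random.randrange(2) for _ in range(n_bits)]
        s = [(a + b) % 2 for a, b in zip(x, y)]
        if (f(x) + f(y)) % 2 != f(s):
            return False
    return True

def local_correct(f, x):
    """(k=2) local correction: f(x) = f(x+y) + f(y) for a random y."""
    y = [random.randrange(2) for _ in range(len(x))]
    s = [(a + b) % 2 for a, b in zip(x, y)]
    return (f(s) + f(y)) % 2

def local_decode(f, i, n_bits):
    """(k=2) local decoding: a_i = f(e_i + y) + f(y) for a random y."""
    y = [random.randrange(2) for _ in range(n_bits)]
    s = [(b + (1 if j == i else 0)) % 2 for j, b in enumerate(y)]
    return (f(s) + f(y)) % 2

# A linear function f(x) = a.x for a hypothetical coefficient vector a.
a = [1, 0, 1, 1]
f = lambda x: sum(ai * xi for ai, xi in zip(a, x)) % 2
```

On a truly linear `f` the corrector and decoder are deterministic identities; their value as *local* procedures shows up only when `f` is corrupted in a small fraction of positions.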

4. Brief History
Local correction: [Blum, Luby, Rubinfeld], in the context of program checking.
Local testability: [Blum, Luby, Rubinfeld], [Rubinfeld, Sudan], [Goldreich, Sudan]; the core hardness of PCPs.
Local decoding: [Katz, Trevisan], [Yekhanin], in the context of Private Information Retrieval (PIR) schemes.
Most previous results (apart from [K, Litsyn]) focus on specific codes, relying on their "nice" algebraic structure. This work: results for general codes, based only on their density and distance.

5. Our Results
Theorem (local correction): For every constant t, γ > 0, if C ⊆ {0,1}^n is n^t-sparse and n^(-γ)-biased, then it is k = k(t, γ) locally correctable.
Corollary (local decoding): For every constant t, γ > 0, if E: {0,1}^(t log n) → {0,1}^n is a linear map such that C = {E(m) | m ∈ {0,1}^(t log n)} is n^t-sparse and n^(-γ)-biased, then E is k = k(t, γ) locally decodable.
Proof: C_E = {(m, E(m)) | m ∈ {0,1}^(t log n)} is k-locally correctable.
Theorem (local testing): For every constant t, γ > 0, if C ⊆ {0,1}^n is n^t-sparse with distance n/2 − n^(1−γ), then it is k = k(t, γ) locally testable.
Recall: C is n^t-sparse if |C| = n^t; n^(-γ)-biased if n/2 − n^(1−γ) ≤ w(c) ≤ n/2 + n^(1−γ) for every non-zero c ∈ C; of distance d if w(c) ≥ d for every non-zero c ∈ C.

6. Corollaries
• Reproduces testability of the Hadamard and dual-BCH codes.
• Random code: a random code C ⊆ {0,1}^n obtained as the linear span of a random (t log n) × n matrix is n^t-sparse and O(log n/√n)-biased, i.e. it is k = k(t) locally correctable, locally decodable and locally testable.
• Cannot get a denser random code: a similar random code obtained from a random (log n)² × n matrix does not have these properties.
• There are linear subspaces of high-degree polynomials that are sparse and unbiased, so we can locally correct, decode and test them. Example: Tr(ax^(2^(log n/4)+1) + bx^(2^(log n/8)+1)), a, b ∈ F_(2^(log n)).
• Nice closure properties: subcodes, addition of new coordinates, removal of a few coordinates.
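A quick numeric sanity check of the random-code corollary (my own sketch; the sparsity and bias claims hold only with high probability over the matrix, so the checks below are deliberately loose):

```python
import random
from itertools import product

def random_code(rows, n, seed=0):
    """Span of a uniformly random rows-by-n generator matrix over GF(2).
    For rows = t * log(n), this code is n^t-sparse and O(log n / sqrt(n))-
    biased with high probability over the choice of matrix."""
    rng = random.Random(seed)
    G = [[rng.randrange(2) for _ in range(n)] for _ in range(rows)]
    code = set()
    for coeffs in product([0, 1], repeat=rows):
        code.add(tuple(sum(c * g[j] for c, g in zip(coeffs, G)) % 2
                       for j in range(n)))
    return code

# Small instance: n = 64, 6 generator rows (so |C| <= 2^6 = 64 = n).
C = random_code(rows=6, n=64, seed=1)
nonzero_weights = sorted(sum(c) for c in C if any(c))
```

For a typical seed the non-zero weights cluster around n/2 = 32, consistent with the O(log n/√n) bias claim; increasing `rows` toward (log n)² destroys this concentration, matching the "cannot get denser" remark.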

7. Main Idea
• Study the weight distribution of the "dual code" and of some related codes.
• Weight distribution = ?
• Dual code = ?
• Which related codes?
• How? - MacWilliams identities + Johnson bounds

8. Weight Distribution, Duals
• Weight distribution: (B₀^C, …, B_n^C), where B_k^C is the number of codewords of weight k in the code C, 0 ≤ k ≤ n.
• Dual code: C⊥ ⊆ {0,1}^n - the vectors orthogonal to all codewords of C ⊆ {0,1}^n. A vector v ∈ C iff v ⊥ C⊥: for every c' ∈ C⊥, ⟨v, c'⟩ = 0.
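Both notions can be computed by brute force for tiny codes; this is my own illustration, using the length-4 repetition code as a hypothetical example:

```python
from itertools import product

def dual(code, n):
    """C-perp: all length-n vectors orthogonal mod 2 to every codeword of C."""
    return {v for v in product([0, 1], repeat=n)
            if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in code)}

def weight_distribution(code, n):
    """(B_0, ..., B_n): B_k = number of codewords of weight k in the code."""
    B = [0] * (n + 1)
    for c in code:
        B[sum(c)] += 1
    return B

# Tiny example: the length-4 repetition code {0000, 1111}.
C = {(0, 0, 0, 0), (1, 1, 1, 1)}
D = dual(C, 4)  # all even-weight vectors of length 4
```

The dual of the repetition code is the even-weight code, with weight distribution (1, 0, 6, 0, 1); the same tiny example is reused below to check the MacWilliams transform.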

9. Which Related Codes?
[Figure: C of length n; C^(-i) of length n−1 (coordinate i removed); C^(-i,j) of length n−2 (coordinates i and j removed).]
• Local correction: duals of C, C^(-i), C^(-i,j).
• Local decoding: the same, applied to C' = {(m, E(m))}, E: {0,1}^s → {0,1}^n, s < n.
• Local testing: duals of C and of C ∪ v.

10. Duals of Sparse Unbiased Codes Have Many Weight-k Codewords
[Figure: plot of the Krawtchouk polynomial P_k on [0, n]: P_k(0) ≈ n^k; P_k(i) < (n−2i)^k; P_k oscillates between ≈ −n^(k/2) and ≈ n^(k/2) on [n/2 − √(kn), n/2 + √(kn)]; the weights of C lie in [n/2 − n^(1−γ), n/2 + n^(1−γ)].]
C is n^t-sparse and n^(-γ)-biased. B_k^(C⊥) = ?
MacWilliams transform: B_k^(C⊥) = Σᵢ B_i^C P_k(i) / |C|.
Hence B_k^(C⊥) ≥ [P_k(0) − n^((1−γ)k) · n^t] / |C|, and if k ≥ Ω(t/γ) then B_k^(C⊥) ≈ P_k(0)/|C|.
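The MacWilliams transform on this slide can be verified numerically against the brute-force dual (a sketch of mine, using the standard Krawtchouk formula P_k(i) = Σ_j (−1)^j C(i,j) C(n−i,k−j)):

```python
from math import comb

def krawtchouk(k, i, n):
    """P_k(i) = sum_j (-1)^j C(i,j) C(n-i,k-j); note P_k(0) = C(n,k) ~ n^k."""
    return sum((-1) ** j * comb(i, j) * comb(n - i, k - j)
               for j in range(k + 1))

def macwilliams_dual_distribution(B, n, code_size):
    """MacWilliams transform: B_k of C-perp = (1/|C|) * sum_i B_i^C * P_k(i)."""
    return [sum(B[i] * krawtchouk(k, i, n) for i in range(n + 1)) // code_size
            for k in range(n + 1)]

# Length-4 repetition code: weight distribution (1, 0, 0, 0, 1), |C| = 2.
B_dual = macwilliams_dual_distribution([1, 0, 0, 0, 1], 4, 2)
```

The transform returns (1, 0, 6, 0, 1), exactly the weight distribution of the even-weight code obtained by brute force, so no dual enumeration is ever needed: sparsity plus bias of C already pin down B_k^(C⊥) up to the small error term on the slide.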

11. Canonical k-Tester
Goal: decide if v is in C or far from every c ∈ C.
Tester: pick a random c' ∈ [C⊥]_k; if ⟨v, c'⟩ = 0, accept, else reject.
Total number of possible tests: |[C⊥]_k| = B_k^(C⊥).
For v ∉ C, bad tests: |[(C ∪ v)⊥]_k| = B_k^((C∪v)⊥).
Works if the number of bad tests is bounded.
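The canonical tester is easy to simulate on the toy repetition code (my own sketch; the dual's weight-2 words are enumerated by brute force rather than sampled as in the actual construction):

```python
import random
from itertools import product

def weight_k_dual_words(code, n, k):
    """[C-perp]_k: the weight-k vectors orthogonal mod 2 to every codeword."""
    return [v for v in product([0, 1], repeat=n)
            if sum(v) == k
            and all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in code)]

def canonical_test(v, tests, rng=random):
    """One run of the tester: random c' in [C-perp]_k, accept iff <v,c'> = 0."""
    cp = rng.choice(tests)
    return sum(a * b for a, b in zip(v, cp)) % 2 == 0

C = {(0, 0, 0, 0), (1, 1, 1, 1)}      # length-4 repetition code
tests = weight_k_dual_words(C, 4, 2)  # the six weight-2 dual words
```

A codeword such as 1111 passes every test; a far word such as 1000 violates each test whose support touches its single 1, so repeated runs reject it with high probability.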

12. Proof of the Local Testing Theorem (unbiased case)
[Figure: Krawtchouk plot as on slide 10, with the distance δn marked.]
Reduces to showing (gap): for v at distance δ from C, B_k^((C∪v)⊥) ≤ (1 − δ) B_k^(C⊥).
Using MacWilliams and the estimation: B_k^((C∪v)⊥) ≈ ½ Σᵢ B_i^(C∪v) P_k(i) ≤ (1 − δ) P_k(0).
Here C ∪ v = C ∪ (C + v). Good: the coset C + v contributes a δ loss. Bad: it could also contribute a gain; this is where we use that C is n^t-sparse and n^(-γ)-biased.

13. Canonical k-Corrector
Goal: given v that is δ-close to c ∈ C, recover c(i) w.h.p.
Corrector: pick a random c' ∈ [C⊥]_(k,i), the weight-k dual words with a 1 in the i'th coordinate. Return Σ_(s ∈ 1_(c') − {i}) v_s, where 1_(c') = { j | c'_j = 1 }.
A random location in v is corrupted w.p. δ. If, for every i, every other coordinate j that the corrector considers is "random", then the probability of error is < kδ.
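And the canonical corrector on the same toy code, with a single corrupted coordinate (my own sketch; with k = 2 and δ = 1/4 the kδ error bound is 1/2, but for this particular corruption every dual word happens to recover the right bit):

```python
import random
from itertools import product

def dual_words_through(code, n, k, i):
    """[C-perp]_{k,i}: weight-k dual words with a 1 in coordinate i."""
    return [v for v in product([0, 1], repeat=n)
            if sum(v) == k and v[i] == 1
            and all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in code)]

def canonical_correct(v, i, words, rng=random):
    """Recover c(i): pick a random c' in [C-perp]_{k,i} and return the
    XOR of v over the other support coordinates of c'."""
    cp = rng.choice(words)
    return sum(v[j] for j in range(len(v)) if cp[j] == 1 and j != i) % 2

C = {(0, 0, 0, 0), (1, 1, 1, 1)}        # length-4 repetition code
words = dual_words_through(C, 4, 2, 0)  # {1100, 1010, 1001}
received = (0, 1, 1, 1)                 # codeword 1111 with bit 0 flipped
```

Each dual word through coordinate 0 pairs it with exactly one uncorrupted coordinate, so the corrector returns the correct bit c(0) = 1 on every run.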

14. Proof of the Self-Correction Theorem
[Figure: C of length n; C^(-i) of length n−1; C^(-i,j) of length n−2, as on slide 9.]
Reduces to showing (2-wise independence property in [C⊥]_k): for every i, j,
|[C⊥]_(k,i,j)| / |[C⊥]_(k,i)| ≈ k/n (as if the code were random),
where [C⊥]_(k,i) ([C⊥]_(k,i,j)) denotes the weight-k codewords of C⊥ with a 1 in coordinate i (coordinates i and j).
By inclusion-exclusion:
|[C⊥]_(k,i)| = |[C⊥]_k| − |[(C^(-i))⊥]_k|
|[C⊥]_(k,i,j)| = |[C⊥]_k| − |[(C^(-i))⊥]_k| − |[(C^(-j))⊥]_k| + |[(C^(-i,j))⊥]_k|
All the involved codes are sparse and unbiased.

15. Open Issues
• Local correction based on distance.
• Obtain general k-local correction, local decoding and local testing results for denser codes. Which denser codes?

  16. Thank You!!!
