
Randomness Vs. Memory: Prospects and Barriers




Presentation Transcript


  1. Randomness Vs. Memory: Prospects and Barriers
  Omer Reingold, Microsoft Research and Weizmann
  With insights courtesy of Moni Naor, Ran Raz, Luca Trevisan, Salil Vadhan, Avi Wigderson, and many more …

  2. Randomness In Computation (1)
  • Distributed computing (breaking symmetry)
  • Cryptography: secrets, semantic security, …
  • Sampling, simulations, …
  Can’t live without you

  3. Randomness In Computation (2)
  • Communication complexity (e.g., equality; see the sketch below)
  • Routing (on the cube [Valiant]) – drastically reduces congestion
  You change my world
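To make the equality example concrete, here is a minimal sketch (not from the slides) of the classic fingerprinting protocol: Alice sends only O(log n) bits, yet unequal n-bit inputs are detected with high probability. The function names and parameter choices are illustrative.

```python
import random

def random_prime(lo: int, hi: int) -> int:
    # Rejection-sample a prime by trial division; fine for a demo.
    while True:
        c = random.randint(lo, hi)
        if c > 1 and all(c % d for d in range(2, int(c ** 0.5) + 1)):
            return c

def equality_protocol(x: int, y: int, n: int) -> bool:
    # Alice picks a random prime p <= n^2 and sends (p, x mod p):
    # O(log n) bits. If x != y, then x - y (< 2^n) has at most n
    # prime factors, so a random p catches the difference w.h.p.
    p = random_prime(2, n * n)
    return x % p == y % p

# Example: two 64-bit inputs; unequal inputs are caught w.h.p.
print(equality_protocol(2**63 + 5, 2**63 + 4, 64))  # False w.h.p.
```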

  4. Randomness In Computation (3)
  • In algorithms – a useful design tool, but often one can derandomize (e.g., PRIMES in P). Is that always the case?
  • RL=L means that every randomized (log-space) algorithm can be derandomized with only a constant-factor increase in memory
  Do I really need you?

  5. Talk’s Premise: Many Frontiers of RL=L
  Barriers of previous proofs ⇒ wealth of excellent research problems.
  • RL ⊆ L^{3/2} ⇒ RL = L
  • And beyond

  6. RL ⊆ (NL ⊆) L^2 [Savitch 70]
  • Configuration graph (per RL algorithm P and input x): s = start config, t = accept config, poly(|x|) configs in all; edges are labelled 0/1 so that transitions depend only on the current random bit (configs are duplicated up to the running time T ≤ poly(|x|) times).
  • x ∈ P ⇒ a random walk from s ends at t w.p. ≥ ½
  • x ∉ P ⇒ t is unreachable from s
  • Enumerating all possible paths is too expensive. Main idea: the 1st half of the computation transmits only log n bits to the 2nd half (see the midpoint-recursion sketch below).
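A minimal sketch of the main idea, Savitch-style midpoint recursion on the configuration graph: the recursion has depth log T and stores one O(log n)-bit configuration per level, giving the O(log^2 n) space bound. The adjacency-dict representation is illustrative.

```python
def reach(adj, u, v, k):
    """True iff there is a path from u to v of length <= 2**k in the
    directed configuration graph adj (vertex -> set of successors)."""
    if k == 0:
        return u == v or v in adj[u]
    # Guess the configuration w at the midpoint: only w (O(log n) bits)
    # is stored per recursion level, and the depth is k = O(log T).
    return any(reach(adj, u, w, k - 1) and reach(adj, w, v, k - 1)
               for w in adj)

# s-t reachability within T steps: reach(adj, s, t, ceil(log2(T))).
```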

  7. Oblivious Derandomization of RL
  • Pseudorandom generators that fool space-bounded algorithms [AKS87, BNS89, Nisan90, NZ93, INW94]
  • Nisan’s generator has seed length log^2 n (toy sketch below)
  • Proof that RL ⊆ L^2 via oblivious derandomization
  • Major tool in the study of RL vs. L
  • Applications beyond [Ind00, Siv02, KNO05, …]
  • Open problem: PRGs with reduced seed length
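A toy rendering of Nisan's recursion, assuming a family of pairwise-independent hash functions on O(log n)-bit words: with k hash functions the seed has O(k log n) bits and the output has 2^k words, so k = log n gives seed length log^2 n. The affine-mod-prime hash is just a stand-in.

```python
import random

P = (1 << 61) - 1  # a Mersenne prime; words live in [0, P)

def random_hash():
    # Affine map x -> a*x + b mod P: an essentially
    # pairwise-independent family (up to the a != 0 restriction).
    a, b = random.randrange(1, P), random.randrange(P)
    return lambda x: (a * x + b) % P

def nisan(x, hs):
    """Nisan-style recursion: with k = len(hs) hash functions, output
    2**k words from a seed of one word plus k hash descriptions."""
    if not hs:
        return [x]
    h, rest = hs[0], hs[1:]
    return nisan(x, rest) + nisan(h(x), rest)

# Example: seed = 1 word + 10 hashes, output = 2**10 = 1024 words.
out = nisan(random.randrange(P), [random_hash() for _ in range(10)])
```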

  8. Randomness Extractors @ Your Service
  • Basic idea [NZ93] (related to Nisan’s generator):
  • Let log-space A read a random 100 log n bit string x.
  • Since A remembers at most log n bits, x still contains (roughly) 99 log n bits of entropy (independent of A’s state).
  • Can recycle x: G(x, y) = (x, Ext(x, y))

  9. Randomness Extractors @ Your Service
  • NZ generator: G(x, y_1, y_2, …) = (x, Ext(x, y_1), Ext(x, y_2), …) (schematic below)
  • Possible setting of parameters: x is O(log n) long; each y_i is O(log^{1/2} n) long and there are log^{1/2} n y_i’s.
  • Expand O(log n) bits to O(log^{3/2} n) (can get any poly)
  • Error >> 1/n ([AKS87] gets almost log^2 n bits w. error 1/n)
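A schematic of the NZ recycling step, with a toy stand-in for the extractor (a real instantiation would use, e.g., pairwise-independent hashing and the leftover hash lemma). The name `toy_ext` and the block sizes are illustrative only.

```python
import hashlib

def toy_ext(x: bytes, y: bytes, out_len: int = 8) -> bytes:
    # Stand-in for a seeded extractor Ext(x, y); no provable guarantees.
    return hashlib.sha256(y + x).digest()[:out_len]

def nz_generator(x: bytes, ys):
    """NZ generator G(x, y_1, y_2, ...) = (x, Ext(x, y_1), Ext(x, y_2), ...).
    Since a log-space machine remembers only O(log n) bits about x,
    x keeps most of its entropy and each extracted block looks fresh."""
    return [x] + [toy_ext(x, y) for y in ys]

# Example: recycle one long block x against several short seeds y_i.
blocks = nz_generator(b"random-x-block", [b"y1", b"y2", b"y3"])
```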

  10. Randomness Extractors @ Your Service
  • NZ generator: G(x, y_1, y_2, …) = (x, Ext(x, y_1), Ext(x, y_2), …)
  • Error >> 1/n ([AKS87] gets almost log^2 n bits w. error 1/n)
  • Open: get any polynomial expansion w. error 1/n
  • Open: super-polynomial expansion with logarithmic seed and constant error (partial result [RR99]).

  11. Nisan, INW Generators via Extractors
  • Recall the basic generator: G(x, y) = (x, Ext(x, y))
  • Let’s flip it …

  12. Nisan, INW Generators via Extractors
  (Diagram: root seed (x, y); the first half of the output is generated from x, the second half from Ext(x, y), recursing on both halves.)
  • Given the state of the machine in the middle (only log n bits), Ext(x, y) is still ε-random.
  • Loss at each level: log n (possible entropy in the state) + log(1/ε′) for the extractor seed, where ε′ = ε/n.
  • Altogether: seed length = log^2 n (the accounting is spelled out below)
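The seed-length accounting from the slide, as a small calculation, assuming log T levels of recursion and the stated per-level loss; the function name and the rounding are illustrative.

```python
import math

def inw_seed_length(n: int, T: int, eps: float) -> int:
    """Seed length of the INW recursion, up to constants: log T levels,
    each paying log n (entropy the machine's state may hold about x)
    plus log(1/eps') extractor-seed bits, with eps' = eps / n."""
    levels = math.ceil(math.log2(T))
    eps_prime = eps / n
    per_level = math.ceil(math.log2(n)) + math.ceil(math.log2(1 / eps_prime))
    return math.ceil(math.log2(n)) + levels * per_level

# For T = poly(n) and constant eps this is Theta(log^2 n),
# matching "seed length = log^2 n" on the slide.
```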

  13. Nisan, INW + NZ ⇒ RL = L?
  • Let M be an RL machine
  • Using [Nisan], get M′ that uses only log^2 n random bits
  • Fully derandomize M′ using [NZ]
  • Or does it work?
  • M′ is not an RL machine (access to the seed of [Nisan, INW] is not read-once)
  • Still, a natural approach – derandomize the seed of [Nisan]
  • Can we build PRGs from read-once ingredients? Not too promising …

  14. RL ⊆ L^{3/2} [SZ95] – “derandomized” [Nis]
  • Nisan’s generator has the following properties:
  • The seed is divided into h (length log^2 n) and x (length log n).
  • Given h on the input tape, the generator runs in L.
  • ∀M, w.h.p. over h, fixing h and ranging over x gives a good generator for M.
  • h is shorter if we generate fewer than n bits

  15. [SZ95] – basic idea
  • Fix h, divide the run of M into segments:
  • Enumerate over x, estimate all transition probabilities.
  • Replace each segment with a single transition.
  • Recurse using the same h.
  • Now M′ depends on h, but M′ is close to some t-th power of M. [SZ] perturb M′ to eliminate the dependency on h. (See the sketch below.)
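A skeleton of the recursion step under simplifying assumptions: approximate matrix powering with truncation after every multiplication. The actual [SZ95] construction estimates each segment's transition matrix by enumerating x with the same h and randomly perturbs entries to break the dependence on h; the space accounting is omitted here.

```python
import numpy as np

def truncate(M: np.ndarray, bits: int = 20) -> np.ndarray:
    # Round every transition probability down to `bits` bits so that
    # precision (and hence memory) cannot grow through the recursion.
    return np.floor(M * 2**bits) / 2**bits

def approx_power(M: np.ndarray, t: int, bits: int = 20) -> np.ndarray:
    """Approximate M**t by repeated squaring with truncation: each
    level replaces a whole segment of the run by a single transition."""
    R = np.eye(M.shape[0])
    while t:
        if t & 1:
            R = truncate(R @ M, bits)
        M = truncate(M @ M, bits)
        t >>= 1
    return R
```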

  16. [SZ95] – further progress
  • Open: translate [SZ] into a better generator against space-bounded algorithms!
  • Potentially, one could then recursively apply [SZ] and get better derandomization of RL (after a constant number of iterations, may get RL ⊆ L^{1+ε})
  • Armoni showed an interesting interpolation between [NZ] and [INW] and as a result got a slight improvement (RL ⊆ L^{3/2}/(log L)^{1/2})

  17. Thoughts on Improving INW
  Loss at each level: log n (possible entropy in state) + log(1/ε′) for the extractor seed, where ε′ = ε/n.
  • Avoiding the loss due to entropy in the state:
  • [RR99] Recycle the entropy of the states.
  • Challenge: how to do it when the state probabilities are unknown?
  • Even for combinatorial rectangles we do not know “optimal” PRGs
  • Open: better PRGs against constant-width branching programs

  18. Thoughts on Improving INW
  Loss at each level: log n (possible entropy in state) + log(1/ε′) for the extractor seed, where ε′ = ε/n.
  • Avoiding the loss due to extractor seeds:
  • Can we recycle y from the previous computation, i.e., use Ext(x, y(x)) in place of Ext(x, y)?
  • Challenge: containing the dependencies …
  • Do we need a seed at all? Use seedless extractors instead?

  19. Thoughts on Improving INW
  Loss at each level: log n (possible entropy in state) + log(1/ε′) for the extractor seed, where ε′ = ε/n.
  • The extractor seed is long because we need to work with small error ε′ = ε/n.
  • Error reduction for PRGs? If we use error ε′ = ε/(log n), the sequence still has some unpredictability property; is it usable? (Yes for SL [R04, RozVad05]!)

  20. Final Comment on Improving INW
  • Perhaps instead of reducing the loss per level we should reduce the number of levels?
  • This means that at each level the number of pseudorandom strings should grow more rapidly (e.g., quadratically).
  • A specific approach is based on ideas from cryptography (constructions of PRFs from pseudorandom synthesizers [NR]); it is more complicated to apply here.

  21. It’s all About Graph Connectivity
  • Directed connectivity captures NL
  • Undirected connectivity is in L [R04].
  • Oblivious derandomization: pseudo-converging walks for consistently labelled regular digraphs [R04, RTV05]
  • Where is RL on this scale? Connectivity for digraphs w/ polynomial mixing time [RTV05]
  • Outgoing edges have labels; consistent labelling means that each label forms a permutation on the vertices (see the sketch below)
  • A walk on a consistently labelled graph cannot lose entropy
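A small sketch of the definition, assuming a graph stored as `graph[v][i]` = the endpoint of the edge with label i leaving v (this representation is my own, for illustration):

```python
def is_consistently_labelled(graph: dict, degree: int) -> bool:
    """A labelling is consistent iff for every label i the map
    v -> graph[v][i] is a permutation of the vertices; a random
    walk on such a graph cannot lose entropy."""
    vertices = set(graph)
    return all({graph[v][i] for v in graph} == vertices
               for i in range(degree))

# Example: a consistently labelled 4-cycle (label 0 = forward,
# label 1 = backward); both label maps are permutations.
cycle = {v: [(v + 1) % 4, (v - 1) % 4] for v in range(4)}
print(is_consistently_labelled(cycle, 2))  # True
```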

  22. Towards RL vs. L?
  (Diagram: a scale of problems.) In L: connectivity for undirected graphs [R04]; connectivity for regular digraphs [RTV05]; pseudo-converging walks for consistently-labelled, regular digraphs [R04, RTV05]. Suffice to prove RL = L: pseudo-converging walks for regular digraphs [RTV05]; connectivity for digraphs w/ polynomial mixing time [RTV05] (captures RL).
  • It is not about reversibility but about regularity
  • In fact it is about having estimates on stationary probabilities [CRV07]

  23. Some More Open Problems
  • Pseudo-converging walks on an (inconsistently labelled) clique. (Similarly, universal traversal sequences.)
  • Undirected Dirichlet problem:
  • Input: an undirected graph G, a vertex s, a set B of vertices, a function f: B → [0, 1].
  • Output: an estimate of E[f(b)], where b is the entry point into B of a random walk from s. (A randomized baseline is sketched below.)
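For concreteness, here is the trivial randomized baseline that the open problem asks to match deterministically in small space: a Monte Carlo estimate of E[f(b)] by running random walks from s until they hit B. All names and parameters below are illustrative.

```python
import random

def dirichlet_estimate(adj, s, B, f, walks=10_000, max_steps=10**6):
    """Estimate E[f(b)], where b is the first vertex of B hit by a
    random walk from s on the graph adj (vertex -> list of neighbours)."""
    hits, total = 0, 0.0
    for _ in range(walks):
        v = s
        for _ in range(max_steps):
            if v in B:
                total += f[v]
                hits += 1
                break
            v = random.choice(adj[v])
    return total / hits if hits else None
```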

  24. Conclusions
  • A richness of research directions and open problems towards RL = L and beyond:
  • PRGs against space-bounded computations
  • Directed connectivity
  • Even if you think that NL = L is plain crazy, there are many interesting questions and some beautiful research …
