Solving Hard Problems With Light
Scott Aaronson (Assoc. Prof., EECS)
Joint work with Alex Arkhipov
In 1994, something big happened in the foundations of computer science, whose meaning is still debated today… Why exactly was Shor’s algorithm important?
Boosters: Because it means we’ll build QCs!
Skeptics: Because it means we won’t build QCs!
Me: For reasons having nothing to do with building QCs!
Shor’s algorithm was a hardness result for one of the central computational problems of modern science: Quantum Simulation
[Chart: use of DoE supercomputers by application area, from a talk by Alán Aspuru-Guzik]
Shor’s Theorem: Quantum Simulation is not solvable efficiently (in polynomial time), unless Factoring is also
Today, a different kind of hardness result for simulating quantum mechanics
Advantages:
• Based on more “generic” complexity assumptions than the hardness of Factoring
• Gives evidence that QCs have capabilities outside the entire “polynomial hierarchy”
• Requires only a very simple kind of quantum computation: nonadaptive linear optics (testable before I’m dead?)
Disadvantages:
• Applies to relational problems (problems with many possible outputs) or sampling problems, not decision problems
• Harder to convince a skeptic that your computer is indeed solving the relevant hard problem
• Less relevant for the NSA
Bestiary of Complexity Classes
[Diagram: nested complexity classes, roughly P ⊆ BPP ⊆ BQP ⊆ P^#P and P ⊆ NP ⊆ PH ⊆ P^#P, with example problems 3SAT, Factoring, and the Permanent (Counting)]
How complexity theorists say “such-and-such is damn unlikely”: “If such-and-such is true, then PH collapses to a finite level”
Example of a PH problem: “For all n-bit strings x, does there exist an n-bit string y such that for all n-bit strings z, φ(x,y,z) holds?”
Just as they believe P ≠ NP, complexity theorists believe that PH is infinite
So if you can show “such-and-such is true ⇒ PH collapses to a finite level,” it’s damn good evidence that such-and-such is false
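A PH statement of this ∀∃∀ shape can be checked by brute force at toy sizes. Below is a minimal sketch with a made-up predicate `phi` (my invention, purely for illustration); the quantified question becomes three nested loops over all n-bit strings.

```python
n = 3  # bit-length; brute force costs about 2^(3n) predicate checks

def phi(x, y, z):
    """Hypothetical predicate: true iff (x XOR y) shares no 1-bits with z."""
    return (x ^ y) & z == 0

# "For all n-bit strings x, does there exist an n-bit string y
#  such that for all n-bit strings z, phi(x, y, z) holds?"
holds = all(
    any(all(phi(x, y, z) for z in range(2 ** n)) for y in range(2 ** n))
    for x in range(2 ** n)
)
print(holds)  # True: choosing y = x makes (x XOR y) = 0
```

Each alternation of ∀/∃ adds another nested loop, which is why fixed numbers of alternations define the levels of PH.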
Our Results
• Suppose the output distribution of any linear-optics circuit can be efficiently sampled by a classical algorithm. Then the polynomial hierarchy collapses.
• Indeed, even if such a distribution can be sampled by a classical computer with an oracle for the polynomial hierarchy, the polynomial hierarchy still collapses.
• Suppose two plausible conjectures are true: the permanent of a Gaussian random matrix is (1) #P-hard to approximate, and (2) not too concentrated around 0. Then the output distribution of a linear-optics circuit can’t even be approximately sampled efficiently classically, unless the polynomial hierarchy collapses.
If our conjectures hold, then even a noisy linear-optics experiment can sample from a probability distribution that no classical computer can feasibly sample from
Particle Physics In One Slide
There are two basic types of particle in the universe: BOSONS and FERMIONS
Their transition amplitudes are given respectively by the permanent and the determinant:
Per(A) = Σ_{σ∈S_n} Π_{i=1}^{n} a_{i,σ(i)}        Det(A) = Σ_{σ∈S_n} sgn(σ) Π_{i=1}^{n} a_{i,σ(i)}
All I can say is, the bosons got the harder job
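The two formulas differ only in the sign factor sgn(σ), which a direct (exponential-time) implementation makes explicit. A brute-force sketch in Python, enumerating all n! permutations:

```python
import itertools

def permanent(a):
    """Bosonic amplitude: sum over permutations of products, no signs."""
    n = len(a)
    total = 0
    for sigma in itertools.permutations(range(n)):
        p = 1
        for i in range(n):
            p *= a[i][sigma[i]]
        total += p
    return total

def determinant(a):
    """Fermionic amplitude: the same sum, each term weighted by sgn(sigma)."""
    n = len(a)
    total = 0
    for sigma in itertools.permutations(range(n)):
        sign, p = 1, 1
        for i in range(n):
            p *= a[i][sigma[i]]
            for j in range(i + 1, n):
                if sigma[i] > sigma[j]:  # count inversions to get the parity
                    sign = -sign
        total += sign * p
    return total

A = [[1, 2], [3, 4]]
print(permanent(A), determinant(A))  # 10 -2
```

The signs are exactly what makes the difference: the determinant admits O(n³) Gaussian elimination, while the permanent is #P-complete (Valiant), and no subexponential algorithm for it is known.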
High-Level Idea
Estimating a sum of exponentially many positive or negative numbers: #P-hard
Estimating a sum of exponentially many nonnegative numbers: still hard, but known to be in PH
If quantum mechanics could be efficiently simulated classically, then these two problems would become equivalent, thereby placing #P in PH and collapsing PH
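The gap between signed and unsigned sums can be felt numerically. The following toy sketch (my illustration only, not the actual approximate-counting argument; `mc_estimate` and the seed are made up) shows uniform-sampling Monte Carlo estimating a nonnegative sum to small relative error, while cancellation makes the same estimator useless for the signed version:

```python
import random

# Toy contrast: sampling handles many nonnegative terms well in a
# relative-error sense, but cancellation ruins it for signed terms.
random.seed(0)
N = 10 ** 5
terms = [random.random() for _ in range(N)]                   # nonnegative
signed = [t if random.random() < 0.5 else -t for t in terms]  # random signs

def mc_estimate(xs, samples=10 ** 4):
    """Estimate sum(xs) as len(xs) * (mean of uniformly sampled terms)."""
    return len(xs) * sum(random.choice(xs) for _ in range(samples)) / samples

true_pos, est_pos = sum(terms), mc_estimate(terms)
true_sgn, est_sgn = sum(signed), mc_estimate(signed)
print(abs(est_pos / true_pos - 1))  # small relative error
print(abs(est_sgn - true_sgn))      # error comparable to the sum itself
```

The signed sum is exponentially smaller than the magnitudes of its terms, so any sampling-based estimate of it drowns in noise; that is the intuition for why the signed case is the #P-hard one.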
So why aren’t we done? Because real quantum experiments are subject to noise
Would an efficient classical algorithm that simulated a noisy optics experiment still collapse the polynomial hierarchy?
Main Result: Yes, assuming two plausible conjectures about permanents of random matrices (the “PCC” and the “PGC”)
Particular experiment we have in mind: Take a system of n identical photons with m = O(n²) modes. Put each photon in a known mode, then apply a Haar-random m×m unitary transformation U. Then measure which modes have 1 or more photons in them
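At toy sizes this experiment can be simulated directly. The sketch below is assumption-laden: a hand-rolled Gram-Schmidt stands in for an exact Haar-random unitary (ignoring the phase correction exact Haar sampling needs), and only collision-free outcomes are scored. Each such outcome’s probability is the squared absolute permanent of an n×n submatrix of U:

```python
import itertools, math, random

def haar_unitary(m):
    """Sketch of a Haar-random m x m unitary: Gram-Schmidt applied to
    i.i.d. complex Gaussian columns (QR decomposition by hand)."""
    cols = []
    for _ in range(m):
        v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(m)]
        for u in cols:
            ip = sum(ui.conjugate() * vi for ui, vi in zip(u, v))
            v = [vi - ip * ui for ui, vi in zip(u, v)]
        norm = math.sqrt(sum(abs(vi) ** 2 for vi in v))
        cols.append([vi / norm for vi in v])
    return [[cols[j][i] for j in range(m)] for i in range(m)]  # U[row][col]

def permanent(a):
    n = len(a)
    total = 0
    for sigma in itertools.permutations(range(n)):
        p = 1
        for i in range(n):
            p *= a[i][sigma[i]]
        total += p
    return total

random.seed(2)
n, m = 2, 4          # toy scale: 2 photons enter modes 0 and 1 of 4 modes
U = haar_unitary(m)

def prob(out_modes):
    """Probability of one photon in each mode of out_modes (collision-free
    case): |Per|^2 of the submatrix with rows = outputs, cols = inputs."""
    sub = [[U[r][c] for c in range(n)] for r in out_modes]
    return abs(permanent(sub)) ** 2

collision_free = sum(prob(o) for o in itertools.combinations(range(m), n))
# The remaining probability mass sits on outcomes with 2+ photons in a mode.
print(0.0 < collision_free <= 1.0)  # True
```

Note that even this brute-force simulation needs permanents of n×n submatrices, which is precisely the exponential bottleneck the hardness result exploits.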
The Permanent Concentration Conjecture (PCC)
There exists a polynomial p such that for all n,
Pr over n×n matrices X of i.i.d. N(0,1) complex Gaussians [ |Per(X)| < √(n!) / p(n) ] < 1 / p(n)
Empirically true! Also, we can prove it with the determinant in place of the permanent
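The conjecture can be probed numerically at small n, which is presumably what “empirically true” refers to. The Monte Carlo sketch below uses p(n) = n as a stand-in polynomial (my choice, not from the source) and the fact that E[|Per(X)|²] = n! for this ensemble, so √(n!) is the natural scale:

```python
import itertools, math, random

def permanent(a):
    n = len(a)
    total = 0
    for sigma in itertools.permutations(range(n)):
        p = 1
        for i in range(n):
            p *= a[i][sigma[i]]
        total += p
    return total

def gaussian_matrix(n):
    """n x n i.i.d. standard complex Gaussians: real/imag parts ~ N(0, 1/2)."""
    s = math.sqrt(0.5)
    return [[complex(random.gauss(0, s), random.gauss(0, s))
             for _ in range(n)] for _ in range(n)]

# The PCC asserts |Per(X)| rarely falls far below sqrt(n!); here we count
# how often it drops under sqrt(n!)/n, with p(n) = n as a stand-in.
random.seed(1)
n, trials = 4, 1000
below = sum(
    abs(permanent(gaussian_matrix(n))) < math.sqrt(math.factorial(n)) / n
    for _ in range(trials)
)
print(below / trials)  # fraction of samples below the threshold
```

A small observed fraction is consistent with the conjecture, though of course no finite experiment at fixed n proves the asymptotic statement.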
The Permanent-of-Gaussians Conjecture (PGC)
Let X be an n×n matrix of independent, N(0,1) complex Gaussian entries. Then approximating Per(X) to within a 1/poly(n) multiplicative error, for a 1-1/poly(n) fraction of X, is a #P-hard problem.
Experimental Prospects
What would it take to implement the requisite experiment?
• Reliable phase-shifters and beamsplitters, to implement an arbitrary unitary on m photon modes
• Reliable single-photon sources
• Photodetector arrays that can reliably distinguish 0 vs. 1 photon
But crucially, no nonlinear optics or postselected measurements!
Our Proposal: Concentrate on (say) n=20 photons and m=400 modes, so that classical simulation is nontrivial but not impossible
Summary
I often say that Shor’s algorithm presented us with three choices. Either
• The laws of physics are exponentially hard to simulate on any computer today (for all intents and purposes?),
• Textbook quantum mechanics is false, or
• Quantum computers are easy to simulate classically.