Scott Aaronson’s mini-course at UFF, 2nd lecture

Here’s my account of Scott’s second lecture. Scott was kind enough to point out some of the most glaring mistakes in an earlier version of these notes.

Resuming our discussion of the basic features of the main complexity classes, we start by noting that many problems are not even in NP, but “higher up” in the hierarchy, i.e. they are hard to solve and also hard to verify. As examples, one can generalize games of strategy to arbitrarily large board sizes, as is commonly done with Go. Many such games of strategy have been shown to be complete for PSPACE, which is the class of all decision problems solvable by a computer using a polynomial amount of memory, with no restriction on time. So the inclusions we have are:

P \subseteq NP \subseteq PSPACE \subseteq EXP.

(where EXP is the class of problems solvable in exponential time). At least one of these inclusions is strict, since we can prove that EXP is strictly larger than P; we just don’t know which one. Even P versus PSPACE is an open problem. In complexity theory some of the most difficult problems involve comparing different kinds of resources, like time versus space.

Now for a more detailed discussion of a few complexity classes that’ll be important later for the description of the complexity of linear-optical experiments.

Class #P

This is the class of problems that involve counting the number of solutions to an NP problem. It was introduced by Leslie Valiant in 1979, who proved that there are problems whose decision version is in P, but whose counting version is #P-hard. Example: counting the number of perfect matchings of a bipartite graph. If you recall from the last talk, deciding whether a bipartite graph has a perfect matching is in P. But in how many different ways can the vertices be matched? Counting the number of solutions is #P-complete. Counter-intuitively, counting the solutions of this easy-to-decide problem is as hard as counting the solutions of NP-complete problems.
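To make this concrete, here is a minimal brute-force sketch (my own illustration, not something from the lecture): counting the perfect matchings of a bipartite graph is the same as computing the permanent of its 0/1 biadjacency matrix, and the naive way of doing so sums over all n! ways of pairing up the vertices.

```python
from itertools import permutations

def count_perfect_matchings(A):
    """Count perfect matchings of a bipartite graph given by its n-by-n
    0/1 biadjacency matrix A; this equals the permanent of A.
    Brute force over all n! permutations, so exponential time, which is
    the point: no efficient algorithm is known (or expected)."""
    n = len(A)
    return sum(
        all(A[i][sigma[i]] for i in range(n))
        for sigma in permutations(range(n))
    )

# Tiny example: the complete bipartite graph K_{3,3} has 3! = 6 perfect matchings.
K33 = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(count_perfect_matchings(K33))  # 6
```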

Note that #P is not a class of decision problems, so it doesn’t compare directly to P, PSPACE, etc. One way of comparing them is to consider P with access to instantaneous answers from a #P oracle (denoted P^{\#P}). That class sits between NP and PSPACE.

Answering a question from yesterday: how do probabilistic Turing machines affect the Extended Church-Turing (ECT) thesis? There’s accumulating evidence for the conjecture that using randomness does not “upgrade” computational classes; obtaining deterministic algorithms where only probabilistic algorithms were known is the so-called “derandomization” programme. One well-known success is the deterministic AKS primality-testing algorithm.
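As an example of the kind of randomized algorithm the derandomization programme targets, here is a rough sketch of the Miller-Rabin primality test, the standard probabilistic test that AKS later made deterministic (the code below is only an illustration of the idea):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: a randomized polynomial-time primality test.
    If n is prime it always answers True; if n is composite, each round
    catches it with probability at least 3/4, so the error probability
    is at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = 2^r * d with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

print(is_probable_prime(2**61 - 1))  # True (a Mersenne prime)
```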

Class BPP

This corresponds to bounded-error probabilistic polynomial time: the class of problems that are feasibly solvable by a probabilistic algorithm. BPP is the class of all languages for which there exists a poly-time Turing machine with access to random bits which succeeds for most choices of those bits. More precisely, the machine has to accept words in the language with probability >= 2/3, and reject words not in the language with probability >= 2/3. We need the success probability to be bounded away from 1/2 so that running the algorithm a polynomial number of times and taking a majority vote drives the error probability down exponentially.
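A toy sketch of that amplification argument (my own illustration; noisy_is_even below is just a stand-in for some BPP algorithm):

```python
import random
from collections import Counter

def amplify(bpp_algorithm, x, repetitions=101):
    """Boost a BPP-style algorithm's success probability by repetition.
    If each run is independently correct with probability >= 2/3, a
    majority vote over k runs errs with probability exp(-Omega(k))
    (Chernoff bound), so polynomially many repetitions give
    exponentially small error."""
    votes = Counter(bpp_algorithm(x) for _ in range(repetitions))
    return votes.most_common(1)[0][0]

# Toy example: a "noisy" test that answers correctly with probability 2/3.
def noisy_is_even(n):
    correct = (n % 2 == 0)
    return correct if random.random() < 2/3 else not correct

print(amplify(noisy_is_even, 42))  # True, except with tiny probability
```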

This leads us to a revised statement of the ECT: the problems feasibly solvable in the physical world are those in the class BPP.

P \subseteq BPP \subseteq P^{\#P}. The relationship between NP and BPP is not known.

The conjecture today is that P=BPP. Why? Even good pseudo-random number generators can be bad for cryptographic applications, due to the adversarial nature of those protocols; despite that, one can construct pseudo-random number generators which, based on very weak assumptions, would guarantee that BPP=P. The two classes are even related to the Riemann hypothesis: if it is true, this would imply that a class of problems in BPP is actually in P.
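Very schematically, this is what a derandomization via pseudo-random generators would look like; the prg argument below is a hypothetical placeholder for a generator that stretches O(log n) truly random bits into enough bits to fool the algorithm, and constructing such a generator is exactly where the hardness assumptions come in.

```python
import random

def derandomize(bpp_algorithm, x, prg, seed_bits):
    """Schematic derandomization: instead of tossing coins, run the
    algorithm on prg(seed) for every seed of length seed_bits and take
    the majority answer. If seed_bits = O(log n) and prg "fools" the
    algorithm, this is a deterministic polynomial-time simulation."""
    answers = [bpp_algorithm(x, prg(seed)) for seed in range(2 ** seed_bits)]
    return max(set(answers), key=answers.count)

# Toy demo with a completely unrealistic "PRG"; in reality the whole
# difficulty is building prg so the algorithm cannot tell its output
# from true randomness.
toy_prg = lambda seed: random.Random(seed).getrandbits(64)
toy_alg = lambda x, r: (x % 2 == 0) if (r % 3 != 0) else (x % 2 != 0)  # right ~2/3 of the time
print(derandomize(toy_alg, 42, toy_prg, seed_bits=8))  # True
```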

A probabilistic Turing machine can be simulated by a deterministic one with an extra input consisting of a random string. Curiously, this doesn’t extend to quantum classes, as in the quantum case there’s no way of clearly separating the randomness from the “quantum stuff” going on.

If P \neq NP, there must be problems in NP that are neither in P nor NP-complete (this is Ladner’s theorem). Some problems of great practical importance lie in this intermediate region of “complexity-class space”, for example factoring, which we now discuss.

Factoring

Factoring can be turned into a decision problem by asking for the value of the i-th bit of a factor (say, the largest one). Factoring is in NP: a solution can be efficiently checked. Even if you have doubts about whether a given factor is really prime, we can now use the known poly-time primality-testing algorithm to sort that out.
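To see why answering such bit queries is as good as factoring outright, here is a toy sketch (my own illustration: it uses the smallest nontrivial factor instead of the largest, and the "oracle" cheats via trial division, which only makes sense for tiny numbers). Polynomially many calls to the bit oracle reconstruct the factor.

```python
def smallest_factor(n):
    """Trial division: only for tiny demo inputs; a real oracle would
    of course not be implemented this way."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def factor_bit_oracle(n, i):
    """Decision version of factoring: the i-th bit (least significant
    = bit 0) of a distinguished factor of n."""
    return (smallest_factor(n) >> i) & 1

def recover_factor(n):
    """Search-to-decision reduction: rebuild the factor bit by bit from
    polynomially many calls to the bit oracle."""
    f, i = 0, 0
    while (1 << i) <= n:
        f |= factor_bit_oracle(n, i) << i
        i += 1
    return f

print(recover_factor(91))  # 7, since 91 = 7 * 13
```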

Factoring has many properties that suggest it is not NP-hard:

*  For almost all natural NP-complete problems, the fastest known algorithms take exp(n) time. Factoring can be done in roughly exp(sqrt(n)) time, and there’s strong evidence that it can be done in ~exp(n^(1/3)). (I.e., we “know experimentally” that it’s true; cryptanalysts such as the NSA actually use the ~exp(n^(1/3)) algorithm to break real codes, and it runs in that amount of time! The only issue is that the formal analysis relies on an extremely plausible number-theory conjecture that hasn’t yet been proved.) A rough numerical comparison of these running times appears below, after this list.

*   Factoring has a unique solution (the prime factorization), unlike NP-complete problems, which can have many solutions or none.

*   Factoring is in the intersection of NP and coNP. coNP is the class of problems for which a “no” answer has an NP-style proof. So NP \cap coNP is the class of problems for which the correct answer, whether “yes” or “no”, has a short proof.

As a consequence, if factoring were NP-complete, then NP=coNP, i.e. there would be short proofs of the non-existence of solutions, which seems unlikely.
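To get a rough feel for the running times in the first bullet above (this is just back-of-the-envelope arithmetic, ignoring the constants and logarithmic factors that matter a great deal in practice):

```python
import math

# Exponent of 2 in the (very rough) running times for an n-bit input,
# ignoring constants and log factors.
for n in (512, 1024, 2048):
    print(f"n = {n:4d} bits:  exp(n) ~ 2^{n},  "
          f"exp(sqrt(n)) ~ 2^{round(math.sqrt(n))},  "
          f"exp(n^(1/3)) ~ 2^{round(n ** (1/3))}")
```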

P=NP would imply NP=coNP, but no implication in the other direction is known.

The polynomial hierarchy (PH)

There’s an important hierarchy of complexity classes, the polynomial hierarchy (PH). The zeroth level is P. The next level consists of NP and coNP. Then come NP^{NP} and coNP^{NP}, where the superscript indicates access to an oracle for that class. You can go on defining classes with access to more and more powerful oracles, and the resulting classes are believed to become more and more powerful. The union of all these levels is what we call the polynomial hierarchy PH. A natural generalization of the P \neq NP problem is the statement that the PH is infinite. If P=NP, the PH collapses completely (to P).
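In symbols, the standard definition of the levels (not spelled out explicitly in the lecture) is

\Sigma_0 = \Pi_0 = P, \quad \Sigma_{k+1} = NP^{\Sigma_k}, \quad \Pi_{k+1} = coNP^{\Sigma_k}, \quad PH = \bigcup_k \Sigma_k,

so that \Sigma_1 = NP, \Pi_1 = coNP, \Sigma_2 = NP^{NP}, and so on. Equivalently, a language is in \Sigma_k if membership can be expressed by k alternating quantifiers (starting with \exists) over polynomial-size witnesses, followed by a polynomial-time check.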

Complexity theorists tend to agree that the PH not collapsing is almost as plausible a conjecture as P \neq NP; “it would collapse the polynomial hierarchy” is a standard complexity-theoretic yardstick for implausibility. Some things that are known:

*   It can be shown that if factoring were NP-complete, then the PH would collapse to its first level.

*   If NP \subseteq BPP, then PH = NP^{NP}, i.e. the PH collapses to its second level.

*   PH \subseteq P^{\#P} \subseteq PSPACE (Toda’s theorem) – this is evidence that the counting class #P is very powerful indeed.

Another theorem that will play a role later: BPP \subseteq NP^{NP} (Sipser, Gács, Lautemann 1983).

Quantum computing and class BQP

Some of the early pioneers of QC were Feynman and Deutsch, even though they didn’t explicitly identify proven asymptotic advantages for quantum computers. The paper that brought quantum computing into complexity theory was Bernstein-Vazirani 1993. They defined a quantum-mechanical analogue of BPP, called BQP (bounded-error quantum polynomial time). They did it in terms of quantum Turing machines, which are a mess to define! Yao and others later defined the quantum circuit model, which is easier to reason about.

We’ll now assume some familiarity with quantum circuits, the approximation of unitaries, and other associated technical issues.

BQP is the class of problems solvable by a poly-time classical Turing machine M with the ability to apply a poly-size quantum circuit to the initial state |0…0> and then measure its output in the computational basis. Physically, the poly-time classical computer is necessary for controlling the quantum system, and also for working out the precise sequence of operations to be applied for inputs of any size (the uniformity requirement); so the classical machine M needs to output a classical description of the quantum circuit. The demands on success probabilities are the same as for the class BPP.

Scott then quickly recalled the basics of the circuit model: universal sets of gates (such as CNOT, H, and the π/8 gate), the Solovay-Kitaev approximation theorem, etc.
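To make the circuit-model picture concrete, here is a minimal state-vector sketch in plain NumPy (my own illustration, not tied to any particular quantum-computing library): apply H and then CNOT to |00> and read off the computational-basis probabilities, which is exactly the kind of “apply a poly-size circuit to |0…0> and measure” step in the definition of BQP.

```python
import numpy as np

# Hadamard and CNOT (control = qubit 0, target = qubit 1), in the
# computational-basis ordering |00>, |01>, |10>, |11>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, apply H to qubit 0, then CNOT: this prepares a Bell state.
state = np.zeros(4)
state[0] = 1.0
state = np.kron(H, I2) @ state
state = CNOT @ state

# Measuring in the computational basis: outcome probabilities are |amplitude|^2.
probs = np.abs(state) ** 2
print(dict(zip(["00", "01", "10", "11"], probs.round(3))))
# {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}
```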

So how does BQP fit in with the other classes we’ve defined? P \subseteq BPP \subseteq BQP.

This is because we can do classical computation (i.e. P) with a quantum computer, random bits and all (i.e. BPP). The interesting question is: can we go beyond the capabilities of classical computers? We’ll start by bounding the power of quantum computers from above: since we can simulate them classically using exponential time (and exponential space), BQP \subseteq EXP. This, at least, reminds us that we can’t compute uncomputable functions with a quantum computer.

Bernstein and Vazirani showed that BQP \subseteq P^{\#P} \subseteq PSPACE. The key to understanding this is Feynman’s path-integral picture: the amplitude of any given output is a sum over exponentially many computational paths, and that sum can be carried out with polynomial memory and exponential time.
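Here’s a toy sketch of that path-sum idea (my own illustration): the amplitude <y| U_L ... U_1 |x> is a sum over all sequences of intermediate basis states, and evaluating it depth-first needs memory proportional only to the number of layers. In the toy code the gates are stored as full matrices, which is itself exponential; in the real argument each gate acts on O(1) qubits, so its entries can be computed on the fly.

```python
import numpy as np

def amplitude(gates, x, y):
    """<y| U_L ... U_1 |x> computed as a Feynman path sum: recurse over
    all intermediate computational-basis states, one layer at a time.
    Memory is proportional to the number of layers (the recursion
    stack); time is exponential. This trade-off is what puts BQP inside
    P^#P and PSPACE."""
    dim = gates[0].shape[0]

    def paths(layer, state):
        if layer == len(gates):
            return 1.0 if state == y else 0.0
        # sum over the basis state the system passes through after this layer
        return sum(gates[layer][nxt, state] * paths(layer + 1, nxt)
                   for nxt in range(dim))

    return paths(0, x)

# Sanity check on one qubit: two Hadamards compose to the identity,
# so <0|HH|0> = 1 and <1|HH|0> = 0.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(amplitude([H, H], 0, 0), amplitude([H, H], 0, 1))
```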

Conjecture: BPP \neq BQP. Since both classes sit between P and PSPACE, proving this would imply P \neq PSPACE, which is one of the great open problems of classical computational complexity. There is a different setting (black-box or query complexity, the analogue of classical oracle results) in which an exponential separation between quantum and classical can be proved, as in Simon’s algorithm.

In 1994 Shor showed that factoring is in BQP. The conjecture is that NP is not contained in BQP, i.e. that quantum computers give large speedups only for certain problems intermediate between P and NP-complete, while for NP-complete problems they give no more than the square-root speedup of Grover’s search algorithm.

Quantum computers seem to be able to take advantage of structure present in some problems, structure that is apparently absent from NP-complete problems. We’d love to show that if P=BQP then P=NP (or that some other unlikely “classical” consequence would follow), but no proof of that kind seems to be forthcoming.

BQP is a very robust class. Evidence for this is the variety of different models of quantum computation that have been shown to have exactly the power of BQP: measurement-based quantum computation, adiabatic computation and topological computation, among others. This gives us confidence that BQP captures well the essence of feasible quantum computation.

Is BQP in NP? Well, there may be problems for which the quantum computer outputs something that can only be checked by another quantum computer; there is, however, no compelling example of this. A slight extension of BQP is PromiseBQP, which is BQP for promise problems, e.g. estimation problems with a promised gap. It seems extremely unlikely that all of these estimation problems have NP witnesses (in other words, that PromiseBQP is contained in PromiseNP), but it is not known how to turn a general estimation problem into a language decision problem, so as to produce a language L in BQP but not in NP.

Another big open problem: BQP \subseteq PH?

Boson sampling will require broadening our definition of computational tasks to include sampling problems. With that in hand, we’ll be able to tackle this kind of question.

Setting the stage for boson sampling

Are there interesting classes between BPP and BQP, doable with a restricted quantum computer? Here are a few examples.

1- Bounded-depth quantum circuits. We have n qubits and O(log n) layers of gates. This is probably not all of BQP, but it contains factoring, as Cleve and Watrous proved in 2000. The Cleve-Watrous result is useful for countering criticisms of the computational power of quantum computers (by Gil Kalai and others), since non-trivial computation is achievable in a small number of time steps, thus avoiding significant decoherence (the argument, of course, depends on the exact noise model).

2- Constant-depth quantum circuits. In a seminal 2003 paper, Terhal and DiVincenzo showed that efficient exact classical simulation of constant-depth quantum circuits would have unlikely consequences from the point of view of computational complexity theory. This, in a sense, is the precursor to IQP, BosonSampling and other more recent results along these lines. Significantly, what was investigated there is a sampling problem, rather than a decision problem.

3- One clean qubit model (DQC1). All but one of the qubits start in the maximally mixed state, with the remaining qubit initially in a pure state. This set-up can solve problems believed to be hard for classical computers, such as estimating the (normalized) trace of an exponentially large unitary matrix specified by a quantum circuit.
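A toy classical simulation of what the DQC1 experiment estimates (my own illustration; the classical code of course “cheats” by reading the diagonal entries of U directly, the point is just to show the statistics the one-clean-qubit set-up produces): the maximally mixed register is modelled by a uniformly random basis state, and the Hadamard test on the clean qubit yields an estimate of Re(tr U)/2^n.

```python
import numpy as np

rng = np.random.default_rng(0)

def dqc1_trace_estimate(U, shots=20000):
    """Toy simulation of one-clean-qubit (DQC1) trace estimation.
    The maximally mixed register is simulated by a uniformly random
    basis state |j>; the Hadamard test on the clean qubit then gives
    outcome +1 with probability (1 + Re<j|U|j>)/2, so the average
    outcome estimates Re(tr U) / dim."""
    dim = U.shape[0]
    total = 0.0
    for _ in range(shots):
        j = rng.integers(dim)
        p_plus = (1 + U[j, j].real) / 2          # Hadamard-test statistics
        total += 1 if rng.random() < p_plus else -1
    return total / shots                          # ~ Re(tr U) / dim

# Example: a random diagonal unitary with known normalized trace.
phases = rng.uniform(0, 2 * np.pi, size=8)
U = np.diag(np.exp(1j * phases))
print(dqc1_trace_estimate(U), (np.trace(U) / 8).real)
```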

These restricted models may not even give us all of P, but they can give us something that goes beyond BPP, thus falsifying the ECT. More on that in the next lecture!

