Scott in Rio – 1st lecture

Rio aims to become, among other things, a hub for quantum information in South America. To push that further, this week Ernesto and Thiago, from Infoptics@UFF, invited Scott Aaronson, from MIT, to lecture on the connections between complexity theory and quantum optics. Yes, a BosonSampling week!

Scott, as he said himself, is most usually found in the deepest corners of complexity theory. Recently, however, he has been trying to come up with physical implementations that would challenge a cornerstone of computer science, namely the ECT… okay, let’s get started and go through all the capital-letter acronyms so common in CS (gee!).

Lecture 01: The Extended Church-Turing thesis

Scott’s plan for this first lecture was to give a crash course on complexity theory. As the classroom was packed with a heterogeneous crowd (all possible combinations of {undergrad, grad student, postdoc, professor} and {physics, math, computer science}), this was as welcome as it was needed. Besides, it provides all the ingredients necessary to understand the importance of the BosonSampling problem.

The starting point of the lecture is, of course, the idealization of a computer: the Turing machine. A Turing machine (TM… here we go) is a hypothetical device that locally manipulates symbols on a (possibly infinite) strip of tape. The machine has a finite number of internal states, and its behavior at each step is determined by the symbol it currently reads and its internal state at that moment.
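To make the picture concrete, here is a minimal sketch (in Python, my own illustration rather than anything shown in the lecture) of how such a device can be simulated: a transition table maps (state, symbol) to (new state, symbol to write, head movement), and the machine simply applies it until it halts.

# Minimal Turing machine simulator (illustrative sketch, not from the lecture).
# The toy machine below flips every bit on the tape and then halts.

def run_tm(tape, transitions, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(cells[i] for i in sorted(cells))
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("did not halt within max_steps")

flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm("0110", flip_bits))  # prints 1001_ (trailing blank included)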

The details of the TM do not really matter here; what does matter is that a task is said to be computable if there is a Turing machine that can perform it in a finite amount of time and halt. That is, in simple terms, the very content of the Church-Turing (CT) thesis. A perhaps more compelling way to put this thesis, at least for physicists, would be that “all physically computable functions can be computed by a Turing machine”.

After a mind-boggling digression, impossible to reproduce here (the video of the lecture should be posted soon), somehow connecting hypercomputation, black holes, P=NP, the holographic bound, and the universe’s computational capacity, we got to the central theme of the lecture, the Extended Church-Turing (ECT) thesis. The ECT thesis states that “all efficiently physically computable functions can be efficiently computed by a Turing machine”. The “efficiently” in the ECT makes all the difference. For those who know about Shor’s efficient quantum algorithm for finding the prime decomposition of a number (here), a red light is probably flashing, as there is no known efficient classical algorithm for factoring to date. Keep in mind that the TM is classical. The catch, however, is that an efficient classical algorithm for factoring might actually exist and we just don’t know about it. That would save the day for the ECT, at least for a while. Scott’s plan is to raise this tension between quantum computation and the ECT to new heights.
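To see why factoring puts pressure on the ECT, compare it with the obvious classical approach. The sketch below (my illustration, not part of the lecture) uses trial division: the loop may run roughly \sqrt{N} times, i.e. about 2^{n/2} steps for an n-bit input, which is exponential in the input size n, whereas Shor’s algorithm needs only polynomially many quantum operations in n.

# Naive classical factoring by trial division (illustrative sketch).
# The while loop runs up to sqrt(N) times, roughly 2^(n/2) steps for an
# n-bit number: polynomial in N, but exponential in the input size n.
def trial_division(N):
    d = 2
    while d * d <= N:
        if N % d == 0:
            return d, N // d   # found a nontrivial factor
        d += 1
    return None                # N is prime (or 1)

print(trial_division(101 * 103))  # (101, 103)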

P, NP, NP-Hard and NP-Complete

Before we get there, we must define what we mean by “efficient”. As computer scientists learned to love, we say that a computation taking as input an n-bit register is efficient if the number of steps it requires can be bounded by a polynomial p(n). Well, now that there is a hole in Pandora’s box, we might as well open it wide and give proper definitions. A computation is said to be efficient if it is in P, where P is the class of languages L \subseteq \{0,1\}^* for which there exists a Turing machine M and a polynomial p such that \forall x \in \{0,1\}^* two things happen: i) M(x) halts after at most p(|x|) steps; and ii) M(x) accepts iff x \in L.
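As a toy illustration of this definition (the choice of language is mine, not Scott’s), take L to be the set of binary palindromes. The decider below plays the role of M: on any input x it halts after a number of steps linear in |x|, and it accepts exactly when x \in L, so L is in P.

# Polynomial-time decider for the language of binary palindromes
# (toy example of a language in P; illustration only).
def decide_palindrome(x: str) -> bool:
    i, j = 0, len(x) - 1
    while i < j:              # at most |x|/2 iterations
        if x[i] != x[j]:
            return False      # reject: x is not in L
        i, j = i + 1, j - 1
    return True               # accept: x is in L

print(decide_palindrome("0110"))  # True
print(decide_palindrome("0111"))  # False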

Whenever we accept that there are problems we can solve efficiently, we are forced to remember those other problems that we don’t know how to solve efficiently. NP, which stands for non-deterministic polynomial time, is the class containing the problems that we may not be able to solve efficiently, but for which, when handed a purported solution, we can verify it efficiently. More formally, a language L is in NP if there exists a polynomial-time Turing machine M such that \forall x \in \{0,1\}^*: if x \in L then there exists a certificate w \in \{0,1\}^{\text{poly}(|x|)} such that M(x,w) accepts; and if x \not\in L then for all certificates w \in \{0,1\}^{\text{poly}(|x|)} we have that M(x,w) rejects.
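A toy illustration of the verifier M (the example problem, SUBSET-SUM, is my choice, not something from the lecture): an instance is a list of numbers and a target, a certificate is a proposed subset of indices, and checking the certificate takes only polynomial time even if finding one is hard.

# Polynomial-time verifier for SUBSET-SUM (toy NP example; illustration only).
# Instance x = (numbers, target); certificate w = a list of indices.
# Accepts iff the certified indices are valid, distinct, and sum to the target.
def verify_subset_sum(numbers, target, certificate):
    return (all(0 <= i < len(numbers) for i in certificate)
            and len(set(certificate)) == len(certificate)
            and sum(numbers[i] for i in certificate) == target)

numbers, target = [3, 34, 4, 12, 5, 2], 9
print(verify_subset_sum(numbers, target, [2, 4]))  # 4 + 5 = 9 -> True
print(verify_subset_sum(numbers, target, [0, 1]))  # 3 + 34 != 9 -> False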

The clock was ticking when we got to this point, but Scott still managed to introduce the NP-Hard class: L is NP-Hard if every NP problem can be reduced to L in polynomial time. And to call it a day, a problem is NP-Complete if it lies in the intersection of the NP and NP-Hard classes. The complexity landscape at the end of the first lecture, assuming P is different from NP, is as in the picture below.
[Figure: complexityRegions, the landscape of P, NP, NP-Complete, and NP-Hard assuming P ≠ NP]
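To give a flavour of what “reduced to L in polynomial time” means (the concrete pair of problems below is my choice, not from the lecture): a graph G with n vertices has an independent set of size at least k exactly when it has a vertex cover of size at most n - k, so INDEPENDENT-SET reduces to VERTEX-COVER by a trivially polynomial-time map of instances.

# Polynomial-time reduction from INDEPENDENT-SET to VERTEX-COVER
# (illustrative sketch): S is an independent set of G iff V \ S is a
# vertex cover of G, so the instance map only has to adjust the budget k.
def reduce_is_to_vc(num_vertices, edges, k):
    return num_vertices, edges, num_vertices - k  # same graph, new budget

print(reduce_is_to_vc(4, [(0, 1), (1, 2), (2, 3)], 2))  # (4, [(0, 1), (1, 2), (2, 3)], 2)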

As usual in Brazil, we are behind schedule… But we are having a lot of fun!
