Day 4 – VI Paraty Quantum Information Workshop

Today we had the Workshop’s eagerly anticipated boat trip. That didn’t prevent us from having a full morning session, with six talks.

The first talk of the morning was by Dario Gerace (Pavia), on ways to establish long-distance entanglement between quantum dots in solid-state systems. Dario started with a review of entanglement measures, some results on entanglement dynamics in open quantum systems, and bath engineering. He then introduced the main open problem addressed by his talk: how far apart can two qubits be while still displaying entanglement in current solid-state systems?

His system of choice is quantum dots: artificial atoms that can interact strongly with light inside cavities, as we’ve seen in previous Workshop talks. More precisely, Dario and collaborators study photonic crystal cavities enclosing quantum dots. The photonic crystal creates band gaps for photons; defects in the crystal give rise to localized defect states within the band gap.

The coupling between two such cavities can be engineered so as not to decay quickly with distance. With one dot in each cavity, the dots can become entangled at a distance. Dario developed a full quantum model for this system, indicating that entanglement can survive at a distance of about 900 nm. There is also the prospect of entanglement over even longer distances using silicon waveguides (reaching up to 200 optical wavelengths). Dario also briefly mentioned a second paper in which they study two dots in the same cavity, which can become entangled even with an incoherent pump.
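As a toy illustration (not Dario’s full quantum model, and with a hypothetical coupling strength J), one can see how a photon-mediated exchange interaction entangles two distant dots: starting with the excitation in one dot, the concurrence oscillates as |sin(2Jt)|. A minimal sketch in Python:

```python
import numpy as np

# Toy model (not the full quantum model from the talk): two qubits
# coupled by a photon-mediated exchange interaction of strength J.
# Two-qubit basis ordering: |00>, |01>, |10>, |11>.
J = 1.0  # coupling strength, arbitrary units (hypothetical value)

def exchange_H(J):
    """Exchange Hamiltonian J(|01><10| + |10><01|)."""
    H = np.zeros((4, 4), complex)
    H[1, 2] = H[2, 1] = J
    return H

def concurrence(psi):
    # Wootters concurrence for a pure two-qubit state:
    # C = |<psi| sigma_y (x) sigma_y |psi*>|
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    return abs(psi.conj() @ yy @ psi.conj())

psi0 = np.array([0, 0, 1, 0], complex)  # excitation in dot 1: |10>
vals, vecs = np.linalg.eigh(exchange_H(J))
for t in [0.0, np.pi / 8, np.pi / 4]:
    U = vecs @ np.diag(np.exp(-1j * vals * t)) @ vecs.conj().T
    psi = U @ psi0
    print(f"t = {t:.3f}  concurrence = {concurrence(psi):.3f}")
```

At Jt = π/4 the two dots are maximally entangled; in the real system the oscillation would of course be damped by losses.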

The next talk was by Elisa Bälmer (Zurich). She presented a nice review of Szilard’s engine, Maxwell’s demon in that context, and ways one might go about demonstrating work extraction experimentally in different systems. She described implementations of such engines by Toyabe et al., Koski et al., and Vidrighin et al., pointing out their achievements but also some shortcomings: either no direct measurement of the work performed, or more work invested in the engine than the amount it actually performed.

This helped set the stage for a list of desired features of such a demonstration: the work performed should be directly measured; the engine should run autonomously (i.e. no feedback loop required); and we should be able to extract more work than the amount put in. With this, she described her proposed experimental realization using two internal states of a beryllium and a calcium ion in an ion trap, manipulated by red-sideband and Mølmer-Sørensen gates. She mentioned the experimental work is under way, and will hopefully be finished in a few weeks.
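As a side note on the “more work out than in” requirement (my own illustration, not from the talk): for a Szilard engine whose binary measurement errs with probability eps, the Sagawa-Ueda bound limits the extractable work per cycle to k_B T (ln 2 − H(eps)), where H is the binary entropy, so an imperfect demon can extract strictly less than k_B T ln 2:

```python
import numpy as np

# Sagawa-Ueda bound for a feedback engine: W <= k_B * T * I per cycle,
# where I is the mutual information gained by the demon's measurement.
# For a Szilard engine with a symmetric binary measurement that errs
# with probability eps, I = ln 2 - H(eps) (in nats).
kB = 1.380649e-23  # Boltzmann constant, J/K

def binary_entropy(eps):
    """Binary entropy in nats."""
    if eps in (0.0, 1.0):
        return 0.0
    return -eps * np.log(eps) - (1 - eps) * np.log(1 - eps)

def max_work(T, eps):
    """Upper bound on extractable work per cycle, in joules."""
    return kB * T * (np.log(2) - binary_entropy(eps))

for eps in [0.0, 0.1, 0.5]:
    print(f"eps = {eps:.1f}: W_max = {max_work(300.0, eps):.3e} J")
```

A completely random measurement (eps = 0.5) yields no extractable work at all, which is the quantitative version of why the measurement matters.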

Next we had a talk by Paul Erker (Svizzera Italiana) on minimal realizations of quantum clocks. He motivated the problem, recalling previous results on time-energy “uncertainty” principles, quantum speed limits, and the definition of time using correlations between separated quantum systems. He pointed out that clocks are inherently bipartite objects, in which the asymmetric flow of information from the pointer (engine) to the register (clock dial) brings about irreversibility.

His description of a minimal quantum clock uses two minimal quantum heat engines, one coupled to a cold reservoir and one to a hot one. These are the minimal engines described in work by Popescu and others. He then numerically investigated the main quantities of interest: the resolution (the average number of ticks per unit time) and the accuracy (the number of ticks before the clock is off by one average tick interval). He mentioned there are more operational ways of defining these.

The results? There’s a trade-off between accuracy and resolution, which depends on the temperature of the baths. This was obtained using numerical simulations of master equations, plus analytical results in the weak-coupling regime. He illustrated the need for such a trade-off by imagining having to divide a single can of beer between two friends at Casa Lama (only those who attended Paraty will know what that is!): one is a fast calculator who makes mistakes, whereas the other is slow and methodical but rarely errs. An actual can of beer was used in this demonstration!
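To make the two figures of merit concrete, here is an illustrative toy clock (my own sketch, not Erker’s thermal model): ticks form a renewal process with i.i.d. gamma-distributed waiting times, where the shape parameter k plays the role of a resource. At a fixed rate per internal step, larger k means more regular ticks (accuracy ≈ k) but fewer of them per unit time (resolution ≈ 1/k):

```python
import numpy as np

# Toy stochastic clock: ticks are a renewal process with i.i.d. waiting
# times.  Resolution = average ticks per unit time; accuracy N = number
# of ticks after which the clock is off by roughly one average interval,
# which for i.i.d. intervals is N = (mean / std)^2.
rng = np.random.default_rng(0)

def clock_figures(intervals):
    mean, std = intervals.mean(), intervals.std()
    resolution = 1.0 / mean          # ticks per unit time
    accuracy = (mean / std) ** 2     # ticks until error ~ one interval
    return resolution, accuracy

# Gamma-distributed waiting times: the shape k sets tick regularity.
# Larger k -> more regular ticks (higher accuracy), but each tick takes
# longer at a fixed rate per internal step (lower resolution).
for k in [1, 10, 100]:
    intervals = rng.gamma(shape=k, scale=1.0, size=200_000)
    res, acc = clock_figures(intervals)
    print(f"k = {k:3d}: resolution = {res:.3f}, accuracy = {acc:.1f}")
```

The k = 1 clock is the “fast but sloppy” friend; the k = 100 clock is the slow, methodical one.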

Among his conclusions, he pointed out that the amount of entropy produced seems to be a resource for measuring time. Some open questions include possible advantages of running two clocks in parallel, and differences that may arise if we use out-of-equilibrium thermodynamics.

After the coffee break, Rosanna Nichols (Nottingham) talked about strategies for phase estimation in multiparameter quantum metrology. She started by reviewing the importance of metrology and its future use in LIGO and other high-stakes applications. She then reviewed the general set-up of metrology: preparation of a probe state, interaction with the system being measured, and measurement of the final probe state to obtain an estimate of the system’s parameter (usually a phase). She described the Heisenberg limit and the advantage of quantum over classical states for metrology.

She then considered a model of noise on the probe qubits: a von Mises-Fisher probability distribution (similar to a Gaussian, but on the Bloch sphere). This acts as a depolarizing noise, characterized by a concentration parameter kappa.
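For concreteness, here is a small sketch of such noise (my own illustration, with a hypothetical kappa value): sampling Bloch vectors from a von Mises-Fisher distribution around the ideal direction shrinks the average Bloch vector by the factor coth(kappa) − 1/kappa, which is exactly the depolarization kappa controls:

```python
import numpy as np

# von Mises-Fisher noise on the Bloch sphere, concentrated around the
# qubit's ideal Bloch vector (taken here as +z) with concentration
# parameter kappa.  Averaging over the noise shrinks the Bloch vector
# by <cos theta> = coth(kappa) - 1/kappa (the Langevin function).
rng = np.random.default_rng(1)

def sample_vmf_costheta(kappa, n, rng):
    """Sample cos(theta) for vMF on the sphere with mean direction +z."""
    u = rng.random(n)
    # Inverse CDF for the density proportional to exp(kappa * cos theta)
    return 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa

def shrinkage(kappa):
    """Exact Bloch-vector shrinkage factor (Langevin function)."""
    return 1.0 / np.tanh(kappa) - 1.0 / kappa

kappa = 5.0  # hypothetical concentration: larger kappa = weaker noise
cos_theta = sample_vmf_costheta(kappa, 500_000, rng)
print(f"empirical <cos theta> = {cos_theta.mean():.4f}")
print(f"exact coth(k) - 1/k   = {shrinkage(kappa):.4f}")
```

As kappa grows the factor approaches 1 (no noise); as kappa goes to 0 the Bloch vector shrinks to the maximally mixed state.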

In this instance of multiparameter quantum metrology, Rosanna described strategies to measure both the phase phi and the noise parameter kappa, always using GHZ states. She obtained strategies for different regimes in the kappa/phi plane. The optimal strategy may include schemes which are only partially parallel (involving some sequential interactions), so as to maximize accuracy while avoiding large GHZ states, which are more sensitive to noise.

The next talk was also on metrology, delivered by Fabricio Toscano (UFRJ). Fabricio started by describing how, in order to attain optimal accuracy in metrology, we need to saturate both the classical Cramér-Rao bound and the quantum information bound (QIB). For that, we need to work out which states to use, and which projective measurements to adopt for the final measurement of the probes. Fabricio reviewed the main results of the original Braunstein/Caves paper. One important point is that, in general, saturating the QIB requires knowing the true value of the parameter to be estimated (which, frankly, seems to defeat the purpose!). To deal with this, one may either use adaptive measurements, which gradually adjust to the current best estimate of the true value, or use another approach, which he went on to describe.

First, we need to find the generator A of the dynamics coupling probe and system. This could be, for example, the number operator in the case of phase estimation. Fabricio’s first important result was that the Fisher information (and hence the ultimate accuracy) depends on the variance of A in the initial probe state, and not on the parameter value (which is great). He then discussed which initial states make the ultimate accuracy bound attainable: they must be combinations of eigenstates of A with a certain symmetry about A’s mean value. This can always be done for two-dimensional probes, but might not be feasible for probes of dimension three or higher. He also discussed the optimal measurements to use, which he found explicitly, and which are projective measurements.
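A quick numerical check of the variance result in the phase-estimation case (a standard textbook computation, not code from the talk): for unitary phase encoding generated by A, a pure probe’s quantum Fisher information is 4 Var(A), and an N-qubit GHZ probe with A = (1/2) sum_i sigma_z^(i) attains Var(A) = N²/4, i.e. Heisenberg scaling:

```python
import numpy as np

# Quantum Fisher information for a pure probe under phase encoding
# exp(-i * phi * A): F_Q = 4 * Var(A), independent of the true phase.
# For an N-qubit GHZ probe and A = (1/2) sum_i sigma_z^(i),
# Var(A) = N^2 / 4, so F_Q = N^2 (Heisenberg scaling).
def qfi_ghz(n):
    dim = 2 ** n
    # Diagonal of A: (number of 0 bits - number of 1 bits) / 2
    diag = np.array([(n - 2 * bin(b).count("1")) / 2 for b in range(dim)])
    psi = np.zeros(dim)
    psi[0] = psi[-1] = 1 / np.sqrt(2)          # GHZ state (|0...0>+|1...1>)/sqrt(2)
    mean = psi @ (diag * psi)
    var = psi @ (diag ** 2 * psi) - mean ** 2
    return 4 * var                              # quantum Fisher information

for n in [1, 2, 4, 8]:
    print(f"N = {n}: F_Q = {qfi_ghz(n):.1f}  (Heisenberg limit N^2 = {n**2})")
```

Note that nothing in the computation refers to phi itself, which is the point: the attainable accuracy is set by the probe, not by the unknown parameter.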

The last talk before the conference boat trip was by Gabriel Aguilar (UFRJ). He described work done in collaboration with the Calgary group, setting up a teleportation experiment over telecom fibers deployed across the city of Calgary. The motivations are those behind quantum repeaters: QKD, blind quantum computation, etc.

The experiment was performed with a fiber distance of 11.1 km between Bob and Charlie, and 6.2 km between Alice and Charlie. Charlie, the intermediate party, performed a Bell-state measurement (BSM) postselecting a single Bell state, which can be done with a single beam-splitter and photodetectors. They also needed a classical side channel, in a separate fiber, to synchronize the labs and relay the BSM outcome to Bob. The qubits were time-bin qubits, since polarization tends to drift in long fibers. They managed to overcome various challenges to achieve high photon indistinguishability at Charlie, obtaining a teleportation fidelity (with respect to a few target states) consistently above the classical bound. They also used the decoy-state method to lower-bound Alice’s single-photon fidelity, i.e. to make sure they rarely had more than one photon per attenuated pulse.
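To see why heralding a single Bell outcome suffices, here is an idealized simulation (lossless and ignoring time-bin and fiber details, so a sketch rather than a model of the actual experiment): with a shared singlet and a projection onto the singlet outcome, the one a single beam-splitter BSM can unambiguously herald, Bob ends up with the input state at heralding probability 1/4, with no correction needed:

```python
import numpy as np

# Idealized postselected teleportation: qubit 1 is the input state,
# qubits 2 and 3 share the singlet |psi->, and the BSM on qubits 1,2
# keeps only the |psi-> outcome.
rng = np.random.default_rng(7)

def teleport_postselected(psi):
    """Return (heralding probability, fidelity of Bob's qubit)."""
    singlet = np.array([0, 1, -1, 0], complex) / np.sqrt(2)   # qubits 2,3
    state = np.kron(psi, singlet)                             # qubits 1,2,3
    # Projector onto |psi-> of qubits 1,2 (identity on qubit 3)
    proj12 = np.kron(np.outer(singlet, singlet.conj()), np.eye(2))
    out = proj12 @ state
    prob = np.vdot(out, out).real                             # = 1/4 ideally
    M = (out / np.sqrt(prob)).reshape(4, 2)                   # rows: qubits 1,2
    rho_bob = M.T @ M.conj()                                  # Bob's reduced state
    fidelity = (psi.conj() @ rho_bob @ psi).real
    return prob, fidelity

psi = rng.normal(size=2) + 1j * rng.normal(size=2)            # random input qubit
psi /= np.linalg.norm(psi)
prob, fid = teleport_postselected(psi)
print(f"heralding probability = {prob:.3f}, fidelity = {fid:.6f}")
```

The price of the simple linear-optics BSM is exactly this 1-in-4 heralding rate; in the real experiment, loss and imperfect indistinguishability reduce the fidelity below this ideal value.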

This was all for today! I have to head back to Rio today, so will be missing both the boat trip and the last morning of the workshop. I hope all participants are having a great time at Paraty, and hope to see many of you here again in 2019!

