Suppose you are given m copies of an unknown n-qubit stabilizer state. How many copies do you need before you can figure out exactly which state it is? Merely specifying the state requires about n^2/2 bits, and each measured copy can yield at most n bits of information, so certainly m is at least n/2. Using only single-copy measurements, we show how to identify the state with high probability using m = O(n^2) copies. If one can make joint measurements, O(n) copies suffice. This is joint work with Scott Aaronson.
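A quick counting check makes the information bound above concrete. The sketch below (Python, purely illustrative and not part of the talk) uses the standard closed-form count of pure n-qubit stabilizer states, 2^n * prod_{k=1..n} (2^k + 1), whose base-2 logarithm grows like n^2/2:

```python
from math import log2

def num_stabilizer_states(n):
    """Closed-form count of pure n-qubit stabilizer states:
    |S(n)| = 2**n * prod_{k=1}^{n} (2**k + 1)."""
    count = 2 ** n
    for k in range(1, n + 1):
        count *= 2 ** k + 1
    return count

# The number of bits needed to name a stabilizer state scales as n^2/2.
for n in (2, 4, 8, 16):
    bits = log2(num_stabilizer_states(n))
    print(f"n={n:2d}: log2(#states) = {bits:7.1f}  vs  n^2/2 = {n * n / 2:5.1f}")
```

For n = 1 and n = 2 this gives the familiar counts of 6 and 60 stabilizer states.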
As became apparent during Koenraad's talk, there are some important subtleties to concepts like 'flat prior' and 'uniform distribution'... especially over probability simplices and quantum state spaces. This is a key problem for Bayesian approaches. Perhaps we're more interested in Jeffreys priors, Bures priors, or even something induced by the Chernoff bound! I'd like to start a discussion of the known useful distributions over quantum states & processes, and I nominate Karol Życzkowski to lead it off.
Estimation of quantum Hamiltonian systems is a pivotal challenge in modern quantum physics and plays a key role in quantum control. In the last decade, several methods have been developed for complete characterization of a 'superoperator', which contains all information about a quantum dynamical process. However, it is not fully understood how the estimated elements of the superoperator could lead to a systematic reconstruction of the many-body Hamiltonian parameters generating such dynamics. Moreover, it is often desirable to utilize the relevant information obtained from quantum process estimation experiments for optimal control of a quantum device. In this work, we introduce a general approach for monitoring and controlling the evolution of open quantum systems. In contrast to the master equations describing the time evolution of density operators, here we develop a dynamical equation for the evolution of the superoperator acting on the system. This equation does not presume any Markovian or perturbative assumptions, hence it provides a broad framework for the analysis of arbitrary quantum dynamics. As a result, we demonstrate that one can efficiently estimate certain classes of Hamiltonians via application of particular quantum process tomography schemes. We also show that, by appropriate modification of the data analysis techniques, the parameter estimation procedures can be implemented with calibrated faulty state generators and measurement devices. Furthermore, we propose an optimal-control-theoretic approach for manipulating the quantum dynamics of Hamiltonian systems, specifically for the task of decoherence suppression.
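To make the notion of a superoperator concrete, here is a minimal, hypothetical Python/NumPy sketch (an illustrative toy, not the authors' formalism): it builds the matrix that acts on vectorized density operators for a single-qubit depolarizing channel, and checks it against the channel's closed form.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def superoperator(kraus_ops):
    """Matrix S with vec(E(rho)) = S @ vec(rho), using row-major vec:
    vec(K rho K^dag) = (K kron conj(K)) vec(rho)."""
    return sum(np.kron(K, K.conj()) for K in kraus_ops)

# Single-qubit depolarizing channel, E(rho) = (1 - p) rho + p I/2
p = 0.3
kraus = [np.sqrt(1 - 3 * p / 4) * I2,
         np.sqrt(p / 4) * X, np.sqrt(p / 4) * Y, np.sqrt(p / 4) * Z]
S = superoperator(kraus)  # a 4x4 matrix: the full description of the process

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # input state |0><0|
out = (S @ rho.reshape(-1)).reshape(2, 2)
expected = (1 - p) * rho + p * I2 / 2
print(np.allclose(out, expected))  # the superoperator reproduces the channel
```

Process tomography amounts to estimating the entries of such a matrix S from measured input-output data.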
We report an experiment on reconstructing the quantum state of bright (macroscopic) polarization-squeezed light generated in a birefringent (polarization-maintaining) fibre due to the Kerr nonlinearity. The nonlinearity acts on both H and V polarization components, producing quadrature squeezing; by controlling the phase shift between the H and V components one can make the state squeezed in any Stokes observable. The tomography is performed by measuring histograms for a series of Stokes observables, and the resulting histograms (tomograms) are processed in a way similar to the classical 3D Radon transformation. At the output, we obtain the polarization Q-function, which in the case of large photon numbers coincides with the polarization W-function. An interesting extension of this experiment will be to go down to lower photon numbers (mesoscopic quantum states), where we expect the polarization W- and Q-functions to behave differently. An experiment on producing such states is discussed.
I will discuss a few case studies of coherent-control experiments and how we use quantum estimation to motivate improved experiments. Examples are drawn from NMR with physical and logical qubits, electron/nuclear spin systems, and persistent-current flux qubits.
Quantum information technologies have recorded enormous progress within the past fifteen years. They have developed from the early stage of thought experiments into nowadays almost ready-to-use technology. In view of many possible applications, the question of efficient analysis and diagnostics of quantum systems appears to be crucial. The quantum state is not an observable and as such it cannot be measured in the traditional sense of this word. Information encoded in a quantum state may be portrayed in various ways, yielding the most complete and detailed picture of the quantum object available. Due to the formal similarities between quantum estimation and non-invasive 3D medical imaging, this method is also called quantum tomography. Many different methods of quantum tomography have been proposed and implemented for various physical systems. Experiments are being continually improved in order to increase our ability to unravel even the most exquisite and fragile non-classical effects. Progress has been made not only on the detection side of tomography schemes; mathematical algorithms too have been improved. The original linear methods based on the inverse Radon transformation are prone to producing artifacts and have other serious drawbacks. For example, the positivity of the reconstructed state required by quantum theory is not guaranteed. This may obviously lead to inconsistent statistical predictions about future events. For such reasons, the simple linear methods are gradually being replaced by statistically motivated methods, for example by Bayesian or maximum-likelihood (ML) [1,2] tomography methods. The quantification of all relevant errors is an indispensable but often neglected part of any tomographic scheme used for quantum diagnostic purposes. The result of quantum tomography cannot be reduced merely to finding the most likely state.
What also matters is how well other, less likely states would be consistent with the registered data. In this sense, states lying in the neighborhood of the most likely state should also be taken into account when making future statistical predictions. For this purpose we introduce a novel resolution measure, which provides "error bars" for any inferred quantity of interest. This is illustrated with an example of the diagnostics of non-classical states based on the value of the reconstructed Wigner function at the origin of phase space. We show that such diagnostics is meaningful only when some prior information on the measured quantum state is available. In this sense, quantum tomography based on homodyne detection is noisier and more uncertain than is widely accepted nowadays. Since the error scales with the dimension, the choice of a proper dimension of the reconstruction space is vital for successful diagnostics of non-classical states. There are two competing tendencies in the choice of this dimension. When the reconstruction space is low-dimensional, the reconstruction noise is kept low, but there may not be enough free parameters left for fitting a possibly high-dimensional true state. With a high-dimensional reconstruction space, the danger of missing important components of the true state is smaller, but the reconstruction errors may easily exceed acceptable levels. These issues will be discussed in the context of penalization and constraints for maximizing the likelihood [3]. The steps described above are the necessary prerequisites for the programme of objective tomography, where all conclusions should be derived on the basis of registered data without any additional assumptions. The new resolution measure, based on the Fisher information matrix, may be adopted for designing optimized tomography schemes with resolution tuned to a particular purpose.
Quantum state tomography may serve as a paradigm for the estimation of more complex objects, for example in process tomography. [1] Z. Hradil, Phys. Rev. A 55, R1561 (1997). [2] Z. Hradil, D. Mogilevtsev, and J. Rehacek, Phys. Rev. Lett. 96, 230401 (2006). [3] J. Rehacek, D. Mogilevtsev, and Z. Hradil, New J. Phys. 10, 043022 (2008).
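As a concrete illustration of the maximum-likelihood approach mentioned above, here is a minimal Python sketch of the standard iterative R·rho·R reconstruction for a single qubit measured in the Pauli bases (a generic textbook setup assumed for illustration, not the cited algorithms verbatim). Unlike linear inversion, every iterate is a positive, unit-trace density matrix.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def eigenprojectors(P):
    _, vecs = np.linalg.eigh(P)
    return [np.outer(v, v.conj()) for v in vecs.T]

# Informationally complete POVM: X, Y, Z bases, each chosen with prob 1/3.
povm = [Pi / 3 for P in (X, Y, Z) for Pi in eigenprojectors(P)]

def mle_rrr(freqs, povm, iters=1000):
    """Iterative R*rho*R maximum-likelihood reconstruction; every iterate
    is positive and unit-trace by construction."""
    d = povm[0].shape[0]
    rho = np.eye(d, dtype=complex) / d  # start from the maximally mixed state
    for _ in range(iters):
        probs = [np.trace(rho @ E).real for E in povm]
        R = sum((f / p) * E for f, p, E in zip(freqs, probs, povm) if p > 1e-12)
        rho = R @ rho @ R
        rho /= np.trace(rho).real
    return rho

true = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # state to recover
freqs = [np.trace(true @ E).real for E in povm]           # noiseless frequencies
rho_hat = mle_rrr(freqs, povm)
print(np.round(rho_hat.real, 3))
```

With noiseless frequencies the iteration recovers the true state; with finite counts it returns the most likely state, around which the error bars discussed above must then be placed.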
I will briefly describe our recent progress in solving some optimization problems involving metrology with multipath entangled photon states and optimization of quantum operations on such states. We found that in the problem of super-resolution phase measurement in the presence of loss one can single out two distinct regimes: i) a low-loss regime favoring purely quantum states akin to N00N states, and ii) a high-loss regime where generalized coherent states become optimal. Next I will describe how to optimize photon-entangling operations beyond the Knill-Laflamme-Milburn scheme and, in particular, how to exploit hyperentangled states for entanglement-assisted error correction. If time allows, I will briefly review our results on the generalization of the Bloch sphere to the case of two qubits, exploiting the SU(4)/Z2 ≅ SO(6) group isomorphism. References: 1. D. Uskov and Jonathan P. Dowling, Quantum Optical Metrology in the Presence of a Loss (in preparation); Sean D. Huver et al., Entangled Fock States for Robust Quantum Optical Metrology, Imaging, and Sensing, arXiv:0805.0296. 2. D. Uskov et al., Maximal Success Probabilities of Linear-Optical Quantum Gates, arXiv:0808.1926. 3. M. Wilde and D. Uskov, Linear-Optical Hyperentanglement-Assisted Quantum Error-Correcting Code, arXiv:0807.4906. 4. D. Uskov and A. R. P. Rau, Geometric Phases and Bloch-Sphere Constructions for SU(N), with a Complete Description of SU(4), Phys. Rev. A 78, 022331 (2008).
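The two loss regimes can be illustrated with a deliberately crude toy model (an assumption-laden sketch, not the calculation in Ref. 1): take the N00N-state Heisenberg limit 1/N and penalize it by the probability eta^N that all N photons survive a channel with transmission eta, then compare with the shot-noise limit of a coherent state of the same mean photon number.

```python
import math

def noon_phase_uncertainty(N, eta):
    # Heisenberg-limited 1/N sensitivity, degraded by the probability
    # eta**N that all N photons of the N00N component survive the loss.
    return 1.0 / (N * math.sqrt(eta ** N))

def shot_noise_limit(N, eta):
    # Coherent-state benchmark: mean photon number N, transmission eta.
    return 1.0 / math.sqrt(eta * N)

N = 10
for eta in (1.0, 0.9, 0.5):
    winner = ("N00N" if noon_phase_uncertainty(N, eta) < shot_noise_limit(N, eta)
              else "coherent")
    print(f"eta={eta}: {winner} strategy gives the smaller phase uncertainty")
```

Even this simplified model reproduces the crossover: N00N-like states win at low loss, while coherent-like states take over once the loss is large.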