The Heisenberg limit (HL) and the standard quantum limit (SQL) are two fundamental quantum metrological limits, which describe the scalings of estimation precision of an unknown parameter with respect to N, the number of one-parameter quantum channels applied. In the first part, we show that the HL (a 1/N scaling) is achievable using quantum error correction (QEC) strategies when the ``Hamiltonian-not-in-Kraus-span'' (HNKS) condition is satisfied; when HNKS is violated, the SQL (a $1/\sqrt{N}$ scaling) is optimal and can be achieved with repeated measurements. In the second part, we identify modified metrological limits for estimating one-parameter qubit channels in settings of restricted controls where QEC cannot be performed. We prove the unattainability of the HL and further identify a ``rotation-generators-not-in-Kraus-span'' (RGNKS) condition that determines the achievability of the SQL.
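For orientation (a rough sketch in my own notation, not the speaker's exact statement): if the probed channel has Kraus operators $K_i(\omega)$, the two precision scalings and the HNKS condition are commonly written as
\[
  \delta\hat{\omega}_{\mathrm{HL}} \sim \frac{1}{N}, \qquad
  \delta\hat{\omega}_{\mathrm{SQL}} \sim \frac{1}{\sqrt{N}}, \qquad
  \text{HNKS:}\ \ H \,=\, i\sum_i \partial_\omega K_i^\dagger\, K_i \;\notin\; \operatorname{span}\{K_i^\dagger K_j\},
\]
with the span understood as the space of Hermitian combinations of the $K_i^\dagger K_j$.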
Empirical evidence for a gap between the computational powers of classical and quantum computers has been provided by experiments that sample the output distributions of two-dimensional quantum circuits. Many attempts to close this gap have used classical simulations based on tensor network techniques, and their limitations shed light on the improvements to quantum hardware required to inhibit classical simulability. In particular, state-of-the-art quantum computers with more than ~50 qubits are primarily vulnerable to classical simulation because of restrictions on their gate fidelity and their connectivity, the latter determining how many gates are required (and therefore how much infidelity is incurred) to generate highly entangled states. Here, we describe numerical evidence for the difficulty of random circuit sampling in highly connected geometries.
Quantum computers operate by manipulating quantum systems that are particularly susceptible to noise. Classical redundancy-based error correction schemes cannot be applied directly, since quantum data cannot be copied. These challenges can be overcome by using a variation of the quantum teleportation protocol to implement those operations which cannot easily be done fault-tolerantly. This process consumes expensive resources called 'magic states'. The vast quantity of these resource states required to achieve fault tolerance is a significant bottleneck for experimental implementations of universal quantum computers.
I will discuss a program of finding and classifying those quantum operations which can be performed with efficient use of magic state resources. I will focus not just on qubits but also on the higher-dimensional 'qudit' case. This is motivated both by practical considerations and by the theoretical insights it yields into the ultimate origin of quantum computational advantages. Research into these quantum operations has remained active from their discovery twenty-five years ago to the present. Our approach introduces the novel use of tools from algebraic geometry.
The results in this talk will include joint work with Chen, Lautsch, and Bampounis-Barbosa.
Binary constraint system (BCS) games, introduced by Cleve and Mittal, are a generalization of the Mermin-Peres magic square game. Thanks to the recent MIP*=RE theorem of Ji, Natarajan, Vidick, Wright, and Yuen, BCS games can be used to construct a proof system for any language in MIP*, the class of languages with a multiprover interactive proof system where the provers can share entanglement. This means that we can apply logical reductions for binary constraint systems to MIP* protocols, and it also raises the question: how complicated do our constraint systems have to be to describe all of MIP*? In this talk, I'll give a general overview of this subject, including an application of logical reductions to showing that all languages in MIP* have a perfect zero-knowledge proof system (joint work with Kieran Mastel), and one obstacle to expressing all of MIP* with linear constraints (joint work with Connor Paddock).
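For concreteness (an example the abstract alludes to but does not spell out): the Mermin-Peres magic square is itself a binary constraint system over variables $x_{ij}\in\{\pm 1\}$, $i,j\in\{1,2,3\}$, with constraints
\[
  \prod_{j=1}^{3} x_{ij} = +1 \ (i=1,2,3), \qquad
  \prod_{i=1}^{3} x_{ij} = +1 \ (j=1,2), \qquad
  \prod_{i=1}^{3} x_{i3} = -1 .
\]
No classical assignment satisfies all six constraints (multiplying the row constraints gives $+1$ while multiplying the column constraints gives $-1$), yet provers sharing two EPR pairs win the corresponding nonlocal game with probability 1.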
I will present some recent work on the interplay between contextuality, entanglement, and magic in multiqubit systems. Taking a foundational inquiry into entanglement in the Kochen-Specker theorem as our point of departure, I will proceed to outline some questions this raises about the role of these resources in models of multiqubit quantum computation. The purpose of this talk is to raise questions that can hopefully feed into the discussion sessions.
A universal and well-motivated notion of classicality for an operational theory is explainability by a generalized-noncontextual ontological model. I will explain what notion of classicality this implies within the framework of generalized probabilistic theories. I then prove that for any locally tomographic theory, every such classical model is given by a complete frame representation. Using this powerful constraint on the space of possible classical representations, I will then prove that the stabilizer subtheory has a unique classical representation—namely, Gross's discrete Wigner function. This provides deep insight into the relevance of Gross's representation within quantum computation. It also implies that generalized contextuality is a necessary resource for universal quantum computation in the state injection model.
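For reference (standard definitions in my own notation, up to phase conventions): for $n$ qudits of odd prime dimension $d$, Gross's discrete Wigner function is the frame representation
\[
  W_\rho(u) = \frac{1}{d^{\,n}}\operatorname{Tr}[A_u\,\rho], \qquad
  A_u = \frac{1}{d^{\,n}}\sum_{v\in\mathbb{Z}_d^{2n}} \omega^{[u,v]}\, T_v,
\]
where the $T_v$ are the Heisenberg-Weyl operators, $\omega = e^{2\pi i/d}$, and $[\cdot,\cdot]$ is the symplectic form; it is nonnegative on all stabilizer states, which is the sense in which it furnishes a classical model of the stabilizer subtheory.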
Quantum Darwinism proposes that the proliferation of redundant information plays a major role in the emergence of objectivity out of the quantum world. Is this kind of objectivity necessarily classical? We show that if one takes Spekkens’s notion of noncontextuality as the notion of classicality and the approach of Brandão, Piani, and Horodecki to quantum Darwinism, the answer to the above question is “yes,” if the environment encodes the proliferated information sufficiently well. Moreover, we propose a threshold on this encoding, above which one can unambiguously say that classical objectivity has emerged under quantum Darwinism.
We give a simple description of rectangular matrices that can be implemented by a post-selected stabilizer circuit. Given a matrix with entries in dyadic cyclotomic number fields $\mathbb{Q}(\exp(i\frac{2\pi}{2^m}))$, we show that it can be implemented by a post-selected stabilizer circuit if it has entries in $\mathbb{Z}[\exp(i\frac{2\pi}{2^m})]$ when expressed in a certain non-orthogonal basis. This basis is related to Barnes-Wall lattices. Our result is a generalization of a well-known connection between Clifford groups and Barnes-Wall lattices. We also show that minimal vectors of Barnes-Wall lattices are stabilizer states, which may be of independent interest. Finally, we provide a few examples of generalizations beyond standard Clifford groups.
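As a small illustration of the number-theoretic setting (my example, not taken from the abstract): writing $\zeta_{2^m} = \exp(i\frac{2\pi}{2^m})$, the $m=3$ field already contains the familiar single-qubit matrix entries, e.g.
\[
  \tfrac{1}{\sqrt{2}} = \tfrac{1}{2}\bigl(\zeta_8 + \zeta_8^{-1}\bigr) \in \mathbb{Q}(\zeta_8),
  \qquad e^{i\pi/4} = \zeta_8 \in \mathbb{Z}[\zeta_8],
\]
and the integrality condition (entries in $\mathbb{Z}[\zeta_{2^m}]$) is imposed only after re-expressing the matrix in the Barnes-Wall-related basis.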
Joint work with Sebastian Schönnenbeck.
Quantum mechanics forbids the creation of ideal identical copies of unknown quantum systems and, as a result, the copying of quantum information. This fundamental and non-classical 'unclonability' feature of nature has played a central role in quantum cryptography, quantum communication and quantum computing ever since its discovery. However, unclonability is a broader concept than just the no-cloning theorem. In this talk, I will go over different notions of quantum unclonability and show how they link to many important questions and topics in both quantum machine learning and quantum cryptography. I will also broadly cover the link between unclonability and other fundamental concepts, such as randomness, pseudorandomness and contextuality.
This talk will present work-in-progress towards a new programming methodology for Cliffords, where n-ary Clifford unitaries over qudits can be expressed as functions on compact Pauli. Inspired by the fact that projective Cliffords correspond to center-fixing automorphisms on the Pauli group, we develop a type system where well-typed expressions correspond to symplectic morphisms---that is, linear transformations that respect the symplectic form. This language is backed up by a robust categorical and operational semantics, and well-typed functions can be efficiently simulated and synthesized into circuits via Pauli tableaus.
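For background on the correspondence the type system builds on (standard facts, stated here in my own notation): a projective Clifford $U$ acts on the Weyl (generalized Pauli) operators $W(v)$, $v\in\mathbb{Z}_d^{2n}$, by
\[
  U\,W(v)\,U^\dagger \;\propto\; W(Sv), \qquad S \in \mathrm{Sp}(2n,\mathbb{Z}_d),
\]
so that, up to phases, $U$ is determined by a linear map preserving the symplectic form; tracking $S$ together with the phase data is, roughly, what a Pauli tableau records, which is why well-typed programs admit efficient simulation and circuit synthesis.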
Whilst tomography has dominated the theory behind reconstructing or approximating quantum objects, such as states or channels, conducting full tomography is often not necessary in practice. If one is interested only in learning certain properties of a quantum system, it is possible to side-step the exponential lower bounds of tomography. In this talk, we will introduce various learning models for approximating quantum objects, survey the literature on quantum learning theory and explore instances where learning can be fully time- and sample-efficient.