
the evolution is described by a completely positive map. However, if the system and environment are initially correlated, or entangled, such that the so-called quantum discord is non-zero, then the evolution is described by a map which is not completely positive. Maps that are not completely positive are less well understood, and the implications of having such a map are not fully known. I will discuss a few examples and a theorem (or two) which may help us understand the implications of maps which are not completely positive.

see also: http://arxiv.org/abs/1210.5531


Joint work with N. Schuch, D. Perez-Garcia, and J. I. Cirac.

I will explain how to simulate arbitrary quantum circuits on a distributed quantum computer (DQC), in which the pairs of qubits that are allowed to interact are restricted to the edges of some (connected) graph G. Even for graphs with only a modest number of long-range qubit interactions, such as the hypercube, this simulation is, in fact, efficient. Furthermore, for all graphs, the emulation scheme is very close to being optimal.

Secondly, I will present an efficient quantum algorithm for parallel unrestricted memory look-up. As an application, I will show that the space-time trade-off for Element Distinctness and Collision Finding can be improved.

Both results arise from applying the ideas of reversible sorting networks to quantum computing.
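The sorting-network idea can be illustrated classically: a bitonic sorting network applies a fixed, data-independent sequence of compare-exchange gates, which is exactly what makes it friendly to reversible (and hence quantum) circuits. The sketch below is a standard classical bitonic network, not the quantum construction from the paper; all names are illustrative.

```python
def bitonic_network(n):
    """Generate the compare-exchange pairs of a bitonic sorting network
    on n = 2^k wires.  The gate sequence is fixed in advance and never
    inspects the data -- the property reversible circuits need."""
    pairs = []
    k = 2
    while k <= n:
        j = k // 2
        while j >= 1:
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    # Sort ascending or descending depending on which
                    # k-sized block the wire pair sits in.
                    if (i & k) == 0:
                        pairs.append((i, partner))
                    else:
                        pairs.append((partner, i))
            j //= 2
        k *= 2
    return pairs

def apply_network(values, pairs):
    """Run the network: each pair (lo, hi) enforces v[lo] <= v[hi]."""
    v = list(values)
    for lo, hi in pairs:
        if v[lo] > v[hi]:
            v[lo], v[hi] = v[hi], v[lo]
    return v

sorted_vals = apply_network([3, 1, 4, 1, 5, 9, 2, 6], bitonic_network(8))
```

Because the compare-exchange layout is oblivious to the input, each gate can be implemented as a reversible conditional swap, which is the bridge to the quantum setting exploited above.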

http://arxiv.org/abs/1207.2307

Joint work with Andrea Mari and Jens Eisert.

arXiv:1207.6537


Joint work with Egloff, Renner and Vedral

http://arxiv.org/abs/1207.0434


These relative entropies act as parent quantities for tasks such as data compression, information transmission and entanglement manipulation in one-shot information theory. Moreover, they lead us to define entanglement monotones which have interesting operational interpretations.


set of states, which constitute a natural generalization of the product-state ansatz. Our state of

However, in interactive communication theory, and more specifically in communication complexity, tools from information theory have been successfully applied only much more recently. Indeed, the interactive nature of communication protocols in this setting imposes new constraints, and tools specific to this setting need to be developed, both for the interactive analogue of source compression and for that of coding for noisy channels. The field of classical interactive information theory has been very active in recent years.

We discuss recent works for its quantum counterpart. In particular, we discuss joint work showing that a constant factor overhead is sufficient for robustly implementing interactive quantum communication over noisy channels [1]. We also discuss work introducing a new notion of quantum information complexity that exactly captures the amortized cost per copy for implementing many copies of a communication task in parallel, such that compressing to this information complexity leads to a bounded-round direct sum theorem [2].

For both of these, we further discuss many interesting potential research directions that follow.

[1] Joint work with Gilles Brassard, Ashwin Nayak, Alain Tapp, Falk Unger; QIP’14, FOCS’14.
[2] Merge of arXiv:1404.3733 and arXiv:1409.4391; to appear at QIP’15.

I will also discuss Simulated Quantum Annealing (SQA), a classical simulation of adiabatic optimization at non-zero temperature based on path-integral quantum Monte Carlo. SQA is widely used in practice to study adiabatic optimization, but relatively little is known about the rate of convergence of the Markov chain that underlies the algorithm. By focusing on a family of instances which adiabatic optimization solves in polynomial time, but which require exponential time to solve using classical (thermal) simulated annealing, I will present numerical evidence as well as a work-in-progress proof that SQA can be exponentially faster than classical simulated annealing.
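For contrast with SQA, the classical (thermal) simulated annealing baseline can be sketched in a few lines. The Metropolis acceptance rule below is standard; the cost function, neighbour move, and cooling schedule are illustrative placeholders, not the hard instances discussed in the talk.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, betas, rng=random.Random(0)):
    """Classical (thermal) simulated annealing with Metropolis updates.

    cost:     maps a state to its energy
    neighbor: proposes a random neighbouring state
    x0:       initial state
    betas:    inverse-temperature schedule (increasing = cooling)
    """
    x, e = x0, cost(x0)
    for beta in betas:
        y = neighbor(x, rng)
        ey = cost(y)
        # Metropolis rule: always accept downhill moves; accept uphill
        # moves with probability exp(-beta * delta_E).
        if ey <= e or rng.random() < math.exp(-beta * (ey - e)):
            x, e = y, ey
    return x, e

# Toy instance: minimize a 1D quadratic over the integers.
cost = lambda x: (x - 7) ** 2
neighbor = lambda x, rng: x + rng.choice([-1, 1])
betas = [0.01 * t for t in range(1, 2001)]  # slow cooling up to beta = 20
best, energy = simulated_annealing(cost, neighbor, 20, betas)
```

The exponential separation above concerns instances where this thermal dynamics gets trapped, while the quantum (path-integral) Markov chain does not.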

While the calculus is not complete for general quantum mechanics, I show that it is complete for stabilizer quantum mechanics and for the single-qubit Clifford+T group. This means that within those subtheories, any equality that can be derived using matrices can also be derived graphically.

The ZX-calculus can thus be applied to a wide range of problems in quantum information and quantum foundations, from the analysis of quantum non-locality to the verification of measurement-based quantum computation and error-correcting codes. I also show how to construct a ZX-like graphical calculus for Spekkens' toy bit theory and give its associated completeness proof.

We show that the bomb query complexity is asymptotically the square of the quantum query complexity. Moreover, we will show how this characterization inspires a general theorem relating classical and quantum query complexity, and derive new algorithms from this theorem.

A conjecture formalizing this notion was defined by Freedman and Hastings: called NLTS, it stipulates the existence of locally-defined quantum systems that retain long-range entanglement even at high temperatures. Such a conjecture not only presents a necessary condition for quantum PCP, but also poses a fundamental question on the nature of entanglement itself. To date, no such systems had been found; moreover, it became evident that even embedding local Hamiltonians on robust, albeit "non-physical", topologies, namely expanders, does not guarantee entanglement robustness.

In this study, we refute the intuition that entanglement is inherently fragile: we show that locally-defined quantum systems can, in fact, retain long-range entanglement at high temperatures. To do this, we construct an explicit family of 7-local Hamiltonians, and prove that for such local Hamiltonians ANY low-energy state is hard to even approximately simulate by quantum circuits of depth o(log(n)). In particular, this resolves the NLTS conjecture in the affirmative, and suggests the existence of quantum systems whose low-energy states are not only highly entangled but also "usefully" entangled, in the computational-theoretic sense.


This is joint work with Oliver Buerschaper, Robert Koenig, Fernando Pastawski and Sumit Sijher.

I will present arguments to bound the synchronization time of any quantum clock as a function of its dimension. Quantum coherence appears to be necessary to saturate these bounds, as the synchronization time of incoherent clocks obeys a worse bound. I will also review simple proposals for autonomous clocks built out of thermal machines, and demonstrate that the power consumption of thermal clocks determines the limit of their accuracy. Finally, I will introduce an example of a finite quantum clock that is able to control any quantum operation up to a calculable accuracy, and discuss whether it represents a best-case scenario for quantum clocks.

If I have time, I will discuss some ongoing work (with Gavin Brennan) on relational time applied to topological quantum field theories; in particular, how anyonic systems with essentially trivial dynamics can still exhibit correlations that track "time".

temperature T lower than the spectral gap, the density of thermal excitations is suppressed by an exponential Boltzmann factor. In contrast to the topological protection, however, this thermal protection is not scalable: thermal excitations appear at constant density for any non-zero temperature, so their presence is unavoidable as the size of the computation increases. Thermally activated anyons can corrupt the encoded data by braiding or fusing with the computational anyons.
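The scaling argument can be made concrete with a back-of-the-envelope estimate: at fixed temperature the excitation density is a constant of order exp(-gap/T), so the expected number of excitations grows linearly with system size. The function and numbers below are arbitrary illustrations, not parameters from the talk.

```python
import math

def expected_excitations(num_sites, gap, temperature):
    """Each site hosts a thermal excitation with probability of order
    exp(-gap / T) (Boltzmann suppression), so the expected total
    grows linearly with the number of sites."""
    return num_sites * math.exp(-gap / temperature)

# Fixed T well below the gap: the density is tiny but constant,
# so excitations inevitably appear once the system is large enough.
gap, T = 1.0, 0.1
small = expected_excitations(10**2, gap, T)   # ~0.005: likely none
large = expected_excitations(10**8, gap, T)   # ~4500: unavoidable
```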

In the present work, we generalize a fault-tolerant scheme introduced by Harrington for the toric code to the setting of non-cyclic modular anyons. We prove that the quantum information encoded in the fusion space of a non-abelian anyon system can be preserved for arbitrarily long times with poly-log overhead. In particular, our model accounts for noise processes which lead to the creation of anyon pairs from the vacuum, anyon diffusion, anyon fusion, as well as errors in topological charge measurements.

Related arXiv number: arXiv:1607.02159

Based on the following works:

[1] https://arxiv.org/abs/1210.3637

[2] https://arxiv.org/abs/1409.3208

[3] https://arxiv.org/abs/1409.4800

[4] https://arxiv.org/abs/1509.05806

In the talk, I will provide an overview of the color code. First, I will establish a connection between the color code and a well-studied model, the toric code. Then, I will explain how one can implement a universal gate set with the subsystem and the stabilizer color codes in three dimensions using the techniques of code switching and gauge fixing. Next, I will discuss the problem of decoding the color code. Finally, I will explain how one can find the optimal error correction threshold by analyzing phase transitions in certain statistical-mechanical models.
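As a minimal illustration of the stabilizer formalism underlying both the toric and color codes, Pauli operators can be tracked as binary symplectic vectors, with commutation reduced to a parity check. The [[4,2,2]] stabilizers below are a toy stand-in for the actual color-code checks, chosen only because they fit on one line.

```python
import itertools

def pauli_to_symplectic(p):
    """Encode an n-qubit Pauli string, e.g. 'XZIY', as a binary
    vector (x | z) of length 2n."""
    x = [1 if c in 'XY' else 0 for c in p]
    z = [1 if c in 'ZY' else 0 for c in p]
    return x + z

def commute(p, q):
    """Two Pauli strings commute iff their symplectic inner product
    vanishes mod 2."""
    n = len(p)
    a, b = pauli_to_symplectic(p), pauli_to_symplectic(q)
    return sum(a[i] * b[n + i] + a[n + i] * b[i] for i in range(n)) % 2 == 0

# Stabilizers of the [[4,2,2]] code -- a toy stand-in for the
# toric-/color-code checks discussed above: they must all commute.
stabilizers = ['XXXX', 'ZZZZ']
assert all(commute(p, q) for p, q in itertools.combinations(stabilizers, 2))
```

The same bookkeeping, scaled up to lattice plaquettes and vertices, is what the decoding and code-switching procedures above manipulate.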

The talk is based on http://arxiv.org/abs/1410.0069, http://arxiv.org/abs/1503.02065 and recent works with M. Beverland, F. Brandao, N. Delfosse, J. Preskill and K. Svore.

On one hand, dynamical semigroups describing thermalization, namely Davies maps, have the curious property of being their own recovery map, as a consequence of a condition named quantum detailed balance. For these maps, we derive a tight bound relating the entropy production at time t with the state of the system at time 2t, which puts a strong constraint on how systems reach thermal equilibrium.

On the other hand, we also show how the Petz recovery map appears in the derivation of quantum fluctuation theorems, as the reversed work-extraction process. From this fact alone, we show how a number of useful expressions follow. These include a generalization of the majorization conditions that includes fluctuating work, Crooks and Jarzynski's theorems, and an integral fluctuation theorem that can be thought of as the second law as an equality.
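Jarzynski's theorem, one of the fluctuation relations mentioned above, can be verified exactly in the simplest possible setting: a sudden quench of a two-level system. The example below is a standard textbook check of the identity ⟨exp(-βW)⟩ = exp(-βΔF), not the derivation via the Petz recovery map given in the talk; the function name and parameters are illustrative.

```python
import math

def jarzynski_two_level(beta, e0, e1):
    """Sudden quench of a two-level system with levels (0, e0) -> (0, e1).

    The system starts thermal at inverse temperature beta.  Work is
    e1 - e0 if the excited level is occupied, and 0 otherwise.
    Jarzynski's equality says <exp(-beta*W)> = Z1/Z0 = exp(-beta*dF).
    Returns (lhs, rhs) of that equality."""
    z0 = 1 + math.exp(-beta * e0)
    z1 = 1 + math.exp(-beta * e1)
    p_excited = math.exp(-beta * e0) / z0
    avg_exp_work = (1 - p_excited) + p_excited * math.exp(-beta * (e1 - e0))
    return avg_exp_work, z1 / z0

lhs, rhs = jarzynski_two_level(beta=2.0, e0=1.0, e1=3.0)
```

The equality holds term by term here because the work distribution has only two outcomes; the integral fluctuation theorem in the talk generalizes this to arbitrary quantum processes.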

Topological methods, in their various forms, have become the main contenders in the quest to successfully overcome noise. A good deal of their strength and versatility is due to their rather unique physical flavour, which keeps giving rise to surprising developments.

I will discuss the main driving forces behind my research on topological quantum error correction, as well as exciting recent directions, including unexpected connections with high energy physics.

difficulties in recent years in the form of several "obstructing" results. These are not actually no-go theorems, but they do present some serious obstacles. A further aggravation is the fact that the known topological error correction codes only really work well in spatial dimensions higher than three. In this talk I will present a method for modifying a higher-dimensional topological error correction code into one that can be embedded into two (or three) dimensions. These projected codes retain at least some of their higher-dimensional topological properties. The resulting subsystem codes are not discrete analogs of TQFTs, and as such they evade the usual obstruction results. Instead they obey a discrete analog of a conformal symmetry. Nevertheless, there are real systems which have these features, and if time permits I'll discuss some of these. Many of them exhibit strange low-temperature behaviours that might even be helpful for establishing finite-temperature fault tolerance thresholds.

This research is still very much a work in progress. As such it has numerous loose ends and open questions for further investigation. These constructions could also be of interest to quantum condensed matter theorists, and may even be of interest to people who like weird-and-wonderful spin models in general.

Then, we focus on low-noise quantum channels, and present recent results on the quantum capacity to leading order in the noise parameter. This in particular solves the quantum capacity problem (to leading order) for the qubit depolarizing channel, and provides a structure theorem for the capacity-achieving codes. For low-noise channels, degenerate codes provide a negligible superadditivity effect.

Analogous results on the private capacity will be presented. Our results imply that for low-noise channels there is negligible difference between coherence and privacy, and a key rate approaching the capacity can already be obtained using classical error correction and privacy amplification.

Joint work with Felix Leditzky and Graeme Smith.

(2) Security from entanglement: quantum key distribution.

(3) Entanglement enhanced communication: channel capacity and additivity issues.

(4) Some open problems.

Based on arXiv:1601.07601 (with Sergey Bravyi) and work in progress with Sergey Bravyi, Dan Browne, Padraic Calpin, Earl Campbell and Mark Howard.


Reference: L. Banchi, D. Burgarth, M. J. Kastoryano, Phys. Rev. X 7, 041015 (2017)

when the error correction conditions hold only approximately. We will discuss an equivalent formulation that is robust to the approximation error. One can leverage this tool to derive the existence of approximate quantum error correcting codes in the low-energy subspace of a CFT that reproduce aspects of the holographic quantum error correcting code. Using the same tool, we observe that two operators with greatly differing complexity approximately commute in an appropriate code subspace. This leads to a notion of bulk locality in the entanglement shadow, but the precise definition of complexity seems to play an important role in determining how well these operators commute in the subspace.

Joint work with M. Christandl, C. Majenz, G. Smith, F. Speelman & M. Walter

the quantum analogue of this framework provides the correct definition of the quantum evolution map, which is found to be always completely positive.

Joint work with David Schmid

Quantum error correction and symmetries are concepts that are relevant in many physical systems, such as in condensed matter physics and in holographic quantum gravity, and play a central role in error correction of reference frames [Hayden et al., arXiv:1709.04471]. I will show that codes that are covariant with respect to a continuous local symmetry necessarily have a limit to their ability to serve as approximate error-correcting codes against erasures at known locations. This is because the environment necessarily gets information about the global logical charge of the encoded state due to the covariance of the code. Our bound vanishes either in the limit of large individual subsystems, or in the limit of a large number of subsystems; in either case there exist codes which approximately achieve the scaling of our bound and become good covariant error-correcting codes. Our results can be interpreted as an approximate version of the Eastin-Knill theorem, quantifying to which extent it is not possible to carry out universal computation approximately on encoded states. In the context of holographic AdS/CFT, our approach provides some insight on time evolution in the bulk versus time evolution on the boundary. We expect further implications for symmetries in holography and in condensed matter physics.

If time permits, I will also discuss some applications of these results to thermodynamics and speculate about consequences for quantum information theory and holography.

There are many indications that one-dimensional spin chains are relatively simpler than their counterparts in higher dimensions. Nonetheless, I will present a construction of a family of nearest-neighbour, translationally invariant Hamiltonians on a spin chain for which the spectral gap problem is undecidable.

In this talk, we will employ Matrix Product State formalism and see how low energy eigenstates of 1D translationally invariant Hamiltonians can form quantum error-correcting codes. Before diving into the results, we will review the necessary basics of quantum error correction and matrix product states.

Joint work with Martina Gschwendtner, Robert Koenig and Eugene Tang.

classical simulation of a general quantum circuit. The first obstacle, which has been well-studied, is the difficulty of efficiently estimating the probability associated with a particular measurement outcome. We show that this alone does not determine whether a quantum circuit can be efficiently simulated. The second obstacle is that, in general, there can be an exponential number of 'relevant' outcomes that are needed for an accurate simulation, and so efficient simulation may not be possible even in situations where the probabilities of individual outcomes can be efficiently estimated. We show that these two obstacles are distinct: overcoming the former is necessary but not sufficient for simulability, whilst overcoming the pair is jointly sufficient. Specifically, we prove that a family of quantum circuits is efficiently simulable if it satisfies two properties: one related to the efficiency of Born rule probability estimation, and the other related to the sparsity of the outcome distribution. We then prove a pair of hardness results (using standard complexity assumptions and a variant of a commonly-used average-case hardness conjecture), where we identify families of quantum circuits that satisfy one property but not the other, and for which efficient simulation is not possible. To prove our results, we consider a notion of simulation of quantum circuits that we call epsilon-simulation. This notion is less stringent than exact sampling and is now in common use in quantum hardness proofs.
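The interplay of the two properties can be caricatured classically: if the outcome distribution is sparse (its small support is known) and each outcome's probability can be estimated, sampling is easy. The sketch below illustrates only that combination; it is not the epsilon-simulation algorithm from the paper, and all names and numbers are illustrative.

```python
import random

def sample_sparse(support, prob_estimate, rng=random.Random(1)):
    """Sample an outcome given (estimates of) its probabilities.

    'support' lists the few relevant outcomes (the sparsity property);
    'prob_estimate' stands in for Born-rule probability estimation.
    Estimates are renormalized, so small estimation errors are tolerated.
    """
    weights = [prob_estimate(x) for x in support]
    r = rng.random() * sum(weights)
    for x, w in zip(support, weights):
        r -= w
        if r <= 0:
            return x
    return support[-1]

# Toy distribution concentrated on 3 of the 2**10 possible bitstrings.
support = ['0000000000', '1111100000', '1010101010']
probs = {'0000000000': 0.5, '1111100000': 0.3, '1010101010': 0.2}
sample = sample_sparse(support, probs.get)
```

The hardness results above correspond to dropping one ingredient: an exponentially large support, or probabilities that cannot be efficiently estimated.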

References: arXiv:1806.10545 (2018) and PRE 97, 052209 (2018).

post-selected quantum computation is an open problem in complexity theory. Post-selection has proven to be a useful tool in uncovering some of the differences between quantum and classical theories, in foundations and elsewhere. This is no less true in the area of computational complexity: quantum computations augmented with post-selection are thought to be vastly more powerful than their classical counterparts. However, the precise reasons why this might be the case are not well understood, and no rigorous separations between the two have been found. In this talk, I will consider the difference in computational power of classical and quantum post-selection in the computational query complexity setting.

We define post-selected classical query algorithms and relate them to rational approximations of Boolean functions; in particular, by showing that the post-selected classical query complexity of a Boolean function is equal (up to a factor of two) to the minimal degree of a rational function with nonnegative coefficients that approximates it. For post-selected quantum query algorithms, a similar relationship was shown by Mahadev and de Wolf, where the rational approximations are allowed to have negative coefficients. Using our characterisation, we find an exponentially large separation between post-selected classical query complexity and post-selected quantum query complexity, by proving a lower bound on the degree of rational approximations to the Majority function.

[1] J.J.Halliwell, Phys Rev A96, 012121 (2017); A93, 022123 (2016); arxiv:1811.10408.

[2] J.J.Halliwell, Phys. Rev. A94, 052114 (2016).

[3] J.J.Halliwell, Phys. Rev. A99, 022119 (2019).

Next, I will describe how this complexity result can be repurposed to construct local approximate error correcting codes with (nearly) linear distance and (nearly) constant rate. By contrast, it remains an open problem to construct quantum low-density parity check (QLDPC) stabilizer codes that achieve similarly good parameters.

An interesting component of our code construction is the Breuckmann-Terhal spacetime circuit Hamiltonian construction, and a novel analysis of its spectral gap. Our codes are ground spaces of non-commuting (but frustration-free) local Hamiltonians. We believe that our results present additional evidence that the study of approximate error correction and non-stabilizer codes may be much richer than previously thought.

This talk will have more questions than answers, and is based on two papers:

· https://arxiv.org/abs/1802.07419 (joint with Chinmay Nirkhe and Umesh Vazirani)

· https://arxiv.org/abs/1811.00277 (joint work with Thom Bohdanowicz, Elizabeth Crosson, and Chinmay Nirkhe)

NYH, Bartolotta, and Pollack, accepted by Comms. Phys. (in press), arXiv:1806.04147.

Next, we significantly extend the above consideration beyond "quantum" resource theories of "states"; we establish an operational characterization of general convex resource theories --- describing the resource content of not only states, but also *measurements* and *channels*, both within quantum mechanics and in *general probabilistic theories (GPTs)* --- in the context of state and channel discrimination. We find that discrimination tasks provide a *unified operational description* for *quantification* and *manipulation* of resources by showing that the family of robustness measures can be understood as the maximum advantage provided by any physical resource in several different discrimination tasks, as well as establishing that such discrimination problems can fully characterize the allowed transformations within the given resource theory. Our results establish a *fundamental connection* between the operational tasks of discrimination and core concepts of resource theories --- the geometric quantification of resources and resource manipulation --- valid for all physical theories beyond quantum mechanics with no additional assumptions about the structure of the GPT required.

References:

[1] Ryuji Takagi, Bartosz Regula, Kaifeng Bu, Zi-Wen Liu, and Gerardo Adesso, "Operational Advantage of Quantum Resources in Subchannel Discrimination", Phys. Rev. Lett. 122, 140402 (2019)

[2] Ryuji Takagi and Bartosz Regula, "General resource theories in quantum mechanics and beyond: operational characterization via discrimination tasks", arXiv:1901.08127

between coherent and incoherent noise. Coherent noise can cause the average infidelity to accumulate quadratically when a fixed channel is applied many times in succession, rather than linearly as in the case of incoherent noise. I will present a proof that unitary single-qubit noise in the 2D toric code with minimum-weight decoding is mapped to less coherent logical noise, and as the code size grows, the coherence of the logical noise channel is suppressed. In the process, I will describe how to characterize the coherence of noise using either the growth of infidelity or the relation between the diamond distance from identity and the average infidelity. I will explain how coherence in the noise on physical qubits is transformed by error correction in stabilizer codes. Then, I will sketch the proof that coherence is suppressed for the 2D toric code. The result holds even when the single-qubit unitary rotations are allowed to have arbitrary directions and angles, so long as the angles are below a threshold, and even when the rotations are correlated. Joint work with John Preskill.
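The quadratic-versus-linear contrast can be checked with closed-form single-qubit formulas: repeated small Z-rotations of the |+> state versus repeated dephasing with a matched one-step infidelity. This is a standard textbook illustration of coherent error accumulation, not the toric-code analysis from the talk; the parameter values are arbitrary.

```python
import math

def infidelity_coherent(n, theta):
    """n applications of a Z-rotation by angle theta to |+>:
    F = cos^2(n*theta/2), so infidelity grows ~ (n*theta/2)^2."""
    return 1 - math.cos(n * theta / 2) ** 2

def infidelity_incoherent(n, p):
    """n applications of a dephasing channel (phase flip w.p. p) to |+>:
    F = (1 + (1-2p)^n)/2, so infidelity grows ~ n*p."""
    return (1 - (1 - 2 * p) ** n) / 2

theta = 0.01           # small rotation angle
p = theta ** 2 / 4     # matched so both channels have the same 1-step infidelity
one_c, one_i = infidelity_coherent(1, theta), infidelity_incoherent(1, p)
many_c, many_i = infidelity_coherent(100, theta), infidelity_incoherent(100, p)
```

After 100 repetitions the coherent infidelity exceeds the incoherent one by roughly the number of repetitions, which is the quadratic-versus-linear accumulation described above.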

We study the transition in complexity due to the doping of a quantum circuit by universal gates and show that the transition to complex entanglement can be obtained by just a single gate.

These results are relevant for the notions of scrambling, quantum chaos, OTOCs and operator spreading.

We conjecture that the transition to 4-design, W-D and unlearnability are one and the same.


Inspired by the classical case, we present a strategy to derive the positivity of the logarithmic Sobolev constant associated to the dynamics of certain quantum systems from some clustering conditions on the Gibbs state of a local, commuting Hamiltonian. In particular we address this problem for the heat-bath dynamics in 1D and the Davies dynamics, showing that the first one is positive under the assumptions of a mixing condition on the Gibbs state and a strong quasi-factorization of the relative entropy, and the second one under some strong clustering of correlations.

This is joint work with Fernando Brandao (Caltech) and Daniel Stilck Franca (QMATH, Copenhagen).

It also allows us to construct a map in the reverse direction: from local boundary Hamiltonians to the corresponding local Hamiltonian in the bulk. Under this boundary-to-bulk mapping, the bulk geometry emerges as an approximate, low-energy, effective theory living in the code space of an (approximate) HQECC on the boundary. At higher energy scales, this emergent bulk geometry is modified in a way that matches the toy models of black holes proposed previously for HQECC. Moreover, the duality on the level of dynamics shows how these toy-model black holes can form dynamically.

This work is joint with Liujun Zou and Timothy Hsieh.

It reproves the known criterion for asymptotic and catalytic majorization in terms of Rényi entropies, and generalizes it to any resource theory which satisfies a mild boundedness hypothesis. I will sketch the case of matrix majorization as an example.
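The majorization criterion mentioned here is easy to state concretely for classical distributions: p majorizes q iff the descending partial sums of p dominate those of q, and majorization implies an ordering of every Rényi entropy (though not conversely). The sketch below is a minimal classical illustration, not the asymptotic/catalytic version from the talk.

```python
import math

def majorizes(p, q):
    """p majorizes q iff the descending partial sums of p dominate
    those of q (both distributions assumed normalized)."""
    sp, sq = sorted(p, reverse=True), sorted(q, reverse=True)
    tp = tq = 0.0
    for a, b in zip(sp, sq):
        tp, tq = tp + a, tq + b
        if tp < tq - 1e-12:
            return False
    return True

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha(p) for alpha not in {0, 1}."""
    return math.log(sum(x ** alpha for x in p if x > 0)) / (1 - alpha)

# The more 'ordered' distribution p majorizes q, and accordingly has
# the smaller Rényi entropy at every order alpha.
p, q = [0.7, 0.2, 0.1], [0.5, 0.3, 0.2]
```

The catalytic/asymptotic criterion reverses part of this picture: dominance of all Rényi entropies becomes sufficient as well as necessary.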


Paper (with Yosi Atia) in preparation

However, the area law for long-range interacting systems is still elusive, as long-range interactions result in correlation patterns similar to those in critical phases. Here, we show that for generic non-critical one-dimensional ground states, the area law robustly holds without any corrections even under long-range interactions [2]. Our result guarantees an efficient description of ground states by matrix-product states in experimentally relevant long-range systems, which justifies the density-matrix renormalization algorithm. In the present talk, I will give an overview of the results, and present ideas of the proof if time allows.

[1] J. Eisert, M. Cramer, and M. B. Plenio, ``Colloquium: Area laws for the entanglement entropy,'' Rev. Mod. Phys. 82, 277–306 (2010).

[2] T. Kuwahara and K. Saito, ``Area law of non-critical ground states in 1d long-range interacting systems,'' arXiv preprint arXiv:1908.11547 (2019).

Finally, the decoding algorithm, while efficient, has non-trivial complexity and it is not clear whether it can be implemented in hardware that can keep up with the classical processing.

We present a class of low-density parity-check (LDPC) quantum codes which fix all three of the concerns mentioned above. They were first proposed in [1] and called 4D hyperbolic codes, as their definition is based on four-dimensional, curved geometries. They have the remarkable property that the number of encoded qubits grows linearly with system size, while their distance grows polynomially with system size, i.e. d ~ n^a with 0.1 < a < 0.3. This is remarkable since it was previously conjectured that such codes could not exist [1]. Their structure allows for decoders which can deal with erroneous syndrome measurements, a property called single-shot error correction [2], as well as local decoding schemes [3].

Although [1] analyzed the encoding rate and distance of this code family abstractly, it is a non-trivial task to actually construct them. There is no known efficient deterministic procedure for obtaining small examples, and only single examples of reasonable size had been obtained previously [4]. These previous examples were part of different code families, so it was not possible to determine a threshold. We succeeded in constructing several small examples by utilizing a combination of randomized search and algebraic tools. We analyze the performance of these codes under several different local decoding procedures via Monte Carlo simulations. The decoders all share the property that they can be executed in parallel in O(1) time. Under the phenomenological noise model, including syndrome errors, we obtain a threshold of ~5%, which to our knowledge is the highest threshold among all local decoding schemes.

[1] L. Guth, A. Lubotzky, Journal of Mathematical Physics 55, 082202 (2014).

[2] H. Bombin, Physical Review X 5 (3), 031043 (2015).

[3] M. Hastings, QIC 14, 1187 (2014).

[4] V. Londe, A. Leverrier, arXiv:1712.08578 (2017).

Stockmeyer's theorem implies that efficient sampling allows for the estimation of output probabilities; therefore, hardness of probability estimation implies hardness of sampling. We prove that estimating probabilities to within small errors is #P-hard on average (i.e. for random circuits), and put the results in the context of previous works.

Some ingredients developed to make this proof possible are: the construction of the Cayley path, a rational-function-valued unitary path that interpolates between two arbitrary unitaries; an extension of the Berlekamp-Welch algorithm that efficiently and exactly interpolates rational functions; and the construction of probability distributions over unitaries that are arbitrarily close to the Haar measure.
