The standard perspective on subsystems in quantum theory is a bottom-up, compositional one: one starts with individual "small" systems, viewed as primary, and composes them together to form larger systems. The top-down, decompositional perspective goes the other way, starting with a "large" system and asking what it means to partition it into smaller parts. In this talk, I will (1) argue that the adoption of the top-down perspective is the key to progress in several current areas of foundational research; and (2) present an integrated mathematical framework for partitions into three or more subsystems, using sub-C*-algebras. Concerning the first item, I will explain how the top-down perspective becomes crucial whenever the way in which a quantum system is partitioned into smaller subsystems is not unique, but might depend on the physical situation at hand. I will show how precisely this feature lies at the heart of a flurry of current foundational topics, such as quantum causal models, Wigner's friend scenarios, superselection rules, quantum reference frames, and debates over the implementability of the quantum switch. Concerning the second item, I will argue that partitions in (finite-dimensional) quantum theory can be naturally pinned down using sub-C*-algebras. Building on simple illustrative examples, I will discuss the often-overlooked existence of non-factor C*-algebras, and how it leads to numerous subtleties -- in particular a generic failure of local tomography. I will introduce a sound framework for quantum partitions that overcomes these challenges; it is the first top-down framework that allows one to consider three or more subsystems. Finally, as a demonstration of this framework's technical power, I will briefly present how its application to quantum causal modelling unlocked the proof that all 1D quantum cellular automata admit causal decompositions.

(This is joint work with Octave Mestoudjian and Pablo Arrighi. This talk is complementary to my Causalworlds 2024 presentation, which will focus on the issue of causal decompositions.)

In this talk, I shall recall the nonlocality transitivity problem, which concerns the possibility of inferring the Bell nonlocality of certain marginals in a multipartite scenario from other given marginals. Then, I will explain how considering this problem has led to a more general class of problems known as resource marginal problems (RMPs). More precisely, RMPs concern the possibility of having a resource-free target subsystem compatible with a given collection of marginal density matrices. We briefly discuss how a resource theory for a collection of marginal density matrices naturally arises from any given RMP and present some general features of such a theory. After that, we focus on a special case of RMPs known as entanglement transitivity problems and explain how our progress on these problems has led to progress on the original nonlocality transitivity problem.

While quantum correlations between two spacelike-separated systems are fully encoded by the bipartite density operator associated with the joint system, what operator encodes quantum correlations across space and time? I will describe the general theory of such "quantum states over time" as well as a canonical example that encodes the expectation values of certain observables measured sequentially in time. The latter extends the theory of pseudo-density matrices to arbitrary dimensions, not necessarily restricted to multi-qubit systems. In addition, quantum states over time admit a natural proposal for a general-purpose quantum Bayes' rule. Our results specialize to many well-studied examples, such as the state-update rule, the two-state vector formalism and weak values, and the Petz recovery map. This talk is based on joint work with James Fullwood and the two papers arXiv:2212.08088 [quant-ph] and arXiv:2405.17555 [quant-ph].
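As a concrete anchor, here is a minimal numerical sketch of one such construction: a symmetrized (Jordan) product of the initial state with a Jamiolkowski-like matrix of the channel, in the spirit of the canonical state over time studied in the cited papers. The function name `state_over_time` and the normalization conventions are ours, and identity-channel dynamics on a single qubit is used purely for illustration.

```python
import numpy as np

# State over time (sketch): rho_AB = (1/2){ rho (x) I , J_E },
# where J_E = sum_ij |i><j| (x) E(|i><j|) encodes the channel E.

def state_over_time(rho, channel, d=2):
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            Eij = np.zeros((d, d), dtype=complex)
            Eij[i, j] = 1.0
            J += np.kron(Eij, channel(Eij))
    P = np.kron(rho, np.eye(d))
    return 0.5 * (P @ J + J @ P)   # symmetrized (Jordan) product

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
R = state_over_time(rho, lambda X: X)             # identity dynamics

# Marginals recover the input state and its time-evolved image ...
rho_t1 = np.trace(R.reshape(2, 2, 2, 2), axis1=1, axis2=3)
rho_t2 = np.trace(R.reshape(2, 2, 2, 2), axis1=0, axis2=2)
# ... while R itself has a negative eigenvalue, a signature of
# temporal rather than spatial correlations.
eigs = np.linalg.eigvalsh(R)
```

The negative eigenvalue is exactly the feature that distinguishes such two-time operators from ordinary bipartite density matrices.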

Bayesian causal structure learning aims to learn a posterior distribution over directed acyclic graphs (DAGs) and the mechanisms that define the relationship between parent and child variables. By taking a Bayesian approach, it is possible to reason about the uncertainty of the causal model. Modelling uncertainty over models is particularly crucial for causal structure learning, since the model can be unidentifiable given only a finite amount of observational data. In this work, we introduce a novel method to jointly learn the structure and mechanisms of the causal model using Variational Bayes, which we call Variational Bayes-DAG-GFlowNet (VBG). We extend the method of Bayesian causal structure learning using GFlowNets to learn not only the posterior distribution over the structure, but also the parameters of a linear-Gaussian model. Our results on simulated data suggest that VBG is competitive with several baselines in modelling the posterior over DAGs and mechanisms, while offering several advantages over existing methods, including the guarantee to sample acyclic graphs and the flexibility to generalize to non-linear causal mechanisms.
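The "guarantee to sample acyclic graphs" can be illustrated independently of GFlowNets: any sampler that proposes edges one at a time and rejects every edge that would close a directed cycle outputs only DAGs. A toy sketch of that mechanism (not VBG's actual GFlowNet sampler; all names here are illustrative):

```python
import random

def creates_cycle(adj, u, v):
    # Would adding edge u -> v close a directed cycle?
    # True iff v already reaches u via existing edges (depth-first search).
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(adj[node])
    return False

def sample_dag(n_nodes, n_steps, seed=0):
    rng = random.Random(seed)
    adj = {i: set() for i in range(n_nodes)}
    for _ in range(n_steps):
        u, v = rng.sample(range(n_nodes), 2)
        if v not in adj[u] and not creates_cycle(adj, u, v):
            adj[u].add(v)   # accepted only if the graph stays acyclic
    return adj
```

Because every intermediate graph is acyclic by construction, the sampler's support is exactly the set of DAGs, with no post-hoc rejection step needed.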

For continuous-variable systems, the negativities in the s-parametrized family of quasi-probability representations on a classical phase space establish a hierarchy of non-classicality measures. The coherent states, by design, display no negativity for any value of -1≤s≤1, meaning that sampling from the quantum probability distribution resulting from any measurement of a coherent state can be classically simulated; this places the coherent states as the most classical states according to this particular choice of phase space.

In this talk, I will describe how to construct s-ordered quasi-probability representations for finite-dimensional quantum systems when the phase space is equipped with more general group symmetries, focusing on the fermionic SO(2n) symmetry. Along the way, I will comment on an obstruction to an analogue of Hudson's theorem (namely, the statement that the only pure states with positive s=0 Wigner functions are the Gaussian states), and on a possible remedy obtained by giving up linearity in the phase-space correspondence.
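The continuous-variable claim above can be checked in a few lines: at s = 0, the Wigner function at the phase-space origin is proportional to the mean photon-number parity, W(0,0) = (2/π) Σₙ (−1)ⁿ⟨n|ρ|n⟩, which is positive for any coherent state but negative for the Fock state |1⟩. A sketch using a truncated Fock expansion (the cutoff is an illustrative choice):

```python
import math

def wigner_origin_coherent(alpha, cutoff=40):
    # Coherent-state photon statistics are Poissonian:
    # |c_n|^2 = e^{-|a|^2} |a|^{2n} / n!; sum the parity series.
    parity = sum((-1) ** n * math.exp(-abs(alpha) ** 2)
                 * abs(alpha) ** (2 * n) / math.factorial(n)
                 for n in range(cutoff))
    return (2 / math.pi) * parity

w_coh = wigner_origin_coherent(1.0)    # positive: (2/pi) e^{-2|alpha|^2}
w_fock1 = (2 / math.pi) * (-1)         # Fock |1> has parity -1: negative
```

The coherent-state value agrees with the closed form (2/π)e^{−2|α|²}, consistent with coherent states showing no negativity at s = 0.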

A Bell scenario can be conceptualized as a "communication" scenario with zero rounds of communication between parties: although each party can receive a system from its environment on which it can implement a measurement, it cannot send out any system to another party. Under this constraint, there is a strict hierarchy of correlation sets, namely, classical, quantum, and non-signalling. However, without any constraint on the number of communication rounds, the parties can realize arbitrary correlations by exchanging only classical systems. We consider a multipartite scenario where the parties can engage in at most a single round of communication, i.e., each party is allowed to receive a system once, implement any local intervention on it, and send out the resulting system once. Taking our cue from Bell nonlocality in the "zero rounds" scenario, we propose a notion of nonclassicality---termed antinomicity---for correlations in scenarios with a single round of communication. As in the zero-rounds case, we establish a strict hierarchy of correlation sets classified by their antinomicity in single-round communication scenarios. Since we do not assume a global causal order between the parties, antinomicity serves as a notion of nonclassicality in the presence of indefinite causal order (as witnessed by causal inequality violations). A key contribution of this work is an explicit antinomicity witness that goes beyond causal inequalities, inspired by a modification of the Guess Your Neighbour's Input (GYNI) game that we term the Guess Your Neighbour's Input or NOT (GYNIN) game. Time permitting, I will speculate on why antinomicity is a strong notion of nonclassicality by interpreting it as an example of fine-tuning in classical models of indefinite causality. This is based on joint work with Ognyan Oreshkov, arXiv:2307.02565.

Information-theoretic insights have proven fruitful in many areas of quantum physics. But can the fundamental dynamics of quantum systems be derived from purely information-theoretic principles, without resorting to Hilbert space structures such as unitary evolution and self-adjoint observables? Here we provide a model where the dynamics originates from a condition of informational non-equilibrium: the deviation of the system's state from a reference state associated with a field of identically prepared systems. Combining this idea with three basic information-theoretic principles, we derive a notion of energy that captures the main features of energy in quantum theory: it is observable, bounded from below, invariant under time evolution, in one-to-one correspondence with the generator of the dynamics, and quantitatively related to the speed of state changes. Our results provide an information-theoretic reconstruction of the Mandelstam-Tamm bound on the speed of quantum evolutions, establishing a bridge between dynamical and information-theoretic notions.
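The Mandelstam-Tamm bound itself is easy to verify numerically within ordinary quantum theory (this is the textbook statement the talk reconstructs, not the talk's information-theoretic derivation): with ħ = 1, the time to evolve to an orthogonal state obeys t ≥ π/(2ΔE), and the state |+⟩ under H = σ_z saturates it.

```python
import numpy as np

# Mandelstam-Tamm, hbar = 1: t_perp >= pi / (2 * dE).
H = np.diag([1.0, -1.0])                              # sigma_z
psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |+>, saturating state

mean_H = (psi0.conj() @ H @ psi0).real
mean_H2 = (psi0.conj() @ H @ H @ psi0).real
dE = np.sqrt(mean_H2 - mean_H ** 2)                   # energy uncertainty

def overlap(t):
    # H is diagonal, so e^{-iHt} is diagonal as well
    U = np.diag(np.exp(-1j * np.diag(H) * t))
    return abs(psi0.conj() @ (U @ psi0))

ts = np.linspace(0.0, 3.0, 30001)
t_perp = next(t for t in ts if overlap(t) < 1e-3)     # first (near-)orthogonal time
bound = np.pi / (2 * dE)
```

For this choice the survival amplitude is cos(t), so orthogonality is first reached at t = π/2 = π/(2ΔE), exactly the bound.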

In this talk, the role of information theory in the description of physical evolutions will be discussed. After defining information quantifiers, their contractivity with respect to physical dynamics will be explained, a requirement that simply encodes the intuition that noisy transformations should lose information. The interplay between the two concepts will be exemplified for Markovian evolutions, showing how Markovianity can be defined in purely information-theoretic terms. Building on this result, we prove our main theorem: all physical maps can be characterized solely in terms of a particular metric on the space of density matrices, the Fisher information. This result should be understood in the context of reconstructions of quantum mechanics, proving once again the key role of information in shaping our description of the world.
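Contractivity is the easiest of these notions to see concretely. The sketch below uses the trace distance, one standard information quantifier (not the Fisher metric that the main theorem singles out), and checks that a depolarizing channel can only shrink it:

```python
import numpy as np

def trace_distance(rho, sigma):
    # Half the sum of the absolute eigenvalues of the (Hermitian) difference.
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

def depolarize(rho, p):
    # Mix the state with white noise: a simple CPTP map.
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

rho = np.array([[1, 0], [0, 0]], dtype=complex)            # |0><0|
sigma = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|

before = trace_distance(rho, sigma)
after = trace_distance(depolarize(rho, 0.5), depolarize(sigma, 0.5))
# contractivity: after <= before (here exactly (1 - p) * before)
```

The depolarizing channel shrinks the difference operator linearly, so the distinguishability of the two states strictly decreases, which is the "noisy transformations lose information" intuition in miniature.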

In the near future, when only a small number of companies and institutions will have access to large-scale quantum computers, it is essential that clients are able to delegate their computations in a secure way, without their data being accessible to the server. The field of blind quantum computation has emerged in recent years to address this issue; however, the majority of work on this topic has so far been restricted to the secure computation of sequences of quantum gates acting on a quantum state. Yet a client capable of performing quantum subroutines may want to conceal not only their quantum states but also the subroutines they perform. In this work, we introduce a framework of higher-order blind quantum computation, in which a client performs a quantum subroutine (for example, a unitary gate) that is transformed in a functional way by a server with more powerful quantum capabilities (described by a higher-order transformation), without the server learning the details of the subroutine performed. As an example, we show how the DQC1 algorithm for estimating the trace of a unitary gate can be implemented securely by a server given only an (extended) black-box description of the unitary gate. Finally, we extend the framework to the case where the details of the server's algorithm are also concealed from the client.
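The DQC1 primitive mentioned above admits a compact exact simulation: with the control qubit in |+⟩ and the target maximally mixed, Pauli expectations on the control after a controlled-U reveal the normalized trace of U. A minimal density-matrix sketch (the function name and conventions are ours, and the matrices are simulated exactly rather than sampled):

```python
import numpy as np

# DQC1-style trace estimation: with these conventions,
# <X> + i <Y> on the control equals Tr(U) / d.

def dqc1_expectations(U):
    d = U.shape[0]
    plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|
    rho = np.kron(plus, np.eye(d) / d)                        # control (x) I/d
    CU = np.block([[np.eye(d), np.zeros((d, d))],
                   [np.zeros((d, d)), U]])                    # controlled-U
    rho = CU @ rho @ CU.conj().T
    X = np.kron(np.array([[0, 1], [1, 0]]), np.eye(d))
    Y = np.kron(np.array([[0, -1j], [1j, 0]]), np.eye(d))
    return np.trace(X @ rho).real, np.trace(Y @ rho).real
```

The key point for the blind setting is that the server only ever applies U as a controlled black box; nothing in the estimation step requires a circuit description of U.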

What is a measurement? This, it turns out, is the most difficult question in physics today. In this talk, I will explain why the measurement problem is important and why all attempts to solve it so far have failed. I will then discuss the obvious solution to the problem that was, unfortunately, discarded half a century ago without ever being seriously considered: Superdeterminism. After addressing some common objections to this idea, I will summarize the existing approaches to develop a theory for it.

Local tomography (or tomographic locality) is the principle that the state of a composite system is determined by the probabilities it assigns to outcomes of experiments performed separately on the component systems. It is well known that complex quantum theory enjoys, and real quantum theory lacks, this feature. This means that a composite of two real quantum systems has additional "global" degrees of freedom. What if we could simply factor these out? In this talk, I'll describe how this can be done, not only for real quantum theory, but for essentially any probabilistic theory. The result is a locally tomographic theory we call the "locally tomographic shadow" of the original. I will also discuss what this shadow theory looks like in the case of real quantum theory. (This is joint work with Howard Barnum and Matthew Graydon.)
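The real-quantum-theory example can be made concrete with a dimension count for two "rebits" (real-Hilbert-space qubits): local observables are real symmetric 2×2 matrices, a 3-dimensional space, yet their tensor products span only 9 of the 10 dimensions of real symmetric 4×4 matrices. The missing global direction is Y⊗Y, with Y the real antisymmetric matrix. A short check of this standard presentation:

```python
import numpy as np
from itertools import product

# Basis of local observables for one rebit: real symmetric 2x2 matrices.
sym_basis = [np.array(M, dtype=float) for M in
             ([[1, 0], [0, 0]], [[0, 0], [0, 1]], [[0, 1], [1, 0]])]
Y = np.array([[0, -1], [1, 0]], dtype=float)   # real antisymmetric

# Products of local observables span only a 9-dimensional space ...
products = [np.kron(A, B).flatten() for A, B in product(sym_basis, sym_basis)]
rank = np.linalg.matrix_rank(np.array(products))   # 9, not 10

# ... while Y (x) Y is symmetric (a legitimate global observable)
# yet Hilbert-Schmidt-orthogonal to every local product.
YY = np.kron(Y, Y)
overlaps = [abs(np.trace(np.kron(A, B) @ YY))
            for A, B in product(sym_basis, sym_basis)]
```

Since Tr[AY] = 0 for any symmetric A, every overlap vanishes; expectation values of Y⊗Y are precisely the "global" data that separate local experiments cannot recover.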

In this talk, I will argue, and practically illustrate, that insights from quantum information, concretely from the tensor network representations of quantum many-body states, can help in devising better privacy-preserving machine learning algorithms. In the first part, I will show that standard neural networks are vulnerable to a type of privacy leak that involves global properties of the training data, and is thus a priori resistant to standard protection mechanisms. In the second, I will show that tensor networks, when used as machine learning architectures, are immune to this vulnerability. The proof of this resilience is based on the existence of canonical forms for such architectures. Given the growing expertise in training tensor networks and the recent interest in tensor-based reformulations of popular machine learning architectures, these results imply that one need not choose between accuracy in prediction and the privacy of the information processed when using machine learning on sensitive data.
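The canonical forms in question can be illustrated with the standard left-canonicalization of a matrix product state (MPS) by successive QR decompositions, which fixes the gauge freedom Aₖ → G⁻¹AₖG in the tensors while leaving the represented state unchanged. A toy sketch in generic MPS terms (not the paper's specific architecture):

```python
import numpy as np

def left_canonicalize(tensors):
    # tensors[k] has shape (chi_left, d, chi_right); sweep left to right,
    # QR-decomposing at each site and pushing the R factor to the next one.
    out, carry = [], np.eye(tensors[0].shape[0])
    for A in tensors:
        chi_l, d, chi_r = A.shape
        B = (carry @ A.reshape(chi_l, d * chi_r)).reshape(-1, chi_r)
        Q, carry = np.linalg.qr(B)
        out.append(Q.reshape(-1, d, Q.shape[1]))
    return out, carry   # final carry absorbs the residual norm/gauge

rng = np.random.default_rng(0)
mps = [rng.normal(size=(1, 2, 3)),
       rng.normal(size=(3, 2, 3)),
       rng.normal(size=(3, 2, 1))]
canon, carry_out = left_canonicalize(mps)
# each canonical tensor satisfies the left-isometry condition: M^T M = I
```

Because the QR sweep removes the gauge redundancy, two parameterizations of the same state map to the same canonical tensors, which is the structural fact the privacy argument exploits.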