The normalized-state spaces of finite-dimensional Jordan algebras constitute a relatively narrow class of convex sets that includes the finite-dimensional quantum mechanical and classical state spaces. Several beautiful mathematical characterizations of Jordan state spaces exist, notably Koecher's characterization as the bases of homogeneous self-dual cones, and Alfsen and Shultz's characterization based on the notion of spectral convex sets plus additional axioms. I will review the notion of spectral convex set and the Alfsen-Shultz characterization, and discuss how these mathematical characterizations of Jordan state spaces might be useful in developing accounts of quantum theory based on more operational principles, for example ones concerning information processing. If time permits, I will present joint work with Cozmin Ududec in which we define analogues of multiple-slit experiments in systems described by spectral convex state spaces, and obtain results on Sorkin's notion of higher-level interference in this setting. For example, we show that, like the finite-dimensional quantum systems which are a special case, Jordan state spaces exhibit only lowest-order (I_2 in Sorkin's hierarchy) interference.
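For orientation, a minimal sketch of the two lowest levels of Sorkin's hierarchy, in notation not fixed by the abstract: writing P(AB) for the detection probability with slits A and B open, the second- and third-order interference terms are

I_2(A,B) = P(AB) - P(A) - P(B),
I_3(A,B,C) = P(ABC) - P(AB) - P(AC) - P(BC) + P(A) + P(B) + P(C).

Classical probability theory gives I_2 = 0; quantum theory generically has I_2 ≠ 0 but I_3 = 0, and the result quoted above extends the vanishing of I_3 (and hence of all higher-order terms) to Jordan state spaces.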
The Quantum Bayesianism of Caves, Fuchs and Schack presents a distinctive starting point from which to attack the problem of axiomatising - or reconstructing - quantum theory. However, many have suspected that this starting point is itself already too radical. In this talk I will briefly introduce the position (it will be familiar to most, no doubt) and describe what I take to be its philosophical standpoint. More importantly, I shall seek to defend it against some bad objections, before going on to level some more substantive challenges. The background paper is arXiv:0804.2047.
Quantum Mechanics (QM) is a beautiful, simple mathematical structure---Hilbert spaces and operator algebras---with an unprecedented predictive power in the whole physical domain. However, more than a century after its birth, we still don't have a "principle" from which to derive the mathematical framework. The situation is similar to that of Lorentz transformations before the advent of the relativity principle. The invariance of physical laws under change of reference frame and the existence of a limiting velocity are not just physical principles: they are mandatory operational principles without which one cannot do experimental Physics. And it is a very seductive idea to think that QM could be derived from some other principle of such epistemological kind, one which is either indispensable or crucial in dramatically reducing the experimental complexity. Indeed, a large part of the formal structure of QM is a set of formal tools for describing the process of gathering information in any experiment, independently of the particular physics involved. It is mainly a kind of "information theory", a theory about our knowledge of physical entities rather than about the entities themselves. If we strip off this informational part from the theory, what is left should be a "principle of the quantumness" from which QM can be derived. In my talk I will analyze the consequences of two possible candidates for the principle of quantumness: 1) PFAITH: the existence of a pure bipartite state by which we can calibrate all local tests and prepare all bipartite states by local tests; 2) PURIFY: the existence of a purification for all states. We will consider the two postulates within the general context of probabilistic theories---also called test-theories. Within test-theories we will introduce the notion of a "time-cascade" of tests, which entails the identifications "events=transformations" and "evolution=conditioning", and derive the general matrix-algebra representation of such theories, with particular focus on theories that satisfy the "local discriminability principle". Some of the concepts will be illustrated in specific test-theories, including the usual cases of classical and quantum mechanics, the extended versions of the PR boxes, the so-called "spin factors", and quantum mechanics on a real (instead of complex) Hilbert space. After the brief tutorial on test-theories, I will analyze the consequences of the two candidate postulates. We will see how postulate PFAITH implies the "local observability principle" and the tensor-product structure for the linear spaces of states and effects, along with a remarkable list of additional features that are typically quantum, including purification for some states, the impossibility of bit commitment, and many others. We will see that the postulate is not satisfied by classical mechanics, and that a stronger version of the postulate also excludes theories in which teleportation is not possible, e.g. PR boxes. Finally we will analyze the consequences of postulate PURIFY, and show how it is equivalent to the possibility of dilating any probabilistic transformation on a system to a deterministic invertible transformation on the system interacting with an ancilla. Therefore PURIFY is equivalent to the general principle that "every transformation can in principle be inverted, given sufficient control of the environment".
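As a reminder of what PURIFY and its dilation reading amount to in the familiar quantum special case (this is standard textbook quantum mechanics, not part of the general test-theory argument): every state rho_A of system A admits a pure bipartite state purifying it, and every channel C admits a dilation to a unitary, hence invertible, interaction with an environment E prepared in a fixed pure state,

rho_A = Tr_B [ |Ψ⟩⟨Ψ|_AB ],     C(rho) = Tr_E [ U (rho ⊗ |0⟩⟨0|_E) U† ].

The postulate asserts the analogue of the first property for a general test-theory; the claimed equivalence says that, in such a theory, it holds exactly when every transformation can be dilated in the analogue of the second way.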
Using a simple diagrammatic representation we will see how PURIFY implies general theorems such as: 1) deterministic full teleportation; 2) the fact that inverting a transformation upon an input state (i.e. error correction) is equivalent to the environment and the reference remaining uncorrelated; 3) inverting some transformations by reading the environment; etc. We will see that some non-quantum theories (e.g. QM on real Hilbert spaces) still satisfy PURIFY. Finally I will address the problem of how to prove that a test-theory is quantum. One would need to show that the "effects" of the theory---not just the transformations---also make up a matrix algebra. A way of deriving the "multiplication" of effects is to identify them with atomic events. This can be done by assuming the atomicity of evolution in conjunction with the Choi-Jamiolkowski isomorphism. Suggested readings: 1. arXiv:0807.4383, to appear in "Philosophy of Quantum Information and Entanglement", Eds A. Bokulich and G. Jaeger (Cambridge University Press, Cambridge UK, in press); 2. G. Chiribella, G. M. D'Ariano, and P. Perinotti (in preparation); 3. G. M. D'Ariano, A. Tosini (in preparation).
Recent advances in quantum computation and quantum information theory have led to revived interest in, and cross-fertilisation with, foundational issues of quantum theory. In particular, it has become apparent that quantum theory may be interpreted as but a variant of the classical theory of probability and information. While the two theories may at first sight appear widely different, they actually share a substantial core of common properties; and their divergence can be reduced to a single attribute only, their respective degree of agent-dependency. I propose a mathematical description for this "degree of agent-dependency" and show how assuming different values allows one to derive the classical and the quantum case from their common core. Finally, I explore, and eventually dismiss, the possibility that beyond quantum theory there might be other variants of classical probability theory that are relevant to physics.
The starting point of the reconstruction process is a very simple quantum logical structure on which probability measures (states) and conditional probabilities are defined. This is a generalization of Kolmogorov's measure-theoretic approach to probability theory. In the general framework, the conditional probabilities need neither exist nor be uniquely determined if they exist. Postulating their existence and uniqueness becomes the major step in the reconstruction process. A certain new mathematical structure can then be derived, and examples immediately reveal that probability conditionalization is identical with the Lüders-von Neumann measurement process. Some further postulates bring us to Jordan algebras, and the consideration of composite systems finally shows why these algebras must be the self-adjoint parts of von Neumann algebras, so that they can be represented as linear operators on Hilbert spaces over the complex numbers. This is where the approach goes beyond others that are not able to justify the need for the complex Hilbert space or the Jordan operator algebras. The mathematical structure of quantum mechanics can thus be reconstructed from a few basic probabilistic principles and becomes a non-Boolean extension of classical probability theory. Its link to physics is that probability conditionalization in this structure is identical with the Lüders-von Neumann measurement process.
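For concreteness, in the quantum case the conditionalization in question is the familiar Lüders-von Neumann rule (stated here purely as an illustration, for a density operator rho and projections E, F):

P(F | E) = tr(E rho E F) / tr(rho E),

i.e. conditioning on the event E updates rho to E rho E / tr(rho E), and the conditional probability of a subsequent event F is evaluated in the updated state.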
In this talk we provide four postulates that are more natural than the usual postulates of QT. The postulates require the predefinition of two integers, K and N, for a system. K is the number of probabilities that must be listed to specify the state. N is the maximum number of states that can be distinguished in a single-shot measurement, and consequently log N is the information carrying capacity. The postulates are:
P1 Information: Systems having, or constrained to have, a given information carrying capacity have the same properties.
P2 Composites: For a composite system, AB, we have N_AB = N_A N_B and K_AB = K_A K_B.
P3 Continuity: There exists a continuous reversible transformation between any two pure states.
P4 Simplicity: For each N, K takes the smallest value consistent with the other postulates.
Note that P2 is equivalent to requiring that information carrying capacity be additive and that the state of a composite system can be determined by measurements on the components alone (local tomography is possible). We can prove a reconstruction theorem: the standard formalism of QT (for finite N) follows from these postulates. This includes the properties that quantum states can be represented by density operators on a complex Hilbert space, that evolution is given by completely positive maps (of which unitary evolution is a special case), and that composite systems are formed using the tensor product. We derive the Born rule (or, equivalently, the trace rule) for calculating probabilities. If the single word "continuous" is dropped from P3, the postulates are consistent with both Classical Probability Theory and Quantum Theory. In this talk we will place particular emphasis on laying the operational foundations for such postulates. Then we will provide some highlights of the proof. Finally we will speculate on what needs to be changed for a theory of quantum gravity.
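As a quick sanity check on the counting behind these postulates (the specific values quoted are the familiar ones for classical and quantum systems, not something derived here): classical probability theory has K = N, while quantum theory has K = N^2, since an N x N Hermitian matrix is fixed by N^2 real parameters. Both assignments are consistent with P2:

classical:  K_AB = N_A N_B = K_A K_B;     quantum:  K_AB = (N_A N_B)^2 = N_A^2 N_B^2 = K_A K_B.

P3 rules out the classical case K = N (there is no continuous reversible transformation between distinct classical pure states), and, according to the reconstruction theorem, P4 then selects K = N^2.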
In our approach, rather than aiming to recover the `Hilbert space model' which underpins the orthodox quantum mechanical formalism, we start from a general `pre-operational' framework, and verify how much additional structure we need to be able to describe a range of quantum phenomena. This also enables us to investigate which mathematical models, including more abstract categorical ones, enable one to model quantum theory. Till now, all of our axioms refer only to the particular nature of how compound quantum systems interact, rather than to the particular structure of state spaces. This is in sharp contrast with other approaches of this kind, which aim to recover quantum theory out of a much broader class of theories. A more abstract quantum mechanical model has many other advantages. It elucidates which are the key ingredients that make `the Hilbert space model' work. Since it relies on monoidal categories, it comes with a high-level diagrammatic description (which we think of as `the' mathematical formalism). It moreover removes the dependency on continuous underlying mathematical structures, paving the way for discrete combinatorial models, which might blend better with the other ingredients required for a theory of quantum gravity.
It will be shown that the conventional (i.e. real or complex Hilbert space) model of quantum mechanics can be deduced from the indistinguishability of the simplest types of statistical mixtures. The result does not suffer from the low-dimension exclusion of the quantum logic approach.
In a quantum-Bayesian account of quantum mechanics, the Born Rule cannot be interpreted as a rule for setting measurement-outcome probabilities from an objective quantum state. (A quantum system has potentially as many quantum states as there are agents considering it.) But what then is the role of the rule? In this paper, we argue that it should be seen as an empirical addition to Bayesian reasoning itself. Specifically, we show how to view the Born Rule as a normative rule in addition to usual Dutch-book coherence. It is a rule that tells one how to assign probabilities to the outcomes of various intended measurements on a physical system, but explicitly in terms of prior probabilities for, and conditional probabilities consequent upon, the imagined outcomes of a special counterfactual reference measurement. This interpretation is seen particularly clearly by representing quantum states in terms of probabilities for the outcomes of a fixed, fiducial symmetric informationally complete (SIC) measurement. We further explore the extent to which the general form of the new normative rule implies the full state-space structure of quantum mechanics. It seems to go some way.
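In the SIC representation the rule takes a particularly compact form. As a sketch under standard assumptions from this line of work (dimension d, reference measurement a SIC with d^2 outcomes, intended measurement a von Neumann measurement): if p(i) are the agent's prior probabilities for the SIC outcomes and r(j|i) the conditional probabilities for outcome j of the intended measurement given SIC outcome i, the Born Rule reads

q(j) = (d+1) Σ_i p(i) r(j|i) - 1,

whereas the classical law of total probability would give q(j) = Σ_i p(i) r(j|i). The empirical addition is precisely this deviation from the classical rule.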
I will discuss a set of strong, but probabilistically intelligible, axioms from which one can {\em almost} derive the apparatus of finite-dimensional quantum theory. These require that systems appear completely classical when restricted to a single measurement; that different measurements, and likewise different pure states, be equivalent up to the action of a compact group of symmetries; and that every state be the marginal of a bipartite state perfectly correlating two measurements. This much yields a mathematical representation of measurements, states and symmetries that is already very suggestive of quantum mechanics. One final postulate (a simple minimization principle, still in need of a clear interpretation) forces the theory's state space to be that of a formally real Jordan algebra.
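For reference (standard definitions, not part of the axioms themselves): a Jordan algebra is a real commutative algebra whose product a ∘ b satisfies the Jordan identity, and it is formally real when sums of squares vanish only trivially:

(a^2 ∘ b) ∘ a = a^2 ∘ (b ∘ a),     a_1^2 + ... + a_n^2 = 0  ⟹  a_1 = ... = a_n = 0.

The motivating example is the set of self-adjoint matrices with a ∘ b = (ab + ba)/2, which is how the quantum state spaces arise as a special case.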
We review situations under which standard quantum adiabatic conditions fail. We reformulate the problem of adiabatic evolution as the problem of Hamiltonian eigenpath traversal, and give cost bounds in terms of the length of the eigenpath and the minimum energy gap of the Hamiltonians. We introduce a randomized evolution method that can be used to traverse the eigenpath and show that a standard adiabatic condition is recovered. We then describe more efficient methods for the same task and show that their implementation complexity is close to optimal.
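For context, and only in one common convention (the parametrization s ∈ [0,1] with total evolution time T is our choice, not fixed by the abstract): the eigenpath length and the traditional adiabatic criterion referred to above are, roughly,

L = ∫_0^1 || ∂_s |φ(s)⟩ || ds,     T >> max_s || ∂_s H(s) || / Δ(s)^2,

where |φ(s)⟩ is the eigenstate being followed, H(s) the interpolating Hamiltonian, and Δ(s) the relevant spectral gap. The starting point of the talk is that criteria of the latter form can fail, while eigenpath-traversal methods give guarantees stated directly in terms of L and the minimum gap.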