After reframing the question of the status of the quantum state in terms of J.S. Bell's "beables", I will sketch out a new theory which -- though nonlocal in the sense required by Bell's theorem -- posits exclusively local beables. This is a theory, in particular, in which the quantum mechanical wave function plays no role whatsoever -- i.e., a theory according to which nothing corresponding to the wave function actually exists. It provides, therefore, a concrete example of how the wave function might be regarded as (at best) "epistemic".
Bell's theorem and experimental tests of his inequality show that it is impossible to explain all of the predictions of quantum mechanics with a theory satisfying the basic concepts of locality and realism; which of the two (if not both) is violated remains an open question. As it seems impossible to resolve this question experimentally, one can ask how plausible realism -- the idea that external properties of systems exist prior to and independent of observation -- is, by considering the resources a realistic theory must consume and the non-local features it must have. I will construct an explicit realistic model in which the number of hidden-variable states scales polynomially with the number of possible quantum measurements. In the limit of a large number of measurements, the model recovers the result of Montina that no hidden-variable theory agreeing with quantum predictions can use fewer hidden-variable states than the straightforward model in which every quantum state is associated with one such hidden state. Thus, for any given system size, realistic theories cannot describe nature more efficiently than quantum theory itself. I will then turn to the problem of "non-locality" in realistic theories, showing that every such theory that agrees with quantum predictions allows superluminal signaling at the level of the hidden-variable states.
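As a rough illustration of how such polynomial scaling can arise -- a minimal sketch of one possible construction, not necessarily the model presented in the talk -- consider a single qubit restricted to a finite set of N projective measurements along directions m_1, ..., m_N, with deterministic Kochen-Specker-type responses: the hidden variable is a unit vector \lambda on the Bloch sphere, and the measurement along m_i returns +1 exactly when m_i \cdot \lambda \geq 0. The N great circles m_i \cdot \lambda = 0 cut the sphere into at most

    N(N-1) + 2 = O(N^2)

cells, and the outcome of every one of the N measurements is constant on each cell. Coarse-graining the sphere over these cells (each cell inheriting the weight it carries under the usual Kochen-Specker preparation distribution) therefore gives a hidden-variable model with O(N^2) states that reproduces the quantum statistics of those N measurements. As N grows the cells shrink towards points, and one is driven back to a continuum of hidden-variable states, consistent with Montina's result.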
All known hidden-variable theories that completely reproduce all quantum predictions share the feature that they add some information to the quantum state "psi". That is, if one knew the "state of reality" given by the hidden variable(s) "lambda", then one could infer the quantum state -- the hidden variables are additional to the quantum state. However, for the case of a single 2-dimensional quantum system, Kochen and Specker gave a model which does not have this feature: the non-orthogonality of two quantum states is manifested as overlapping probability distributions over the hidden variables, and the model could be termed "psi-epistemic". A natural question is whether a similar model is possible for higher-dimensional systems. At the time of writing this abstract, I have no clue. I will talk about various constraints on such theories (in particular on how they manifest contextuality) and I will present some examples of failed attempts to construct such models for a 3-dimensional system. I will also discuss a very artificial tweaking of Bell's original hidden-variable model which renders it psi-epistemic for some (though not all) of the corresponding quantum states.
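For concreteness, the Kochen-Specker qubit model can be written out as follows (standard construction; the notation is mine): the hidden variable is a unit vector \lambda on the Bloch sphere, and preparing the pure state with Bloch vector n yields the distribution

    \mu_n(\lambda) = \frac{1}{\pi}\,(n \cdot \lambda)\,\Theta(n \cdot \lambda),

supported on the hemisphere around n, while a measurement along direction m returns +1 deterministically whenever m \cdot \lambda \geq 0. Averaging the response over \mu_n reproduces the Born rule, P(+1|m,n) = \cos^2(\theta/2) with \cos\theta = m \cdot n, and any two non-orthogonal states have overlapping distributions -- which is precisely the sense in which the model is psi-epistemic.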
The subject of this conference is the Quantum State --- what the hell it is. A central issue is whether quantum states describe reality (the ontic view) or an agent's knowledge of reality (the epistemic view). Advocates of the epistemic view maintain that many quantum puzzles and conundra are artifacts of an inappropriate reification of strictly epistemic concepts. To provide a broader context for such considerations, I argue that even in classical physics we have got into major trouble by inappropriately conferring physical reality on the abstractions we have used to organize what we know.
The normalized-state spaces of finite-dimensional Jordan algebras constitute a relatively narrow class of convex sets that includes the finite-dimensional quantum mechanical and classical state spaces. Several beautiful mathematical characterizations of Jordan state spaces exist, notably Koecher's characterization as the bases of homogeneous self-dual cones, and Alfsen and Shultz's characterization based on the notion of spectral convex sets plus additional axioms. I will review the notion of spectral convex set and the Alfsen-Shultz characterization, and discuss how these mathematical characterizations of Jordan state spaces might be useful in developing accounts of quantum theory based on more operational principles, for example ones concerning information processing. If time permits, I will present joint work with Cozmin Ududec in which we define analogues of multiple-slit experiments in systems described by spectral convex state spaces, and obtain results on Sorkin's notion of higher-level interference in this setting. For example, we show that, like the finite-dimensional quantum systems which are a special case, Jordan state spaces exhibit only lowest-order (I_2 in Sorkin's hierarchy) interference.
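For reference, the lowest levels of Sorkin's hierarchy are defined through multi-slit interference terms (standard definitions, stated here for two and three slits): writing P_{AB} for the detection probability with slits A and B open,

    I_2(A,B) = P_{AB} - P_A - P_B,
    I_3(A,B,C) = P_{ABC} - P_{AB} - P_{AC} - P_{BC} + P_A + P_B + P_C.

Classical probability theory gives I_2 = 0, while quantum theory has I_2 \neq 0 in general but I_3 = 0 identically; the result quoted above places Jordan state spaces in the same class, with non-trivial I_2 but all higher-order interference terms vanishing.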
The Quantum Bayesianism of Caves, Fuchs and Schack presents a distinctive starting point from which to attack the problem of axiomatising -- or re-constructing -- quantum theory. However, many have worried that this starting point is itself already too radical. In this talk I will briefly introduce the position (it will be familiar to most, no doubt) and describe what I take to be its philosophical standpoint. More importantly, I shall seek to defend it from some bad objections, before going on to level some more substantive challenges. The background paper is arXiv:0804.2047.
Quantum Mechanics (QM) is a beautifully simple mathematical structure---Hilbert spaces and operator algebras---with unprecedented predictive power over the whole physical domain. However, more than a century after its birth, we still don't have a "principle" from which to derive the mathematical framework. The situation is similar to that of the Lorentz transformations before the advent of the relativity principle. The invariance of physical laws with respect to the reference system and the existence of a limiting velocity are not just physical principles: they are mandatory operational principles without which one cannot do experimental Physics. It is a very seductive idea to think that QM could be derived from some other principle of this epistemological kind, one which is either indispensable or crucial in dramatically reducing experimental complexity. Indeed, a large part of the formal structure of QM is a set of formal tools for describing the process of gathering information in any experiment, independently of the particular physics involved. It is mainly a kind of "information theory", a theory about our knowledge of physical entities rather than about the entities themselves. If we strip this informational part from the theory, what is left should be a "principle of the quantumness" from which QM can be derived. In my talk I will analyze the consequences of two possible candidates for the principle of quantumness: 1) PFAITH: the existence of a pure bipartite state by which we can calibrate all local tests and prepare all bipartite states by local tests; 2) PURIFY: the existence of a purification for all states. We will consider the two postulates within the general context of probabilistic theories---also called test-theories. Within test-theories we will introduce the notion of a "time-cascade" of tests, which entails the identifications "events = transformations" and "evolution = conditioning", and derive the general matrix-algebra representation of such theories, with particular focus on theories that satisfy the "local discriminability principle". Some of the concepts will be illustrated in specific test-theories, including the usual cases of classical and quantum mechanics, extended versions of the PR boxes, the so-called "spin-factors", and quantum mechanics on real (instead of complex) Hilbert spaces. After this brief tutorial on test-theories, I will analyze the consequences of the two candidate postulates. We will see how postulate PFAITH implies the "local observability principle" and the tensor-product structure for the linear spaces of states and effects, along with a remarkable list of additional features that are typically quantum, including purification for some states, the impossibility of bit commitment, and many others. We will see that the postulate is not satisfied by classical mechanics, and that a stronger version of the postulate also excludes theories in which teleportation is impossible, e.g. PR-boxes. Finally, we will analyze the consequences of postulate PURIFY and show that it is equivalent to the possibility of dilating any probabilistic transformation on a system to a deterministic invertible transformation on the system interacting with an ancilla. PURIFY is therefore equivalent to the general principle that "every transformation can in principle be inverted, given sufficient control of the environment".
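In the quantum case the content of PURIFY reduces to familiar statements, quoted here only as a concrete anchor rather than in the general test-theory formulation: every state \rho_A admits a pure state |\Psi\rangle_{AB}, essentially unique up to reversible transformations on B, with

    \mathrm{Tr}_B\, |\Psi\rangle\langle\Psi| = \rho_A,

and, at the level of transformations, every channel \mathcal{E} on A admits a unitary dilation

    \mathcal{E}(\rho) = \mathrm{Tr}_E\!\left[ U\, (\rho \otimes |0\rangle\langle 0|_E)\, U^\dagger \right],

which is the sense in which every transformation can in principle be inverted given sufficient control of the environment.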
Using a simple diagrammatic representation, we will see how PURIFY implies general theorems such as: 1) deterministic full teleportation; 2) the equivalence between inverting a transformation upon an input state (i.e. error correction) and the environment remaining uncorrelated with the reference; 3) the possibility of inverting some transformations by reading the environment; etc. We will see that some non-quantum theories (e.g. QM on real Hilbert spaces) still satisfy PURIFY. Finally, I will address the problem of how to prove that a test-theory is quantum. One would need to show that the "effects" of the theory---not just the transformations---also form a matrix algebra. A way of deriving the "multiplication" of effects is to identify them with atomic events. This can be done by assuming the atomicity of evolution in conjunction with the Choi-Jamiolkowski isomorphism. Suggested readings: 1. arXiv:0807.4383, to appear in "Philosophy of Quantum Information and Entanglement", Eds. A. Bokulich and G. Jaeger (Cambridge University Press, Cambridge UK, in press); 2. G. Chiribella, G. M. D'Ariano, and P. Perinotti (in preparation); 3. G. M. D'Ariano and A. Tosini (in preparation).
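For orientation, the Choi-Jamiolkowski isomorphism invoked above is, in the quantum case, the standard correspondence

    J(\mathcal{E}) = (\mathcal{E} \otimes I)(|\Omega\rangle\langle\Omega|), \qquad |\Omega\rangle = \sum_i |i\rangle|i\rangle,

between maps \mathcal{E} on a d-dimensional system and operators on two copies of it: \mathcal{E} is completely positive iff J(\mathcal{E}) \geq 0, and trace-preserving iff the partial trace of J(\mathcal{E}) over the output factor equals the identity on the input copy.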
Recent advances in quantum computation and quantum information theory have led to revived interest in, and cross-fertilisation with, foundational issues of quantum theory. In particular, it has become apparent that quantum theory may be interpreted as but a variant of the classical theory of probability and information. While the two theories may at first sight appear widely different, they actually share a substantial core of common properties; and their divergence can be reduced to a single attribute only, their respective degree of agent-dependency. I propose a mathematical description for this "degree of agent-dependency" and show how assuming different values allows one to derive the classical and the quantum case from their common core. Finally, I explore -- and eventually dismiss -- the possibility that beyond quantum theory there might be other variants of classical probability theory that are relevant to physics.