I will report my efforts to describe elementary quantum behaviours, specifically single-particle interference and two-particle entanglement, in an accelerating frame.
Entanglement swapping is such a powerful technique for dealing with EPR problems that it can handle inefficient counters and Bell theorems without inequalities, even for two particles. We will examine some of the results and pitfalls.
An experimental realization of our spin-1/2 particle version of the Einstein-Podolsky-Rosen (EPR) experiment will be briefly reviewed. In the proposed experiment, two 199Hg atoms in the ground 1S0 electronic state, each with nuclear spin I=1/2, are generated in an entangled state with total nuclear spin zero. Such a state can be obtained by dissociation of a 199Hg2 molecule (dimer) using a spectroscopically selective stimulated Raman process. From symmetry considerations, the nuclear spin singlet state is guaranteed if the initial 199Hg2 molecule is in a rotational state with an even quantum number. Consequently, a thorough investigation and analysis of the rotational structure of the 199Hg2 molecule is required; results of this analysis will be presented.
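As a hedged sketch of the symmetry argument (our notation, and assuming the usual totally symmetric electronic-vibrational ground state of the dimer, which the abstract does not spell out): the two 199Hg nuclei are identical spin-1/2 fermions, so the total molecular wave function must be antisymmetric under their exchange. Exchange multiplies the rotational part by (-1)^N, so an even rotational quantum number N gives a symmetric spatial part and forces the nuclear spins into the antisymmetric singlet,
\[
% schematic singlet state of the two nuclear spins (standard form, not taken from the talk)
|\Psi\rangle_{\mathrm{spin}}
= \frac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle_{1}|{\downarrow}\rangle_{2}
- |{\downarrow}\rangle_{1}|{\uparrow}\rangle_{2}\bigr),
\qquad I_{\mathrm{tot}} = 0 ,
\]
which is exactly the EPR pair carried away by the two dissociating atoms.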
Feynman was probably correct to say that the only mystery of quantum mechanics is the principle of superposition. Although we may never know which slit a photon has passed through in a Young's double-slit experiment, we do have a corresponding concept in classical electromagnetic theory: the superposition of electromagnetic fields at a local space-time point is a solution of the Maxwell equations. In the case of a joint photo-detection measurement of two photons, however, the superposition involves the addition of two-photon amplitudes, different yet indistinguishable alternatives leading to a click-click joint photo-detection event. There is no counterpart to this concept in classical electromagnetic theory, and the superposition may occur at a distance. It is this two-photon superposition that is responsible for the EPR mysteries concerning reality and causality. This talk will analyze the underlying physics on the basis of several recent experiments.
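A hedged, schematic way of writing the distinction (our notation, not the specific experiments of the talk): the joint detection rate is the modulus squared of a sum of two-photon amplitudes, rather than a sum of single-detector intensities,
\[
% schematic two-photon superposition behind a click-click coincidence (illustrative only)
P_{12} \;\propto\;
\bigl|\, A(a \to D_{1},\, b \to D_{2}) + A(a \to D_{2},\, b \to D_{1}) \,\bigr|^{2},
\]
while classical electromagnetism only licenses superposing fields at one space-time point, E(r,t) = E_a(r,t) + E_b(r,t). The cross term in P_{12} links two detectors that may be far apart; this is the nonlocal two-photon interference at issue.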
According to a widely accepted view, the emergence of macroscopic behavior is related to the loss of quantum mechanical coherence. Opinions on the possible cause of this loss diverge. In the present talk it will be shown how a small, assessable amount of indeterminacy in the structure of space-time may lead to the emergence of macroscopic behavior, in agreement with empirical evidence.
After giving an introduction to the Continuous Spontaneous Localization
(CSL) theory of dynamical wave function collapse, I shall discuss 10 problems of dynamical collapse models, 5 of which were resolved by CSL's advent, and 5 of which have been subsequently attacked with varying success.
If one is worried by the quantum measurement problem, a natural question to ask is: Does the quantum-mechanical description of the world retain its validity when its application leads to superpositions of states which by some reasonable criterion are _macroscopically distinct_? Or rather, does any such superposition automatically get "collapsed", even in the absence of "measurement" by a human observer, into one or other of its branches? Scenarios which predict the latter (for example the GRWP theory) may be denoted generically by the term "macrorealistic".
Even if one believes that QM remains the whole truth at the macrolevel, it is clear that to the extent that environmental decoherence destroys the delicate phase relations characterizing the superposition, the predictions of QM will be indistinguishable experimentally from those of the class of macrorealistic theories (a remark which is often taken, in my opinion quite erroneously, as "solving" the measurement problem). Thus, to
distinguish experimentally between QM and macrorealism one needs a system in which decoherence is low enough that (given that QM is correct) one has a realistic chance of observing _quantum interference of macroscopically distinct states_ ("QIMDS"). Over the last few years, a surprising variety of candidate systems has emerged; however, while all experiments to date have been consistent with the continued validity of QM, none has so far refuted macrorealism outright. In this talk I review the systems in question and discuss the prospects for a truly definitive experiment.
The way we combine operators in quantum theory depends on the causal relationship involved. For spacelike separated spacetime regions we use the tensor product. For immediately sequential regions of spacetime we use the direct product. In the latter case we lose information; that is, we cannot recover the two original operators from their direct product. This is a kind of compression. We will see that such compression is associated with causal adjacency. We will situate this in the context of a much broader framework potentially suitable for developing a theory of quantum gravity.
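A hedged illustration of the two composition rules and of the compression, reading the product for sequential regions as ordinary operator multiplication on a single Hilbert space (our reading and notation, which may differ from the framework of the talk):
\[
% spacelike composition vs. sequential composition (illustrative notation)
\text{spacelike:}\;\; A \otimes B \ \text{on}\ \mathcal{H}_{A} \otimes \mathcal{H}_{B},
\qquad
\text{sequential:}\;\; C = BA \ \text{on}\ \mathcal{H}.
\]
The map (A, B) to BA is many-to-one: for any invertible S, the pair (S^{-1}A, BS) gives the same product, so C alone does not determine the two original operators. This is the loss of information, or compression, associated with causal adjacency.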
The problem of indistinguishability has recently been tackled once again. The question is why indistinguishability, in quantum mechanics but not in classical mechanics, forces a change in statistics. Or, what is able to explain the difference between classical and quantum statistics? The answer given concerns the structure of their state spaces: in the quantum case the measure is discrete, whilst in the classical case it is continuous. Thus the equilibrium measure on classical phase space is continuous, whilst on Hilbert space it is discrete. Put in other words, this difference follows the line pursued for a long time: it refers to the different nature of the elementary particles. Answers of this type completely obscure the probabilistic side of the question. We are able to give, in fully probabilistic terms, a deduction of the equilibrium probability distribution for the elements of a finite abstract system. Specializing this distribution, we reach the equilibrium distributions for classical particles, bosons and fermions. Moreover, we are able to deduce Gentile's parastatistics as well.
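For orientation, a hedged reminder of the equilibrium distributions being referred to, in their standard textbook form (the talk's own probabilistic deduction may be organized differently): writing x = β(ε − μ), the mean occupation of a single-particle level is
\[
% standard mean occupation numbers (textbook forms, not the authors' derivation)
\bar n_{\mathrm{MB}} = e^{-x}, \qquad
\bar n_{\mathrm{BE}} = \frac{1}{e^{x}-1}, \qquad
\bar n_{\mathrm{FD}} = \frac{1}{e^{x}+1},
\]
and Gentile's parastatistics, which allows at most d particles per level, interpolates between the last two:
\[
\bar n_{d} = \frac{1}{e^{x}-1} - \frac{d+1}{e^{(d+1)x}-1},
\]
reducing to the Fermi-Dirac form for d = 1 and to the Bose-Einstein form as d → ∞.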
Taking for granted that the mathematical apparatus for describing probabilities in quantum mechanics is well-understood via work of von Neumann, Lüders, Mackey, and Gleason, we present an overview of different interpretations of probability in quantum mechanics bearing on physics and experiment, with the aim of clarifying the meaning and place of so-called objective interpretations of quantum probability.
The dichotomy objective/subjective is unfortunate, we argue, as we should distinguish two different dimensions integral to the concept of probability. The first concerns the values of probability functions, viz. what the real numbers measure, e.g. relative frequencies of experimental outcomes, or strengths of physical dispositions (objective-1), vs. degrees of belief of idealized agents (subjective-1), etc. But a second dimension is also important, concerning the domain of definition, the events or bearers of probability: what the probabilities are probabilities of. Relative frequencies of what, described how? Strengths of dispositions to do what, described how? Degrees of belief in what? Reminding ourselves of the quantum mechanical phenomenon of incompatible observables, we recall that contradictions are standardly avoided by describing probabilities as pertaining to measurement outcomes rather than possessed properties: thereby, subjective elements are introduced into the very description of the events. (Interpretations qualify as objective-2 if they avoid "bad" words like "measurement" as primitives, in favor of possessed properties or physical interactions; as subjective-2 if such terms are employed in an essential way.)
This leads to a two-by-two matrix of interpretative possibilities. The remainder of our talk consists in filling in the blanks (which the reader is invited to try for him/herself) and providing commentary on the relative advantages and disadvantages, which go to the heart of the problem of interpreting quantum theory. Given our scheme, it turns out that an objective version of Copenhagen makes good sense; this is one locus of propensities, which can be made sense of, we claim, along the lines of pre-hidden-variables Bohm (his 1951 text), not to be confused with Popper. We close by noting a serious deficiency in recent Bayesian approaches to quantum probability (lying in the subjective-1, subjective-2 quadrant), viz. their explanatory impoverishment. But I've already given too many hints.