Collapse models are one of the most promising attempts to overcome the measurement problem of quantum mechanics: they describe, within a single framework, both the quantum properties of microscopic systems and the classical properties of macroscopic objects, and in particular they explain why measurements always have definite outcomes, distributed according to the Born probability rule. We will discuss some recent developments in this field: i) we will show how it is possible to formulate collapse models in such a way that the mean energy of a physical system does not increase indefinitely, a typical feature of the models first proposed in the literature; ii) we will discuss recent experiments aiming at testing the validity of the superposition principle, and thus of collapse models, at the mesoscopic level.
We will postulate a novel notion of probability; this will involve introducing an extra axiom of probability that seems natural from a Bayesian perspective. We will then provide an analogue of Gleason's theorem for these probabilities. We will also discuss why this approach may be useful for generalizations of quantum theory such as quantum gravity theories; this will involve discussing an analogy between Bayesian approaches and relational approaches.
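For reference, the standard theorem whose analogue is developed here can be stated as follows (in our notation, a sketch of the classical result rather than of the abstract's generalization):

$$
\text{If } \dim\mathcal{H} \ge 3, \text{ every map } p \text{ from projections on } \mathcal{H} \text{ to } [0,1] \text{ with } p(\mathbb{1}) = 1 \text{ and } p\Big(\sum_i P_i\Big) = \sum_i p(P_i)
$$
for every countable family of mutually orthogonal projections $\{P_i\}$ is of the form
$$
p(P) = \mathrm{Tr}(\rho P)
$$
for a unique density operator $\rho$. The Bayesian extra axiom mentioned above would then constrain which assignments $p$ are admissible in the generalized setting.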
The problem of associating beables (hidden variables) to QFT, in the spirit of what Bohm did for nonrelativistic QM, is not trivial. In 1984, John Bell suggested a way of solving the problem, according to which the beables are the positions of fermions, in a discretized version of QFT, and obey a stochastic evolution that reproduces all predictions of QFT. It will be shown that, in the continuum limit, the Bell model becomes deterministic and is related to the choice of the charge density as a beable. Moreover, the charge superselection rule is a consequence of the Bell model. The non-relativistic limit and the derivation of Bohm's first-quantized interpretation in this limit are also studied. I will also consider whether the Bell model can be applied to bosons.
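For context, the stochastic evolution Bell proposed prescribes jump rates between fermion-number configurations; in the notation of later formalizations of Bell-type models (our paraphrase, not necessarily Bell's exact expression), the rate for a jump from configuration $q$ to $q'$ is

$$
\sigma(q' \mid q) \;=\; \frac{\Big[\tfrac{2}{\hbar}\,\mathrm{Im}\,\langle\Psi|\, P_{q'}\, H\, P_{q}\,|\Psi\rangle\Big]^{+}}{\langle\Psi|\, P_{q}\,|\Psi\rangle},
$$

where $P_q$ projects onto the subspace of states with fermion configuration $q$, $H$ is the Hamiltonian, and $[x]^{+} = \max(x,0)$. With these rates the configuration remains $|\Psi|^2$-distributed at all times (equivariance), which is what allows the model to simulate the predictions of QFT.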
Natural critical phenomena are characterized by laminar periods separated by events where bursts of activity take place, and by the interrelated self-similarity of space-time scales and of event sizes. Earthquakes are one example: in this case a new approach to quantifying correlations between events reveals new phenomenology. By linking correlated earthquakes one creates a scale-free network of events, which can have applications in hazard assessment. Solar flares are another example of a critical phenomenon, where event sizes and time scales are part of a single self-similar scenario: when time is rescaled by the rate of events with intensity greater than an intensity threshold, the waiting-time distributions collapse onto scaling functions that are independent of the threshold. The concept of self-organized criticality (SOC) is suitable for describing critical phenomena, but we highlight the failure of most of the classical models of SOC (usually called sandpiles) to fully capture the space-time complexity of real systems. To fix this shortcoming, we put forward a strategy that gives good results when applied to the simplest sandpile models.
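The rescaling step described above can be sketched as follows. This is a minimal illustration on a synthetic catalog (a stationary Poisson process with a hypothetical power-law size distribution), not the authors' analysis pipeline: for such a catalog the rescaled waiting-time distributions collapse onto $e^{-x}$ for every threshold, whereas real flare catalogs collapse onto a non-trivial scaling function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic catalog: Poisson event times with power-law intensities
# (both are illustrative assumptions, not real flare data).
n = 200_000
times = np.cumsum(rng.exponential(1.0, n))
intensities = rng.pareto(1.5, n) + 1.0

def rescaled_waiting_times(times, intensities, threshold):
    """Waiting times between events above `threshold`,
    rescaled by the rate of those events (dimensionless)."""
    t = times[intensities > threshold]
    waits = np.diff(t)
    rate = len(t) / (times[-1] - times[0])
    return waits * rate

# Rescaled distributions should be threshold-independent:
# here each has mean ~1 and (for Poisson input) an exp(-x) shape.
for s in (2.0, 5.0, 20.0):
    w = rescaled_waiting_times(times, intensities, s)
    print(f"threshold {s:5.1f}: n = {len(w):6d}, mean = {w.mean():.3f}")
```

The collapse is checked in practice by plotting the rescaled histograms for several thresholds on common axes and verifying that they fall on a single curve.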