This plenary session examines ways to manage energy demand and advance
technological ideas that may improve efficiency at every stage of the
energy system. Speakers include: Linda Nazar, Jillian Buriak, Alán
Aspuru-Guzik, and Cathy Foley
What happens when we run out of cheap energy and what lessons are we
learning along the way? Energy experts participate in a moderated
discussion presented live on TVO’s The Agenda with Steve Paikin. Speakers include: Vaclav Smil, Zoë Caron, Marlo Raynolds, Robin Batterham, and Alán Aspuru-Guzik
Energy transitions: a future without fossil energies is desirable,
and it is eventually inevitable, but the road from today's
overwhelmingly fossil-fueled civilization to a new global energy system
based on efficient conversions of renewable flows will be neither fast
nor cheap.
Distinguished Professor and author Vaclav Smil explores the technological
transitions of past, present and future that are critical to
understanding how to shift to a low-carbon future.
This plenary session examines today’s emerging energy technologies as
well as the challenges associated with meeting global demand as we move
toward an increasingly electrified energy system. Speakers include: Jay
Apt, Yacine Kadi, Craig Dunn, and Greg Naterer
If we are to advance technological systems for a low carbon and
electrified future, how do we measure progress? Through animated videos
and a panel discussion with energy experts, this kick-off session
introduces some of the technological challenges and implementation
hurdles to be overcome if we are to meet our future electricity needs.
Speakers include: Barry Brook, Walt Patterson, Jatin Nathwani, and Robin Batterham
His Excellency the Right Honourable David Johnston, Governor General of
Canada, officially launches the Equinox Summit: Energy 2030 with a
challenge for Summit participants, as well as those across the globe, to
inspire innovation through intense collaboration and explore tools and
strategies to lessen our impact on the Earth.
We address the problem of testing the dimensionality of classical and quantum systems in a “black-box” scenario. Imagine two uncharacterized devices. The first one allows an experimentalist to prepare a physical system in various ways. The second one allows the experimentalist to perform some measurement on the system. After collecting enough statistics, the experimentalist obtains a “data table”, featuring the probability distribution of the measurement outcomes for each choice of preparation (of the system) and of measurement. Here, we develop a general formalism to assess the minimal dimensionality of classical and quantum systems necessary to reproduce a given data table. To illustrate these ideas, we provide simple examples of classical and quantum “dimension witnesses”. In general, quantum systems are more economical than classical ones in terms of dimensionality, in the sense that there exist data tables obtainable from quantum systems of dimension d which can only be generated from classical systems of dimension strictly greater than d. By drawing connections to communication complexity, one can find data tables for which this classical/quantum separation is dramatic. Finally, these ideas can also be used to demonstrate the security of one-way QKD in a semi-device-independent scenario, in which the devices are uncharacterized but only assumed to produce quantum systems of a given dimension.
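As an illustration of the black-box scenario, the following minimal Python sketch (names and parameter choices are my own, not from the talk) builds the data table p(b|x,y) for the standard 2-to-1 quantum random access code: four qubit preparations and two projective measurements. The table's average success probability for recovering either encoded bit is cos^2(pi/8) ~ 0.854, which is known to exceed the 0.75 achievable with a classical bit, so this table already acts as a dimension witness separating quantum from classical two-level systems.

import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

# Preparations: encode two bits (b0, b1) as a qubit whose Bloch vector
# points along ((-1)**b1, 0, (-1)**b0) / sqrt(2).
def prep(b0, b1):
    x = (-1) ** b1 / np.sqrt(2)
    z = (-1) ** b0 / np.sqrt(2)
    return (I2 + x * X + z * Z) / 2  # density matrix

# Measurements: y = 0 reads b0 (Z basis), y = 1 reads b1 (X basis).
def projector(y, b):
    M = Z if y == 0 else X
    return (I2 + (-1) ** b * M) / 2  # projector onto outcome b

# Data table p(b | preparation x, measurement y), averaged success.
success = 0.0
for b0 in (0, 1):
    for b1 in (0, 1):
        rho = prep(b0, b1)
        for y in (0, 1):
            target = b0 if y == 0 else b1
            success += np.trace(rho @ projector(y, target)).real / 8

print(f"average success: {success:.4f}")  # ~0.8536 = cos^2(pi/8) > 0.75 classical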
A seminal work by Cleve, Høyer, Toner and Watrous (quant-ph/0404076) proposed a close connection between quantum nonlocality and computational complexity theory by considering nonlocal games and multi-prover interactive proof systems with entangled provers. It opened up the whole area of study of the computational nature of nonlocality. Since then, understanding nonlocality has been one of the major goals in computational complexity theory in the quantum setting. This talk gives a survey of this exciting area.
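For concreteness, here is a short Python sketch (my own illustration, not part of the talk) evaluating the canonical nonlocal game, CHSH, where the players win if a XOR b = x AND y: the optimal entangled strategy wins with probability 1/2 + sqrt(2)/4 ~ 0.854, beating the classical bound of 0.75. This gap is the kind of quantum advantage that drives the connection between nonlocality and interactive proofs.

import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Maximally entangled state |phi+> = (|00> + |11>) / sqrt(2).
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# +/-1-valued observable at angle theta in the X-Z plane of the Bloch sphere.
def obs(theta):
    return np.cos(theta) * Z + np.sin(theta) * X

alice = {0: obs(0), 1: obs(np.pi / 2)}         # optimal Alice settings
bob = {0: obs(np.pi / 4), 1: obs(-np.pi / 4)}  # optimal Bob settings

win = 0.0
for x in (0, 1):
    for y in (0, 1):
        corr = (phi.conj() @ np.kron(alice[x], bob[y]) @ phi).real
        # P(a xor b = x*y) = (1 + (-1)**(x*y) * <A_x B_y>) / 2, uniform inputs
        win += (1 + (-1) ** (x * y) * corr) / 2 / 4

print(f"quantum CHSH winning probability: {win:.4f}")  # ~0.8536 vs 0.75 classical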
In this talk, I'll survey various "foils" of BQP (Bounded-Error Quantum Polynomial-Time) that have been proposed: that is, changes to the quantum model of computation that make it either more or less powerful. Possible topics include: postselected quantum computing, quantum computing with a nonlinear Schrödinger equation, quantum computing with non-unitary linear transformations, quantum computing with hidden variables, linear-optical quantum computing, quantum computing with restricted gate sets, quantum computing with separable mixed states, quantum computing over finite fields, and more depending on audience interest.
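As a taste of the first foil: postselection means conditioning on a measurement outcome, which renormalizes the state. This nonlinear step boosts arbitrarily small amplitudes and is the source of the power of postselected quantum computing (the class PostBQP is known to equal PP). A minimal numpy sketch of the renormalization step, purely illustrative and not a PostBQP algorithm:

import numpy as np

# Two-qubit state: a tiny amplitude on the ancilla-=-1 branch carries |psi1>.
eps = 1e-3
psi0 = np.array([1, 0], dtype=complex)               # branch we will discard
psi1 = np.array([1, 1], dtype=complex) / np.sqrt(2)  # branch we will keep
state = np.concatenate([np.sqrt(1 - eps**2) * psi0, eps * psi1])

# Postselect the first qubit on outcome 1: project, then renormalize.
projected = state[2:]  # the ancilla = 1 block of the state vector
post = projected / np.linalg.norm(projected)

print(post)  # equals psi1 exactly, no matter how small eps is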
Operational theories [1], defined in terms of the actions and observations of an experimenter, have been extremely successful as foils to quantum mechanics, providing a generic framework in which families of theories may be compared and classified. One area of particular interest has been the non-classical correlations (often referred to as non-locality) which can arise in quantum (and generalised) theories when measurements are space-like separated. In the context of non-locality, one usually considers the correlations in separated measurements on isolated systems. A similar setting arises in quantum computation theory, in measurement-based quantum computation, a model of computation equal in power to the standard circuit model. There, measurements are made on isolated non-interacting quantum systems, and the non-classical correlations which arise embody (in some loose sense) the mechanism via which the computation is executed. These measurements are adaptive, meaning that bases are chosen depending upon the outcomes of prior measurements, but apart from this, the setting is essentially identical to a multi-party Bell non-locality experiment (e.g. [2]).
In this talk I will review some recent work [3] in which Bell-type correlations are studied from the perspective of computation, drawing parallels with measurement-based quantum computation. In particular, I shall give examples of results [3] which appear naturally in this setting while being not so self-evident in more conventional approaches (a worked GHZ example is sketched after the references below). Finally, I shall discuss approaches to, and challenges in, developing non-trivial models of correlation-based quantum computation in general operational theories.
[1] See e.g. H. Barnum, J. Barrett, M. Leifer and A. Wilce, Phys. Rev. Lett. 99, 240501 (2007).
[2] See e.g. R. F. Werner and M. M. Wolf, Phys. Rev. A 64, 032112 (2001); M. Żukowski and Č. Brukner, Phys. Rev. Lett. 88, 210401 (2002).
[3] M. J. Hoban and D. E. Browne, http://arxiv.org/abs/1102.1438; M. J. Hoban et al., http://arxiv.org/abs/1009.5213; J. Anders and D. E. Browne, http://arxiv.org/abs/0805.1002.
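As a concrete example of this computational reading of correlations, the following Python sketch (my own, with conventions chosen for brevity) verifies the GHZ correlations underlying the Anders-Browne result cited in [3]: choosing X or Y measurements according to input bits (i, j) makes the product of the three +/-1 outcomes deterministic, and decoding that product yields OR(i, j), equivalently NAND up to relabelling, a nonlinear function that classical correlations assisted only by XOR side-processing cannot supply.

import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# GHZ state (|000> + |111>) / sqrt(2).
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def product_of_outcomes(i, j):
    # Each party measures Y on input 1 and X on input 0; party 3 gets i xor j.
    settings = [Y if i else X, Y if j else X, Y if i ^ j else X]
    op = reduce(np.kron, settings)
    return (ghz.conj() @ op @ ghz).real  # GHZ is an eigenstate: exactly +/-1

for i in (0, 1):
    for j in (0, 1):
        p = product_of_outcomes(i, j)
        bit = int(round((1 - p) / 2))  # map eigenvalue +1 -> 0, -1 -> 1
        print(f"inputs ({i},{j}): product {p:+.0f}, decoded bit {bit}")
# prints 0, 1, 1, 1 — i.e. OR(i, j)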
David Deutsch re-formulated the Church-Turing thesis as a physical principle, asserting that "every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means". This principle can be regarded as a new theoretical paradigm, whereby the whole of physics emerges from a quantum computation. But for a theory to be a good one, it must explain a large class of phenomena on the basis of a few general principles. Taking as a general principle the topological homogeneity of the computational network, with graph-dimension equal to the space-time dimension, corresponds to replacing quantum field theory (QFT) with a numerable set of quantum systems in local interaction. This means regarding QFT as a kind of Fermi-scale "thermodynamic" limit of a deeper Planck-scale theory, with the quantum field replaced by a giant quantum computer. In the talk, I will illustrate mechanisms of emergence of physics from the quantum computation in 1+1 dimensions. We will see that the Dirac equation is just the equation describing the free flow of information, leading to an informational definition of inertial mass and of the Planck constant. I will then illustrate the mechanism by which Minkowskian space-time emerges from the computation, how the field Hamiltonian comes out, and how quantum fields are actually eliminated in favor of qubits. We will see that the digital nature of the field leads to an in-principle observable consequence in terms of a mass-dependent refraction index of the vacuum, with the information becoming stationary at the Planck mass. Such a refraction index of the vacuum is a general phenomenon due to unitarity in the discrete, and can also help in solving the speed-of-light isotropy conundrum posed by digitalization of the field in more than one space dimension. We will also see how the quantum nature of the processed information plays a crucial role in other practical informational issues, e.g. the possibility of driving the information in different directions without the need to increase the complexity of the circuit. Finally, I will briefly comment on gravity as emergent from the quantum computation, and on the connection with the Verlinde-Jacobson approach.
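The mass-dependent refraction index mentioned above can be made quantitative under a standard assumption: the 1+1-dimensional Dirac quantum walk is commonly quoted with dispersion relation cos w(k) = sqrt(1 - m^2) cos k, where the mass m lies in [0, 1] in Planck units. The Python sketch below (my own numerical illustration, not the speaker's code) computes the maximal group velocity dw/dk as a function of m: it falls below the massless speed 1 and vanishes as m -> 1, i.e. information becomes stationary at the Planck mass.

import numpy as np

def max_group_velocity(m, n_k=20001):
    """Maximal group velocity of a 1+1-D Dirac walk with mass m in [0, 1]."""
    k = np.linspace(-np.pi, np.pi, n_k)
    omega = np.arccos(np.sqrt(1 - m**2) * np.cos(k))  # cos w = sqrt(1-m^2) cos k
    return np.max(np.abs(np.gradient(omega, k)))

for m in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(f"m = {m:.1f}: max group velocity {max_group_velocity(m):.4f}")
# Analytically the maximum is sqrt(1 - m^2), attained at k = pi/2: velocity 1
# for m = 0 (free flow of information), tending to 0 as m -> 1 (Planck mass).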
A central question in our understanding of the physical world is how our knowledge of the whole relates to our knowledge of the individual parts. One aspect of this question is the following: to what extent does ignorance about a whole preclude knowledge of at least one of its parts? Relying purely on classical intuition, one would certainly be inclined to conjecture that a strong ignorance of the whole cannot come without significant ignorance of at least one of its parts. Indeed, we show that this reasoning holds in any non-contextual hidden variable model (NC-HV). Curiously, however, such a conjecture is false in quantum theory: we provide an explicit example where a large ignorance about the whole can coexist with an almost perfect knowledge of each of its parts. More specifically, we provide a simple information-theoretic inequality satisfied in any NC-HV, but which can be arbitrarily violated by quantum mechanics. Our inequality has interesting implications for quantum cryptography.
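One way to see why the classical intuition holds (my own illustrative argument, not the inequality of the talk): in any NC-HV model the optimal strategies for guessing the two parts can be executed jointly on the same hidden variable, so a union bound ties knowledge of the whole to knowledge of the parts:

\[
P_{\mathrm{guess}}(AB) \;\ge\; 1 - \bigl[1 - P_{\mathrm{guess}}(A)\bigr] - \bigl[1 - P_{\mathrm{guess}}(B)\bigr] .
\]

Thus if each part can be guessed with probability at least 1 - epsilon, the whole can be guessed with probability at least 1 - 2*epsilon; contrapositively, strong ignorance of the whole forces significant ignorance of at least one part. Quantumly, the measurements optimal for guessing the two parts may be incompatible and cannot be executed jointly, which is what leaves room for the violation described above.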