Density functional theory is a widely used electronic structure method for simulating and designing nanoscale systems from first principles. I will outline our recent efforts to improve density functionals using deep learning. Improvement here means higher accuracy, better scaling with respect to system size, improved computational parallelizability, and reliable transferability across different electronic environments.
To this end, we have generated a large and diverse dataset of 2D simulations of electrons (http://clean.energyscience.ca/datasets) with a varying number of electrons in confining potentials, for several (approximate) density functionals. As a proof of principle, we have used extensive deep neural networks to reproduce the results of these simulations to high accuracy at significantly reduced computational cost. By learning the screening length-scale of the electrons directly from the data, we are able to train on small-scale calculations yet perform inference at effectively arbitrary length-scales at only O(N) cost. This overcomes a key scaling limitation of Kohn-Sham DFT (which scales as O(N^3)), paving the way for accurate, large-scale ab initio design of nanoscale components and devices.
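The extensivity idea behind the O(N) inference can be illustrated with a toy model: if the total energy is a sum of contributions computed from fixed-size local windows of the density, evaluation cost grows linearly with system size, and a model trained at one size transfers to larger ones. The window size and "trained" weights below are placeholders, not the published architecture.

```python
import numpy as np

# Toy sketch of an "extensive" model: the total energy is a sum of local
# contributions, each computed from a finite window of the density. Because
# every window has fixed size, evaluating N grid points costs O(N).
# Window size and weights are illustrative, not the actual trained model.

rng = np.random.default_rng(0)
window = 7                      # local receptive field (hypothetical)
w = rng.normal(size=window)     # stands in for a trained local model

def extensive_energy(density):
    """Sum of local contributions from a sliding window over the density."""
    pad = window // 2
    padded = np.pad(density, pad, mode="wrap")   # periodic boundaries
    local = np.array([w @ padded[i:i + window] for i in range(len(density))])
    return local.sum()

small = rng.random(32)           # "training-scale" system
large = np.tile(small, 4)        # 4x larger system, same local environments
# Extensivity: the periodic replica has exactly 4x the energy.
print(np.isclose(extensive_energy(large), 4 * extensive_energy(small)))
```

Because each local environment in the replicated system already appears in the small one, the model evaluates the larger system at no loss of accuracy, which is the essence of training small and inferring large.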
In the first part of this presentation, I will present supervised machine-learning studies of the low-lying energy levels of disordered quantum systems. We address single-particle continuous-space models that describe cold atoms in speckle disorder, as well as 1D quantum Ising glasses. Our results show that a sufficiently deep feed-forward neural network (NN) can be trained to accurately predict low-lying energy levels. In view of the long-term prospect of using cold-atom quantum simulators to train neural networks to solve computationally intractable problems, we consider the effect of random noise in the training data, finding that the NN model is remarkably resilient. We also explore the use of convolutional NNs to build scalable models and to accelerate the training process via transfer learning.
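As a rough illustration of this supervised setup (not the actual models or Hamiltonians of the study), one can generate disordered 1D tight-binding chains, label each realization by its ground-state energy, and fit a regressor from potential to energy. A closed-form ridge fit stands in here for the deep feed-forward network.

```python
import numpy as np

# Minimal illustration of supervised learning of low-lying energy levels:
# random on-site disorder in, ground-state energy out. A linear ridge fit
# stands in for the deep NN described in the talk; system size is toy-scale.

rng = np.random.default_rng(1)
L = 16                                  # chain length

def ground_energy(v):
    """Lowest eigenvalue of a tight-binding chain with on-site potential v."""
    H = np.diag(v) - np.eye(L, k=1) - np.eye(L, k=-1)
    return np.linalg.eigvalsh(H)[0]

X = rng.uniform(0, 2, size=(500, L))    # 500 disorder realizations
y = np.array([ground_energy(v) for v in X])

# Ridge regression in closed form: w = (X^T X + a I)^{-1} X^T y, with bias.
Xb = np.hstack([X, np.ones((len(X), 1))])
w = np.linalg.solve(Xb.T @ Xb + 1e-3 * np.eye(L + 1), Xb.T @ y)

rmse = np.sqrt(np.mean((Xb @ w - y) ** 2))
print(rmse < np.std(y))     # beats predicting the mean energy
```

The linear model already captures much of the disorder dependence (first-order perturbation theory makes the energy approximately linear in the potential); the deep networks in the study handle the nonlinear remainder and stronger disorder.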
In the second part, I will discuss how generative stochastic NNs, specifically restricted and unrestricted Boltzmann machines, can be used as variational ansätze for ground-state many-body wave functions. In particular, we show how to employ them to boost the efficiency of projective quantum Monte Carlo (QMC) simulations, and how to train them automatically within the projective QMC simulation itself.
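The restricted Boltzmann machine ansatz can be written down compactly: after the hidden units are traced out, the (unnormalized) amplitude of a spin configuration s in {-1,+1}^N is psi(s) = exp(a·s) ∏_j 2 cosh(b_j + (sW)_j). A minimal sketch with random, untrained parameters (in practice a, b, W are optimized variationally or inside the projective QMC itself):

```python
import numpy as np

# Restricted Boltzmann machine as a variational ansatz: the amplitude of a
# spin configuration s is  psi(s) = exp(a.s) * prod_j 2 cosh(b_j + (sW)_j).
# Parameters here are random placeholders, not optimized values.

rng = np.random.default_rng(2)
N, M = 4, 8                       # visible spins, hidden units (toy sizes)
a = rng.normal(scale=0.1, size=N)
b = rng.normal(scale=0.1, size=M)
W = rng.normal(scale=0.1, size=(N, M))

def psi(s):
    """Unnormalized RBM amplitude of a configuration s in {-1,+1}^N."""
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + s @ W))

# For this tiny N we can enumerate all 2^N configurations and normalize.
configs = np.array([[1 if (k >> i) & 1 else -1 for i in range(N)]
                    for k in range(2 ** N)])
amps = np.array([psi(s) for s in configs])
probs = amps ** 2 / np.sum(amps ** 2)
print(np.isclose(probs.sum(), 1.0))
```

For larger N the normalization is intractable, which is exactly why Monte Carlo sampling of |psi(s)|^2 is used instead of enumeration.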
SP, P. Pieri, Scientific Reports 9, 5613 (2019)
E. M. Inack, G. Santoro, L. Dell’Anna, SP, Physical Review B 98, 235145 (2018)
Prospective near-term applications of early quantum devices rely on accurate estimates of expectation values; decoherence and gate errors lead to erroneous estimates. This problem is, at least in theory, remedied by quantum error correction. However, the overhead needed to implement a fully fault-tolerant gate set with current codes and current devices appears prohibitively large. At the same time, steady progress is being made in improving the quality of quantum hardware, which leads to the belief that in the foreseeable future machines could be built that cannot be emulated by a conventional computer. In light of recent progress in mitigating the effect of decoherence on expectation values, it becomes interesting to ask what these noisy devices can be used for. In this talk we will present our advances in finding quantum machine learning applications for noisy quantum computers.
High-dimensional quantum systems are vital for quantum technologies and essential for demonstrating practical quantum advantage in quantum computing, simulation and sensing. Since dimensionality grows exponentially with the number of qubits, the potential power of noisy intermediate-scale quantum (NISQ) devices over classical resources also stems from entangled states in high dimensions. An important family of quantum protocols that can take advantage of high-dimensional Hilbert spaces are classification tasks. These include quantum machine learning algorithms, witnesses in quantum information processing and certain decision problems. However, due to counter-intuitive geometrical properties emergent in high dimensions, classification problems are vulnerable to adversarial attacks. We demonstrate that the amount of perturbation needed for an adversary to induce a misclassification scales inversely with dimensionality. This is shown to be a fundamental feature independent of the details of the classification protocol. Furthermore, this leads to a trade-off between the security of the classification algorithm against adversarial attacks and the quantum advantage we expect for high-dimensional problems. In fact, protection against these adversarial attacks requires extra resources that scale at least polynomially with the Hilbert space dimension of the system, which can erase any significant quantum advantage that we might expect from a quantum protocol. This has wide-ranging implications for the use of both near-term and future quantum technologies for classification.
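The geometric mechanism has a simple classical analogue: for a linear classifier acting on random unit vectors, the typical margin, and hence the minimal perturbation needed to flip a label, concentrates around 1/sqrt(d). A quick numerical check of this concentration-of-measure effect (a classical toy model, not the quantum protocol itself):

```python
import numpy as np

# Classify random unit vectors by the sign of their overlap with a fixed
# direction w. The minimal perturbation flipping the label equals the
# margin |w.x|, which for random points on the d-sphere shrinks ~1/sqrt(d).

rng = np.random.default_rng(3)

def median_margin(d, samples=2000):
    w = np.zeros(d)
    w[0] = 1.0                                       # classifying direction
    x = rng.normal(size=(samples, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)    # random unit vectors
    return np.median(np.abs(x @ w))

m_small, m_large = median_margin(10), median_margin(1000)
# A weaker attack suffices as the dimension grows.
print(m_large < m_small)
```

The quantum result in the talk shows this vulnerability persists, independently of the classification protocol, once Hilbert-space dimension plays the role of d.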
Variational algorithms for a gate-based quantum computer, like the QAOA, prescribe a fixed circuit ansatz (up to a set of continuous parameters) designed to find a low-energy state of a given target Hamiltonian. After reviewing the relevant aspects of the QAOA, I will describe attempts to make the algorithm more efficient. The strategies I will explore are (1) tuning the variational objective function away from the energy expectation value, (2) analytical estimates that allow elimination of some of the gates in the QAOA circuit, and (3) using methods of machine learning to search the design space of nearby circuits for improvements to the original ansatz. While there is evidence of room for improvement in the circuit ansatz, finding an ML algorithm that delivers this improvement remains an outstanding challenge.
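For concreteness, a depth-1 QAOA circuit on a small Ising ring can be simulated directly with statevectors. This sketch uses plain numpy rather than any particular quantum SDK, and the angles are arbitrary rather than optimized; in the algorithm, gamma and beta are the continuous parameters of the fixed ansatz.

```python
import numpy as np
from functools import reduce

# Depth-1 QAOA for a 3-qubit antiferromagnetic Ising ring
# H = sum_<ij> Z_i Z_j, simulated as a full statevector.

n = 3
dim = 2 ** n
edges = [(0, 1), (1, 2), (2, 0)]

def zbit(k, q):
    """Z eigenvalue (+1 or -1) of qubit q in computational basis state k."""
    return 1 - 2 * ((k >> q) & 1)

# The cost Hamiltonian is diagonal in the computational basis.
cost = np.array([sum(zbit(k, i) * zbit(k, j) for i, j in edges)
                 for k in range(dim)], dtype=float)

def qaoa_state(gamma, beta):
    state = np.ones(dim, dtype=complex) / np.sqrt(dim)   # |+>^n
    state = np.exp(-1j * gamma * cost) * state           # e^{-i gamma H_cost}
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],   # e^{-i beta X} mixer
                   [-1j * np.sin(beta), np.cos(beta)]])
    return reduce(np.kron, [rx] * n) @ state

psi = qaoa_state(0.4, 0.3)                               # arbitrary angles
energy = np.real(np.vdot(psi, cost * psi))
print(abs(energy) <= 3.0)    # <H> lies within the triangle's spectrum
```

The outer classical loop of the QAOA would minimize `energy` over (gamma, beta); the strategies in the talk modify this objective, prune gates, or search nearby circuit designs.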
Computer simulations are extremely useful in providing insight into the physical and chemical processes taking place in nature. Very often simulations are complementary to experimental investigations, providing the interpretations and the molecular-level understanding that experiments struggle to deliver. Yet, simulations are useful only when their results can be relied upon, that is, when they accurately model the physical system and the forces therein.
Thriving nanotechnologies and exciting experiments pose a big challenge to computational approaches, especially when dealing with solid-liquid interfaces. On the one hand, the systems to be simulated are large and often long molecular dynamics simulations are needed. On the other hand, extremely high accuracy is required.
We discuss here an approach to deliver high accuracy at low computational cost using quantum Monte Carlo and Machine Learning.
Quantum anomalies are violations of classical scaling symmetries caused by quantum fluctuations. Although they appear prominently in quantum field theory to regularize divergent physical quantities, their influence on experimental observables is difficult to discern. Here, we discovered a striking manifestation of a quantum anomaly in the momentum-space dynamics of a 2D Fermi superfluid of ultracold atoms. We measured the position and pair momentum distribution of the superfluid during a breathing mode cycle for different interaction strengths across the BEC-BCS crossover. Whereas the system exhibits self-similar evolution in the weakly interacting BEC and BCS limits, we found a violation in the strongly interacting regime. The signature of scale-invariance breaking is enhanced in the first-order coherence function. In particular, the power-law exponents that characterize long-range phase correlations in the system are modified due to this effect, indicating that the quantum anomaly has a significant influence on the critical properties of 2D superfluids.
In the first half, I will demonstrate an efficient and general approach for realizing non-trivial quantum states, such as quantum critical and topologically ordered states, in quantum simulators. In the second half, I will present a related variational ansatz for many-body quantum systems that is remarkably efficient. In particular, representing the critical point of the one-dimensional transverse field Ising model only requires a number of variational parameters scaling logarithmically with system size. Though optimizing the ansatz generally requires Monte Carlo sampling, our ansatz potentially enables a partial mitigation of the sign problem at the expense of having to optimize a few parameters.
Various optimization problems that arise naturally in science are frequently solved by heuristic algorithms. Recently, multiple quantum-enhanced algorithms have been proposed to speed up the optimization process; however, a quantum speedup on practical problems has yet to be observed. One of the most promising candidates is the Quantum Approximate Optimization Algorithm (QAOA), introduced by Farhi et al. I will discuss numerical and exact results we have obtained for the quantum Ising chain problem and compare the performance of the QAOA and the quantum annealing algorithm. I will also briefly describe the landscape that emerges from the optimization problem and how techniques borrowed from machine learning can be used to improve the optimization process.
Successful implementation of error correction is imperative for fault-tolerant quantum computing. At present, the toric code, the surface code and related stabilizer codes are the state-of-the-art techniques in error correction.
Standard decoders for these codes usually assume uncorrelated single qubit noise, which can prove problematic in a general setting.
In this work, we use the knowledge of topological phases of modified toric codes to identify the underlying Hamiltonians for certain types of imperfections. This Hamiltonian learning is then employed to adiabatically remove the underlying noise and approach the ideal toric code Hamiltonian. The approach can be used regardless of noise correlations. Our method relies on a neural network that reconstructs the Hamiltonian given as input a linear (in system size) number of expectation values. Knowledge of the Hamiltonian offers a significant improvement over standard decoding techniques.
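A classical toy version of such a reconstruction step: for an eigenstate |psi> of H = Σ_a c_a O_a, every commutator expectation ⟨psi|[H, A]|psi⟩ vanishes, so measured commutator expectation values define a linear system whose null space contains the couplings. Below, a plain SVD stands in for the neural network of the talk, and the two-spin model and operator basis are illustrative choices, not the toric-code setting.

```python
import numpy as np
from itertools import product

# Recover couplings c_a of H = sum_a c_a O_a from expectation values of
# commutators in an eigenstate: <psi|[O_a, A]|psi> stacked over probes A
# forms a matrix M with M c = 0. An SVD null-space extraction stands in
# for the neural-network reconstruction described in the talk.

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
paulis = [I2, X, Y, Z]

# Operator basis for the unknown Hamiltonian and its true couplings.
ops = [np.kron(Z, Z), np.kron(X, I2), np.kron(I2, X)]
c_true = np.array([-1.0, -0.7, -0.7])
H = sum(c * O for c, O in zip(c_true, ops))

psi = np.linalg.eigh(H)[1][:, 0]        # ground state of the true H

# Probe with all two-qubit Pauli strings; each entry is one measured
# commutator expectation value i<[O_a, A]>.
probes = [np.kron(A, B) for A, B in product(paulis, repeat=2)]
M = np.array([[np.real(1j * np.vdot(psi, (O @ A - A @ O) @ psi))
               for O in ops] for A in probes])

# The couplings span the (one-dimensional) null space of M.
c_rec = np.linalg.svd(M)[2][-1]
overlap = abs(c_rec @ c_true) / np.linalg.norm(c_true)
print(overlap > 0.99)
```

The number of expectation values needed grows only with the number of operators in the basis, which mirrors the linear-in-system-size input of the method in the talk.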
Eliska Greplova, Agnes Valenti, Evert van Nieuwenburg, Sebastian Huber
In this talk I will discuss how (unsupervised) machine learning methods can be useful for quantum experiments. Specifically, we will consider the use of a generative model to perform quantum many-body (pure) state reconstruction directly from experimental data. The power of this machine learning approach enables us to trade a few experimentally complex measurements for many simpler ones, allowing for the extraction of sophisticated observables such as the Rényi mutual information. These results open the door to the integration of machine learning architectures with intermediate-scale quantum hardware.