PIRSA:23050036

Tensor-Processing Units and the Density-Matrix Renormalization Group

APA

Ganahl, M. (2023). Tensor-Processing Units and the Density-Matrix Renormalization Group. Perimeter Institute. https://pirsa.org/23050036

MLA

Ganahl, Martin. Tensor-Processing Units and the Density-Matrix Renormalization Group. Perimeter Institute, 18 May 2023, https://pirsa.org/23050036.

BibTeX

          @misc{ pirsa_PIRSA:23050036,
            doi = {10.48660/23050036},
            url = {https://pirsa.org/23050036},
            author = {Ganahl, Martin},
            keywords = {Other},
            language = {en},
            title = {Tensor-Processing Units and the Density-Matrix Renormalization Group},
            publisher = {Perimeter Institute},
            year = {2023},
            month = {may},
            note = {PIRSA:23050036, see \url{https://pirsa.org}}
          }
          

Martin Ganahl (SandboxAQ)

Talk Type: Scientific Series
Subject: Other

Abstract

Tensor Processing Units (TPUs) are application-specific integrated circuits (ASICs) built by Google to run large-scale machine learning (ML) workloads (e.g. AlphaFold). They excel at matrix multiplications and can therefore be repurposed for applications beyond ML. In this talk I will explain how TPUs can be leveraged to run large-scale density matrix renormalization group (DMRG) calculations at unprecedented size and accuracy. DMRG is a powerful tensor network algorithm originally developed for computing ground states and low-lying excited states of strongly correlated, low-dimensional quantum systems. For certain systems, such as one-dimensional gapped or quantum-critical Hamiltonians or small, strongly correlated molecules, it has today become the gold-standard method for computing, e.g., ground-state properties. Using a TPUv3 pod, we ran large-scale DMRG simulations for a system of 100 spinless fermions and optimized matrix product state wave functions with a bond dimension of more than 65,000 (a parameter space of more than 600 billion parameters). Our results clearly indicate that hardware accelerator platforms like Google's latest TPU versions or NVIDIA's DGX systems are ideally suited to scale tensor network algorithms to sizes beyond the capabilities of traditional HPC architectures.
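The heavy lifting the abstract refers to, applying an effective Hamiltonian to a two-site matrix product state tensor during a DMRG sweep, reduces to dense tensor contractions that accelerators such as TPUs execute as large matrix multiplications. The JAX sketch below illustrates that contraction for a matrix product operator (MPO) Hamiltonian; the function name, index conventions, and toy dimensions are illustrative assumptions and do not reproduce the speaker's actual implementation.

# Minimal sketch (illustrative only): the core DMRG kernel, applying the
# two-site effective Hamiltonian to an MPS tensor, written as einsum
# contractions that a TPU executes as dense matrix multiplications.
import jax
import jax.numpy as jnp

def apply_effective_hamiltonian(L, W1, W2, R, theta):
    """Contract the two-site effective Hamiltonian with the MPS tensor theta.

    L:      left environment,  shape (D, w, D)
    W1, W2: MPO tensors,       shape (w, w, d, d)
    R:      right environment, shape (D, w, D)
    theta:  two-site tensor,   shape (D, d, d, D)
    """
    x = jnp.einsum('awb,bstc->awstc', L, theta)   # attach left environment
    x = jnp.einsum('awstc,wvsp->avptc', x, W1)    # apply MPO tensor of site 1
    x = jnp.einsum('avptc,vuty->aupyc', x, W2)    # apply MPO tensor of site 2
    x = jnp.einsum('aupyc,cud->apyd', x, R)       # attach right environment
    return x                                      # shape (D, d, d, D)

# jit-compile; runs on TPU if one is attached, otherwise on CPU/GPU.
apply_h = jax.jit(apply_effective_hamiltonian)

if __name__ == "__main__":
    D, d, w = 64, 2, 5                            # toy bond/physical/MPO dimensions
    kL, k1, k2, kR, kT = jax.random.split(jax.random.PRNGKey(0), 5)
    L = jax.random.normal(kL, (D, w, D))
    W1 = jax.random.normal(k1, (w, w, d, d))
    W2 = jax.random.normal(k2, (w, w, d, d))
    R = jax.random.normal(kR, (D, w, D))
    theta = jax.random.normal(kT, (D, d, d, D))
    print(apply_h(L, W1, W2, R, theta).shape)     # (64, 2, 2, 64)

On a TPU, XLA lowers these jit-compiled einsum calls to matrix multiplications on the chip's matrix units, which is the property the talk exploits; reaching bond dimensions in the tens of thousands additionally requires distributing the tensors across the devices of a pod, which this toy example does not show.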

Zoom link:  https://pitp.zoom.us/j/99337818378?pwd=SGZvdFFValJQaDNMQ0U1YnJ6NU1FQT09