PIRSA:22050040

Neural annealing and visualization of autoregressive neural networks

APA

Inack, E. (2022). Neural annealing and visualization of autoregressive neural networks. Perimeter Institute. https://pirsa.org/22050040

MLA

Inack, Estelle. Neural annealing and visualization of autoregressive neural networks. Perimeter Institute, 18 May 2022, https://pirsa.org/22050040

BibTex

@misc{pirsa_PIRSA:22050040,
  doi       = {10.48660/22050040},
  url       = {https://pirsa.org/22050040},
  author    = {Inack, Estelle},
  keywords  = {Condensed Matter},
  language  = {en},
  title     = {Neural annealing and visualization of autoregressive neural networks},
  publisher = {Perimeter Institute},
  year      = {2022},
  month     = {may},
  note      = {PIRSA:22050040, see \url{https://pirsa.org}}
}

Estelle Maeva Inack

Perimeter Institute for Theoretical Physics

Talk number
PIRSA:22050040
Abstract
Artificial neural networks have been widely adopted as ansatzes to study classical and quantum systems. However, for some notably hard systems, such as those exhibiting glassiness and frustration, these ansatzes have mostly yielded unsatisfactory results despite their representational power and entanglement content, suggesting a potential conservation of computational complexity in the learning process. We explore this possibility by implementing the neural annealing method with autoregressive neural networks on a model that exhibits glassy and fractal dynamics: the two-dimensional Newman-Moore model on a triangular lattice. We find that the annealing dynamics is globally unstable because of highly chaotic loss landscapes. Furthermore, even when the correct ground-state energy is found, the neural network generally cannot find degenerate ground-state configurations due to mode collapse. These findings indicate that the glassy dynamics exhibited by the Newman-Moore model, caused by the presence of fracton excitations in the configurational space, likely manifests itself as trainability issues and mode collapse in the optimization landscape.
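
For orientation, the sketch below illustrates the two ingredients named in the abstract: the Newman-Moore (triangular plaquette) energy and the annealed variational objective F = <E> - T S that neural annealing minimizes with samples from an autoregressive model. This is a minimal illustration under stated assumptions, not the speaker's implementation: the square-array representation of the downward-pointing triangles, the sign and normalization convention of the energy, and the placeholder uniform sampler are all assumptions.

import numpy as np

def newman_moore_energy(spins, J=1.0):
    """Energy of the Newman-Moore (triangular plaquette) model.

    `spins` is an (L, L) array of +/-1 values. Each term couples the three
    spins of a downward-pointing triangle, represented here on a square
    array as the sites (x, y), (x+1, y), (x, y+1) with periodic boundaries.
    Sign/normalization conventions vary in the literature (assumption).
    """
    s = np.asarray(spins)
    right = np.roll(s, -1, axis=0)
    up = np.roll(s, -1, axis=1)
    return J * np.sum(s * right * up)

def variational_free_energy(energies, log_probs, temperature):
    """Sample estimate of the annealed objective F = <E> - T * S.

    For samples s_i drawn from a model q, F is estimated as
    mean_i [ E(s_i) + T * log q(s_i) ].
    """
    return np.mean(energies + temperature * log_probs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L, n_samples = 8, 16
    # Placeholder: uniform random samples stand in for an autoregressive
    # network, which would supply both the samples and log q(s).
    samples = rng.choice([-1, 1], size=(n_samples, L, L))
    log_probs = np.full(n_samples, -L * L * np.log(2.0))
    energies = np.array([newman_moore_energy(s) for s in samples])
    for T in (2.0, 1.0, 0.5, 0.1):  # illustrative annealing schedule
        print(T, variational_free_energy(energies, log_probs, T))

In the method discussed in the talk, the samples and log-probabilities would instead come from an autoregressive neural network, whose parameters are updated by gradient descent on this free-energy estimate while the temperature is gradually lowered.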