PIRSA:22110120

Interpretable Quantum Advantage in Neural Sequence Learning

APA

Anschuetz, E. (2022). Interpretable Quantum Advantage in Neural Sequence Learning. Perimeter Institute. https://pirsa.org/22110120

MLA

Anschuetz, Eric. Interpretable Quantum Advantage in Neural Sequence Learning. Perimeter Institute, 30 Nov. 2022, https://pirsa.org/22110120.

BibTeX

@misc{pirsa_PIRSA:22110120,
  doi = {10.48660/22110120},
  url = {https://pirsa.org/22110120},
  author = {Anschuetz, Eric},
  keywords = {Quantum Information},
  language = {en},
  title = {Interpretable Quantum Advantage in Neural Sequence Learning},
  publisher = {Perimeter Institute},
  year = {2022},
  month = {nov},
  note = {PIRSA:22110120, see \url{https://pirsa.org}}
}

Eric Anschuetz

Massachusetts Institute of Technology (MIT)

Talk number
PIRSA:22110120
Abstract

Quantum neural networks have been widely studied in recent years, motivated by their potential practical utility and by recent results on their ability to efficiently express certain classes of classical data. However, because analytic results to date rely on assumptions and arguments from complexity theory, there is little intuition as to the source of the expressive power of quantum neural networks, or as to the classes of classical data for which an advantage can reasonably be expected to hold. In this talk, I will discuss my recent results (arXiv:2209.14353) studying the relative expressive power of a broad class of classical neural sequence models and a class of recurrent quantum models based on Gaussian operations with non-Gaussian measurements. We show explicitly that quantum contextuality is the source of an unconditional memory separation in the expressivity of the two model classes. We then use this intuition to study the relative performance of the introduced quantum model on a standard translation data set exhibiting linguistic contextuality, and show that the quantum model outperforms state-of-the-art classical models even in practice. I will also briefly discuss connections to my previous work on the trainability of variational quantum algorithms (arXiv:2109.06957, arXiv:2205.05786).