Hardness of correcting errors on a stabilizer code
APA
Iyer, P. (2014). Hardness of correcting errors on a stabilizer code. Perimeter Institute. https://pirsa.org/14010099
MLA
Iyer, Pavithran. Hardness of correcting errors on a stabilizer code. Perimeter Institute, 22 Jan. 2014, https://pirsa.org/14010099.
BibTeX
@misc{pirsa_PIRSA:14010099,
  doi = {10.48660/14010099},
  url = {https://pirsa.org/14010099},
  author = {Iyer, Pavithran},
  keywords = {Quantum Information},
  language = {en},
  title = {Hardness of correcting errors on a stabilizer code},
  publisher = {Perimeter Institute},
  year = {2014},
  month = {jan},
  note = {PIRSA:14010099, see \url{https://pirsa.org}}
}
Abstract
Problems in computer science are often classified by how the runtime of the best known algorithm scales with the size of the input. Easy problems are efficiently solvable, but in physics we often encounter problems that take too long to solve on a classical computer. Here we look at one such problem in the context of quantum error correction, and we will further argue that no efficient algorithm for it is likely to exist. Specifically, we address the computational hardness of a decoding problem for quantum stabilizer codes, assuming independent X and Z errors on each qubit. Much as for classical linear codes, errors are detected by measuring certain check operators, which yield an error syndrome, and the decoding problem consists of determining the most likely recovery given the syndrome. The corresponding classical problem is known to be NP-complete, and a similar decoding problem for quantum codes is known to be NP-complete as well. However, this decoding strategy is not optimal in the quantum setting, because it does not take into account error degeneracy: distinct errors can have the same effect on the code. Here, we show that optimal decoding of stabilizer codes is computationally much harder than optimal decoding of classical linear codes: it is #P-complete.
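The distinction between the two decoding problems in the abstract can be made concrete with a brute-force toy model. The sketch below (our own illustration, not from the talk) uses the [[4,2,2]] code restricted to X (bit-flip) errors: the Z-type check ZZZZ yields a one-bit syndrome, and the X-type stabilizer XXXX makes any error pattern e degenerate with e XOR 1111. "Most likely error" decoding picks a single error (the problem that is NP-complete in general), while "most likely class" decoding sums probabilities over each degeneracy class (the counting flavor that makes optimal decoding #P-complete).

```python
import itertools

N = 4
# X-part of the stabilizer group: identity and XXXX.
STABILIZER_X = [(0, 0, 0, 0), (1, 1, 1, 1)]

def syndrome(e):
    """One-bit syndrome of the ZZZZ check: parity of the X-error pattern."""
    return sum(e) % 2

def prob(e, p):
    """Probability of an i.i.d. X-error pattern with per-qubit rate p."""
    pr = 1.0
    for bit in e:
        pr *= p if bit else 1.0 - p
    return pr

def consistent_errors(s):
    """All X-error patterns compatible with syndrome s (brute force)."""
    return [e for e in itertools.product((0, 1), repeat=N)
            if syndrome(e) == s]

def most_likely_error(s, p):
    """Classical-style decoding: return the single most probable error."""
    return max(consistent_errors(s), key=lambda e: prob(e, p))

def most_likely_class(s, p):
    """Degenerate (optimal) decoding: sum the probability of each class of
    stabilizer-equivalent errors and return a representative of the best
    class.  This summation over cosets is what makes the general problem
    a counting (#P) problem rather than an optimization (NP) problem."""
    classes = {}
    for e in consistent_errors(s):
        # Canonical representative: the lexicographically smallest element
        # of the class {e, e XOR 1111}.
        rep = min(tuple(a ^ g for a, g in zip(e, gen))
                  for gen in STABILIZER_X)
        classes[rep] = classes.get(rep, 0.0) + prob(e, p)
    return max(classes, key=classes.get)

if __name__ == "__main__":
    # Syndrome 1 means an odd-weight error occurred; with p = 0.1 both
    # decoders settle on a weight-1 correction, but the class decoder also
    # accounts for the degenerate weight-3 partner of each candidate.
    print(most_likely_error(1, 0.1))
    print(most_likely_class(1, 0.1))
```

Exhaustive enumeration is fine for four qubits; the point of the talk is that no such computation scales, and that the class-summing version is strictly harder than the single-error version.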