Quantum Error Correction
- Todd A. Brun, Electrical and Computer Engineering, University of Southern California
Summary
Quantum error correction is a set of methods to protect quantum information—that is, quantum states—from unwanted environmental interactions (decoherence) and other forms of noise. The information is stored in a quantum error-correcting code, which is a subspace in a larger Hilbert space. This code is designed so that the most common errors move the state into an error space orthogonal to the original code space while preserving the information in the state. It is possible to determine whether an error has occurred by a suitable measurement and to apply a unitary correction that returns the state to the code space without measuring (and hence disturbing) the protected state itself. In general, codewords of a quantum code are entangled states. No code that stores information can protect against all possible errors; instead, codes are designed to correct a specific error set, which should be chosen to match the most likely types of noise. An error set is represented by a set of operators that can multiply the codeword state.
Most work on quantum error correction has focused on systems of quantum bits, or qubits, which are two-level quantum systems. These can be physically realized by the states of a spin-1/2 particle, the polarization of a single photon, two distinguished levels of a trapped atom or ion, the current states of a microscopic superconducting loop, or many other physical systems. The most widely used codes are the stabilizer codes, which are closely related to classical linear codes. The code space is the joint +1 eigenspace of a set of commuting Pauli operators on $n$ qubits, called stabilizer generators; the error syndrome is determined by measuring these operators, which allows errors to be diagnosed and corrected. A stabilizer code is characterized by three parameters $[[n,k,d]]$, where $n$ is the number of physical qubits, $k$ is the number of encoded logical qubits, and $d$ is the minimum distance of the code (the smallest number of simultaneous qubit errors that can transform one valid codeword into another). Every useful code has $n > k$; this physical redundancy is necessary to detect and correct errors without disturbing the logical state.
Quantum error correction is used to protect information in quantum communication (where quantum states pass through noisy channels) and quantum computation (where quantum states are transformed through a sequence of imperfect computational steps in the presence of environmental decoherence to solve a computational problem). In quantum computation, error correction is just one component of fault-tolerant design. Other approaches to error mitigation in quantum systems include decoherence-free subspaces, noiseless subsystems, and dynamical decoupling.
Subjects
- Quantum Information
1. Introduction
Quantum error correction (QEC) is a set of techniques to protect quantum states from the effects of environmental noise, or decoherence (Gaitan, 2008; Lidar & Brun, 2013; Nielsen & Chuang, 2000; Suter & Alvarez, 2016). In less than 25 years the subject has gone from a time when many prominent quantum theorists doubted that QEC was even possible to a large field with a well-developed theory, thousands of published papers, and international conferences. It has been a remarkable trajectory, which has both paralleled and made possible the trajectory of the broader field of quantum information science. Without QEC, quantum computers would be restricted to problems so small that they would be of little use or interest. With QEC, there are strong theoretical reasons to believe that quantum computations of any size can be done without requiring vast improvements in technology. Indeed, starting around 2014 and accelerating since then, companies have begun competing with each other to build the first small, noisy quantum computers, hoping to pave the way toward the powerful quantum technologies of tomorrow.
QEC is built on the theory of quantum codes, which was first developed in the mid-1990s and which has been greatly expanded and elaborated since then. A standard quantum code stores quantum information—a quantum state in a (usually) finite-dimensional Hilbert space—in a subspace of a higher-dimensional Hilbert space. For a code to protect a quantum state against a set of errors, that subspace must be chosen so that each error transforms the state in such a way that it is possible to deduce which error occurred (by performing a suitable measurement) without acquiring any information about the state that was stored in the code, which would necessarily disturb it. It is remarkable that this is possible at all, but, as this article shows, it is not only possible but can be done both robustly and efficiently.
This article briefly introduces the most important aspects of QEC. Given the size and diversity of the field, it is impossible to describe every important idea in an article of this length, let alone the technical details. But it should serve as a starting point for anyone who wants to understand a key building block of future quantum information technology.
2. A Brief History of Quantum Error Correction
The idea of QEC was driven by the explosion of interest in quantum computers. A handful of people, starting in the 1980s, began to ask if a computer operating according to the laws of quantum mechanics might be more powerful than ordinary classical computers, which obey classical laws. A slow trickle of results (Benioff, 1980; Deutsch, 1985, 1989; Deutsch & Jozsa, 1992; Feynman, 1982; Manin, 1980; Simon, 1994; Yao, 1993) showed that there were certain problems where a quantum computer could outperform a classical computer. Widespread interest was created in 1994 when Peter Shor (then at AT&T Research) proved that a quantum computer could factor large integers into their prime factors efficiently (Shor, 1994). The computational difficulty of factoring underlies common public-key encryption systems (like RSA) that guarantee the security of Internet transactions, so Shor's result sparked both interest and concern.
It also sparked widespread skepticism (Landauer, 1996; Unruh, 1995). Shor’s algorithm assumed that the quantum computer evolved ideally, in perfect isolation from its environment. The early 1990s also saw the general realization that large quantum systems are almost impossible to isolate, producing environmental decoherence that would cause errors and destroy the quantum evolution required for the algorithm.
Of course, classical computers are also subject to errors, and there is a well-known cure: error-correcting codes. The simplest example of such a code is the repetition code (or majority rule code), in which a single bit value 0 or 1 is duplicated three times: $0 \rightarrow 000$ and $1 \rightarrow 111$. If an error flips one of the bits, it can be detected by comparing the values of all three bits and corrected by taking a majority vote. The question was raised whether quantum computers could also be protected by error-correcting codes.
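This classical scheme is simple enough to state in a few lines of code. Here is a minimal Python sketch (the function names are illustrative, not standard):

```python
# A minimal sketch of the classical three-bit repetition code.
def encode(bit):
    """Duplicate a single bit into a three-bit codeword."""
    return [bit, bit, bit]

def correct(codeword):
    """Majority vote: recover the most likely original bit."""
    return 1 if sum(codeword) >= 2 else 0

word = encode(0)            # [0, 0, 0]
word[1] ^= 1                # a noise event flips the middle bit -> [0, 1, 0]
assert correct(word) == 0   # the majority vote recovers the original bit
```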
Naïvely, it seemed like this should not be possible. A quantum state cannot be copied redundantly (i.e., $|\psi\rangle \rightarrow |\psi\rangle|\psi\rangle|\psi\rangle$ is impossible) because of the no-cloning theorem (Dieks, 1982; Park, 1970; Wootters & Zurek, 1982). Measuring the system, so it could be copied classically, would disturb the state, just as decoherence does. Two prominent quantum experimenters, Serge Haroche and Jean-Michel Raimond (1996), published an article in Physics Today titled "Quantum Computing: Dream or Nightmare?" in which they cast doubt on the practicality of quantum computers. (Serge Haroche later won the Nobel Prize for his groundbreaking work bringing quantum computers closer to reality.)
Fortunately, the naïve perspective was not the final word. In 1995, Peter Shor published a paper in which he demonstrated a nine-qubit quantum error-correcting code (QECC) that could correct an arbitrary error on any single qubit. Rather than copying the state, the encoding spreads the quantum state nonlocally over all nine of the qubits in an entangled state, in such a way that local errors do not irreparably destroy the stored information. Working independently, Andrew Steane (1996a) published a seven-qubit code in 1996 with a structure based on a classical linear code, the Hamming code. Laflamme, Miquel, Paz, and Zurek (1996) and independently Bennett, DiVincenzo, Smolin, and Wootters (1996) found a five-qubit code in 1996 and proved that this is the shortest possible code that can correct a general quantum error. Knill and Laflamme (1997), and independently Bennett et al., also found a criterion for when a QECC could correct a given set of errors (see also Ekert & Macchiavello, 1996).
These results only proved that quantum information could be stored with protection against errors; for a quantum computer, that information would also have to be processed. The way for this was paved in a series of papers (Aharonov, 1999; Aharonov & Ben-Or, 1997, 1999; Aharonov, Ben-Or, Impagliazzo, & Nisan, 1996; Aliferis, Gottesman, & Preskill, 2006; DiVincenzo & Shor, 1996; Gottesman, 1998; Kitaev, 1997a, 1997b; Knill, Laflamme, & Zurek, 1998a, 1998b; Preskill, 1997, 1998a, 1998b; Reichardt, 2005a; Shor, 1996) showing that quantum computation can be done fault-tolerantly: that is, that protection against errors can be maintained during the processing of quantum information, and even during the error-correction process itself. This work did not convince all the skeptics at once. Indeed, there are still a handful of prominent quantum-computing skeptics (Dyakonov, 2019; Kalai, 2011; Moskvitch, 2018). But the theory of QEC has become ever more powerful and convincing over time, and experimental tests have given cause for optimism. As of this writing (in 2019), essentially everyone in the field understands both the capabilities and the requirements of QEC, and large-scale efforts are underway to realize them.
3. Decoherence and Quantum Noise
3.1 Environmental Decoherence
The Schrödinger equation

$$i\hbar \frac{d}{dt}|\psi(t)\rangle = H|\psi(t)\rangle$$
describes the evolution of quantum systems in isolation, where $|\psi(t)\rangle$ is the state vector (written in Dirac notation). These closed systems have a well-defined Hamiltonian operator $H$, which gives complete information about how these systems evolve. The resulting evolution is unitary: the evolution of the state is given by a linear map $|\psi(t)\rangle = U(t)|\psi(0)\rangle$, where $U^\dagger U = U U^\dagger = I$. Note that these Hamiltonians may "come from outside" the system; for instance, we can turn external fields on and off, shine lasers, and so on. What makes a quantum system closed is that it does not act back on the external world. The external fields, lasers, and so on can all be treated as classical potentials.
The unfortunate reality is that this idealization is a fiction. All real quantum systems interact with the outside world at least weakly, and the existence of interactions that allow us to manipulate a system (as needed for quantum information processing) also allows the system to interact with the external environment. This environmental interaction is called decoherence.
Two things happen in decoherence. First, random influences from the outside can perturb the system’s evolution, as if some random Hamiltonian was turned on, in addition to the usual Hamiltonian. Second, the interaction between the system and environment can cause information about the system to leak into the environment. This information leakage leaves the system correlated with the environment. The effect on the system is as if unwanted measurements have been performed (without, in general, our knowing the measurement results).
In fact, these two processes generally both occur, and the practical effects of them often look similar. Indeed, in quantum mechanics there is no sharp distinction between them. If decoherence persists long enough, it is possible for all information about the original state of the system to be lost. In the shorter term, decoherence can destroy quantum effects such as interference and entanglement (on which quantum information processing depends). Indeed, decoherence is the main reason why quantum effects are not perceived at familiar classical scales; any large-scale superposition (like an alive-and-dead cat) would decohere almost instantaneously, leaving a mixed state that acts just like a classical probability distribution.
3.2 Completely Positive, Trace-Preserving Maps
In quantum information processing, time evolution is usually treated as discrete, representing the total evolution over a finite time interval (e.g., one computational step). Decoherence is therefore treated as a discrete-time map. In general, we must describe the state of a quantum system in terms of a density matrix $\rho$ rather than a state vector $|\psi\rangle$, to allow for the possibility that the state is mixed (i.e., contains uncertainties that represent missing information) rather than pure (isolated and perfectly known). A pure state has a density operator description $\rho = |\psi\rangle\langle\psi|$, which is a rank-1 projector. A density matrix is a positive Hermitian operator with trace equal to 1 (representing the total probability). Maps that represent decoherence must preserve these properties: they are completely positive, trace-preserving (CPTP) maps. CPTP maps can be written in the form

$$\rho \rightarrow \mathcal{E}(\rho) = \sum_k A_k \rho A_k^\dagger, \qquad \sum_k A_k^\dagger A_k = I,$$
where the operators $\{A_k\}$ are called Kraus operators.
In general, the Kraus decomposition of a CPTP map is not unique, but one can approximately think of the map as the state being multiplied by one of the operators $A_k$ chosen at random with probability $p_k = \mathrm{Tr}(A_k \rho A_k^\dagger)$. Since one does not know which operator has multiplied the state, one uses a mixture of all of them.
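As a concrete illustration, here is a sketch in Python (using NumPy) of a CPTP map in Kraus form: the single-qubit bit-flip channel, which reappears in section 5. The function name and parameter are ours, for illustration only.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def bit_flip_channel(rho, p):
    """Apply rho -> sum_k A_k rho A_k^dag, with A_0 = sqrt(1-p) I and A_1 = sqrt(p) X."""
    kraus = [np.sqrt(1 - p) * I, np.sqrt(p) * X]
    # Trace preservation: sum_k A_k^dag A_k = I.
    assert np.allclose(sum(A.conj().T @ A for A in kraus), I)
    return sum(A @ rho @ A.conj().T for A in kraus)

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # the pure state |0><0|
print(bit_flip_channel(rho, 0.1))                 # the mixture 0.9|0><0| + 0.1|1><1|
```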
3.3 Random Unitaries
In a similar way, if an unknown influence is applied to the quantum system from the outside, we can model that as a set of unitaries $U_j$ that occur with respective probabilities $p_j$. Here, again, one would describe the state of the system as a mixture of all possible evolved states:

$$\rho \rightarrow \sum_j p_j U_j \rho U_j^\dagger.$$
In this case again we have a CPTP map, and we can define the Kraus operators to be $A_j = \sqrt{p_j}\, U_j$. Note that the randomness in the unitary evolution need not be due to outside influence: it could also be from uncertainty of the Hamiltonian, due to imperfect control of the system or any other reason. CPTP maps give a unified description of all possible sources of Markovian (i.e., time-local) noise, and in quantum information science one does not usually make a sharp distinction between different noise sources.
4. Qubits and Pauli Operators
4.1 Qubits
The canonical quantum system used in quantum computation and quantum information is the quantum bit or qubit (Schumacher, 1995). This is a system with two distinct levels, whose state is in a two-dimensional complex Hilbert space $\mathcal{H}_2 = \mathbb{C}^2$. Examples of such systems are the spin of a spin-1/2 particle (like an electron whose spin can be up or down along an axis in space) or the polarization of a single photon (which can be horizontally or vertically polarized). We choose a standard basis $\{|0\rangle, |1\rangle\}$ for the Hilbert space of a single qubit (often called the computational basis), which is orthonormal: $\langle i|j\rangle = \delta_{ij}$. The standard basis is also often called the $Z$ basis, because it is the eigenbasis of the Pauli operator $Z$ (see the definition in the next subsection). The states $|0\rangle$ and $|1\rangle$ represent column vectors:

$$|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$
4.2 The Pauli Operators
We can write any operator on $\mathcal{H}_2$ as a $2 \times 2$ complex matrix. Any such matrix can be written as a linear combination of the identity matrix $I$ and the three Pauli operators, $X$, $Y$, and $Z$:

$$X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$
The Pauli operators—first introduced to describe the algebra of spin-1/2 particles—have interesting algebraic properties. They are Hermitian, unitary, and traceless, with eigenvalues ±1. They mutually anticommute and generate a closed group:

$$X^2 = Y^2 = Z^2 = I, \qquad XY = iZ = -YX, \qquad YZ = iX = -ZY, \qquad ZX = iY = -XZ.$$
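These properties are easy to verify numerically. The following NumPy snippet checks each of the relations just quoted:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

for P in (X, Y, Z):
    assert np.allclose(P, P.conj().T)                   # Hermitian
    assert np.allclose(P @ P.conj().T, I)               # unitary
    assert abs(np.trace(P)) < 1e-12                     # traceless
    assert np.allclose(np.linalg.eigvalsh(P), [-1, 1])  # eigenvalues +-1

assert np.allclose(X @ Y, 1j * Z) and np.allclose(X @ Y, -Y @ X)  # XY = iZ = -YX
assert np.allclose(Y @ Z, 1j * X) and np.allclose(Z @ X, 1j * Y)  # YZ = iX, ZX = iY
assert np.allclose(X @ X, I)                                      # X^2 = I
```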
4.3 Quantum Registers and the Pauli Group
While the mathematics of a single qubit is surprisingly rich, it is still very limited in its use: there is not much information processing that can be done with a single quantum bit. More generally one has a collection of $n$ qubits, called a quantum register or a quantum codeword. This joint system has an associated Hilbert space $\mathcal{H}_2^{\otimes n}$, the n-fold tensor product of the single-qubit space, which has dimension $2^n$. We can identify a standard basis for the quantum register:

$$|b_{n-1} \cdots b_1 b_0\rangle \equiv |b_{n-1}\rangle \otimes \cdots \otimes |b_1\rangle \otimes |b_0\rangle, \qquad b_j \in \{0, 1\}.$$
In QEC we often need to consider multiplying these basis states for n qubits by tensor products of Pauli operators. To do this it is convenient to define the Pauli group $\mathcal{P}_n$ on n qubits as the set of all n-fold tensor products of Pauli operators and the identity, together with overall phase factors: $\mathcal{P}_n = \{ i^\ell\, P_1 \otimes \cdots \otimes P_n \}$, where $\ell = 0, 1, 2, 3$, and each $P_j \in \{I, X, Y, Z\}$. It is not hard to see that this set of operators is closed under multiplication and forms a group. Every operator in this group has eigenvalues either $\pm 1$ or $\pm i$. For compactness we often omit the tensor-product symbol when the meaning is clear. For instance, for $n = 3$, we can write $XII \equiv X \otimes I \otimes I$, $ZZI \equiv Z \otimes Z \otimes I$, and so forth. We use this notation for the rest of this article.
5. Error Models and Simple Quantum Error-Correcting Codes
5.1 Error Models
Before defining the notion of a QECC, we should first establish what we mean by an error or an error model. Recall that quantum systems subject to decoherence and other sources of noise evolve by CPTP maps and that these can be thought of as a set of Kraus operators that multiply the state of the system with some probabilities. We define an error set $\{E_a\}$ as a set of operators proportional to Kraus operators. Generally, at least one of these operators (usually $E_0$) is taken to be the identity (at least to a good approximation), which corresponds to no error occurring, while the others represent possible errors.
5.2 The Bit-Flip Code
Let us restrict ourselves for the present to systems of qubits, and let us further assume that these qubits undergo independent, identically distributed noise, meaning that the CPTP map acting on the quantum register is the product of identical CPTP maps acting on the individual qubits. We can then define the simplest possible QECC based on the example of the classical repetition code. Suppose that each of the qubits is independently subject to bit-flip noise:

$$\rho \rightarrow (1 - p)\rho + pX\rho X.$$
The Pauli operator $X$ acts as a bit flip, because $X|0\rangle = |1\rangle$ and $X|1\rangle = |0\rangle$, and $p$ is the probability of a bit flip per timestep. We protect the qubit state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ by encoding it as a three-qubit codeword $|\bar\psi\rangle = \alpha|000\rangle + \beta|111\rangle$. The set of all possible codewords forms a two-dimensional subspace of the eight-dimensional Hilbert space $\mathcal{H}_2^{\otimes 3}$. This subspace is called the code space. (Note that most states in this space are highly entangled.)
All three of these qubits are subject to identical bit-flip noise. This error model has the error set $\{I, X_1, X_2, X_3, X_1X_2, X_1X_3, X_2X_3, X_1X_2X_3\}$, where $X_i$ denotes the Pauli $X$ acting on qubit $i$.
For this error model, the weight-1 errors ($X_1$, $X_2$, $X_3$) all have probability $p(1-p)^2$, the weight-2 errors ($X_1X_2$, $X_1X_3$, $X_2X_3$) have probability $p^2(1-p)$, the weight-3 error ($X_1X_2X_3$) has probability $p^3$, and the weight-0 (identity) error has probability $(1-p)^3$.
Classically, single bit-flip errors are corrected by measuring the three bits of the codeword and taking a majority vote. Clearly this would violate the purpose of a QECC: measuring the bits would project the system into one of the basis states with probability $|\alpha|^2$ or $|\beta|^2$, destroying the superposition state that we are trying to protect. So how can error correction be done? Note what happens to the codeword state under the identity and the three weight-1 errors:

$$I|\bar\psi\rangle = \alpha|000\rangle + \beta|111\rangle, \qquad X_1|\bar\psi\rangle = \alpha|100\rangle + \beta|011\rangle,$$
$$X_2|\bar\psi\rangle = \alpha|010\rangle + \beta|101\rangle, \qquad X_3|\bar\psi\rangle = \alpha|001\rangle + \beta|110\rangle.$$
All four of these states are mutually orthogonal for all values of $\alpha$ and $\beta$. They lie in orthogonal subspaces. Therefore, there is a quantum measurement that will tell which of these four subspaces the state is in without projecting onto a basis state and thereby destroying the superposition. Once one knows which subspace the state is in, it is possible to transform the state back to $|\bar\psi\rangle$ by applying one of the operators $I, X_1, X_2, X_3$, which are all unitary; applying this operator also does not require us to know what state is being stored. This is the key insight that makes QEC possible: for a properly designed QECC, there is a measurement that reveals the error without revealing any information about the encoded state.
For this particular code, it is not hard to see that measuring which subspace the state is in is equivalent to measuring the two commuting observables $ZZI$ and $IZZ$. The four states mentioned earlier are eigenstates of both of these observables with eigenvalues $\pm 1$. Measuring $ZZI$ (or $IZZ$) is equivalent to measuring the parity of qubits 1 and 2 (or 2 and 3); moreover, these are joint observables on two qubits, which can be measured without measuring the values of $Z$ on the individual qubits. These observables $ZZI$ and $IZZ$ are called stabilizer generators of this code, and the values of these two observables give the error syndrome that identifies the error that occurred. For this code that would be one of the four outcomes $(+1,+1)$, $(-1,+1)$, $(-1,-1)$, $(+1,-1)$. (More details can be found in the subsection on stabilizer groups and their generators and in the caption of Figure 2.)
This bit-flip code has a correctable error set with four error operators: $\{I, X_1, X_2, X_3\}$. However, the full error set of this error model contained eight error operators. The three errors of weight 2 and one error of weight 3 are uncorrectable errors. They produce states in the same four subspaces, and it is easy to see that in the case of those high-weight errors the correction procedure will produce the erroneous state $\alpha|111\rangle + \beta|000\rangle$. In fact, the weight-3 error will not even be recognized as an error: it is an undetectable error. This is a general property of QECCs: no QECC can correct every possible error. (This is also true of classical error-correcting codes.) In practice, the goal is to choose a code that can correct the most likely errors. For the error model discussed here, the probability of success is the probability of either no error or a weight-1 error: $(1-p)^3 + 3p(1-p)^2$. The probability of failure is $3p^2(1-p) + p^3 = 3p^2 - 2p^3$. If the original single qubit state had been left unencoded, it would have probability of success $1-p$ and probability of failure $p$. So this code gives an improved success probability if $3p^2 - 2p^3 < p$, corresponding to an error probability per qubit of $p < 1/2$.
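The whole cycle (encode, suffer an error, extract the syndrome, correct) can be simulated directly on the eight-dimensional state vector. Here is a NumPy sketch, with illustrative names, of one round in which an X error strikes qubit 2:

```python
import numpy as np
from functools import reduce

I = np.eye(2); X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.])
kron = lambda *ops: reduce(np.kron, ops)
ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])

alpha, beta = 0.6, 0.8                       # any normalized amplitudes
psi = alpha * kron(ket0, ket0, ket0) + beta * kron(ket1, ket1, ket1)

state = kron(I, X, I) @ psi                  # bit-flip error on qubit 2

# A definite Pauli error leaves the state in an eigenstate of the stabilizer
# generators, so measuring ZZI and IZZ gives a deterministic syndrome.
s1 = round(state @ kron(Z, Z, I) @ state)    # eigenvalue of ZZI
s2 = round(state @ kron(I, Z, Z) @ state)    # eigenvalue of IZZ

correction = {(1, 1): kron(I, I, I), (-1, 1): kron(X, I, I),
              (-1, -1): kron(I, X, I), (1, -1): kron(I, I, X)}[(s1, s2)]
assert np.allclose(correction @ state, psi)  # codeword restored; alpha, beta untouched
```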
5.3 The Phase-Flip Code
For a classical bit channel, the only errors that are possible are flipping a bit from 0 to 1 or vice versa. That is decidedly not the case for qubit channels, where any two-dimensional operator could constitute an error. One particular type of error that has no classical equivalent is a phase-flip error: $Z(\alpha|0\rangle + \beta|1\rangle) = \alpha|0\rangle - \beta|1\rangle$, which applies a relative phase factor of –1 between the two basis states. The code we designed against bit flips is useless against this type of error: $Z_i|\bar\psi\rangle = \alpha|000\rangle - \beta|111\rangle$ for any phase-flip error $Z_i$, a state that lies in the code space itself. To protect against $Z$ errors, we can use a code expressed in the $X$ basis, rather than the standard basis:

$$|+\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right), \qquad |-\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle - |1\rangle\right).$$
So in this basis, Z acts like a bit flip: $Z|+\rangle = |-\rangle$ and $Z|-\rangle = |+\rangle$. In terms of these basis states, we can encode the state $\alpha|0\rangle + \beta|1\rangle$ as $\alpha|{+}{+}{+}\rangle + \beta|{-}{-}{-}\rangle$. Using this code to detect and correct phase-flip errors without disturbing the encoded state works exactly like the bit-flip code but with the basis change $|0\rangle \rightarrow |+\rangle$, $|1\rangle \rightarrow |-\rangle$. The error syndrome for this code is determined by measuring the two stabilizer generators $XXI$ and $IXX$.
While the bit-flip and phase-flip codes prove that it is possible to design a QECC capable of correcting a finite set of errors, this seems like a very limited result. Each of them is essentially the classical repetition code in a particular choice of basis; they can correct their own set of errors but are useless against each other’s. Moreover, the set of all possible quantum errors forms a continuum, since any linear operator could, in principle, be an error. On the face of it, it might seem like QEC is too limited to provide real protection to quantum states in the presence of realistic decoherence. It turns out, however, that QECCs can be far more powerful than these examples might suggest.
5.4 The Shor Code
In 1995, Peter Shor published a nine-qubit QECC that was capable of correcting any arbitrary error on a single qubit and could protect one logical qubit state. A qubit state $\alpha|0\rangle + \beta|1\rangle$ was encoded as $\alpha|0_L\rangle + \beta|1_L\rangle$. The code-space basis is

$$|0_L/1_L\rangle = \frac{1}{2\sqrt{2}}\left(|000\rangle \pm |111\rangle\right)\left(|000\rangle \pm |111\rangle\right)\left(|000\rangle \pm |111\rangle\right),$$
where the + sign goes with the $|0_L\rangle$ state and the – sign with $|1_L\rangle$. A little unpacking is needed to see how this code works. The two basis states both have the form of the phase-flip code, where each of the three qubits of that code has then been encoded in the bit-flip code. A nested structure like this is called a concatenated code. We can easily see how this code can correct a single bit-flip ($X$) error. Each of the three triplets of qubits (123, 456, and 789) is a bit-flip code; one can detect and correct a single $X$ error without destroying the overall codeword state. Less obviously, if a phase-flip ($Z$) error acts on one of the qubits of this code, it will flip the sign of that qubit's triplet from + to – or vice versa. This moves the state to an orthogonal subspace, which can be detected and corrected without destroying the encoded state.
This code can therefore correct any single $X$ error and any single $Z$ error. But this allows it to do even more. Since $Y = iXZ$, a $Y$ error can be thought of as a single $X$ error and a single $Z$ error acting on the same qubit, up to an irrelevant global phase. So this code can correct any Pauli error acting on a single qubit. However, even this is not the limit. Note that any operator on a single qubit can be written as a linear combination $E = e_0 I + e_1 X + e_2 Y + e_3 Z$ for some complex numbers $e_0, \ldots, e_3$. This operator acting on one qubit of the encoded state will produce a superposition of four orthogonal states: the original state with no error, the state with a single $X$ error, the state with a single $Z$ error, and the state with both an $X$ and a $Z$ error. The error correction procedure will project the codeword into one of these states and then apply a correction that will return it to the original state $|\bar\psi\rangle$.
This is one of the most important properties of QECCs, which makes them capable of handling more than idealized error models: if a QECC has a correctable error set $\{E_a\}$, then it can also correct any linear combination of these errors, $E = \sum_a e_a E_a$. Thus QECCs can correct a continuous set of errors.
Because the codeword is nine qubits long, it gives a benefit only for a lower value of the error probability per qubit than the three-qubit repetition code. If the error probability per qubit is $p$, then one will be better off encoding in the Shor code if $1 - (1-p)^9 - 9p(1-p)^8 < p$, which is true for $p \lesssim 0.032$.
5.5 The Steane Code
Shor constructed a code that could correct an arbitrary single-qubit error by concatenating a code for correcting $X$ errors with a code for correcting $Z$ errors. Andrew Steane (1996a), working independently, showed that it was possible to do this in a code without this concatenated structure. Steane's construction encodes a single logical qubit state into a seven-qubit codeword with the following basis vectors:

$$|0_L\rangle = \frac{1}{\sqrt{8}}\big(|0000000\rangle + |1010101\rangle + |0110011\rangle + |1100110\rangle + |0001111\rangle + |1011010\rangle + |0111100\rangle + |1101001\rangle\big),$$
$$|1_L\rangle = \frac{1}{\sqrt{8}}\big(|1111111\rangle + |0101010\rangle + |1001100\rangle + |0011001\rangle + |1110000\rangle + |0100101\rangle + |1000011\rangle + |0010110\rangle\big).$$

(These are the equal superpositions of the even-weight and odd-weight codewords, respectively, of the classical Hamming code described in the next section.)
By the same argument as in the Shor code, this code can also correct an arbitrary error on a single qubit. In this case, it takes considerable effort to verify from the codewords that any single $X$ error moves the state to an orthogonal subspace, as does any single $Z$ error and any combination of a single $X$ error and a single $Z$ error. Since this codeword is only seven qubits long, it shows a benefit for modestly higher values of the error probability per qubit: $1 - (1-p)^7 - 7p(1-p)^6 < p$, which holds for $p \lesssim 0.057$.
5.6 The Error Correction Condition
Knill and Laflamme (1997) and independently Bennett et al. (1996) found a very general condition for a set of errors to be correctable by a given quantum code. Let $\{E_a\}$ be some arbitrary set of error operators, and let $P$ be the projector onto the code space. The code specified by $P$ can correct the set of errors if they satisfy the condition

$$P E_a^\dagger E_b P = C_{ab} P$$

for all $a$ and $b$, where the $C_{ab}$ are a set of complex numbers that form a Hermitian matrix. The proof relies on diagonalizing this matrix and using that similarity transformation to construct a new set of errors that are linear combinations of the original set and which map the state into orthogonal error spaces.
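For a small code this condition can be checked by direct computation. The following sketch verifies it for the bit-flip code with the error set $\{I, X_1, X_2, X_3\}$, for which the matrix $C_{ab}$ turns out to be the identity:

```python
import numpy as np
from functools import reduce

I = np.eye(2); X = np.array([[0., 1.], [1., 0.]])
kron = lambda *ops: reduce(np.kron, ops)

ket000 = np.zeros(8); ket000[0] = 1
ket111 = np.zeros(8); ket111[7] = 1
P = np.outer(ket000, ket000) + np.outer(ket111, ket111)   # code-space projector

errors = [kron(I, I, I), kron(X, I, I), kron(I, X, I), kron(I, I, X)]
for Ea in errors:
    for Eb in errors:
        M = P @ Ea.conj().T @ Eb @ P
        c = np.trace(M) / np.trace(P)   # the candidate constant C_ab
        assert np.allclose(M, c * P)    # P Ea^dag Eb P is proportional to P
```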
6. Stabilizer Codes
The two examples of the Shor (1995) and Steane (1996a) codes show that powerful QECCs can be constructed that can encode general states and correct arbitrary errors on some number of qubits. But it is clear that better methods are needed for finding and describing these codes and their error-correction methods. Listing the code-space basis states, as done in the examples considered so far, rapidly becomes unwieldy, since the dimension of the Hilbert space grows exponentially with the number of qubits. Fortunately, a powerful formalism exists that can describe a large set of practical QECCs and their encoding and correction procedures in a very compact form. These are the stabilizer codes (Calderbank, Rains, Shor, & Sloane, 1997; Calderbank & Shor, 1996; Gottesman, 1996, 1997; Shor & Laflamme, 1997; Steane, 1996b). These codes use an error-correcting structure based on classical linear codes.
6.1 Classical Linear Codes
Suppose that we wish to encode $k$ classical bits into a binary codeword that is an n-bit string. We can think of the classical bits as being a vector $\mathbf{b}$ in a k-dimensional binary vector space. We encode that into a codeword $\mathbf{c}$, which is a vector in an n-dimensional binary space. This is a linear code if there is an $n \times k$ full-rank binary matrix $G$ such that $\mathbf{c} = G\mathbf{b}$ (with all arithmetic mod 2). This matrix is called the generator matrix of the code. Any valid codeword of the code is a linear combination of the columns of $G$, so the code forms a k-dimensional subspace of the n-dimensional binary space.
For a linear code, we describe errors on the codeword by an n-dimensional binary error vector $\mathbf{e}$. In binary arithmetic, adding a 1 to a bit flips it from 0 to 1 or 1 to 0, so each element of $\mathbf{e}$ that is 1 represents a bit flip. (This model of errors is called additive noise.) To detect and correct errors, we define a second full-rank binary matrix $H$ called the parity-check matrix, which satisfies the equation $HG = 0$. If we multiply an erroneous codeword $\mathbf{c} + \mathbf{e}$ by $H$, we will get $H(\mathbf{c} + \mathbf{e}) = HG\mathbf{b} + H\mathbf{e} = H\mathbf{e}$. The code should be designed so that the error syndrome (or parity check) $\mathbf{s} = H\mathbf{e}$ takes distinct, nonzero values for all of the most likely errors $\mathbf{e}$. From the error syndrome one can deduce which bits of the codeword have been flipped and flip them back to correct the error. For a small code, this diagnosis can be done by finding the error syndrome in a look-up table; for larger codes, this rapidly becomes impractical, and some form of decoding algorithm is necessary. In general, decoding for an arbitrary linear code is a computationally hard problem. But some codes have structure that allows efficient decoding.
Since $G$ and $H$ must satisfy $HG = 0$, we can specify a linear code by giving either its generator matrix or its parity-check matrix. For the purposes of QEC, it is most convenient to use the parity-check matrix. The classical three-bit repetition code is a simple linear code, where the parity-check matrix is

$$H = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}.$$
Looking at the rows of this matrix one can see that their structure is echoed by the stabilizer generators $ZZI$ and $IZZ$ of the quantum bit-flip code and $XXI$ and $IXX$ of the phase-flip code.
There are much more sophisticated linear codes than the repetition code, which encode larger numbers of bits and/or correct more errors. A well-known example is the seven-bit Hamming code:

$$H = \begin{pmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{pmatrix}.$$
This classical linear code is closely related to the seven-qubit Steane code (as can be seen in the subsection on Calderbank-Shor-Steane codes). A classical linear code is often described in terms of three parameters $[n, k, d]$, where n is the number of physical bits (or length) of the code, k is the number of logical (or encoded) bits, and d is the minimum distance of the code—that is, the minimum number of 1s in any valid codeword other than the zero vector, or the minimum number of bits that must be flipped to transform one valid codeword into another. A code with minimum distance d can correct any error that flips fewer than $d/2$ bits. The Hamming code has parameters $[7, 4, 3]$. These parameters all have equivalents for QECCs.
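A short Python sketch shows syndrome decoding for this code. With the column ordering used above, the columns of H are the binary representations of 1 through 7, so the syndrome literally spells out the position of a single flipped bit:

```python
import numpy as np

H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

codeword = np.zeros(7, dtype=int)               # the all-zeros codeword
received = codeword.copy()
received[4] ^= 1                                # an error flips bit 5 (index 4)

syndrome = (H @ received) % 2                   # s = H e (mod 2)
position = int(''.join(map(str, syndrome)), 2)  # read the syndrome as binary
assert position == 5                            # it points at the flipped bit
received[position - 1] ^= 1                     # flip it back
assert np.array_equal(received, codeword)
```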
6.2 Stabilizer Groups and Their Generators
Consider a subgroup $S$ of the Pauli group $\mathcal{P}_n$ on n qubits with the following two properties: (a) the subgroup is Abelian (i.e., all operators in the subgroup commute) and (b) the subgroup does not contain the element $-I$. Then there is a subspace $\mathcal{C}(S)$ of the n-qubit Hilbert space whose vectors are simultaneous $+1$ eigenvectors of all the elements of $S$. We call $S$ a stabilizer group, and $\mathcal{C}(S)$ is the code space corresponding to that stabilizer group.
We could specify $S$ by listing all of its elements. For small enough n this can be possible; for large n, a typical stabilizer group has an exponentially large number of elements. However, we can specify $S$ much more compactly by listing a set of stabilizer generators. It is not hard to show that any stabilizer group on n qubits must have $2^r$ elements, where r is an integer between 0 and n. Every element in this group can be generated by a set of r properly chosen elements of $S$; no smaller set will do.
Consider a simple example. The bit-flip code is a stabilizer code, whose stabilizer group is given by $S = \{III, ZZI, IZZ, ZIZ\}$. We can choose two nonidentity elements to be the stabilizer generators: for instance, $g_1 = ZZI$ and $g_2 = IZZ$. Then the four elements of the stabilizer are given by products of the generators: $g_1^{a_1} g_2^{a_2}$, where $a_{1,2} = 0$ or 1. Raising an operator to the zero power yields the identity; raising an operator to the first power is the operator itself. For the bit-flip code the four elements of the stabilizer are:

$$g_1^0 g_2^0 = III, \qquad g_1^1 g_2^0 = ZZI, \qquad g_1^0 g_2^1 = IZZ, \qquad g_1^1 g_2^1 = ZIZ,$$
where in the last expression we used the fact that $Z^2 = I$. This example generalizes for any stabilizer code. It is important to note that the elements of a stabilizer group are always Hermitian operators with eigenvalues $\pm 1$ and therefore can be measured as observables. Similarly, the phase-flip code has generators $XXI$ and $IXX$.
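The following sketch generates the full stabilizer group of the bit-flip code from its two generators by taking all products $g_1^{a_1} g_2^{a_2}$. (All the factors here are I or Z, so the Pauli strings multiply symbol-wise, with no extra phases.)

```python
from itertools import product

mult = {('I', 'I'): 'I', ('I', 'Z'): 'Z', ('Z', 'I'): 'Z', ('Z', 'Z'): 'I'}

def pauli_product(a, b):
    """Symbol-wise product of two Pauli strings built from I and Z."""
    return ''.join(mult[p, q] for p, q in zip(a, b))

generators = ['ZZI', 'IZZ']
group = set()
for exponents in product([0, 1], repeat=len(generators)):   # g1^a1 g2^a2
    element = 'III'
    for g, a in zip(generators, exponents):
        if a:
            element = pauli_product(element, g)
    group.add(element)

print(sorted(group))   # ['III', 'IZZ', 'ZIZ', 'ZZI']
```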
6.3 Error Syndromes and Error Correction
To understand how stabilizer codes detect and correct errors, it is helpful to assume that the set of errors also consists of operators from the Pauli group $\mathcal{P}_n$. As we have seen in the examples of the Shor and Steane codes, this does not necessarily limit the types of errors that these codes can correct, since they will also be able to correct linear combinations of Pauli operators.
An important property of the Pauli group is that any two Pauli operators either commute or anticommute. For example, the Pauli operators ZZI and XXX commute (because anticommuting Xs and Zs overlap at an even number of locations), while ZIZ and YII anticommute (because anticommuting Ys and Zs overlap at an odd number of locations).
Based on this property we can see how a stabilizer code detects and corrects errors. A valid codeword will be a $+1$ eigenvector of all the stabilizer generators. Suppose that an error operator $E$ (which is also an element of the Pauli group) multiplies the state. It will anticommute with some of the stabilizer generators and commute with others. Multiplying by $E$ changes the codeword to a new eigenstate of the stabilizer generators, where the eigenvalue is still $+1$ for all the generators that commute with $E$ but is $-1$ for those generators that anticommute with $E$. This new eigenstate will always be orthogonal to the original codeword unless the error operator commutes with all the stabilizer generators.
Let us see how this works for the bit-flip code. It has generators $ZZI$ and $IZZ$. The three weight-one errors are $XII$, $IXI$, and $IIX$. We can see that $XII$ anticommutes with $ZZI$ and commutes with $IZZ$; $IXI$ anticommutes with both $ZZI$ and $IZZ$; and $IIX$ anticommutes with $IZZ$ and commutes with $ZZI$. Since $ZZI$ and $IZZ$ are commuting observables, we can measure them to diagnose which error happened (or no error). The measured values for each generator are the error syndrome. Since Pauli operators are unitary and square to the identity, we can then undo the effects of the error by applying the appropriate Pauli operator again.
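This syndrome assignment can be computed directly from the commutation rule stated earlier: two Pauli strings anticommute exactly when they hold distinct non-identity Paulis at an odd number of positions. A short sketch:

```python
def commutes(a, b):
    """True if Pauli strings a, b commute: distinct non-identity
    Paulis must overlap at an even number of positions."""
    clashes = sum(1 for p, q in zip(a, b)
                  if p != 'I' and q != 'I' and p != q)
    return clashes % 2 == 0

generators = ['ZZI', 'IZZ']
for error in ['III', 'XII', 'IXI', 'IIX']:
    syndrome = tuple(+1 if commutes(error, g) else -1 for g in generators)
    print(error, syndrome)
# III (1, 1), XII (-1, 1), IXI (-1, -1), IIX (1, -1): the four syndromes
# are distinct, so each single bit flip (or no error) is identified uniquely.
```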
This is the general prescription for correcting errors with a stabilizer code. One measures the values of the stabilizer generators; from the resulting error syndrome, one deduces which error occurred and applies the inverse (which for Pauli operators means just applying the error again). If the true error operator was actually a linear combination of Pauli operators, measuring the stabilizer generators will project the state into a joint eigenspace, and one proceeds exactly as if the error had been a Pauli operator. Just like linear codes, for a small stabilizer code one can use a look-up table of error syndromes; for a larger code, a decoding algorithm is needed.
Also like a classical linear code, a stabilizer code has three parameters, generally written $[[n, k, d]]$, where n is the number of physical qubits of the code and k is the number of logical qubits encoded (meaning that the code space has dimension $2^k$). The number of stabilizer generators is $r = n - k$. The third parameter d is the minimum weight of any Pauli operator that commutes with all the stabilizer generators but is not itself an element of the stabilizer group. (The weight of a Pauli operator is the number of operators in the tensor product that are not the identity.) If a stabilizer code has distance d, it can correct an arbitrary error of weight $t \le \lfloor (d-1)/2 \rfloor$ (Lidar & Brun, 2013). If errors act independently on the qubits, and the error probability per qubit p is not too large, then with high probability any errors that occur will be of low weight. The Shor code is $[[9, 1, 3]]$; the Steane code is $[[7, 1, 3]]$; and the bit-flip and phase-flip codes are both $[[3, 1, 1]]$. (They have distance 1 because they cannot detect arbitrary errors.)
6.4 Calderbank-Shor-Steane Codes
It turns out that the Shor and Steane codes are also stabilizer codes. With some work, one can identify a set of stabilizer generators for each of them. For the Shor code, a set of stabilizer generators is:

ZZIIIIIII, IZZIIIIII, IIIZZIIII, IIIIZZIII, IIIIIIZZI, IIIIIIIZZ, XXXXXXIII, IIIXXXXXX.
Note the asymmetry in form between the generators involving $Z$ operators and those involving $X$ operators. This is because of the concatenated structure of the code. By contrast, we can find a set of stabilizer generators for the Steane code that are highly symmetric:

IIIXXXX, IXXIIXX, XIXIXIX, IIIZZZZ, IZZIIZZ, ZIZIZIZ.
Here, the generators involving $X$ operators (which are used in detecting and correcting $Z$ errors) and the generators involving $Z$ operators (which are used in detecting and correcting $X$ errors) have exactly the same form; moreover, the pattern of Is and Xs (or Zs) exactly matches the pattern of 0s and 1s in the three rows of the parity-check matrix for the Hamming code.
Both the Shor and Steane codes have sets of stabilizer generators where one subset involves only Is and Xs and the other involves only Is and Zs. Codes with this structure are called Calderbank-Shor-Steane (CSS) codes (Calderbank & Shor, 1996; Shor, 1995; Steane, 1996a, 1996b). The QECC can be thought of as the intersection of two classical linear codes, one in the Z basis (which can correct X errors) and one in the X basis (which can correct Z errors). In some cases—like the Steane code—these two linear codes are the same (the Hamming code in this case). In other cases—like the Shor code—the two linear codes are different. It is possible to separately correct X and Z errors just as one would for a classical code (though that is generally not optimal, since X and Z errors may be correlated). The size of the intersection determines the number of logical qubits; the Hamming code encodes four bits, but the Steane code encodes only one qubit because the intersection is only two-dimensional.
However, there is a very important constraint on how codes can be combined in this way to make a CSS code, because the final set of stabilizer generators must all commute. This constraint turns out to be equivalent to an orthogonality condition for the original codes: if the code for correcting bit flips has parity-check matrix $H_X$ and the code for correcting phase flips has parity-check matrix $H_Z$, then they must satisfy the binary matrix equation $H_X H_Z^T = 0$. This means that for a code like the Steane code that uses the same classical code for both X and Z, the parity-check matrix must be self-orthogonal: $HH^T = 0$.
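For the Steane code this is easy to check numerically: every pair of rows of the Hamming parity-check matrix overlaps in an even number of 1s.

```python
import numpy as np

# The Hamming parity-check matrix serves as both H_X and H_Z for the Steane
# code; the CSS constraint H_X H_Z^T = 0 (mod 2) reduces to H H^T = 0.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

assert np.all((H @ H.T) % 2 == 0)   # every pair of rows overlaps evenly
```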
CSS codes are not the only way to construct stabilizer codes from classical linear codes. A more general method gives the Calderbank-Rains-Shor-Steane (CRSS) codes (Calderbank et al., 1997). The parity-check matrices in this case obey a more general self-orthogonality condition analogous to the CSS case.
6.5 Degenerate Codes
Stabilizer codes inherit many of the properties of classical linear codes, but there are certain properties unique to quantum codes. These arise in part because, while the codes are modeled on codes with an additive structure, the noise is actually multiplicative. One such property is known as degeneracy. Consider, again, the Shor code. The errors $Z_1$, $Z_2$, and $Z_3$ all transform the code space in exactly the same way. They all have the same error syndrome and can be corrected by the same unitary correction operator. This degeneracy arises because these operators differ by an element of the stabilizer group: $Z_2 = Z_1(Z_1Z_2)$ and $Z_3 = Z_2(Z_2Z_3)$, where $Z_1Z_2 = ZZIIIIIII$ and $Z_2Z_3 = IZZIIIIII$ are stabilizer elements. Since the operators $Z_1Z_2$ and $Z_2Z_3$ stabilize the codeword, these three errors have identical effects. These errors cannot be distinguished from each other by the code, but they are all corrected by applying $Z_1$ (or, equivalently, $Z_2$ or $Z_3$). This can happen with any stabilizer code: an error E has the same effect as ES where S is any element of the stabilizer group.
We say a given code is degenerate if its correctable error set contains degenerate errors. For an $[[n, k, d]]$ stabilizer code, the correctable error set is generally taken to include all Pauli errors with weight $t \le \lfloor (d-1)/2 \rfloor$ (which of course implies the ability to correct arbitrary errors of that weight and lower). For example, both the Shor code and Steane code have minimum distance $d = 3$, which means they can correct all errors of weight 1; but the Shor code is degenerate, while the Steane code is not.
6.6 Logical Operators
An $[[n, k, d]]$ stabilizer code has $r = n - k$ independent stabilizer generators. But there are operators in the Pauli group that are not elements of the stabilizer group but commute with every stabilizer generator. These are called logical operators, because they act directly on the encoded logical qubits. Such operators could represent encoded quantum gates, but they can also represent undetectable errors. Any code with $k > 0$ has logical operators.
It is common to write down a set of canonical logical operators for a code, comprising k anticommuting pairs, one for each encoded logical qubit. For example, the bit-flip code has logical operators XXX and ZII, which act as Pauli operators X and Z on the encoded qubit. For the phase-flip code the equivalent operators are XII and ZZZ, and for the Steane code they are XXXXXXX and ZZZZZZZ. It is important to remember, however, that one can multiply a logical operator by any element of the stabilizer group without changing its action on the codewords. So for the Steane code, the operators XXXIIII and ZZZIIII are an equivalent pair of logical operators. The lowest-weight logical operator (aside from the identity) has a weight equal to d.
6.7 The Five-Qubit Code and General Stabilizer Codes
All of the codes examined so far have been CSS codes, which might give the impression that most stabilizer codes are CSS codes. This is certainly not the case (though CSS codes are widely used in fault-tolerant quantum computation). The first non-CSS code discovered was the five-qubit code, discovered independently by Bennett et al. (1996) and by Laflamme et al. (1996). This code has parameters $[[5, 1, 3]]$; there are several variations of this code, but they are all equivalent in their properties. One version of this code has stabilizer generators

XZZXI, IXZZX, XIXZZ, ZXIXZ.
None of these stabilizer generators involves only Xs or only Zs, and it is not hard to show that there is no set of generators for this code that does. Codes like this cannot be interpreted as the intersection of a code for $X$ errors and a code for $Z$ errors.
This code encodes a single logical qubit in five physical qubits and can protect against an arbitrary error on any one qubit. It is shorter than the Steane code, and, like the Steane code, it is not degenerate. In fact, one can show that no quantum code that can protect against an arbitrary single-qubit error can be shorter than five qubits. If a code has n physical qubits, it must be able to distinguish $3n + 1$ distinct error syndromes ($X$, $Y$, and $Z$ on each physical qubit, plus the identity). If it has $n - k$ stabilizer generators there are $2^{n-k}$ distinct error syndromes. So we must have $2^{n-k} \ge 3n + 1$. For $k = 1$ the smallest n that satisfies this inequality is $n = 5$, where the two sides are equal ($2^4 = 16 = 3 \cdot 5 + 1$). For this reason, the five-qubit code is sometimes called a "perfect" code.
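The counting argument takes only a few lines to check:

```python
# With k = 1 logical qubit, a code needs 2^(n-1) syndromes to cover the
# 3n single-qubit Pauli errors plus the identity.
for n in range(1, 9):
    if 2 ** (n - 1) >= 3 * n + 1:
        print(f"smallest n: {n} (2^(n-1) = {2 ** (n - 1)}, 3n+1 = {3 * n + 1})")
        break
# -> smallest n: 5 (2^(n-1) = 16, 3n+1 = 16): the bound is met with equality.
```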
6.8 Encoding and Decoding Circuits
This discussion of quantum codes and error correction has been rather abstract in terms of encoding into subspaces and measurements of observables. How would this be done in practice? In quantum information science, one generally decomposes unitary transformations and measurements into quantum circuits, which are sequences of quantum gates: unitary transformations that act on only one or two qubits at a time. There are standard sets of quantum gates that are widely used. On one qubit, common gates include the Hadamard (H), phase (S), and π/8 (T) gates,

$$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad S = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}, \qquad T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix},$$
as well as the usual Pauli operators (Nielsen & Chuang, 2000). On two qubits the most common gates are the controlled-NOT (CNOT) and controlled-phase (CZ) gates. Using these simple gates, we can write down encoding circuits for the bit-flip and phase-flip codes (see Fig. 1). Extracting the error syndromes can also be done by a quantum circuit (see Fig. 2). These error syndromes specify the correction to be done, if any.

Figure 1. (a) The encoding circuit for the 3-qubit bit-flip code, involving two CNOT gates. (b) The encoding circuit for the 3-qubit phase-flip code. Encoding involves two CNOT gates and three Hadamard gates.

Figure 2. The syndrome extraction circuit for the 3-qubit bit-flip code. Two CNOTs put each parity check in one of the two extra qubits at the bottom (called ancillary qubits or ancillas), which is then measured in the standard basis. The outcome of these measurements is an error syndrome, which tells what correction should be applied (if any). For syndrome 00, no correction is done; for 01, X is applied to qubit 3; for 10, X is applied to qubit 1; and for 11, X is applied to qubit 2.
An interesting property of stabilizer codes is that their encoding and syndrome-reading circuits can always be written using just three kinds of quantum gates: the CNOT, the Hadamard gate H, and the phase gate S. These three gates generate (up to a global phase) a subgroup of the unitary group called the Clifford group. This is the set of unitaries that preserve the Pauli group under similarity transformations: $U \mathcal{P}_n U^\dagger = \mathcal{P}_n$. So one can think of encoding and decoding circuits as transforming one stabilizer group into another. Another interesting property is that, given the ability to do arbitrary Clifford unitaries, plus any one unitary outside the Clifford group, one can generate any arbitrary unitary transformation on n qubits. Such a gate set is universal. But encoding and decoding can be done just with the Clifford gates (Gottesman, 1997).
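The defining Clifford property is easy to verify for the single-qubit generators: H exchanges X and Z, while S maps X to Y and leaves Z fixed. A NumPy check:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j])

assert np.allclose(H @ X @ H.conj().T, Z)   # H X H^dag = Z
assert np.allclose(H @ Z @ H.conj().T, X)   # H Z H^dag = X
assert np.allclose(S @ X @ S.conj().T, Y)   # S X S^dag = Y
assert np.allclose(S @ Z @ S.conj().T, Z)   # S Z S^dag = Z
```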
7. Fault-Tolerance and Error Correction for Quantum Computation
Up to this point we have only considered the question of protecting static quantum information from noise. This is a typical model of quantum communication or storage: it is assumed that the encoding and correction/decoding of the quantum information are error-free and that decoherence happens only during transmission through a noisy channel. Ignoring the errors during encoding and decoding is an idealization, but it is reasonable to separate the effects of errors due to imperfect quantum circuits from the unavoidable errors in passing through the channel.
This separation does not make sense when we consider quantum computation. Here we are not only storing information but processing it. If left unchecked, errors can accumulate and spread during processing until they are uncorrectable, which would restrict the size of computations that could be done. To combat this, repeated error corrections are required, so one must consider the effects of errors during the correction process itself. Fault-tolerant quantum computation (FTQC) is the set of principles that allows the use of repeated error correction during a long quantum computation without introducing more errors than are corrected (Aharonov, 1999; Aharonov et al., 1996; Aharonov & Ben-Or, 1997, 1999; Aliferis et al., 2006; DiVincenzo & Shor, 1996; Gottesman, 1997, 1998; Kitaev, 1997a, 1997b; Knill et al., 1998a, 1998b; Knill, Laflamme, & Viola, 2000; Preskill, 1997, 1998a, 1998b; Reichardt, 2005a; Shor, 1996).
7.1 Principles of Fault Tolerance
FTQC is a very large topic in itself, so we can only touch on some of the basic ideas. Here are a few of the guiding principles:
Never decode the quantum information. All operations must be done on the encoded quantum data.
Quantum circuits acting on encoded data should be robust against errors. The circuits should not cause a correctable error to spread until it is an uncorrectable error.
The encoded information should be corrected periodically, to catch and remove errors before they accumulate.
Error correction circuits also should not spread errors.
It is impossible ever to remove all errors; but any residual errors should be correctable, so they can be caught and removed in the next error-correction step.
The requirements of fault tolerance are quite stringent, and not every QECC can meet them. One QECC that looks more powerful than another “on paper” may be less suitable for FTQC. For example, it is much harder to do encoded operations on the five-qubit “perfect” code than on the seven-qubit Steane code. CSS codes are widely used in FTQC because their additional structure makes it easier to design fault-tolerant circuits for them.
7.2 Encoded Gates
FTQC usually starts with a QECC (or a family of QECCs) and designs fault-tolerant circuits for a universal set of encoded gates. As mentioned at the end of the subsection on encoding and decoding circuits, the ability to do CNOT, H, and S gates, plus any one non-Clifford gate, implies universality. A very common approach to FTQC is to use a code that allows efficient encoded Clifford gates and then use a more difficult technique to do a non-Clifford gate (most often the π/8 gate T). For example, the Steane code allows all Clifford gates to be done transversally, that is, by applying a gate to each qubit separately.
Transversal gates are particularly useful for fault tolerance because they do not spread errors from one qubit to another. In the case of the transversal CNOT, a single-qubit error on one codeword can spread to a different codeword, but it cannot spread to a second qubit on the same codeword. Various approaches are available to do non-Clifford gates fault-tolerantly; one of the most common is to prepare a special state, called a magic state, which can be input into a circuit using only Clifford gates to effectively produce a non-Clifford gate. Low-error encoded versions of these states are prepared by a process called magic state distillation (Bravyi & Haah, 2012; Bravyi & Kitaev, 2005; Haah, Hastings, Poulin, & Wecker, 2017; Knill, 2004a, 2004b; Reichardt, 2005b).
7.3 Threshold Theorems
A key element underlying FTQC is a set of results called threshold theorems. These theorems prove that if the physical rate of errors is below a certain value, called the error threshold, then it is possible to perform a quantum computation of arbitrary size (Aharonov, 1999; Aharonov et al., 1996; Aharonov & Ben-Or, 1997, 1999; Aliferis et al., 2006; Gottesman, 1997; Knill et al., 1998a, 1998b; Preskill, 1998b; Reichardt, 2005a; Shor, 1996). The additional overhead for all the extra error correction and fault-tolerant design (over an ideal, error-free quantum computer) scales like a polynomial in the log of the size of the ideal circuit (i.e., the total number of gates). This scaling grows quite slowly, so in principle one can scale up to very large quantum computations with only relatively modest overhead.
The first threshold theorems used the idea of a concatenated code. Each qubit of the ideal circuit is replaced by a code word (e.g., in the Steane code), and each gate is replaced by a circuit for an encoded gate. After each encoded gate, an error correction step is performed. If the success probability is still too low, one iterates this encoding process as many times as necessary to bring the success probability up to a desired value.
A simple back-of-the-envelope argument shows why this can work. Suppose that the error rate per gate is $p$, and the ideal circuit has N gates, so the success probability scales like $(1-p)^N$. Suppose we encode each qubit in a code that can correct any single-qubit error. Then the probability of an uncorrectable error per encoded gate becomes roughly $Cp^2$, where C is a constant representing the increased size of the circuit. If we iterate this process k times (k levels of concatenation), then the rate of uncorrectable errors becomes $(Cp)^{2^k}/C$. So if $p < 1/C$, then the rate of uncorrectable errors goes down doubly exponentially in k while the size of the circuit grows only singly exponentially (roughly like $G^k$, where G is the size of the circuit gadget that replaces each gate).
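Numerically, this double-exponential suppression is dramatic. The following sketch uses an illustrative constant $C = 10^4$ (i.e., an assumed threshold of $10^{-4}$) and a physical error rate just below it:

```python
C = 1e4    # illustrative constant; threshold p_th = 1/C = 1e-4
p = 3e-5   # physical error rate per gate, below threshold

for k in range(5):
    p_logical = (C * p) ** (2 ** k) / C   # uncorrectable error rate after k levels
    print(f"level {k}: uncorrectable error rate ~ {p_logical:.1e}")
# Each level squares (C p) < 1, so the logical error rate plummets while
# the circuit grows by only a constant factor per level.
```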
Underlying these theorems is a set of assumptions that might or might not hold in realistic quantum computers. Typically, these assumptions resemble the following:
The quantum computer allows parallel operations (in particular, error correction can be done in parallel throughout the computer).
Errors are not strongly correlated across the qubits in the computer (the probability of a high-weight error should fall off like a binomial distribution).
Errors on quantum gates affect the qubits taking part in the gates but not other unrelated qubits. Two-qubit gates can be done between any pair of qubits.
Qubit measurements can be done quickly, and their error rates are not much higher than the error rates for quantum gates.
Memory errors occur at a rate less than the rate of errors for quantum gates.
Information about the error syndromes for the quantum codes can be processed (classically) quickly and without errors.
This set of assumptions can be relaxed in various ways while still producing a threshold theorem, but this can adversely affect the size of the threshold. The threshold depends strongly on the code being used, the particular error model, and the choice of fault-tolerant methods. Early threshold theorems estimated error thresholds of $10^{-5}$ errors per gate or smaller (Aliferis et al., 2006), a very difficult goal to meet experimentally. Improvements in both fault-tolerant methods and proofs have gradually raised these thresholds, so that concatenated codes can now have thresholds of $10^{-3}$ or higher (Chamberland, Jochym-O'Connor, & Laflamme, 2016). It is also possible (up to a point) to trade off a higher threshold for larger overhead (Knill, 2004a, 2004b). However, the exponential growth of the circuit size is a large technological barrier, even if the asymptotic scaling rate is reasonable.
More recent work has improved matters considerably by looking at different classes of codes: the topological codes (Bravyi & Kitaev, 1998b; Freedman & Meyer, 1998; Kitaev, 1997a, 2003; Preskill, 1997), especially the surface code (Bravyi, Suchara, & Vargo, 2014; Fowler, Mariantoni, Martinis, & Cleland, 2012; Fowler, Stephens, & Groszkowski, 2009; Fowler, Wang, & Hollenberg, 2011), but also color codes, among others (Bombin, 2015; Brown, Nickerson, & Browne, 2016; Kubica & Beverland, 2015; Landahl, Anderson, & Rice, 2011). These encode quantum information in a (usually two-dimensional) lattice of qubits and are designed to mimic the topologically-protected properties of two-dimensional quantum field theories with non-Abelian anyons (Kitaev, 2003). These are generally CSS codes, but the stabilizer generators are all local operators on the lattice with bounded weight, making them much easier to measure. (See Fig. 3.)

Figure 3. The qubit lattice for the surface code. The qubits are located on the edges; the stabilizers comprise two types of local operators. Z operators are located around plaquettes (P in the figure), while X operators are located around vertices (V in the figure).
Encoded operations are done by a combination of code deformation (such as braiding lattice defects around each other) and magic state distillation. Threshold theorems have been proven for these codes that suggest an error threshold of $10^{-3}$ (0.1%) or so, but numerical evidence suggests that the practical threshold may be even higher than that (on the order of 0.5%) (Stephens, 2014). Moreover, the distances of these codes can be scaled up linearly, rather than in the exponential leaps of concatenated codes, which makes their overhead scale more nicely. These properties have made surface codes and other topological codes the subject of intense theoretical and numerical research, as well as of experimental plans (Barends et al., 2014).
With the recent development of small, noisy quantum computers, other approaches (Brun, Zheng, Hsu, Job, & Lai, 2015; Chao & Reichardt, 2018a, 2018b; Lai, Zheng, & Brun, 2017; Reichardt, 2018; Steane, 1999, 2003; Zheng, Lai, & Brun, 2018) have also been explored, which might reduce the overhead enough for FTQC to be possible in near-term experiments. FTQC is a very active field, with constant progress being made; it is fair to say that we do not know what methods practical quantum computers of the future will use.
8. Variations and Generalizations of Quantum Error Correction
8.1 Other Kinds of Quantum Noise
In practice, the physical systems used as qubits are often not truly two-level systems. For instance, in quantum communication it is common to use the polarization states of a single photon to represent a qubit, but such photonic channels are subject to photon loss, in which the system goes to the vacuum state. Another common qubit uses two hyperfine levels of a trapped ion as a qubit; it is possible for the ion to make an unintended transition to an excited state. Such errors (where the system leaves the qubit subspace spanned by $|0\rangle$ and $|1\rangle$) are generically called leakage errors. It is quite possible to design QECCs to deal with leakage errors as well as the kind of qubit errors considered in this article. Indeed, leakage errors are sometimes easier to correct; a measurement can reveal that a qubit is not in the usual subspace and then treat that as an erasure error (which, being at a known location, is easier to correct).
It is also commonly assumed that the errors are Markovian, so that the qubits do not interact repeatedly with the same environmental degrees of freedom. Many realistic systems, however, are non-Markovian. This is not always a serious problem, but it certainly makes the error process harder to model.
8.2 Convolutional Codes
The QECCs described in this article are all block codes: that is, they encode a fixed number of logical qubits k into a fixed number of physical qubits n. There is another type of code, called a convolutional code, which works differently: the logical qubits arrive in a steady stream, and, as they arrive, they are encoded and transmitted. The ratio of logical to physical qubits is fixed, but the length of the codeword can vary tremendously. Classical convolutional codes are widely used in communication; quantum convolutional codes (Forney, Grassl, & Guha, 2007; Grassl & Rötteler, 2006; Ollivier & Tillich, 2003) have been explored for similar use in quantum communication.
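Since quantum convolutional codes inherit their stream structure from their classical counterparts, a classical example may help fix ideas. The sketch below (in Python) implements a standard rate-1/2 classical convolutional encoder; the generators 7 and 5 (in octal) are a textbook choice assumed here for illustration, not taken from the quantum constructions cited above:

```python
# A classical rate-1/2 convolutional encoder with generators (7, 5) in octal.
# Bits arrive in a stream and are encoded on the fly through a short shift
# register, so the codeword grows with the message instead of being a
# fixed-length block.
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    state = [0, 0]                        # two-bit shift register (the memory)
    out = []
    for b in bits:
        window = [b] + state              # current bit plus the register
        out.append(sum(x * y for x, y in zip(window, g1)) % 2)
        out.append(sum(x * y for x, y in zip(window, g2)) % 2)
        state = [b, state[0]]             # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))          # two output bits per input bit
```

A quantum convolutional code replaces the stream of bits with a stream of qubits and the parity checks with stabilizer generators that repeat, shifted along the stream.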
8.3 Generalizations and Extensions of Stabilizer Codes
The QECCs described in this article have, for the most part, been qubit codes. But there has also been work on error-correcting codes for d-dimensional quantum systems, or qudits (Gottesman, 1999; Rains, 1999). The Pauli operators can be generalized in a number of ways to sets of d × d matrices, and connections can be made to linear codes over other finite fields.
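For example, in one standard construction the qubit Pauli operators X and Z become d × d shift and phase matrices obeying ZX = ωXZ, with ω = e^(2πi/d). A short sketch (in Python with NumPy):

```python
# Generalized Pauli operators for a d-level system: X shifts the basis states
# cyclically, Z applies phases that are powers of the d-th root of unity, and
# together they obey ZX = omega * XZ.
import numpy as np

def qudit_paulis(d):
    omega = np.exp(2j * np.pi / d)
    X = np.roll(np.eye(d), 1, axis=0)     # X|j> = |j+1 mod d>
    Z = np.diag(omega ** np.arange(d))    # Z|j> = omega^j |j>
    return X, Z, omega

d = 5
X, Z, omega = qudit_paulis(d)
print(np.allclose(Z @ X, omega * X @ Z))                      # True
print(np.allclose(np.linalg.matrix_power(X, d), np.eye(d)))   # X^d = I
```

Products of powers of these operators play the role for qudit stabilizer codes that Pauli strings play in the qubit case.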
For qubit codes, the basic stabilizer formalism described in this article can be generalized in a number of ways. One is in terms of operator or subsystem codes (Kribs, Laflamme, & Poulin, 2005; Kribs, Laflamme, Poulin, & Lesosky, 2006; Kribs & Spekkens, 2006; Nielsen & Poulin, 2007; Poulin, 2005). One can think of these codes as including, in addition to the logical qubits that are protected from noise, some additional gauge qubits that are not protected. No information can be stored in these gauge qubits, but their presence can be used to reduce the complexity of encoded operations (Bacon & Casaccino, 2006).
Another extension is to entanglement-assisted codes (Bennett et al., 1996; Bowen, 2002; Brun, Devetak, & Hsieh, 2006, 2014), which can be used for quantum communication. These codes assume that the sender and the receiver share some number of maximally entangled states prior to communication. This entanglement can dramatically boost the power of quantum codes, increasing their rate or their ability to correct errors, or both. It can also relax the self-orthogonality constraint in constructing QECCs from classical linear codes. It is also possible to construct entanglement-assisted operator codes (Brun, Devetak, & Hsieh, 2007; Hsieh, Devetak, & Brun, 2007).
A larger class of codes—which include the stabilizer codes as a special case—are the codeword stabilized (CWS) codes (Cross, Smith, Smolin, & Zeng, 2009; Van den Nest, Dehaene, & De Moor, 2004). These codes have received a limited amount of study; in some ways they can be more powerful than stabilizer codes, but general methods for constructing large CWS codes are not known, nor are efficient methods for encoding and decoding them.
8.4 Other Methods of Error Mitigation
QECCs and fault tolerance are the main methods known to protect quantum computers from decoherence, but there are other methods that can make a significant improvement in some cases. Decoherence-free subspaces and noiseless subsystems (Bacon, Kempe, Lidar, & Whaley, 2000; Lidar, Chuang, & Whaley, 1998; Zanardi, 2001; Zanardi & Rasetti, 1998) are encodings in which all errors are corrected passively: that is, stored information is immune to the effects of noise. Unfortunately, such subspaces (or subsystems) are not guaranteed to exist; they are generally only present when the noise process has special symmetries that can be exploited. Dynamical decoupling (DD) (Byrd & Lidar, 2002a; Duan & Guo, 1999; Khodjasteh & Lidar, 2005; Viola & Lloyd, 1998; Viola, Knill, & Lloyd, 1999; Viola, Lloyd, & Knill, 1999; Vitali & Tombesi, 1999; Zanardi, 1999), by contrast, is an active technique in which fast unitary pulses are applied to the quantum system in a regular pattern that causes the noise to average away to zero. DD has great advantages: it does not require measurements and feedback and can be applied to an entire set of qubits at once. But it also has limitations: it is only effective against non-Markovian noise, and it cannot increase the purity of a quantum state. However, it is possible to combine DD and QEC into a powerful hybrid approach to controlling errors in quantum systems (Byrd & Lidar, 2002b; Ng, Lidar, & Preskill, 2011).
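The simplest dynamical decoupling sequence, the spin echo, already illustrates the averaging principle. In the sketch below (in Python with NumPy and SciPy; the static detuning is an illustrative extreme of slow, non-Markovian noise), two fast X pulses exactly cancel an unknown Z-type error Hamiltonian:

```python
# Spin-echo sketch: free evolution under an unknown static detuning delta*Z is
# undone by a pair of fast X pulses, since X exp(-i*delta*Z*t) X =
# exp(+i*delta*Z*t). Reading the product right to left: evolve, pulse,
# evolve, pulse.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

delta, t = 0.37, 1.0                      # unknown detuning; free-evolution time
U_free = expm(-1j * delta * t * Z)

U_echo = X @ U_free @ X @ U_free          # the full echo sequence
print(np.allclose(U_echo, np.eye(2)))     # True: the detuning averages away
```

If the noise fluctuates appreciably during the sequence (the Markovian limit), the cancellation fails, which is why DD is only effective against non-Markovian noise.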
9. Experimental Implementation of Quantum Error Correction
While the theory of QEC has been worked out in considerable detail, the experimental challenges in realizing it are daunting. In encoding quantum data, one uses a redundant representation that will in general undergo more noise than the original data would have done if left unencoded. For instance, encoding a single qubit in a [[7,1,3]] Steane code might well increase the overall rate of errors sevenfold. The encoding circuit and the circuits for measuring error syndromes and applying corrections will all be subject to errors. Unless the intrinsic error rate for all these operations is low, the net effect of error correction will be worse than doing nothing. Fault tolerance is even more demanding, since it requires the processing as well as the storage of encoded data and generally needs the preparation of high-quality ancillary states.
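This break-even condition can be made quantitative with a standard scaling argument: for a distance-3 code a logical error requires at least two faults, so the logical error rate scales roughly as c·p² for physical error rate p, and encoding helps only when c·p² < p, that is, below the pseudo-threshold p = 1/c. The sketch below (in Python; the constant c is purely illustrative, not taken from any particular analysis) shows the crossover:

```python
# Break-even arithmetic for a distance-3 code: logical error rate p_L ~ c*p**2.
# Encoding helps only when p_L < p, i.e., below the pseudo-threshold 1/c.
c = 1000.0                                # assumed count of harmful fault pairs

for p in (3e-2, 3e-3, 3e-4):
    p_L = c * p**2
    verdict = "helps" if p_L < p else "hurts"
    print(f"p = {p:.0e}: p_L ~ {p_L:.1e} -> encoding {verdict}")
```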
However, these difficulties also hold a promise: if error rates can be reduced to a sufficiently low level, then QEC can effectively make them as low as one desires. It will be possible to scale quantum computers to allow computations of unlimited size. That is the promise that supports and propels the entire field.
A number of experiments have been done to prove the principles of QEC. Many of these experiments applied artificial errors to a codeword and showed that error correction produced an improvement in the quality of the state (Chiaverini et al., 2004; Cory et al., 1998; Knill, Laflamme, Martinez, & Negrevergne, 2001; Pittman et al., 2005; Reed et al., 2012). Others have demonstrated the necessary operations for QEC but not shown an actual extension in the lifetime of the stored quantum state (Kelly et al., 2015; Schindler et al., 2011). As of this writing (in 2019), only one experiment has implemented QEC against the native noise in its system and shown a net gain over not using QEC at all (Ofek et al., 2016). This experiment encoded qubits as nonclassical states of a superconducting resonator with particular symmetry properties and showed a moderately longer coherence lifetime than an unencoded qubit. This result, modest though it is, shows that experimental systems are approaching the point where the methods of QEC will become viable technologies.
10. Conclusions and Open Questions
QEC is the centerpiece of the current effort toward quantum information processing, both for quantum computers and quantum communication. Despite early naïve intuitions to the contrary, error correction turns out to be both possible and practical for quantum systems; by careful design of QECCs, it is possible to deduce and correct an error on a quantum system without gaining any information about (and hence disturbing) the protected quantum state. The most widely used and studied class of quantum codes are the stabilizer codes, which inherit much of their error-correcting structure from classical linear codes. Stabilizer codes are just one element of fault-tolerant quantum computation, which should allow quantum computations of arbitrary size to be done with high success probability, provided that the physical error rate is below an error threshold.
QEC is an extremely active field of research, and it is fair to say that we do not yet know the true limits of these techniques. Threshold theorems have been proven for a variety of codes and error models, but we really do not know how high a threshold can be achieved; numerical evidence suggests that an error threshold as high as a few percent might be possible. Moreover, many of the methods that have been studied have been aimed at proving asymptotic scaling results and may require highly unrealistic amounts of overhead (Fowler et al., 2012; Knill, 2004b; Lai, Paz, Suchara, & Brun, 2014; Reichardt, 2004; Steane, 2003; Svore, Terhal, & DiVincenzo, 2005). As small quantum computers have started to become available, the focus is turning toward greatly reducing this overhead while maintaining error-correcting power (Brun et al., 2015; Chao & Reichardt, 2018a, 2018b; Reichardt, 2018). In practice, we will only ever do computations of a finite size; ultimate scaling limits are of largely theoretical interest. A more practical question is this: For a given level of noise in a finite-sized quantum computer, how large a computation can be done with reasonable probability of success?
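One common rule of thumb in the surface-code literature estimates the logical error rate per logical qubit per round as p_L ≈ (p/p_th)^((d+1)/2), where p is the physical error rate, p_th the threshold, and d the code distance; with it, the question above reduces to simple arithmetic. The following sketch (in Python; all of the numbers are illustrative assumptions, not measured values) sizes the code needed for a computation with a given failure budget:

```python
# Rough sizing via the rule of thumb p_L ~ (p / p_th)**((d + 1) / 2), with
# purely illustrative numbers for the physical error rate, the threshold, and
# the size of the computation.
p, p_th = 1e-3, 1e-2                      # assumed physical rate and threshold
N = 1e12                                  # assumed logical operations in the run
budget_per_op = 1.0 / N                   # keep total failure probability O(1)

d = 3
while (p / p_th) ** ((d + 1) / 2) > budget_per_op:
    d += 2                                # surface-code distances are odd
print(f"distance d = {d}: ~{2 * d * d} physical qubits per logical qubit")
```

On these assumed numbers, a tenfold gap between the physical rate and the threshold already demands a code distance in the twenties, illustrating why overhead reduction remains a central research question.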
Prototype quantum computers have raised another important question: How well does the decoherence in real devices resemble the idealized assumptions underlying proofs of threshold theorems? There is strong evidence that current qubits are very far from having independent Markovian noise. Fortunately, recent theoretical and experimental work (Huang, Doherty, & Flammia, 2019; Knill, 2005; Viola & Knill, 2005; Wallman & Emerson, 2016; Ware et al., 2018) also suggests that new fault-tolerant methods (such as randomized Pauli frames) may be able to transform quite general error models to resemble the Markovian Pauli error models commonly used in QEC research.
It is likely that the best codes and decoding algorithms for quantum computation have yet to be discovered. As larger and less noisy quantum computers become available, we will increasingly be able to test the performance of error-correction ideas on real machines and to build on the dramatic theoretical progress that we have already seen. In less than 25 years, QEC has gone from a few scattered ideas, beset by skepticism and misunderstanding, to the large, vibrant field it is today. There is every reason to think that we are only at the beginning of what can be done.
Further Reading
For a broad overview of QEC, the book Quantum Error Correction, edited by Lidar and Brun (2013), contains chapters on a wide range of topics in quantum error correction and error mitigation, contributed by top experts in the field.
There is also a textbook, Quantum Error Correction and Fault Tolerant Quantum Computing, by Frank Gaitan (2008), which presents the theory of QEC in a systematic fashion, with exercises, aimed at graduate students, advanced undergraduates, or anyone with a good technical background who wants to learn the field.
On a more narrowly-focused level, Daniel Gottesman’s (1997) dissertation Stabilizer Codes and Quantum Error Correction was a key contribution to the field of QEC and is still one of the best introductions to stabilizer codes.
There is a recent review article on QEC and other methods of error prevention in Reviews of Modern Physics, “Protecting Quantum Information Against Environmental Noise,” by Suter and Alvarez (2016).
There are several excellent textbooks on quantum information science that include useful introductions to QEC. Still the most useful, nearly 20 years after it was written, is Quantum Computation and Quantum Information, by Michael A. Nielsen and Isaac L. Chuang (2000). While our understanding has advanced significantly since this book appeared, especially on experimental systems, as a presentation of the foundations of the field it is by far the best and most comprehensive.
An excellent, much shorter introduction at the undergraduate level is Quantum Computer Science: An Introduction, by N. David Mermin (2007).
For those who want a more physics-centered approach to the subject, the textbook Principles of Quantum Computation and Information, by Benenti, Casati, Rossini, and Strini (2019), is very good. A somewhat eclectic but very interesting textbook is Explorations in Quantum Computing, by Colin P. Williams (2011). For a more computer-science-based view, there is Quantum Computing for Computer Scientists, by Yanofsky and Mannucci (2008), and the interesting collection of musings in Quantum Computing Since Democritus, by Scott Aaronson (2013).
Quantum Information and Computation by Lo, Popescu and Spiller (1998) is an older book but is still interesting and gives insight into the rapid development of the field in the 1990s.
References
- Aaronson, S. (2013). Quantum computing since Democritus. Cambridge, U.K.: Cambridge University Press.
- Aharonov, D. (1999). Noisy quantum computation (Doctoral dissertation). The Hebrew University, Jerusalem, Israel.
- Aharonov, D., & Ben-Or, M. (1997). Fault tolerant computation with constant error. In F. T. Leighton & P. W. Shor (Eds.), Proceedings of the Twenty-Ninth Annual ACM Symposium on the Theory of Computing (pp. 176–188). New York, NY: ACM.
- Aharonov, D., & Ben-Or, M. (1999). Fault-tolerant quantum computation with constant error rate. SIAM Journal on Computing, 38(4), 1207–1282.
- Aharonov, D., Ben-Or, M., Impagliazzo, R., & Nisan, N. (1996). Limitations of noisy reversible computation. arXiv.
- Aliferis, P., Gottesman, D., & Preskill, J. (2006). Quantum accuracy threshold for concatenated distance-3 codes. Quantum Information and Computation, 6, 97–165.
- Bacon, D., & Casaccino, A. (2006). Quantum error correcting subsystem codes from two classical linear codes. arXiv.
- Bacon, D., Kempe, J., Lidar, D. A., & Whaley, K. B. (2000). Universal fault-tolerant computation on decoherence-free subspaces. Physical Review Letters, 85(8), 1758–1761.
- Barends, R., Kelly, J., Megrant, A., Veitia, A., Sank, D., Jeffrey, E., . . . Martinis, J. M. (2014). Superconducting quantum circuits at the surface code threshold for fault tolerance. Nature, 508, 500–503.
- Benenti, G., Casati, G., Rossini, D., & Strini, G. (2019). Principles of quantum computation and information (2nd ed.). Singapore: World Scientific.
- Benioff, P. (1980). The computer as a physical system: A microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines. Journal of Statistical Physics, 22(5), 563–591.
- Bennett, C. H., DiVincenzo, D. P., Smolin, J. A., & Wootters, W. K. (1996). Mixed state entanglement and quantum error correction. Physical Review A, 54, 3824–3851.
- Bombin, H. (2015). Gauge color codes: Optimal transversal gates and gauge fixing in topological stabilizer codes. New Journal of Physics, 17, 083002.
- Bowen, G. (2002). Entanglement required in achieving entanglement-assisted channel capacities. Physical Review A, 66(5), 052313.
- Bravyi, S., & Haah, J. (2012). Magic state distillation with low overhead. Physical Review A, 86, 052329.
- Bravyi, S., & Kitaev, A. Y. (2005). Universal quantum computation with ideal Clifford gates and noisy ancillas. Physical Review A, 71, 022316.
- Bravyi, S., Suchara, M., & Vargo, A. (2014). Efficient algorithms for maximum likelihood decoding in the surface code. Physical Review A, 90(3), 032326.
- Brown, B. J., Nickerson, N. H., & Browne, D. E. (2016). Fault-tolerant error correction with the gauge color code. Nature Communications, 7, 12302.
- Brun, T. A., Devetak, I., & Hsieh, M.-H. (2006). Correcting quantum errors with entanglement. Science, 314(5798), 436–439.
- Brun, T. A., Devetak, I., & Hsieh, M.-H. (2007). General entanglement-assisted quantum error-correcting codes. In 2007 IEEE International Symposium on Information Theory (pp. 2101–2105). Piscataway, NJ: IEEE.
- Brun, T. A., Devetak, I., & Hsieh, M.-H. (2014). Catalytic quantum error correction. IEEE Transactions on Information Theory, 60(6), 3073–3089.
- Brun, T. A., Zheng, Y.-C., Hsu, K.-C., Job, J., & Lai, C.-Y. (2015). Teleportation-based fault-tolerant quantum computation in multi-qubit large block codes. arXiv.
- Byrd, M. S., & Lidar, D. A. (2002a). Bang-bang operations from a geometric perspective. Quantum Information Processing, 1, 19.
- Byrd, M. S., & Lidar, D. A. (2002b). Comprehensive encoding and decoupling solution to problems of decoherence and design in solid-state quantum computing. Physical Review Letters, 89(4), 047901.
- Calderbank, A. R., Rains, E. M., Shor, P. W., & Sloane, N. J. A. (1997). Quantum error correction and orthogonal geometry. Physical Review Letters, 78, 405–408.
- Calderbank, A. R., & Shor, P. W. (1996). Good quantum error-correcting codes exist. Physical Review A, 54, 1098–1105.
- Chamberland, C., Jochym-O’Connor, T., & Laflamme, R. (2016). Thresholds for universal concatenated quantum codes. Physical Review Letters, 117(1), 010501.
- Chao, R., & Reichardt, B. W. (2018a). Fault-tolerant quantum computation with few qubits. NPJ Quantum Information, 4, 42.
- Chao, R., & Reichardt, B. W. (2018b). Quantum error correction with only two extra qubits. Physical Review Letters, 121(5), 050502.
- Chiaverini, J., Leibfried, D., Schaetz, T., Barrett, M. D., Blakestad, R. B., Britton, J., . . . Wineland, D. J. (2004). Realization of quantum error correction. Nature, 432, 602–605.
- Cory, D. G., Mass, W., Price, M., Knill, E., Laflamme, R., Zurek, W. H., . . . Somaroo, S. S. (1998). Experimental quantum error correction. Physical Review Letters, 81, 2152–2155.
- Cross, A., Smith, G., Smolin, J. A., & Zeng, B. (2009). Codeword stabilized quantum codes. IEEE Transactions on Information Theory, 55(1), 433.
- Deutsch, D. (1985). Quantum theory, the Church–Turing principle and the universal quantum computer. Proceedings of the Royal Society A, 400(1818), 97.
- Deutsch, D. (1989). Quantum computational networks. Proceedings of the Royal Society A, 425(1868), 73.
- Deutsch, D., & Jozsa, R. (1992). Rapid solution of problems by quantum computation. Proceedings of the Royal Society A, 439(1907), 553.
- Dieks, D. (1982). Communication by EPR devices. Physics Letters A, 92(6), 271–272.
- DiVincenzo, D. P., & Shor, P. W. (1996). Fault-tolerant error correction with efficient quantum codes. Physical Review Letters, 77, 3260–3263.
- Duan, L.-M., & Guo, G. (1999). Suppressing environmental noise in quantum computation through pulse control. Physics Letters A, 261(3–4), 139.
- Dyakonov, M. (2019, March). The case against quantum computing. IEEE Spectrum.
- Ekert, A., & Macchiavello, C. (1996). Error correction in quantum communication. Physical Review Letters, 77, 2585–2588.
- Feynman, R. P. (1982). Simulating physics with computers. International Journal of Theoretical Physics, 21(6–7), 467–488.
- Forney, G. D., Grassl, M., & Guha, S. (2007). Convolutional and tail-biting quantum error-correcting codes. IEEE Transactions on Information Theory, 53(3), 865–880.
- Fowler, A. G., Mariantoni, M., Martinis, J. M., & Cleland, A. N. (2012). Surface codes: Towards practical large-scale quantum computation. Physical Review A, 86(3), 032324.
- Fowler, A. G., Stephens, A. M., & Groszkowski, P. (2009). High threshold universal quantum computation on the surface code. Physical Review A, 80(5), 052312.
- Fowler, A. G., Wang, D. S., & Hollenberg, L. C. L. (2011). Surface code quantum error correction incorporating accurate error propagation. Quantum Information and Computation, 11, 8.
- Gaitan, F. (2008). Quantum error correction and fault tolerant quantum computing. Boca Raton, FL: CRC Press.
- Gottesman, D. (1996). Class of quantum error-correcting codes saturating the quantum Hamming bound. Physical Review A, 54, 1862–1868.
- Gottesman, D. (1997). Stabilizer codes and quantum error correction (Doctoral dissertation). California Institute of Technology, Pasadena.
- Gottesman, D. (1998). Theory of fault-tolerant quantum computation. Physical Review A, 57(1), 127–137.
- Gottesman, D. (1999). Fault-tolerant quantum computation with higher-dimensional systems. Chaos, Solitons & Fractals, 10, 1749–1758.
- Grassl, M., & Rötteler, M. (2006). Noncatastrophic encoders and encoder inverses for quantum convolutional codes. In 2006 IEEE International Symposium on Information Theory. Piscataway, NJ: IEEE.
- Haah, J., Hastings, M. B., Poulin, D., & Wecker, D. (2017). Magic state distillation with low space overhead and optimal asymptotic input count. Quantum, 1, 31.
- Haroche, S., & Raimond, J.-M. (1996). Quantum computing: Dream or nightmare? Physics Today, 49(8), 51.
- Hsieh, M.-H., Devetak, I., & Brun, T. A. (2007). General entanglement-assisted quantum error-correcting codes. Physical Review A, 76(6), 062313.
- Huang, E., Doherty, A. C., & Flammia, S. (2019). Performance of quantum error correction with coherent errors. Physical Review A, 99(2), 022313.
- Kalai, G. (2011). How quantum computers fail: Quantum codes, correlations in physical systems, and noise accumulation. arXiv.
- Kelly, J., Barends, R., Fowler, A. G., Megrant, A., Jeffrey, E., White, T. C., . . . Martinis, J. M. (2015). State preservation by repetitive error detection in a superconducting quantum circuit. Nature, 519, 66–69.
- Khodjasteh, K., & Lidar, D. A. (2005). Fault-tolerant quantum dynamical decoupling. Physical Review Letters, 95(18), 180501.
- Kitaev, A. Y. (1997a). Quantum computations: Algorithms and error correction. Russian Mathematical Surveys, 52(6), 1191–1249.
- Kitaev, A. Y. (1997b). Quantum error correction with imperfect gates. In A. S. Holevo, O. Hirota, & C. M. Caves (Eds.), Quantum communication, computing, and measurement (pp. 181–188). New York, NY: Plenum Press.
- Kitaev, A. Y. (2003). Fault-tolerant quantum computation by anyons. Annals of Physics, 303(1), 2–30.
- Knill, E. (2004a). Fault-tolerant postselected quantum computation: Schemes. arXiv.
- Knill, E. (2004b). Fault-tolerant postselected quantum computation: Threshold analysis. arXiv.
- Knill, E. (2005). Quantum computing with realistically noisy devices. Nature, 434, 39–44.
- Knill, E., & Laflamme, R. (1997). Theory of quantum error-correcting codes. Physical Review A, 55(2), 900–911.
- Knill, E., Laflamme, R., Martinez, R., & Negrevergne, C. (2001). Benchmarking quantum computers: The five-qubit error correcting code. Physical Review Letters, 86, 5811–5814.
- Knill, E., Laflamme, R., & Viola, L. (2000). Theory of quantum error correction for general noise. Physical Review Letters, 84, 2525–2528.
- Knill, E., Laflamme, R., & Zurek, W. H. (1998a). Resilient quantum computation. Science, 279(5349), 342–345.
- Knill, E., Laflamme, R., & Zurek, W. H. (1998b). Resilient quantum computation: Error models and thresholds. Proceedings of the Royal Society A, 454(1969), 365–384.
- Kribs, D. W., Laflamme, R., & Poulin, D. (2005). Unified and generalized approach to quantum error correction. Physical Review Letters, 94(18), 180501.
- Kribs, D. W., Laflamme, R., Poulin, D., & Lesosky, M. (2006). Operator quantum error correction. Quantum Information and Computation, 6, 383–399.
- Kribs, D. W., & Spekkens, R. W. (2006). Quantum error-correcting subsystems are unitarily recoverable subsystems. Physical Review A, 74(4), 042329.
- Kubica, A., & Beverland, M. E. (2015). Universal transversal gates with color codes: A simplified approach. Physical Review A, 91(3), 032330.
- Laflamme, R., Miquel, C., Paz, J.-P., & Zurek, W. H. (1996). Perfect quantum error correction code. Physical Review Letters, 77(1), 198–201.
- Lai, C.-Y., Paz, G., Suchara, M., & Brun, T. A. (2014). Performance and error analysis of Knill’s postselection scheme in a two-dimensional architecture. Quantum Information and Computation, 14(9–10), 807–822.
- Lai, C.-Y., Zheng, Y.-C., & Brun, T. A. (2017). Fault-tolerant preparation of stabilizer states for quantum Calderbank-Shor-Steane codes by classical error-correcting codes. Physical Review A, 95(3), 032339.
- Landahl, A. J., Anderson, J. T., & Rice, P. R. (2011). Fault-tolerant quantum computing with color codes. arXiv.
- Landauer, R. (1996). The physical nature of information. Physics Letters A, 217(4–5), 188–193.
- Lidar, D. A., & Brun, T. A. (Eds.). (2013). Quantum error correction. Cambridge, U.K.: Cambridge University Press.
- Lidar, D. A., Chuang, I. L., & Whaley, K. B. (1998). Decoherence-free subspaces for quantum computation. Physical Review Letters, 81(12), 2594–2597.
- Lo, H.-K., Popescu, S., & Spiller, T. (Eds.). (1998). Quantum information and computation. Singapore: World Scientific.
- Manin, Y. (1980). Computable and uncomputable (in Russian). Moscow, Russia: Sovetskoye Radio.
- Mermin, N. D. (2007). Quantum computer science: An introduction. Cambridge, U.K.: Cambridge University Press.
- Moskvitch, K. (2018, February 7). The argument against quantum computers. Quanta Magazine.
- Ng, H. K., Lidar, D. A., & Preskill, J. (2011). Combining dynamical decoupling with fault-tolerant quantum computation. Physical Review A, 84(1), 012305.
- Nielsen, M. A., & Chuang, I. L. (2000). Quantum computation and quantum information. Cambridge, U.K.: Cambridge University Press.
- Nielsen, M. A., & Poulin, D. (2007). Algebraic and information-theoretic conditions for operator quantum error correction. Physical Review A, 75(6), 064304.
- Ofek, N., Petrenko, A., Heeres, R., Reinhold, P., Leghtas, Z., Vlastakis, B., . . . Schoelkopf, R. J. (2016). Extending the lifetime of a quantum bit with error correction in superconducting circuits. Nature, 536, 441–445.
- Ollivier, H., & Tillich, J.-P. (2003). Description of a quantum convolutional code. Physical Review Letters, 91(17), 177902.
- Park, J. L. (1970). The concept of transition in quantum mechanics. Foundations of Physics, 1(1), 23–33.
- Pittman, T. B., Jacobs, B. C., & Franson, J. D. (2005). Demonstration of quantum error correction using linear optics. Physical Review A, 71, 052332.
- Poulin, D. (2005). Stabilizer formalism for operator quantum error correction. Physical Review Letters, 95(23), 230504.
- Preskill, J. (1997). Fault-tolerant quantum computation. Caltech Report number CALT-68-2150.
- Preskill, J. (1998a). Fault-tolerant quantum computation. In H.-K. Lo, T. Spiller, & S. Popescu (Eds.), Quantum information and computation (pp. 213–269). Singapore: World Scientific.
- Preskill, J. (1998b). Reliable quantum computers. Proceedings of the Royal Society A, 454(1969), 385–410.
- Rains, E. M. (1999). Nonbinary quantum codes. IEEE Transactions on Information Theory, 45(6), 1827–1832.
- Reed, M. D., DiCarlo, L., Nigg, S. E., Frunzio, L., Girvin, S. M., & Schoelkopf, R. J. (2012). Realization of three-qubit quantum error correction with superconducting circuits. Nature, 482, 382–385.
- Reichardt, B. W. (2004). Improved ancilla preparation scheme increases fault-tolerant threshold. arXiv.
- Reichardt, B. W. (2005a). Fault-tolerance threshold for a distance-three quantum code. arXiv.
- Reichardt, B. W. (2005b). Quantum universality from magic states distillation applied to CSS codes. Quantum Information Processing, 4(3), 251–264.
- Reichardt, B. W. (2018). Fault-tolerant quantum error correction for Steane’s seven-qubit color code with few or no extra qubits. arXiv.
- Schindler, P., Barreiro, J. T., Monz, T., Nebendahl, V., Nigg, D., Chwalla, M., . . . Blatt, R. (2011). Experimental repetitive quantum error correction. Science, 332, 1059–1061.
- Schumacher, B. (1995). Quantum coding. Physical Review A, 51(4), 2738–2747.
- Shor, P. W. (1994). Algorithms for quantum computation: Discrete logarithms and factoring. In Proceedings of the 35th Annual Symposium on Foundations of Computer Science (pp. 124–134). Los Alamitos, CA: IEEE Press.
- Shor, P. W. (1995). Scheme for reducing decoherence in quantum computer memory. Physical Review A, 52(4), 2493–2496.
- Shor, P. W. (1996). Fault-tolerant quantum computation. In Proceedings of the 37th Annual Symposium on Foundations of Computer Science (pp. 56–65). Los Alamitos, CA: IEEE Press.
- Shor, P. W., & Laflamme, R. (1997). Quantum analog of the MacWilliams identities for classical coding theory. Physical Review Letters, 78(8), 1600–1602.
- Simon, D. R. (1994). On the power of quantum computation. In Proceedings of the 35th Annual Symposium on Foundations of Computer Science (pp. 116–123). Los Alamitos, CA: IEEE Press.
- Steane, A. M. (1996a). Error correcting codes in quantum theory. Physical Review Letters, 77(5), 793–797.
- Steane, A. M. (1996b). Multiple-particle interference and quantum error correction. Proceedings of the Royal Society A, 452(1954), 2551–2576.
- Steane, A. M. (1999). Efficient fault-tolerant quantum computing. Nature, 399, 124–126.
- Steane, A. M. (2003). Overhead and noise threshold of fault-tolerant quantum error correction. Physical Review A, 68(4), 042322.
- Stephens, A. M. (2014). Fault-tolerant thresholds for quantum error correction with the surface code. Physical Review A, 89(2), 022321.
- Suter, D., & Álvarez, G. A. (2016). Protecting quantum information against environmental noise. Reviews of Modern Physics, 88, 041001.
- Svore, K. M., Terhal, B. M., & DiVincenzo, D. P. (2005). Local fault-tolerant quantum computation. Physical Review A, 72, 022317.
- Unruh, W. G. (1995). Maintaining coherence in quantum computers. Physical Review A, 51, 992–997.
- Van den Nest, M., Dehaene, J., & De Moor, B. (2004). Graphical description of the action of local Clifford transformations on graph states. Physical Review A, 69(2), 022316.
- Viola, L., & Knill, E. (2005). Random decoupling schemes for quantum dynamical control and error suppression. Physical Review Letters, 94(6), 060502.
- Viola, L., Knill, E., & Lloyd, S. (1999). Dynamical decoupling of open quantum systems. Physical Review Letters, 82(12), 2417–2421.
- Viola, L., & Lloyd, S. (1998). Dynamical suppression of decoherence in two-state quantum systems. Physical Review A, 58(4), 2733–2744.
- Viola, L., Lloyd, S., & Knill, E. (1999). Universal control of decoupled quantum systems. Physical Review Letters, 83(23), 4888–4891.
- Vitali, D., & Tombesi, P. (1999). Using parity kicks for decoherence control. Physical Review A, 59(6), 4178–4186.
- Wallman, J. J., & Emerson, J. (2016). Noise tailoring for scalable quantum computation via randomized compiling. Physical Review A, 94(5), 052325.
- Ware, M., Ribeill, G., Riste, D., Ryan, C. A., Johnson, B., & da Silva, M. P. (2018). Experimental demonstration of Pauli-frame randomization on a superconducting qubit. arXiv.
- Williams, C. P. (2011). Explorations in quantum computing (2nd ed.). London, U.K.: Springer-Verlag.
- Wootters, W. K., & Zurek, W. H. (1982). A single quantum cannot be cloned. Nature, 299(5886), 802–803.
- Yanofsky, N. S., & Mannucci, M. A. (2008). Quantum computing for computer scientists. New York, NY: Cambridge University Press.
- Yao, A. C. (1993). Quantum circuit complexity. In Proceedings of the 34th Annual Symposium on Foundations of Computer Science (pp. 352–361). Los Alamitos, CA: IEEE Press.
- Zanardi, P. (1999). Symmetrizing evolutions. Physics Letters A, 258(2–3), 77–82.
- Zanardi, P. (2001). Stabilizing quantum information. Physical Review A, 63, 012301.
- Zanardi, P., & Rasetti, M. (1998). Noiseless quantum codes. Physical Review Letters, 79(17), 3306–3309.
- Zheng, Y.-C., Lai, C.-Y., & Brun, T. A. (2018). Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation. Physical Review A, 97(3), 032331.