Tuesday, March 7, 2006

Quantum error correction

I only have one minute. John Preskill's second talk was about quantum error correction and it was also very interesting. I hope to write a few more words after the dinner with the speaker...

The dinner was fun except that the discussion focused on the governance of Harvard (after some intriguing debates before the dinner about the relation between dualities in string theory and computer science). At the beginning of the dinner, I evaluated the situation and told myself: Luboši, shut up, drink wine, enjoy the comments about the "conservative idiocy", and listen to Larry-bashing, Wisse-bashing, Mansfield-bashing, and other bashings. :-)

All this far left-wing silliness is nothing else than the price for a dinner that is otherwise free - what a great deal. You may not be at a politically reasonable, diverse, or free university, but you are still in a free country and you will be free to express your feelings after the dinner. If you could live in an environment with the political diversity of the Pyongyang parliament - to use the words of the Wall Street Journal - for one half of your life, three more hours can't kill you.

I can effectively transform and rationalize the character of a political discussion if my minority viewpoint represents at least 25% of the environment, but this time the actual number was far below this figure, so I had to enjoy the anti-conservative atmosphere just like I did so many times in communist Czechoslovakia: we're back before 1989. But we had fun. And I learned many things, including news from today's FAS faculty meeting that I did not attend. Now that Summers has announced his resignation, the situation at Harvard resembles anarchy a bit.

Back to the Loeb lecture. Preskill was speaking about error correction. First, one must understand the classical error-correcting algorithms. You add some redundant bits to each cluster of bits, and they guarantee that you can reconstruct the full information as long as at most one error occurs within the group. For qubits, the situation is more complex because there are different kinds of errors. After discretization, they can be classified as the action of one of the Pauli matrices sigmaX, sigmaY, or sigmaZ on one of the qubits. You can make several simultaneous commuting measurements on a group of qubits, determine what error occurred, and perform the appropriate action that corrects the error.
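To make the classical starting point concrete, here is a minimal sketch of the simplest redundancy scheme - a three-bit repetition code with a majority vote; the function names are mine and purely illustrative:

```python
# Classical toy example: protect one bit by storing three copies.
# Majority vote recovers the bit as long as at most one copy is flipped.

def encode(bit):
    return [bit, bit, bit]                 # redundant copies

def correct(copies):
    return 1 if sum(copies) >= 2 else 0    # majority vote

word = encode(1)
word[2] ^= 1                               # a single bit-flip error
print(correct(word))                       # -> 1, the original bit is recovered
```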

For example, you can choose a group of five qubits to "emulate" a single qubit. One may choose four commuting operators M1, M2, M3, M4, each of them a tensor product of identity or Pauli matrices acting on the five qubits. By measuring the eigenvalues +-1 of these four operators, you get one of 16 possibilities. That is exactly enough: there are 15 different kinds of single errors that can occur - a kind of error is described by choosing one of the 5 qubits where the error took place and one of the three Pauli matrices that acted on it - plus the 16th possibility that no error occurred at all. From the 4 bits of classical information you obtain from the measurements, you can determine exactly what error has occurred.
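For the curious, a minimal sketch of this counting, assuming the standard textbook generators of the five-qubit code (the cyclic shifts of XZZXI); I don't know whether Preskill used exactly this basis for M1, ..., M4, but the counting works the same way:

```python
# Stabilizer generators of the standard [[5,1,3]] code: cyclic shifts of XZZXI.
# (Assumed here as one concrete choice of M1..M4; the talk may have used another basis.)
generators = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]

def syndrome(qubit, pauli):
    """Which generators anticommute with a single-qubit error 'pauli' on 'qubit'?"""
    bits = []
    for g in generators:
        # Two single-qubit Paulis anticommute iff both are non-identity and different.
        bits.append(1 if g[qubit] not in ("I", pauli) else 0)
    return tuple(bits)

syndromes = {syndrome(q, p): (q, p) for q in range(5) for p in "XYZ"}
print(len(syndromes))              # -> 15 distinct non-trivial syndromes
print((0, 0, 0, 0) in syndromes)   # -> False: the 16th pattern means "no error"
```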

In the case of 5 qubits, I was of course imagining how these operators M1, M2, M3, M4 may be represented as products of gamma matrices of SO(10) in the spinorial basis. ;-) But the number 5 is not so special: there are other error-correcting codes with a different number of qubits, such as 7, where the 3+3 M-operators built only from 1, sigmaX and only from 1, sigmaZ, respectively, decouple from each other.
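To make the 3+3 structure explicit, here is a sketch assuming the usual convention for the 7-qubit code (the Steane code), where both triplets of generators are read off from the parity-check matrix of the classical [7,4] Hamming code; I don't claim this was exactly the basis used in the talk:

```python
# The 7-qubit (Steane) code: both the X-type and the Z-type stabilizers
# come from the same parity-check matrix of the classical [7,4] Hamming code.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def pauli_string(row, pauli):
    return "".join(pauli if bit else "I" for bit in row)

x_stabilizers = [pauli_string(r, "X") for r in H]   # detect sigmaZ (phase) errors
z_stabilizers = [pauli_string(r, "Z") for r in H]   # detect sigmaX (bit-flip) errors

for s in x_stabilizers + z_stabilizers:
    print(s)
# IIIXXXX, IXXIIXX, XIXIXIX, IIIZZZZ, IZZIIZZ, ZIZIZIZ
```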

If you want to make really long calculations, you need to reduce the error rate to a very small number. That's achieved by representing each qubit in your quantum computer by - for example - a group of five qubits, each of which is in turn emulated by another, embedded group of five qubits, and so forth. You need just a couple of levels in this hierarchy to make reasonably long calculations reliable enough: the error rate goes down as a power of the "elementary error rate", where the exponent grows exponentially with the number of levels in the hierarchy, so the dependence is "doubly exponential".
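A back-of-the-envelope illustration of this doubly exponential suppression - the constant c below is an invented illustrative number, not a figure from the talk:

```python
# Toy model of concatenation: one level of encoding maps an error rate p to ~ c*p**2,
# so L levels give p_L ~ (c*p)**(2**L) / c, a doubly exponential suppression,
# provided p is below the "threshold" 1/c.  (c = 100 is an illustrative value only.)
c, p = 100.0, 1e-3

rate = p
for level in range(1, 5):
    rate = c * rate**2
    print(level, rate)
# 1e-04, 1e-06, 1e-10, 1e-18: the exponent of the suppression doubles at each level
```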

These algorithms are able to fix a single error but they of course fail if more than one qubit in the group suffers an error. Once the error corrections become a part of your calculation, the overall probabilities of failure become proportional to "epsilon^2", where "epsilon" is the rate of a single error. You can show that by making the computations increasingly hierarchical, you can improve the reliability of your quantum calculation arbitrarily, as long as the probability of error per gate is reduced below a certain critical value.

For the particular realization of the error-correcting codes by Preskill et al., the probability of error per gate must be below 27.3 parts per million. Someone else has rigorously proved that a slightly more efficient error-correcting scheme allows you to use computers with a probability of error per gate as high as 75 parts per million or so, which is almost 10^{-4}. If your technology - and Preskill considers linear optics and ion traps to be the most developed and tested technologies - gets below this threshold, a couple of levels of the hierarchy will allow you to build a usable and reliable quantum computer.
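To see how few levels you need once you are below the threshold, here is a toy estimate; the physical error rate and the target logical error rate are invented numbers, and only the ballpark of the threshold comes from the figures above:

```python
# p_L ~ p_th * (p/p_th)**(2**L): how many levels of concatenation push the logical
# error rate below a target, once the physical rate p is under the threshold p_th.
# p and the target are illustrative; the threshold ballpark is taken from the talk.
p_th, p, target = 7.5e-5, 1e-5, 1e-15

L = 0
while p_th * (p / p_th) ** (2 ** L) > target:
    L += 1
print(L)   # -> 4 levels of the hierarchy already suffice in this toy estimate
```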

Actually, the requirements on the reliability of the individual gates quantified above are pretty pessimistic - they follow from the worst-case scenario. In reality, a computer might work even if the microscopic error rate were slightly higher than the figures above. A quantum computer and its algorithms should be fault-tolerant, which means that even the errors occurring during the error-correction steps themselves do not spoil the computation. There are various mathematical theorems showing that goodness of a particular error-correcting framework implies (apolitical) correctness. Such a proof can be done at the leading level, and there are tricks emulating the inductive step of mathematical induction that tell you that if "goodness implies correctness" at one level, the same statement holds at a slightly finer level.

There were various questions after the talk. Someone effectively proposed to trash the whole talk - to trash the error-correcting algorithms - and consider a computer without any error correction mechanism in it, and just try to repeat the calculation many times to find the correct result. Of course, Preskill explained that you would need an exponentially large number of repetitions to find the result with such a flawed quantum computer, and such an approach would not be too different from random guessing. In other words, an error-correcting mechanism is a necessary part of any realistic quantum computer we can think of. The error corrections slow down the calculation only modestly: the exponential speed-up is unaffected; it is only corrected by a power-law "overhead".
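Preskill's point is easy to quantify with a one-line estimate; the gate count and error rate below are invented for illustration:

```python
# Without error correction, a circuit of N gates finishes error-free with
# probability (1 - p)**N, so you would need ~ 1/(1-p)**N repetitions on average.
# (N and p are illustrative numbers, not figures from the talk.)
p, N = 1e-4, 10**6
success = (1 - p) ** N
print(success)            # ~ 3.7e-44
print(1 / success)        # ~ 2.7e43 expected repetitions - hopeless
```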

There are various subtle details about the efficiency of the error-correcting algorithms. The gates that only act on a single qubit are the easiest ones to protect. To build a set of operations sufficient for "universal" calculations, you need several more operations that act on several qubits simultaneously and transform them according to some unitary operator. One of the rules is that these operations must be local in space and time for the error-correcting algorithms to remain usable.

Quantum computation is in some sense analogous to string theory: it is based on mathematically formulated physical laws that we believe to be almost certainly correct, but whose validity in this realm has not been verified experimentally, and there is a justifiable skepticism that a direct experimental realization or test of these ideas may be impossible in the foreseeable future.

I have drunk some wine so I apologize if the text above makes no sense whatsoever. ;-)