1–6 of 6 Results for: Quantum Information

Article

Erica K. Grant and Travis S. Humble

Adiabatic quantum computing (AQC) is a model of computation that uses quantum mechanical processes operating under adiabatic conditions. As a form of universal quantum computation, AQC employs the principles of superposition, tunneling, and entanglement that manifest in quantum physical systems. The AQC model of quantum computing is distinguished by the use of dynamical evolution that is slow with respect to the time and energy scales of the underlying physical systems. This adiabatic condition enforces the promise that the quantum computational state will remain well-defined and controllable, thus enabling the development of new algorithmic approaches. Several notable algorithms developed within the AQC model include methods for solving unstructured search and combinatorial optimization problems. In an idealized setting, the asymptotic complexity analyses of these algorithms indicate that computational speed-ups may be possible relative to state-of-the-art conventional methods. However, the presence of non-ideal conditions, including non-adiabatic dynamics, residual thermal excitations, and physical noise, complicates the assessment of the potential computational performance. A relaxation of the adiabatic condition is captured in the complementary computational heuristic of quantum annealing (QA), which accommodates physical systems operating at finite temperature and in open environments. While QA provides a more accurate model for the behavior of actual quantum physical systems, the possibility of non-adiabatic effects obscures a clear separation from conventional computing complexity. A series of technological advances in the control of quantum physical systems has enabled experimental AQC and QA. Prominent examples include demonstrations using superconducting electronics, which encode quantum information in the magnetic flux induced by a weak current operating at cryogenic temperatures. A family of devices developed specifically for unconstrained optimization problems has been applied to solve problems in specific domains including logistics, finance, materials science, machine learning, and numerical analysis. An accompanying infrastructure has also been developed to support these experimental demonstrations and to enable access by a broader community of users. Although AQC is most commonly applied in superconducting technologies, alternative approaches include optically trapped neutral atoms and ion-trap systems. The significant progress in the understanding of AQC has revealed several open topics that continue to motivate research into this model of quantum computation. Foremost is the development of methods for fault-tolerant operation that will ensure the scalability of AQC for solving large-scale problems. In addition, unequivocal experimental demonstrations that differentiate the computational power of AQC and its variants from conventional computing approaches are needed. This will also require advances in the fabrication and control of quantum physical systems under the adiabatic restrictions.
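
For orientation (this formulation is standard in the AQC literature but is not given in the abstract itself), the slow dynamical evolution described above is usually written as an interpolation between a driver Hamiltonian H_B, whose ground state is easy to prepare, and a problem Hamiltonian H_P, whose ground state encodes the answer; T is the total anneal time and Δ(s) the instantaneous gap between the two lowest energy levels. The run-time bound below is the commonly quoted heuristic version of the adiabatic condition; rigorous statements differ in constants and exponents.

```latex
% Standard AQC interpolation and heuristic adiabatic condition (illustrative only):
\begin{align}
  H(s) &= (1 - s)\,H_B + s\,H_P, \qquad s = t/T \in [0,1],\\
  T    &\gtrsim \frac{\max_{s}\,\bigl\lVert \partial_s H(s) \bigr\rVert}{\min_{s}\,\Delta(s)^{2}} .
\end{align}
```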

Article

Measurement-based quantum computation is a framework of quantum computation in which entanglement is used as a resource and local measurements on qubits are used to drive the computation. It originates from the one-way quantum computer of Raussendorf and Briegel, who introduced the so-called cluster state as the underlying entangled resource state and showed that any quantum circuit can be executed by performing only local measurements on individual qubits. The randomness in the measurement outcomes can be dealt with by adapting future measurement axes so that the computation is deterministic. Subsequent works have expanded the discussion of measurement-based quantum computation to various subjects, including the quantification of entanglement for such a measurement-based scheme, the search for other resource states beyond cluster states, and computational phases of matter. In addition, the measurement-based framework provides useful connections to the emergence of time ordering, computational complexity and classical spin models, blind quantum computation, and so on, and has given an alternative, resource-efficient approach to implementing the original linear-optics quantum computation of Knill, Laflamme, and Milburn. Cluster states and a few other resource states have been created experimentally in various physical systems, and the measurement-based approach offers a potential alternative to the standard circuit approach to realizing a practical quantum computer.
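
The elementary step described above, a local measurement that drives the computation with an outcome-dependent correction, can be checked numerically. The sketch below is illustrative and not taken from the article: an arbitrary input qubit is entangled with a |+> cluster qubit via a controlled-Z gate, and an X-basis measurement of the first qubit teleports H|psi> onto the second qubit up to a Pauli-X byproduct fixed by the measurement outcome m (the random seed and state are arbitrary choices).

```python
import numpy as np

# Single-qubit states and gates.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

# Arbitrary (random) input state |psi> on qubit 1.
rng = np.random.default_rng(0)
amps = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = amps / np.linalg.norm(amps)

# Entangle the input with the cluster qubit |+> on qubit 2.
state = CZ @ np.kron(psi, plus)

for m, basis in enumerate([plus, minus]):       # X-basis outcomes m = 0, 1
    # Project qubit 1 onto the outcome: <basis|_1 (x) I_2 applied to the state.
    out = basis.conj() @ state.reshape(2, 2)
    out = out / np.linalg.norm(out)
    # Undo the byproduct operator X^m; the result equals H|psi> up to a phase.
    corrected = np.linalg.matrix_power(X, m) @ out
    overlap = abs(np.vdot(H @ psi, corrected))
    print(f"outcome m={m}: overlap with H|psi> = {overlap:.6f}")   # ~1.0 for both
```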

Article

Quantum mechanics is one of the most successful theories of nature. It accounts for all known properties of matter and light, and it does so with an unprecedented level of accuracy. On top of this, it has generated many new technologies that are now part of daily life. In many ways, it can be said that we live in a quantum world. Yet quantum theory is subject to an intense debate about its meaning as a theory of nature, which started at the very beginning and has never ended. The essence of the debate was captured by Schrödinger with the cat paradox: why do cats behave classically instead of being quantum, like the one imagined by Schrödinger? Answering this question digs deep into the foundations of quantum mechanics. A possible answer is given by Dynamical Collapse Theories. Their fundamental assumption is that the Schrödinger equation, which is supposed to govern all quantum phenomena (at the non-relativistic level), is only approximately correct. It is an approximation of a nonlinear and stochastic dynamics, according to which the wave functions of microscopic objects can be in a superposition of different states because the nonlinear effects are negligible, while those of macroscopic objects are always very well localized in space because the nonlinear effects dominate for increasingly massive systems. Thus, microscopic systems behave quantum mechanically, while macroscopic ones such as Schrödinger’s cat behave classically simply because the (newly postulated) laws of nature say so. By changing the dynamics, collapse theories make predictions that differ from quantum-mechanical predictions, and it becomes interesting to test the various collapse models that have been proposed. Experimental effort is increasing worldwide; since no collapse signal has been detected so far, these experiments place limits on the values of the theory’s parameters quantifying the collapse, but they may in the future find such a signal and open up a window beyond quantum theory.
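
The abstract gives no equations; as a point of orientation (a schematic textbook form, not taken from the article), continuous collapse models replace the Schrödinger equation with a stochastic, nonlinear equation of the single-operator type below. Here Â is a collapse operator (in specific models, a smeared mass-density operator), λ sets the collapse strength, ⟨Â⟩_t = ⟨ψ_t|Â|ψ_t⟩ introduces the nonlinearity, and dW_t is a Wiener increment; concrete models such as GRW or CSL differ in the choice of operators and rates.

```latex
% Schematic continuous-collapse (stochastic Schroedinger) dynamics -- illustrative only:
\begin{equation}
  d\lvert\psi_t\rangle =
  \Bigl[ -\tfrac{i}{\hbar}\hat{H}\,dt
         + \sqrt{\lambda}\,\bigl(\hat{A}-\langle\hat{A}\rangle_t\bigr)\,dW_t
         - \tfrac{\lambda}{2}\bigl(\hat{A}-\langle\hat{A}\rangle_t\bigr)^{2}dt
  \Bigr]\lvert\psi_t\rangle .
\end{equation}
```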

Article

John Bartholomew and Cornelius Hempel

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Physics. Please check back later for the full article. Quantum bits – or qubits – are the central concept on which all quantum information science and technology are built. In contrast to the binary digits (bits) of classical information processing, where voltages across a capacitor encode either a logical 0 or 1, qubits encode information distributed across two levels of a quantum mechanical system. Building hardware to harness the power of the qubit concept is no longer restricted to academic research but has become a focus for commercial entities ranging from small start-ups to large, established corporations, in a high-stakes race to capitalize on what is often called the second quantum revolution. This name refers to the targeted use of two quantum effects, “superposition” and “entanglement”, to obtain a computational advantage, providing a significant algorithmic speed-up in certain applications or even enabling previously intractable calculations. Simultaneously, however, these two quantum coherent effects are at the core of the fragility, or “decoherence”, of the information encoded by qubits. As a consequence, qubit realizations are being pursued in many different modalities, e.g., single atoms, atom-like structures or defects in solids, superconducting circuits, and photons. Each type of physical qubit has specific advantages and disadvantages captured by a range of figures of merit, e.g., the time over which coherence can be maintained, the speed of operations, and the error rates in operations and measurement. As the number of qubits in quantum computing systems grows, collective effects in addition to individual qubit properties quickly make hardware characterization and error analysis intractable, especially in the presence of unavoidable noise and control errors. To enable fault-tolerant operation at scales large enough to provide the sought-after advantage over classical computation, multiple physical qubits can be combined into so-called logical qubits, an algorithmic redundancy that facilitates quantum error correction. As the overhead of this approach is directly related to the underlying individual physical qubit error rates and controllability, research into a diverse array of qubit modalities continues unabated and, as of yet, with no clear winner in sight.
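
As a point of reference (a textbook illustration, not part of this summary), the "two levels" mentioned above are conventionally written as a normalized superposition parameterized by the angles θ and φ, and the simplest model of the decoherence mentioned above is pure dephasing: the off-diagonal coherences of the density matrix decay with an assumed time constant T_2 while the populations stay fixed (here α = cos(θ/2) and β = e^{iφ} sin(θ/2)).

```latex
% Textbook qubit state and a pure-dephasing illustration (not from the summary):
\begin{align}
  \lvert\psi\rangle &= \cos\tfrac{\theta}{2}\,\lvert 0\rangle
                       + e^{i\varphi}\sin\tfrac{\theta}{2}\,\lvert 1\rangle, \\
  \rho(t) &= \begin{pmatrix}
               \lvert\alpha\rvert^{2} & \alpha\beta^{*}e^{-t/T_2}\\[2pt]
               \alpha^{*}\beta\,e^{-t/T_2} & \lvert\beta\rvert^{2}
             \end{pmatrix}.
\end{align}
```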

Article

Joel Wallman, Steven Flammia, and Ian Hincks

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Physics. Please check back later for the full article. Quantum systems may outperform current digital technologies at various information processing tasks, such as simulating the dynamics of quantum systems and factoring integers. Quantum Characterization, Verification, and Validation (QCVV) is the procedure for estimating the quality of physical quantum systems for use as information processors. QCVV consists of three components. Characterization means determining the effect of control operations on a quantum system and the nature of the external noise acting on it. The first characterization experiments (Rabi, Ramsey, and Hahn-echo) were developed in the context of nuclear magnetic resonance. As other effective two-level systems with varying noise models have been identified and couplings have become more complex, additional techniques such as tomography and randomized benchmarking have been developed specifically for quantum information processing. Verification means confirming that a control operation implements a desired ideal operation to within a specified precision. Often, these targets are set by the requirements for quantum error correction and fault-tolerant quantum computation in specific architectures. Validation means demonstrating that a quantum information processor can solve specific problems. For problems whose solution can be efficiently verified (e.g., prime factorization), validation may involve running a corresponding quantum algorithm (e.g., Shor’s algorithm) and analyzing the time taken to produce the correct solution. For problems whose solution cannot be efficiently verified, for example quantum simulation, developing adequate techniques is an active area of research. The essential features that make a device useful as a quantum information processor also create difficulties for QCVV, and specialized techniques have been developed to surmount these difficulties. The field is now entering a mature phase in which a broad range of techniques can address all three tasks. As quantum information processors continue to scale up and improve, these three tasks are likely to become increasingly relevant, and many challenges remain.
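
To make one of the characterization techniques named above concrete (an illustrative sketch with synthetic numbers and an assumed single-qubit decay, not anything from the article), a randomized-benchmarking experiment reduces to fitting the survival probability after m random Clifford gates to the model A·p^m + B and converting the fitted decay constant to an average error rate r = (1 − p)(d − 1)/d, with d = 2 for a single qubit.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic randomized-benchmarking data: assumed "true" decay parameters plus noise.
rng = np.random.default_rng(1)
m = np.arange(1, 201, 10)                       # Clifford sequence lengths
A_true, B_true, p_true = 0.48, 0.5, 0.995
survival = A_true * p_true**m + B_true + rng.normal(0, 0.005, m.size)

def rb_model(m, A, B, p):
    # Standard RB decay model: survival probability vs. sequence length.
    return A * p**m + B

(A_fit, B_fit, p_fit), _ = curve_fit(rb_model, m, survival, p0=[0.5, 0.5, 0.99])
r = (1 - p_fit) * (2 - 1) / 2                   # average error per Clifford, d = 2
print(f"fitted decay p = {p_fit:.4f}, average error rate r = {r:.2e}")
```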

Article

Todd A. Brun

Quantum error correction is a set of methods to protect quantum information, that is, quantum states, from unwanted environmental interactions (decoherence) and other forms of noise. The information is stored in a quantum error-correcting code, which is a subspace in a larger Hilbert space. This code is designed so that the most common errors move the state into an error space orthogonal to the original code space while preserving the information in the state. It is possible to determine whether an error has occurred by a suitable measurement and to apply a unitary correction that returns the state to the code space without measuring (and hence disturbing) the protected state itself. In general, codewords of a quantum code are entangled states. No code that stores information can protect against all possible errors; instead, codes are designed to correct a specific error set, which should be chosen to match the most likely types of noise. An error set is represented by a set of operators that can multiply the codeword state. Most work on quantum error correction has focused on systems of quantum bits, or qubits, which are two-level quantum systems. These can be physically realized by the states of a spin-1/2 particle, the polarization of a single photon, two distinguished levels of a trapped atom or ion, the current states of a microscopic superconducting loop, or many other physical systems. The most widely used codes are the stabilizer codes, which are closely related to classical linear codes. The code space is the joint +1 eigenspace of a set of commuting Pauli operators on n qubits, called stabilizer generators; the error syndrome is determined by measuring these operators, which allows errors to be diagnosed and corrected. A stabilizer code is characterized by three parameters [[n, k, d]], where n is the number of physical qubits, k is the number of encoded logical qubits, and d is the minimum distance of the code (the smallest number of simultaneous qubit errors that can transform one valid codeword into another). Every useful code has n > k; this physical redundancy is necessary to detect and correct errors without disturbing the logical state. Quantum error correction is used to protect information in quantum communication (where quantum states pass through noisy channels) and quantum computation (where quantum states are transformed through a sequence of imperfect computational steps in the presence of environmental decoherence to solve a computational problem). In quantum computation, error correction is just one component of fault-tolerant design. Other approaches to error mitigation in quantum systems include decoherence-free subspaces, noiseless subsystems, and dynamical decoupling.
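
The syndrome-measurement idea described above can be made concrete with the smallest stabilizer-style example, the three-qubit bit-flip code with stabilizer generators Z1 Z2 and Z2 Z3. The sketch below is a minimal classical simulation, not code from the article: it applies at most one X error, reads off the two stabilizer eigenvalues, and applies the correction dictated by the syndrome without ever learning the encoded amplitudes alpha and beta (the particular amplitudes and random seed are arbitrary choices).

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def kron(*ops):
    # Tensor product of a list of operators.
    return reduce(np.kron, ops)

# Encode an arbitrary logical state alpha|000> + beta|111>.
alpha, beta = 0.6, 0.8j
basis = np.eye(8, dtype=complex)
logical = alpha * basis[0] + beta * basis[7]

# Apply a bit-flip (X) error on one qubit chosen at random, or no error at all.
flips = [kron(X, I2, I2), kron(I2, X, I2), kron(I2, I2, X), np.eye(8, dtype=complex)]
which = int(np.random.default_rng(2).integers(0, 4))    # index 3 means "no error"
corrupted = flips[which] @ logical

# Measure the stabilizer generators Z1 Z2 and Z2 Z3. The corrupted state is an
# exact eigenstate of both, so the expectation values are exactly +1 or -1.
S1, S2 = kron(Z, Z, I2), kron(I2, Z, Z)
syndrome = (int(np.sign((corrupted.conj() @ S1 @ corrupted).real)),
            int(np.sign((corrupted.conj() @ S2 @ corrupted).real)))

# The syndrome identifies the error location without revealing alpha or beta.
lookup = {(-1, +1): 0, (-1, -1): 1, (+1, -1): 2, (+1, +1): 3}
recovered = flips[lookup[syndrome]] @ corrupted
print("error index:", which, "| syndrome:", syndrome,
      "| recovered state matches:", np.allclose(recovered, logical))
```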