### Article

## Adiabatic Quantum Computing and Quantum Annealing

### Erica K. Grant and Travis S. Humble

Adiabatic quantum computing (AQC) is a model of computation that uses quantum mechanical processes operating under adiabatic conditions. As a form of universal quantum computation, AQC employs the principles of superposition, tunneling, and entanglement that manifest in quantum physical systems. The AQC model is distinguished by the use of dynamical evolution that is slow with respect to the time and energy scales of the underlying physical systems. This adiabatic condition ensures that the quantum computational state remains well defined and controllable, enabling the development of new algorithmic approaches.
Several notable algorithms developed within the AQC model include methods for solving unstructured search and combinatorial optimization problems. In an idealized setting, the asymptotic complexity analyses of these algorithms indicate that computational speed-ups may be possible relative to state-of-the-art conventional methods. However, the presence of non-ideal conditions, including non-adiabatic dynamics, residual thermal excitations, and physical noise, complicates the assessment of the potential computational performance. A relaxation of the adiabatic condition is captured in the complementary computational heuristic of quantum annealing (QA), which accommodates physical systems operating at finite temperature and in open environments. While QA provides a more accurate model for the behavior of actual quantum physical systems, the possibility of non-adiabatic effects obscures a clear separation from conventional computing complexity.
A series of technological advances in the control of quantum physical systems has enabled experimental AQC and QA. Prominent examples include demonstrations using superconducting electronics, which encode quantum information in the magnetic flux induced by a weak current operating at cryogenic temperatures. A family of devices developed specifically for unconstrained optimization problems has been applied to solve problems in domains including logistics, finance, materials science, machine learning, and numerical analysis. An accompanying infrastructure has also developed to support these experimental demonstrations and to enable access by a broader community of users. Although AQC is most commonly realized in superconducting technologies, alternative approaches include optically trapped neutral atoms and ion-trap systems.
The significant progress in the understanding of AQC has revealed several open topics that continue to motivate research into this model of quantum computation. Foremost is the development of methods for fault-tolerant operation that will ensure the scalability of AQC for solving large-scale problems. In addition, unequivocal experimental demonstrations that differentiate the computational power of AQC and its variants from conventional computing approaches are needed. This will also require advances in the fabrication and control of quantum physical systems under adiabatic restrictions.

### Article

## Circuit Model of Quantum Computation

### James Wootton

Quantum circuits are an abstract framework to represent quantum dynamics. They are used to formally describe and reason about processes within quantum information technology. They are primarily used in quantum computation, quantum communication, and quantum cryptography—for which they provide a machine code–level description of quantum algorithms and protocols. The quantum circuit model is an abstract representation of these technologies based on the use of quantum circuits, with which algorithms and protocols can be concretely developed and studied.
Quantum circuits are typically based on the concept of qubits: two-level quantum systems that serve as a fundamental unit of quantum hardware. In their simplest form, circuits take a set of qubits initialized in a simple known state, apply a set of discrete single- and two-qubit evolutions known as “gates,” and then finally measure all qubits. Any quantum computation can be expressed in this form through a suitable choice of gates, in a quantum analogy of the Boolean circuit model of conventional digital computation.
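The flow just described (initialize, apply gates, measure) can be made concrete with a minimal statevector sketch. The following toy example is ours, not from the article; it uses plain Python, takes qubit 0 as the most significant bit, and prepares a Bell state from |00⟩ with a Hadamard followed by a CNOT:

```python
import math

def apply(gate, state):
    """Multiply a gate matrix into a statevector (plain Python lists)."""
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

s = 1 / math.sqrt(2)
H_on_q0 = [[s, 0, s, 0], [0, s, 0, s],
           [s, 0, -s, 0], [0, s, 0, -s]]                          # H ⊗ I
CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]   # control = qubit 0

state = [1.0, 0.0, 0.0, 0.0]     # two qubits initialized to |00>
state = apply(H_on_q0, state)    # single-qubit gate
state = apply(CNOT, state)       # two-qubit entangling gate
probs = [a * a for a in state]   # Born-rule measurement statistics
# probs = [0.5, 0, 0, 0.5]: equal weight on |00> and |11>, a Bell state
```

Any circuit in the simple form above reduces to such a sequence of matrix-vector products, which is also why classical simulation of general circuits becomes exponentially costly as qubits are added.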
More complex versions of quantum circuits can include features such as qudits, which are higher-dimensional quantum systems, as well as the ability to reset and measure qubits or qudits throughout the circuit. However, even the simplest form of the model can be used to emulate such behavior, making it fully sufficient to describe quantum information technology. It is also possible to use the quantum circuit model to emulate other models of quantum computing, such as the adiabatic and measurement-based models, which formalize quantum algorithms in a very different way.
As well as being a theoretical model to reason about quantum information technology, quantum circuits can also provide a blueprint for quantum hardware development. Corresponding hardware is based on the concept of building physical systems that can be controlled in the way required for qubits or qudits, including applying gates on them in sequence and performing measurements.

### Article

## Measurement-Based Quantum Computation

### Tzu-Chieh Wei

Measurement-based quantum computation is a framework of quantum computation in which entanglement is used as a resource and local measurements on qubits drive the computation. It originates from the one-way quantum computer of Raussendorf and Briegel, who introduced the so-called cluster state as the underlying entangled resource state and showed that any quantum circuit could be executed by performing only local measurements on individual qubits. The randomness in the measurement outcomes can be dealt with by adapting future measurement axes so that the computation is deterministic. Subsequent works have expanded the discussion of measurement-based quantum computation to various subjects, including the quantification of entanglement for such a measurement-based scheme, the search for other resource states beyond cluster states, and computational phases of matter. In addition, the measurement-based framework provides useful connections to the emergence of time ordering, computational complexity and classical spin models, blind quantum computation, and so on, and has given an alternative, resource-efficient approach to implementing the original linear-optical quantum computation of Knill, Laflamme, and Milburn. Cluster states and a few other resource states have been created experimentally in various physical systems, and the measurement-based approach offers a potential alternative to the standard circuit approach to realizing a practical quantum computer.
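The basic mechanism (entangle with a resource, measure locally, correct adaptively) can be sketched in miniature with "one-bit teleportation," the elementary step from which cluster-state computation is built. The example below is our illustration, not from the article; it uses real amplitudes only and takes the input qubit as the most significant bit. A qubit is entangled with a |+⟩ ancilla via CZ and measured in the X basis, leaving the ancilla holding H|ψ⟩ once the outcome-dependent Pauli correction is applied:

```python
import math

def one_bit_teleport(a, b, outcome_plus):
    """Push (a|0> + b|1>) through a |+> ancilla via CZ plus an X-basis
    measurement; after the adaptive correction, the ancilla holds H|psi>.
    Real amplitudes only, for simplicity."""
    s = 1 / math.sqrt(2)
    # joint state (input qubit, ancilla |+>), basis order |00>,|01>,|10>,|11>
    state = [a * s, a * s, b * s, b * s]
    state[3] = -state[3]                      # CZ: sign flip on |11>
    if outcome_plus:                          # measured <+| on the input qubit
        out = [state[0] + state[2], state[1] + state[3]]
    else:                                     # measured <-| on the input qubit
        out = [state[0] - state[2], state[1] - state[3]]
        out = [out[1], out[0]]                # adaptive X correction
    norm = math.sqrt(sum(x * x for x in out))
    return [x / norm for x in out]

# Both measurement outcomes yield the same deterministic result after
# correction: H applied to (0.6|0> + 0.8|1>), amplitudes (1.4, -0.2)/sqrt(2)
```

Chaining such steps along a cluster state, with later measurement bases adapted to earlier outcomes, is how the one-way model executes arbitrary circuits.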

### Article

## Philosophy of Quantum Mechanics

### David Wallace

If the philosophy of physics has a central problem, it is the quantum measurement problem: the problem of how to interpret, make sense of, and perhaps even fix quantum mechanics. Other theories in physics challenge people’s intuitions and everyday assumptions, but only quantum theory forces people to take seriously the idea that there is no objective world at all beyond their observations—or, perhaps, that there are many. Other theories in physics leave people puzzled about aspects of how they are to be understood, but only quantum theory raises paradoxes so severe that leading physicists and leading philosophers of physics seriously consider tearing it down and rebuilding it anew. Quantum theory is both the conceptual and mathematical core of 21st-century physics and the gaping void in the attempt to understand the worldview given by 21st-century physics.
Unsurprisingly, then, the philosophy of quantum mechanics is dominated by the quantum measurement problem, and to a lesser extent by the related problem of quantum non-locality, and in this article, an introduction to each is given. In Section 1, I review the formalism of quantum mechanics and the quantum measurement problem. In Sections 2–4 I discuss the three main classes of solution to the measurement problem: treat the formalism as representing the objective state of the system; treat it as representing only probabilities of something else; modify it or replace it entirely. In Section 5 I review Bell’s inequality and the issue of non-locality in quantum mechanics, and relate it to the interpretations discussed in Sections 2–4. I make some brief concluding remarks in Section 6.
A note on terminology: I use “quantum theory” and “quantum mechanics” interchangeably to refer to the overall framework of quantum physics (containing quantum theories as simple as the qubit or harmonic oscillator and as complicated as the Standard Model of particle physics). I do not adopt the older convention (still somewhat common in philosophy of physics) that “quantum mechanics” means only the quantum theory of particles, or perhaps even non-relativistic particles: when I want to refer to non-relativistic quantum particle mechanics I will do so explicitly.

### Article

## Philosophy of Quantum Mechanics: Dynamical Collapse Theories

### Angelo Bassi

Quantum mechanics is one of the most successful theories of nature. It accounts for all known properties of matter and light, and it does so with an unprecedented level of accuracy. On top of this, it has generated many new technologies that are now part of daily life. In many ways, it can be said that we live in a quantum world. Yet quantum theory is subject to an intense debate about its meaning as a theory of nature, a debate that started at the very beginning and has never ended. Its essence was captured by Schrödinger with the cat paradox: why do cats behave classically instead of being quantum, like the one imagined by Schrödinger? Answering this question digs deep into the foundations of quantum mechanics.
A possible answer is dynamical collapse theories. Their fundamental assumption is that the Schrödinger equation, which is supposed to govern all quantum phenomena (at the non-relativistic level), is only approximately correct. It is an approximation of a nonlinear and stochastic dynamics, according to which the wave functions of microscopic objects can be in a superposition of different states because the nonlinear effects are negligible, while those of macroscopic objects are always very well localized in space because the nonlinear effects dominate for increasingly massive systems. Thus, microscopic systems behave quantum mechanically, while macroscopic ones such as Schrödinger's cat behave classically, simply because the (newly postulated) laws of nature say so.
By changing the dynamics, collapse theories make predictions that differ from those of quantum mechanics, so it becomes interesting to test the various collapse models that have been proposed. Experimental efforts are increasing worldwide; because no collapse signal has been detected so far, these efforts have placed limits on the values of the theory's parameters that quantify the collapse, but they may in the future find such a signal and open a window beyond quantum theory.

### Article

## Quantum Dots/Spin Qubits

### Shannon P. Harvey

Spin qubits in semiconductor quantum dots represent a prominent family of solid-state qubits in the effort to build a quantum computer. They are formed when electrons or holes are confined in a static potential well in a semiconductor, giving them a quantized energy spectrum. The simplest spin qubit is a single electron spin located in a quantum dot, but many additional varieties have been developed, some containing multiple spins in multiple quantum dots, each of which has different benefits and drawbacks. Although these spins act as simple quantum systems in many ways, they also experience complex effects due to their semiconductor environment. They can be controlled by both magnetic and electric fields depending on their configuration and are therefore dephased by magnetic and electric field noise, with different types of spin qubits having different control mechanisms and noise susceptibilities. Initial experiments were primarily performed in gallium arsenide–based materials, but silicon qubits have advanced substantially, and research on qubits in silicon metal-oxide-semiconductor structures, silicon/silicon-germanium heterostructures, and donors in silicon is being pursued. An increasing number of spin qubit varieties have attained single-qubit gate error rates low enough to be compatible with quantum error correction, and two-qubit gates have been performed in several varieties with success rates, or fidelities, of 90–95%.

### Article

## Quantum Error Correction

### Todd A. Brun

Quantum error correction is a set of methods to protect quantum information—that is, quantum states—from unwanted environmental interactions (decoherence) and other forms of noise. The information is stored in a quantum error-correcting code, which is a subspace in a larger Hilbert space. This code is designed so that the most common errors move the state into an error space orthogonal to the original code space while preserving the information in the state. It is possible to determine whether an error has occurred by a suitable measurement and to apply a unitary correction that returns the state to the code space without measuring (and hence disturbing) the protected state itself. In general, codewords of a quantum code are entangled states. No code that stores information can protect against all possible errors; instead, codes are designed to correct a specific error set, which should be chosen to match the most likely types of noise. An error set is represented by a set of operators that can multiply the codeword state.
Most work on quantum error correction has focused on systems of quantum bits, or qubits, which are two-level quantum systems. These can be physically realized by the states of a spin-1/2 particle, the polarization of a single photon, two distinguished levels of a trapped atom or ion, the current states of a microscopic superconducting loop, or many other physical systems. The most widely used codes are the stabilizer codes, which are closely related to classical linear codes. The code space is the joint +1 eigenspace of a set of commuting Pauli operators on n qubits, called stabilizer generators; the error syndrome is determined by measuring these operators, which allows errors to be diagnosed and corrected. A stabilizer code is characterized by three parameters [[n, k, d]], where n is the number of physical qubits, k is the number of encoded logical qubits, and d is the minimum distance of the code (the smallest number of simultaneous qubit errors that can transform one valid codeword into another). Every useful code has n > k; this physical redundancy is necessary to detect and correct errors without disturbing the logical state.
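The syndrome idea can be seen in miniature in the three-qubit repetition code, which corrects any single bit-flip (X) error. The sketch below is our toy illustration, not from the article: it tracks only computational-basis bit values (so phase errors are outside its scope) and shows how two parity checks, playing the role of the stabilizers Z1Z2 and Z2Z3, locate an error without ever reading the encoded bit:

```python
def encode(bit):
    """Encode one logical bit redundantly into three physical bits."""
    return [bit, bit, bit]

def syndrome(bits):
    """Parities corresponding to the stabilizers Z1Z2 and Z2Z3; for
    bit-flip errors these reduce to classical XOR checks."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Locate a single flipped bit from the syndrome and undo it."""
    location = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if location is not None:
        bits[location] ^= 1
    return bits

state = encode(1)
state[2] ^= 1              # a bit-flip error on the third qubit
print(syndrome(state))     # (0, 1): error located without reading the bit
print(correct(state))      # [1, 1, 1]: codeword restored
```

Note that each syndrome value points only to an error location, never to the logical value, which is what lets correction proceed without disturbing the encoded state.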
Quantum error correction is used to protect information in quantum communication (where quantum states pass through noisy channels) and quantum computation (where quantum states are transformed through a sequence of imperfect computational steps in the presence of environmental decoherence to solve a computational problem). In quantum computation, error correction is just one component of fault-tolerant design. Other approaches to error mitigation in quantum systems include decoherence-free subspaces, noiseless subsystems, and dynamical decoupling.

### Article

## Quantum Simulation With Trapped Ions

### D. Luo and N. M. Linke

Simulating quantum systems using classical computers encounters inherent challenges due to the exponential scaling with system size. To overcome this, quantum simulation uses a well-controlled quantum system to simulate another, less controllable system. Over the last 20 years, many physical platforms have emerged as quantum simulators, such as ultracold atoms, Rydberg atom arrays, trapped ions, nuclear spins, superconducting circuits, and integrated photonics. Trapped ions, with induced spin interactions and universal quantum gates, have demonstrated remarkable versatility, capable of both analog and digital quantum simulation. Recent experimental results, covering a range of research areas including condensed matter physics, quantum thermodynamics, high-energy physics, and quantum chemistry, guide this introductory review to the growing field of quantum simulation.

### Article

## The Philosophical Significance of Decoherence

### Elise Crull

Quantum decoherence is a physical process resulting from the entanglement of a system with environmental degrees of freedom. The entanglement allows the environment to behave like a measuring device on the initial system, resulting in the dynamical suppression of interference terms in mutually commuting bases. Because decoherence processes are extremely fast and often practically irreversible, measurements performed on the system after system–environment interactions typically yield outcomes empirically indistinguishable from physical collapse of the wave function. That is: environmental decoherence of a system’s phase relations produces effective eigenstates of a system in certain bases (depending on the details of the interaction) through prodigious damping—but not destruction—of the system’s off-diagonal terms in those bases.
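The damping of off-diagonal terms can be made concrete with a toy pure-dephasing model (our illustration, not from the article; the exponential decay factor is the standard Markovian assumption): populations on the diagonal are untouched, while the coherences are strongly suppressed but never driven exactly to zero.

```python
import math

def dephase(rho, gamma, t):
    """Pure dephasing of a qubit density matrix in the measurement basis:
    off-diagonal (interference) terms decay as exp(-gamma*t), while the
    diagonal populations are preserved."""
    d = math.exp(-gamma * t)
    return [[rho[0][0], rho[0][1] * d],
            [rho[1][0] * d, rho[1][1]]]

plus = [[0.5, 0.5], [0.5, 0.5]]          # |+><+|: maximal coherence
rho_t = dephase(plus, gamma=1.0, t=5.0)
# diagonal still (0.5, 0.5); off-diagonals damped to ~0.003, not zero
```

The resulting matrix is empirically indistinguishable from the collapsed mixture diag(0.5, 0.5) in that basis, which is exactly the sense in which decoherence mimics, without actually being, wave-function collapse.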
Although decoherence by itself is neither an interpretation of quantum physics nor indeed even new physics, there is much debate concerning the implications of this process in both the philosophical and the scientific literature. This is especially true regarding fundamental questions arising from quantum theory about the roles of measurement, observation, the nature of entanglement, and the emergence of classicality. In particular, acknowledging the part decoherence plays in interpretations of quantum mechanics recasts that debate in a new light.