Adiabatic quantum computing (AQC) is a model of computation that uses quantum mechanical processes operating under adiabatic conditions. As a form of universal quantum computation, AQC employs the principles of superposition, tunneling, and entanglement that manifest in quantum physical systems. The AQC model of quantum computing is distinguished by the use of dynamical evolution that is slow with respect to the time and energy scales of the underlying physical systems. This adiabatic condition ensures that the quantum computational state remains well-defined and controllable, thus enabling the development of new algorithmic approaches.
Several notable algorithms developed within the AQC model include methods for solving unstructured search and combinatorial optimization problems. In an idealized setting, the asymptotic complexity analyses of these algorithms indicate computational speed-ups may be possible relative to state-of-the-art conventional methods. However, the presence of non-ideal conditions, including non-adiabatic dynamics, residual thermal excitations, and physical noise, complicates the assessment of the potential computational performance. A relaxation of the adiabatic condition is captured in the complementary computational heuristic of quantum annealing, which accommodates physical systems operating at finite temperature and in open environments. While quantum annealing (QA) provides a more accurate model for the behavior of actual quantum physical systems, the possibility of non-adiabatic effects obscures a clear separation from conventional computing complexity.
A series of technological advances in the control of quantum physical systems have enabled experimental AQC and QA. Prominent examples include demonstrations using superconducting electronics, which encode quantum information in the magnetic flux induced by a weak current operating at cryogenic temperatures. A family of devices developed specifically for unconstrained optimization problems has been applied to solve problems in specific domains including logistics, finance, material science, machine learning, and numerical analysis. An accompanying infrastructure has also developed to support these experimental demonstrations and to enable access by a broader community of users. Although AQC is most commonly applied in superconducting technologies, alternative approaches include optically trapped neutral atoms and ion-trap systems.
The significant progress in the understanding of AQC has revealed several open topics that continue to motivate research into this model of quantum computation. Foremost is the development of methods for fault-tolerant operation that will ensure the scalability of AQC for solving large-scale problems. In addition, unequivocal experimental demonstrations that differentiate the computational power of AQC and its variants from conventional computing approaches are needed. This will also require advances in the fabrication and control of quantum physical systems under the adiabatic restrictions.
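The adiabatic condition described above can be made concrete with a toy calculation. The sketch below is an illustrative assumption, not drawn from any specific AQC algorithm or hardware: it interpolates H(s) = (1 − s)H₀ + sH₁ between a transverse-field driver and a hypothetical diagonal cost Hamiltonian on two qubits, and tracks the spectral gap whose minimum governs how slowly the evolution must proceed.

```python
import numpy as np

# Toy adiabatic interpolation H(s) = (1 - s) * H0 + s * H1 on two qubits.
# H0 is a transverse-field driver; H1 encodes a hypothetical cost function
# on bit strings. The minimum gap along s sets the required evolution time.

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
i2 = np.eye(2)

# Driver: -X1 - X2 (its ground state is the uniform superposition)
H0 = -(np.kron(sx, i2) + np.kron(i2, sx))

# Cost Hamiltonian: diagonal energies for bit strings 00, 01, 10, 11
costs = np.array([3.0, 1.0, 2.0, 0.0])   # invented cost function
H1 = np.diag(costs)

def spectral_gap(s):
    """Gap between the two lowest eigenvalues of H(s)."""
    evals = np.linalg.eigvalsh((1 - s) * H0 + s * H1)
    return evals[1] - evals[0]

gaps = [spectral_gap(s) for s in np.linspace(0.0, 1.0, 101)]
min_gap = min(gaps)

# At s = 1 the ground state is the minimum-cost bit string (here: 11).
ground = int(np.argmin(costs))
```

In the adiabatic theorem the run time scales inversely with a power of `min_gap`, which is why gap closings along the interpolation path are the central obstacle for AQC speed-ups.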


### Article

### Yang-Hui He

Calabi-Yau spaces, or Kähler spaces admitting zero Ricci curvature, have played a pivotal role in theoretical physics and pure mathematics for the last half century. In physics, they constituted the first and natural solution to compactification of superstring theory to our 4-dimensional universe, primarily due to one of their equivalent definitions being the admittance of covariantly constant spinors.
Since the mid-1980s, physicists and mathematicians have joined forces in creating explicit examples of Calabi-Yau spaces, compiling databases of formidable size, including the complete intersection (CICY) data set, the weighted hypersurfaces data set, the elliptic-fibration data set, the Kreuzer-Skarke toric hypersurface data set, generalized CICYs, etc., totaling at least on the order of 10^10 manifolds. These all contribute to the vast string landscape, the multitude of possible vacuum solutions to string compactification.
More recently, this collaboration has been enriched by computer science and data science, the former in benchmarking the complexity of the algorithms in computing geometric quantities, and the latter in applying techniques such as machine learning in extracting unexpected information. These endeavours, inspired by the physics of the string landscape, have rendered the investigation of Calabi-Yau spaces one of the most exciting and interdisciplinary fields.

### Article

### Massimo Florio and Chiara Pancotti

In economics, infrastructure is a long-term investment aimed at the delivery of essential services to a large number of users, such as those in the fields of transport, energy, or telecommunications. A research infrastructure (RI) is a single-sited, distributed, virtual, or mobile facility, designed to deliver scientific services to communities of scientists. In physical sciences (including astronomy and astrophysics, particle and nuclear physics, analytical physics, medical physics), the RI paradigm has found several large-scale applications, such as radio telescopes, neutrino detectors, gravitational wave interferometers, particle colliders and heavy ion beams, high intensity lasers, synchrotron light sources, spallation neutron sources, and hadrontherapy facilities.
These RIs require substantial capital and operation expenditures and are ultimately funded by taxpayers. In social cost–benefit analysis (CBA), the impact of an investment project is measured by the intertemporal difference of benefits and costs accruing to different agents. Benefits and costs are quantified and valued through a common metric and using the marginal social opportunity costs of goods (or shadow price) that may differ from the market price, as markets are often incomplete or imperfect. The key strength of CBA is that it produces information about the project’s net contribution to society that is summarized in simple numerical indicators, such as the net present value of a project.
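As a minimal sketch of the NPV indicator mentioned above, the following computes the intertemporal difference of discounted benefits and costs for a hypothetical RI project. The function name and all figures are invented for illustration; real CBA would use shadow prices rather than these raw numbers.

```python
# Net present value: the discounted sum of benefits minus costs over time,
# NPV = sum_t (B_t - C_t) / (1 + r)^t, with t starting at year 0.

def net_present_value(benefits, costs, discount_rate):
    """Discounted intertemporal difference of benefits and costs."""
    return sum((b - c) / (1 + discount_rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

# Hypothetical RI: heavy capital costs up front, benefits accruing later.
benefits = [0, 10, 30, 60, 80]      # invented figures, in millions
costs    = [100, 20, 10, 10, 10]
npv = net_present_value(benefits, costs, discount_rate=0.03)
```

A positive `npv` summarizes a net contribution to society at the chosen social discount rate; because benefits arrive late, the result is sensitive to that rate, which is one reason funding decisions scrutinize it.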
For any RI, consolidated cost accounting should include intertemporal capital and operational expenditure both for the main managing body and for experimental collaborations or other external teams, including in-kind contributions. As far as social intertemporal benefits are concerned, it is convenient to divide them into two broad classes. The first class of benefits accrues to different categories of direct and indirect users of infrastructure services: scientists, students, firms benefiting from technological spillovers, consumers of innovative services and products, and citizens who are involved in outreach activities. The empirical estimation of the use value of an RI depends on the scientific specificities of each project, as different social groups are involved to different degrees. Second, there are benefits for the general public of non-users: these benefits are associated with social preferences for scientific research, even when the use of a discovery is unknown. In analogy with the valuation of environmental and cultural goods, the empirical approach to non-use value aims at eliciting the willingness to pay of citizens for the scientific knowledge that is created by an RI. This can be done by well-designed contingent valuation surveys.
While some socio-economic impact studies of RIs in physics have been available since the 1980s, the intangible nature of some benefits and the uncertainty associated with scientific discoveries have limited the diffusion of CBA in this field until recently. Nevertheless, recent studies have explored the application of CBA to RIs in physics. Moreover, the European Commission, the European Strategy Forum on Research Infrastructures, the European Investment Bank, and some national authorities suggest that the study of social benefits and costs of RIs should be part of the process leading to funding decisions.

### Article

### Maarten Boonekamp and Matthias Schott

With the huge success of quantum electrodynamics (QED) to describe electromagnetic interactions in nature, several attempts have been made to extend the concept of gauge theories to the other known fundamental interactions. It was realized in the late 1960s that electromagnetic and weak interactions can be described by a single unified gauge theory. In addition to the photon, the single mediator of the electromagnetic interaction, this theory predicted new, heavy particles responsible for the weak interaction, namely the W and the Z bosons. A scalar field, the Higgs field, was introduced to generate their mass.
The discovery of the mediators of the weak interaction in 1983, at the European Center for Nuclear Research (CERN), marked a breakthrough in fundamental physics and opened the door to more precise tests of the Standard Model. Subsequent measurements of the weak boson properties allowed the mass of the top quark and of the Higgs Boson to be predicted before their discovery. Nowadays, these measurements are used to further probe the consistency of the Standard Model, and to place constraints on theories attempting to answer still open questions in physics, such as the presence of dark matter in the universe or unification of the electroweak and strong interactions with gravity.

### Article

### Kei Koizumi

Large-scale U.S. government support of scientific research began in World War II with physics, and rapidly expanded in the postwar era to contribute strongly to the United States’ emergence as the world’s leading scientific and economic superpower in the latter half of the 20th century. Vannevar Bush, who directed President Franklin Roosevelt’s World War II science efforts, in the closing days of the War advocated forcefully for U.S. government funding of scientific research to continue even in peacetime to support three important government missions of national security, health, and the economy. He also argued forcefully for the importance of basic research supported by the federal government but steered and guided by the scientific community. This vision guided an expanding role for the U.S. government in supporting research not only at government laboratories but also in non-government institutions, especially universities.
Although internationally comparable data are difficult to obtain, the U.S. government appears to be the single largest national funder of physics research. The U.S. government support of physics research comes from many different federal departments and agencies. Federal agencies also invest in experimental development based on research discoveries of physics. The Department of Energy’s (DOE) Office of Science is by far the dominant supporter of physics research in the United States, and DOE’s national laboratories are the dominant performers of U.S. government-supported physics research. Since the 1970s, U.S. government support of physics research has been stagnant, with the greatest growth in U.S. government research support having shifted since the 1990s to the life sciences and computer sciences.

### Article

### Thomas Wiegelmann

Magnetohydrodynamic equilibria are time-independent solutions of the full magnetohydrodynamic (MHD) equations. An important class are static equilibria without plasma flow. They are described by the magnetohydrostatic equations

j × B = ∇p + ρ∇Ψ,  ∇ × B = μ₀ j,  ∇ · B = 0,

where B is the magnetic field, j the electric current density, p the plasma pressure, ρ the mass density, Ψ the gravitational potential, and μ₀ the permeability of free space. Under equilibrium conditions, the Lorentz force j × B is compensated by the plasma pressure gradient force and the gravity force.
Despite the apparent simplicity of these equations, it is extremely difficult to find exact solutions due to their intrinsic nonlinearity. The problem is greatly simplified for effectively two-dimensional configurations with a translational or axial symmetry. The magnetohydrostatic (MHS) equations can then be transformed into a single nonlinear partial differential equation, the Grad–Shafranov equation. This approach is popular as a first approximation to model, for example, planetary magnetospheres, solar and stellar coronae, and astrophysical and fusion plasmas.
For systems without symmetry, one has to solve the full equations in three dimensions, which requires numerically expensive computer programs. Boundary conditions for these systems can often be deduced from measurements. In several astrophysical plasmas (e.g., the solar corona), the magnetic pressure is orders of magnitude higher than the plasma pressure, which allows the plasma pressure to be neglected to lowest order. If gravity is also negligible, the force balance then implies a force-free equilibrium in which the Lorentz force vanishes.
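The force-free limit can be checked numerically. The sketch below uses a standard textbook configuration, not one taken from this article: the linear force-free field B = B₀(cos αz, −sin αz, 0) satisfies ∇ × B = αB, so the current is parallel to the field and the Lorentz force vanishes, which finite differences confirm.

```python
import numpy as np

# Linear force-free field B = B0 * (cos(a z), -sin(a z), 0): since
# curl B = a B, the current j = curl B / mu0 is parallel to B and j x B = 0.

B0, a = 1.0, 2.0
z = np.linspace(0.0, 1.0, 2001)
dz = z[1] - z[0]

Bx = B0 * np.cos(a * z)
By = -B0 * np.sin(a * z)

# For B = (Bx(z), By(z), 0): curl B = (-dBy/dz, dBx/dz, 0)
curl_x = -np.gradient(By, dz)
curl_y = np.gradient(Bx, dz)

# Check curl B = a B away from the boundary points
err_x = np.max(np.abs(curl_x[5:-5] - a * Bx[5:-5]))
err_y = np.max(np.abs(curl_y[5:-5] - a * By[5:-5]))

# z-component of (curl B) x B, proportional to the Lorentz force
lorentz_z = np.max(np.abs(curl_x[5:-5] * By[5:-5] - curl_y[5:-5] * Bx[5:-5]))
```

The residuals are set purely by the second-order accuracy of the finite differences, illustrating why such fields are convenient building blocks for coronal magnetic field models.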
Generalizations of MHS equilibria are stationary equilibria including a stationary plasma flow (e.g., stellar winds in astrophysics). It is also possible to compute MHD equilibria in rotating systems (e.g., rotating magnetospheres, rotating stellar coronae) by incorporating the centrifugal force. MHD equilibrium theory is useful for studying physical systems that slowly evolve in time. In this case, while one has an equilibrium at each time step, the configuration changes, often in response to temporal changes of the measured boundary conditions (e.g., the magnetic field of the Sun for modeling the corona) or of external sources (e.g., mass loading in planetary magnetospheres). Finally, MHD equilibria can be used as initial conditions for time-dependent MHD simulations. This article reviews the various analytical solutions and numerical techniques to compute MHD equilibria, as well as applications to the Sun, planetary magnetospheres, space, and laboratory plasmas.

### Article

### D. I. Pontin

Magnetic reconnection is a fundamental process that is important for the dynamical evolution of highly conducting plasmas throughout the Universe. In such highly conducting plasmas the magnetic topology is preserved as the plasma evolves, an idea encapsulated by Alfvén’s frozen flux theorem. In this context, “magnetic topology” is defined by the connectivity and linkage of magnetic field lines (streamlines of the magnetic induction) within the domain of interest, together with the connectivity of field lines between points on the domain boundary. The conservation of magnetic topology therefore implies that magnetic field lines cannot break or merge, but evolve only according to smooth deformations. In any real plasma the conductivity is finite, so that the magnetic topology is not preserved everywhere: magnetic reconnection is the process by which the field lines break and recombine, permitting a reconfiguration of the magnetic field. Due to the high conductivity, reconnection may occur only in small dissipation regions where the electric current density reaches extreme values. In many applications of interest, the change of magnetic topology facilitates a rapid conversion of stored magnetic energy into plasma thermal energy, bulk-kinetic energy, and energy of non-thermally accelerated particles. This energy conversion is associated with dynamic phenomena in plasmas throughout the Universe. Examples include flares and other energetic phenomena in the atmosphere of stars including the Sun, substorms in planetary magnetospheres, and disruptions that limit the magnetic confinement time of plasma in nuclear fusion devices. One of the major challenges in understanding reconnection is the extreme separation between the global system scale and the scale of the dissipation region within which the reconnection process itself takes place. 
Current understanding of reconnection has developed through mathematical and computational modeling as well as dedicated experiments in both the laboratory and space. Magnetohydrodynamic (MHD) reconnection is studied in the framework of magnetohydrodynamics, which is used to study plasmas (and liquid metals) in the continuum approximation.

### Article

### E.R. Priest

Magnetohydrodynamics is sometimes called magneto-fluid dynamics or hydromagnetics and is referred to as MHD for short. It is the unification of two fields that were completely independent in the 19th, and first half of the 20th, century, namely, electromagnetism and fluid mechanics. It describes the subtle and complex nonlinear interaction between magnetic fields and electrically conducting fluids, which include liquid metals as well as the ionized gases or plasmas that comprise most of the universe.
In places such as the Earth’s magnetosphere or the Sun’s outer atmosphere (the corona) where the magnetic field provides an important component of the free energy, MHD effects are responsible for much of the observed dynamic behavior, such as geomagnetic substorms, solar flares and huge eruptions from the Sun that dominate the Earth’s space weather. However, MHD is also of great importance in astrophysics, since many of the MHD processes that are observed in the laboratory or in the Sun and the magnetosphere also take place under different parameter regimes in more exotic cosmical objects such as active stars, accretion discs, and black holes.
The different aspects of MHD include determining the nature of: magnetic equilibria under a balance between magnetic forces, pressure gradients and gravity; MHD wave motions; magnetic instabilities; and the important process of magnetic reconnection for converting magnetic energy into other forms. In turn, these aspects play key roles in the fundamental astrophysical processes of magnetoconvection, magnetic flux emergence, star spots, plasma heating, stellar wind acceleration, stellar flares and eruptions, and the generation of magnetic fields by dynamo action.

### Article

### V.M. Nakariakov

Magnetohydrodynamic (MHD) waves represent one of the macroscopic processes responsible for the transfer of the energy and information in plasmas. The existence of MHD waves is due to the elastic and compressible nature of the plasma, and to the effect of the frozen-in magnetic field. Basic properties of MHD waves are examined in the ideal MHD approximation, including effects of plasma nonuniformity and nonlinearity. In a uniform medium, there are four types of MHD wave or mode: the incompressive Alfvén wave, compressive fast and slow magnetoacoustic waves, and non-propagating entropy waves. MHD waves are essentially anisotropic, with the properties highly dependent on the direction of the wave vector with respect to the equilibrium magnetic field. All of these waves are dispersionless. A nonuniformity of the plasma may act as an MHD waveguide, which is exemplified by a field-aligned plasma cylinder that has a number of dispersive MHD modes with different properties. In addition, a smooth nonuniformity of the Alfvén speed across the field leads to mode coupling, the appearance of the Alfvén continuum, and Alfvén wave phase mixing. Interaction and self-interaction of weakly nonlinear MHD waves are discussed in terms of evolutionary equations. Applications of MHD wave theory are illustrated by kink and longitudinal waves in the corona of the Sun.
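The anisotropy of the modes can be illustrated with the standard ideal-MHD dispersion relation for a uniform medium (a textbook result assumed here, not a formula quoted from the article): the fast and slow magnetoacoustic phase speeds depend on the angle θ between the wave vector and the equilibrium field.

```python
import numpy as np

# Phase speeds of the three propagating ideal-MHD modes in a uniform medium,
# for sound speed cs, Alfven speed va, and propagation angle theta (radians):
#   v^2 = 0.5 * [(cs^2 + va^2) +/- sqrt((cs^2 + va^2)^2 - 4 cs^2 va^2 cos^2 theta)]
# for the fast (+) and slow (-) magnetoacoustic waves, and v = va |cos theta|
# for the Alfven wave.

def mhd_phase_speeds(cs, va, theta):
    """Return (fast, alfven, slow) phase speeds."""
    c2, v2 = cs**2, va**2
    disc = np.sqrt((c2 + v2)**2 - 4.0 * c2 * v2 * np.cos(theta)**2)
    fast = np.sqrt(0.5 * (c2 + v2 + disc))
    slow = np.sqrt(np.maximum(0.5 * (c2 + v2 - disc), 0.0))
    alfven = va * abs(np.cos(theta))
    return fast, alfven, slow
```

For parallel propagation the magnetoacoustic speeds reduce to max(cs, va) and min(cs, va); perpendicular to the field the slow wave (and the Alfvén wave) cannot propagate at all, the anisotropy noted in the summary above.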

### Article

### Tzu-Chieh Wei

Measurement-based quantum computation is a framework of quantum computation, where entanglement is used as a resource and local measurements on qubits are used to drive the computation. It originates from the one-way quantum computer of Raussendorf and Briegel, who introduced the so-called cluster state as the underlying entangled resource state and showed that any quantum circuit could be executed by performing only local measurement on individual qubits. The randomness in the measurement outcomes can be dealt with by adapting future measurement axes so that computation is deterministic. Subsequent works have expanded the discussions of the measurement-based quantum computation to various subjects, including the quantification of entanglement for such a measurement-based scheme, the search for other resource states beyond cluster states, and computational phases of matter. In addition, the measurement-based framework also provides useful connections to the emergence of time ordering, computational complexity and classical spin models, blind quantum computation, and so on, and has given an alternative, resource-efficient approach to implement the original linear-optic quantum computation of Knill, Laflamme, and Milburn. Cluster states and a few other resource states have been created experimentally in various physical systems, and the measurement-based approach offers a potential alternative to the standard circuit approach to realize a practical quantum computer.
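The elementary primitive of this scheme, one-qubit teleportation through a two-qubit cluster, can be sketched as follows. This is a standard construction rather than a passage from the article: the input qubit is entangled with a |+⟩ qubit by a controlled-Z gate, measured in the X basis, and the output qubit then carries H|ψ⟩ up to a Pauli byproduct fixed by the random outcome, which is exactly the adaptivity mentioned above.

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard
X = np.array([[0.0, 1.0], [1.0, 0.0]])                 # Pauli X
CZ = np.diag([1.0, 1.0, 1.0, -1.0])                    # controlled-Z
plus = np.array([1.0, 1.0]) / np.sqrt(2)               # |+> resource qubit

def mbqc_step(psi, outcome):
    """Entangle psi with |+> via CZ, project qubit 1 onto the X-basis state
    labelled by `outcome` (0 -> |+>, 1 -> |->), and return the qubit-2 state
    after the Pauli byproduct correction X^outcome."""
    state = CZ @ np.kron(psi, plus)
    xbra = H @ np.eye(2)[:, outcome]          # <+| or <-| as a row vector
    out = xbra @ state.reshape(2, 2)          # partial projection on qubit 1
    out = out / np.linalg.norm(out)
    return np.linalg.matrix_power(X, outcome) @ out
```

Either outcome yields the same logical result H|ψ⟩ once the byproduct is undone, which is why the randomness of measurement does not spoil determinism in the one-way model.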

### Article

### Elena Khomenko

Multi-fluid magnetohydrodynamics is an extension of classical magnetohydrodynamics that allows a simplified treatment of plasmas with complex chemical mixtures. The types of plasma susceptible to multi-fluid effects are those containing particles with properties significantly different from those of the rest of the plasma in either mass or electric charge, such as neutral particles, molecules, or dust grains. In astrophysics, multi-fluid magnetohydrodynamics is relevant for planetary ionospheres and magnetospheres, the interstellar medium, and the formation of stars and planets, as well as in the atmospheres of cool stars such as the Sun. Traditionally, magnetohydrodynamics has been a classical approximation in many astrophysical and physical applications. Magnetohydrodynamics works well in dense plasmas where the typical plasma scales (e.g., cyclotron frequencies, Larmor radius) are significantly smaller than the scales of the processes under study. Nevertheless, when plasma components are not well coupled by collisions it is necessary to replace single-fluid magnetohydrodynamics by multi-fluid theory. The present article provides a description of environments in which a multi-fluid treatment is necessary and describes modifications to the magnetohydrodynamic equations that are necessary to treat non-ideal plasmas. It also summarizes the physical consequences of major multi-fluid non-ideal magnetohydrodynamic effects including ambipolar diffusion, the Hall effect, the battery effect, and other intrinsically multi-fluid effects. Multi-fluid theory is an intermediate step between magnetohydrodynamics dealing with the collective behaviour of an ensemble of particles, and a kinetic approach where the statistics of particle distributions are studied.
The main assumption of multi-fluid theory is that each individual ensemble of particles behaves like a fluid, interacting via collisions with other particle ensembles, such as those belonging to different chemical species or ionization states. Collisional interaction creates a relative macroscopic motion between different plasma components, which, on larger scales, results in the non-ideal behaviour of such plasmas. The non-ideal effects discussed here manifest themselves in plasmas at relatively low temperatures and low densities.

### Article

Quantum Mechanics is one of the most successful theories of nature. It accounts for all known properties of matter and light, and it does so with an unprecedented level of accuracy. On top of this, it generated many new technologies that now are part of daily life. In many ways, it can be said that we live in a quantum world. Yet, quantum theory is subject to an intense debate about its meaning as a theory of nature, which started from the very beginning and has never ended. The essence was captured by Schrödinger with the cat paradox: why do cats behave classically instead of being quantum like the one imagined by Schrödinger? Answering this question digs deep into the foundation of quantum mechanics.
A possible answer is Dynamical Collapse Theories. The fundamental assumption is that the Schrödinger equation, which is supposed to govern all quantum phenomena (at the non-relativistic level), is only approximately correct. It is an approximation of a nonlinear and stochastic dynamics, according to which the wave functions of microscopic objects can be in a superposition of different states because the nonlinear effects are negligible, while those of macroscopic objects are always very well localized in space because the nonlinear effects dominate for increasingly massive systems. Then, microscopic systems behave quantum mechanically, while macroscopic ones such as Schrödinger’s cat behave classically simply because the (newly postulated) laws of nature say so.
By changing the dynamics, collapse theories make predictions that differ from quantum-mechanical predictions. It then becomes interesting to test the various collapse models that have been proposed. Experimental efforts are increasing worldwide; since no collapse signal has been detected so far, they place limits on the theory’s parameters quantifying the collapse, but they may in the future find such a signal and open up a window beyond quantum theory.

### Article

The development of physics over the past few centuries has increasingly enabled the development of numerous technologies that have revolutionized society. In the 17th century, Newton built on the results of Galileo and Descartes to start the quantitative science of mechanics. The fields of thermodynamics and electromagnetism were developed more gradually in the 18th and 19th centuries. Of the big physics breakthroughs in the 20th century, quantum mechanics has most clearly led to the widest range of new technologies. New scientific discovery and its conversion to technology, enabling new products, is typically a complex process. From an industry perspective, it is addressed through various R&D strategies, particularly those focused on optimization of return on investment (ROI) and the associated risk management. The evolution of such strategies has been driven by many diverse factors and related trends, including international markets, government policies, and scientific breakthroughs. As a result, many technology-creation initiatives have been based on various types of partnerships between industry, academia, and/or governments. Specific strategies guiding such partnerships are best understood in terms of how they have been developed and implemented within a particular industry. As a consequence, it is useful to consider case studies of strategic R&D partnerships involving the semiconductor industry, which provides a number of instructive examples illustrating strategies that have been successful over decades. There is a large quantity of literature on this subject, in books, journal articles, and online.

### Article

### Cornelius Hempel

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Physics. Please check back later for the full article.
The theory of quantum mechanics provides an accurate description of nature at the fundamental level of elementary particles, such as photons, electrons, and larger objects like atoms, molecules, and more macroscopic systems. Any such physical system with two distinct energy levels can be used to represent a quantum bit, or qubit, which provides the equivalent to a classical bit within the context of quantum mechanics. As such, a qubit can be in a well-defined physical state representing one “classical bit” of information. Yet, it also allows for fundamental quantum phenomena such as superposition and mutual entanglement, making these effects available as a resource. Quantum information processing aims to use qubits and quantum effects to attain an advantage in computation and simulation, communication, or the measurement of physical parameters.
Much like the classical bits realized by transistors in silicon are at the foundation of many modern devices, quantum bits form the building blocks out of which quantum devices can be constructed that allow for the use of qubits as a resource. Since the 1990s, many physical systems have been investigated and prototyped as quantum bits, leading to implementations that range from photonics to atoms and ions, as well as solid state devices in the form of tailored impurities in a material or superconducting electrical circuits. Each physical approach differs in how the quantum bits are stored, how they are manipulated, and how quantum states are read out. Research in this area is often cross-cutting between different areas of physics, often covering atomic, optical, and solid state physics and combining fundamental with applied science and engineering. Tying these efforts together is a joint set of metrics that describes the qubits’ ability to retain a quantum mechanical state and the ability to manipulate and read out this state. Examples are phase coherence and fidelity of measurement and operations. Further aspects include the scalability with respect to current technological capabilities, speed, and amenability to error correction.

### Article

### Joel Wallman, Steven Flammia, and Ian Hincks

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Physics. Please check back later for the full article.
Quantum systems may outperform current digital technologies at various information processing tasks, such as simulating the dynamics of quantum systems and integer factorization. Quantum Characterization, Verification, and Validation (QCVV) is the procedure for estimating the quality of physical quantum systems for use as information processors. QCVV consists of three components.
Characterization means determining the effect of control operations on a quantum system, and the nature of external noise acting on the quantum system. The first characterization experiments (Rabi, Ramsey, and Hahn-echo) were developed in the context of nuclear magnetic resonance. As other effective two-level systems with varying noise models have been identified and couplings become more complex, additional techniques such as tomography and randomized benchmarking have been developed specifically for quantum information processing.
Verification involves verifying that a control operation implements a desired ideal operation to within a specified precision. Often, these targets are set by the requirements for quantum error correction and fault-tolerant quantum computation in specific architectures.
Validation is demonstrating that a quantum information processor can solve specific problems. For problems whose solution can be efficiently verified (e.g., prime factorization), validation may involve running a corresponding quantum algorithm (e.g., Shor’s algorithm) and analyzing the time taken to produce the correct solution. For problems whose solution cannot be efficiently verified, for example, quantum simulation, developing adequate techniques is an active area of research.
The essential features that make a device useful as a quantum information processor also create difficulties for QCVV, and specialized techniques have been developed to surmount these difficulties. The field is now entering a mature phase where a broad range of techniques can address all three tasks. As quantum information processors continue to scale up and improve, these three tasks look to become increasingly relevant, and many challenges remain.
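As a toy example of the characterization experiments mentioned above, the following simulates an idealized, noiseless Ramsey fringe and recovers an assumed detuning from its dominant Fourier component. All parameter values and variable names are invented for illustration; a real experiment would also fit decoherence envelopes and contend with shot noise.

```python
import numpy as np

# Idealized Ramsey experiment: a qubit detuned by `delta` from its drive
# accumulates phase during a free-evolution time t, giving the fringe
# P(excited) = (1 + cos(delta * t)) / 2. The fringe frequency estimates delta.

delta = 2 * np.pi * 0.8                     # assumed true detuning (rad per unit time)
times = np.linspace(0.0, 10.0, 512)
p_excited = 0.5 * (1 + np.cos(delta * times))

# Estimate the detuning from the dominant Fourier component of the fringe.
spectrum = np.abs(np.fft.rfft(p_excited - p_excited.mean()))
freqs = np.fft.rfftfreq(len(times), d=times[1] - times[0])
delta_est = 2 * np.pi * freqs[np.argmax(spectrum)]
```

Repeating such scans while varying pulse parameters is the workhorse of the characterization stage; verification and validation then build on the model parameters extracted this way.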

### Article

### Todd A. Brun

Quantum error correction is a set of methods to protect quantum information—that is, quantum states—from unwanted environmental interactions (decoherence) and other forms of noise. The information is stored in a quantum error-correcting code, which is a subspace in a larger Hilbert space. This code is designed so that the most common errors move the state into an error space orthogonal to the original code space while preserving the information in the state. It is possible to determine whether an error has occurred by a suitable measurement and to apply a unitary correction that returns the state to the code space without measuring (and hence disturbing) the protected state itself. In general, codewords of a quantum code are entangled states. No code that stores information can protect against all possible errors; instead, codes are designed to correct a specific error set, which should be chosen to match the most likely types of noise. An error set is represented by a set of operators that can multiply the codeword state.
Most work on quantum error correction has focused on systems of quantum bits, or qubits, which are two-level quantum systems. These can be physically realized by the states of a spin-1/2 particle, the polarization of a single photon, two distinguished levels of a trapped atom or ion, the current states of a microscopic superconducting loop, or many other physical systems. The most widely used codes are the stabilizer codes, which are closely related to classical linear codes. The code space is the joint +1 eigenspace of a set of commuting Pauli operators on n qubits, called stabilizer generators; the error syndrome is determined by measuring these operators, which allows errors to be diagnosed and corrected. A stabilizer code is characterized by three parameters [[n, k, d]], where n is the number of physical qubits, k is the number of encoded logical qubits, and d is the minimum distance of the code (the smallest number of simultaneous qubit errors that can transform one valid codeword into another). Every useful code has n > k; this physical redundancy is necessary to detect and correct errors without disturbing the logical state.
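The syndrome-measurement idea can be illustrated with the simplest stabilizer example, the 3-qubit bit-flip repetition code, whose generators Z₁Z₂ and Z₂Z₃ detect any single X (bit-flip) error. The sketch below is illustrative rather than taken from the article: it tracks only the classical Pauli-error bookkeeping (which errors occurred and which parity checks they trip), not quantum amplitudes; this bookkeeping is exactly what makes stabilizer codes efficient to analyze.

```python
# Syndrome decoding for the 3-qubit bit-flip code (a hedged, minimal sketch).
# An error pattern is a tuple of 3 bits, where 1 means an X error on that qubit.
# Measuring the stabilizer generators Z1 Z2 and Z2 Z3 amounts to parity checks
# on neighboring qubits: an eigenvalue of -1 (here, syndrome bit 1) flags that
# exactly one qubit of the pair was flipped.

def syndrome(error):
    """Return the two stabilizer measurement outcomes for an error pattern."""
    s1 = error[0] ^ error[1]   # outcome of measuring Z1 Z2
    s2 = error[1] ^ error[2]   # outcome of measuring Z2 Z3
    return (s1, s2)

# Each syndrome points to a unique single-qubit correction (or none).
CORRECTION = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # X error on qubit 0
    (1, 1): 1,     # X error on qubit 1
    (0, 1): 2,     # X error on qubit 2
}

def correct(error):
    """Apply the table-driven correction; return the residual error pattern."""
    qubit = CORRECTION[syndrome(error)]
    if qubit is None:
        return error
    fixed = list(error)
    fixed[qubit] ^= 1          # applying X again cancels the X error
    return tuple(fixed)

# Every single bit-flip error is corrected back to the trivial (no-error) pattern.
for q in range(3):
    e = [0, 0, 0]
    e[q] = 1
    assert correct(tuple(e)) == (0, 0, 0)
```

Note that the syndrome identifies the error without revealing the encoded logical state, mirroring the measurement-without-disturbance property described above; the full quantum code would add Hadamard-conjugated checks to handle phase errors as well.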
Quantum error correction is used to protect information in quantum communication (where quantum states pass through noisy channels) and quantum computation (where quantum states are transformed through a sequence of imperfect computational steps in the presence of environmental decoherence to solve a computational problem). In quantum computation, error correction is just one component of fault-tolerant design. Other approaches to error mitigation in quantum systems include decoherence-free subspaces, noiseless subsystems, and dynamical decoupling.

### Article

### Sumit R. Das

A quantum quench is a process in which a parameter of a many-body system or quantum field theory is changed in time, taking an initial stationary state into a complicated excited state. Traditionally, “quench” refers to a process in which this time dependence is fast compared to all scales in the problem. In recent years, however, the terminology has been generalized to include smooth changes that are slow compared to the initial scales in the problem but become fast compared to the physical scales at some later time, leading to a breakdown of adiabatic evolution. The quantum quench has recently been used as a theoretical tool to study many aspects of nonequilibrium physics, such as thermalization and universal aspects of critical dynamics. Relatively recent experiments in cold-atom systems have implemented such quench protocols, exploring dynamical passages through critical points and studying in detail the process of relaxation to a steady state. On the other hand, quenches that remain adiabatic have been explored as a useful technique in quantum computation.

### Article

### A. W. Thomas

The strong force that binds atomic nuclei is governed by the rules of Quantum Chromodynamics. Here we consider the suggestion that the internal quark structure of a nucleon will adjust self-consistently to the local mean scalar field in a nuclear medium and that this may play a profound role in nuclear structure. We show that one can derive an energy density functional based on this idea which successfully describes the properties of atomic nuclei across the periodic table in terms of a small number of physically motivated parameters. Because this approach amounts to a new paradigm for nuclear theory, it is vital to find ways to test it experimentally, and we review a number of the most promising possibilities.

### Article

### Shahin Jafarzadeh

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Physics. Please check back later for the full article.
The solar chromosphere (color sphere) is a strongly structured and highly dynamic region (layer) of the Sun’s atmosphere, located above the bright, visible photosphere. It is optically thin in the near-ultraviolet to near-infrared spectral range, but optically thick in the millimeter range and in strong spectral lines. Particularly important is the departure from local thermodynamic equilibrium as one moves from the photosphere to the chromosphere. In a plane-parallel model, the temperature gradually rises from the low chromosphere outwards (radially from the center of the Sun), against the rapid decrease in both gas density and pressure with height throughout the entire solar atmosphere. In this classical picture, the chromosphere is sandwiched between the so-called temperature minimum (i.e., the minimum average temperature in the solar atmosphere, about 4000 K) and the hot transition region (a few tens of thousands of kelvin at its lower boundary), above which the temperature drastically increases outwards, reaching millions of degrees in the solar corona (i.e., the outermost layer of the Sun’s atmosphere). In reality, however, this standard (simple) model does not properly account for the many faces of the non-uniform and dynamic chromosphere. For instance, extremely cool gas also exists in this highly dynamic region.
A variety of heating mechanisms have been suggested to contribute to the energetics of the solar chromosphere. These particularly include propagating waves (of various kinds), often generated in the low photosphere, as well as jets, flares, and explosive events resulting from, for example, magnetic reconnection. However, observations of energy deposition in the chromosphere (particularly from waves) have been rare.
The solar chromosphere is dominated by magnetic fields (the gas density drops by more than four orders of magnitude relative to the underlying photosphere, so magnetic pressure dominates gas pressure), featuring a variety of phenomena including sunspots, plages, eruptions, and elongated structures of differing physical properties and/or appearances. The latter have been given different names in the literature, such as fibrils, spicules, filaments, prominences, straws, mottles, surges, or rosettes, within which various sub-categories have also been introduced. Some of these thread-like structures share the same properties, some are speculated to represent the same or completely different phenomena at different atmospheric heights, and some manifest themselves differently in intensity images, depending on the properties of the sampling spectral lines. Their origins and relationships to each other are poorly understood. The elongated structures have been suggested to map the magnetic fields in the solar chromosphere; however, this involves the challenges of measuring or approximating the chromospheric magnetic fields (particularly in quiet regions) and of estimating the exact formation heights of the fibrillar structures.
The solar chromosphere may thus be described as a challenging, complex plasma-physics lab, in which many of the observed phenomena and physical processes have not yet been fully understood.

### Article

### Lidia van Driel-Gesztelyi and Mathew J. Owens

The Sun’s magnetic field drives the solar wind and produces space weather. It also acts as the prototype for an understanding of other stars and their planetary environments. Plasma motions in the solar interior provide the dynamo action that generates the solar magnetic field. At the solar surface, this is evident as an approximately 11-year cycle in the number and position of visible sunspots. This solar cycle is manifest in virtually all observable solar parameters, from the occurrence of the smallest detected magnetic features on the Sun to the size of the bubble in interstellar space that is carved out by the solar wind. Moderate to severe space-weather effects show a strong solar-cycle variation. However, it remains a matter of debate whether extreme space weather follows the 11-year cycle.
Each 11-year solar cycle is actually only half of a solar magnetic “Hale” cycle, with the configuration of the Sun’s large-scale magnetic field taking approximately 22 years to repeat. At the start of a new solar cycle, sunspots emerge at mid-latitude regions with an orientation that opposes the dominant large-scale field, leading to an erosion of the polar fields. As the cycle progresses, sunspots emerge at lower latitudes. Around solar maximum, the polar field polarity reverses, but the sunspot orientation remains the same, leading to a build-up of polar field strength that peaks at the start of the next cycle. Similar magnetic cyclicity has recently been inferred in other stars.
