
Article

Erica K. Grant and Travis S. Humble

Adiabatic quantum computing (AQC) is a model of computation that uses quantum mechanical processes operating under adiabatic conditions. As a form of universal quantum computation, AQC employs the principles of superposition, tunneling, and entanglement that manifest in quantum physical systems. The AQC model of quantum computing is distinguished by the use of dynamical evolution that is slow with respect to the time and energy scales of the underlying physical systems. This adiabatic condition enforces the promise that the quantum computational state will remain well-defined and controllable, thus enabling the development of new algorithmic approaches. Several notable algorithms developed within the AQC model include methods for solving unstructured search and combinatorial optimization problems. In an idealized setting, the asymptotic complexity analyses of these algorithms indicate computational speed-ups may be possible relative to state-of-the-art conventional methods. However, the presence of non-ideal conditions, including non-adiabatic dynamics, residual thermal excitations, and physical noise, complicates the assessment of the potential computational performance. A relaxation of the adiabatic condition is captured in the complementary computational heuristic of quantum annealing, which accommodates physical systems operating at finite temperature and in open environments. While quantum annealing (QA) provides a more accurate model for the behavior of actual quantum physical systems, the possibility of non-adiabatic effects obscures a clear separation from conventional computational complexity. A series of technological advances in the control of quantum physical systems has enabled experimental AQC and QA. Prominent examples include demonstrations using superconducting electronics, which encode quantum information in the magnetic flux induced by a weak current operating at cryogenic temperatures. A family of devices developed specifically for unconstrained optimization problems has been applied to solve problems in specific domains including logistics, finance, material science, machine learning, and numerical analysis. An accompanying infrastructure has also developed to support these experimental demonstrations and to enable access by a broader community of users. Although AQC is most commonly applied in superconducting technologies, alternative approaches include optically trapped neutral atoms and ion-trap systems. The significant progress in the understanding of AQC has revealed several open topics that continue to motivate research into this model of quantum computation. Foremost is the development of methods for fault-tolerant operation that will ensure the scalability of AQC for solving large-scale problems. In addition, unequivocal experimental demonstrations that differentiate the computational power of AQC and its variants from conventional computing approaches are needed. This will also require advances in the fabrication and control of quantum physical systems under the adiabatic restrictions.
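
As a schematic illustration (standard conventions, not drawn from the article itself), the interpolation usually used to state the adiabatic condition can be written as below, where H_0 is an easily prepared initial Hamiltonian, H_P encodes the problem, T is the total evolution time, and Δ_min is the minimum spectral gap; the runtime bound is a commonly quoted heuristic rather than a sharp theorem.

```latex
\[
  H(s) = (1-s)\,H_0 + s\,H_P , \qquad s = t/T \in [0,1],
\]
\[
  T \;\gtrsim\; \frac{\max_{s}\,\lVert \partial_s H(s) \rVert}{\Delta_{\min}^{2}} ,
  \qquad
  \Delta_{\min} = \min_{s}\,\bigl[E_1(s) - E_0(s)\bigr].
\]
```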

Article

Calabi-Yau spaces, or Kähler spaces admitting zero Ricci curvature, have played a pivotal role in theoretical physics and pure mathematics for the last half century. In physics, they constituted the first and natural solution to compactification of superstring theory to our 4-dimensional universe, primarily due to one of their equivalent definitions being the admittance of covariantly constant spinors. Since the mid-1980s, physicists and mathematicians have joined forces in creating explicit examples of Calabi-Yau spaces, compiling databases of formidable size, including the complete intersection (CICY) data set, the weighted hypersurfaces data set, the elliptic-fibration data set, the Kreuzer-Skarke toric hypersurface data set, generalized CICYs, etc., totaling at least on the order of 10^10 manifolds. These all contribute to the vast string landscape, the multitude of possible vacuum solutions to string compactification. More recently, this collaboration has been enriched by computer science and data science, the former in benchmarking the complexity of the algorithms in computing geometric quantities, and the latter in applying techniques such as machine learning in extracting unexpected information. These endeavours, inspired by the physics of the string landscape, have rendered the investigation of Calabi-Yau spaces one of the most exciting and interdisciplinary fields.
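
To make the machine-learning remark above concrete, the sketch below shows the generic supervised-learning setup used in this area, predicting a geometric or topological quantity from a discrete description of a manifold. The data here are synthetic stand-ins and the feature and target choices are assumptions for illustration only; they are not the article's datasets.

```python
# Illustrative sketch only (assumes numpy and scikit-learn are available):
# learn a mapping from a flattened "configuration matrix"-like description
# to a stand-in target playing the role of a topological invariant.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic stand-in descriptions, flattened to fixed-length feature vectors.
X = rng.integers(0, 5, size=(2000, 12 * 15)).astype(float)
# Synthetic stand-in target; in practice this would be, e.g., a Hodge number.
y = X.sum(axis=1) % 20

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```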

Article

In economics, infrastructure is a long-term investment aimed at the delivery of essential services to a large number of users, such as those in the fields of transport, energy, or telecommunications. A research infrastructure (RI) is a single-sited, distributed, virtual, or mobile facility, designed to deliver scientific services to communities of scientists. In physical sciences (including astronomy and astrophysics, particle and nuclear physics, analytical physics, medical physics), the RI paradigm has found several large-scale applications, such as radio telescopes, neutrino detectors, gravitational wave interferometers, particle colliders and heavy ion beams, high intensity lasers, synchrotron light sources, spallation neutron sources, and hadrontherapy facilities. These RIs require substantial capital and operating expenditures and are ultimately funded by taxpayers. In social cost–benefit analysis (CBA), the impact of an investment project is measured by the intertemporal difference of benefits and costs accruing to different agents. Benefits and costs are quantified and valued through a common metric and using the marginal social opportunity costs of goods (or shadow price) that may differ from the market price, as markets are often incomplete or imperfect. The key strength of CBA is that it produces information about the project’s net contribution to society that is summarized in simple numerical indicators, such as the net present value of a project. For any RI, consolidated cost accounting should include intertemporal capital and operational expenditure both for the main managing body and for experimental collaborations or other external teams, including in-kind contributions. As far as social intertemporal benefits are concerned, it is convenient to divide them into two broad classes. The first class of benefits accrues to different categories of direct and indirect users of infrastructure services: scientists, students, firms benefiting from technological spillovers, consumers of innovative services and products, and citizens who are involved in outreach activities. The empirical estimation of the use value of an RI depends on the scientific specificities of each project, as different social groups are involved to different degrees. Second, there are benefits for the general public of non-users: these benefits are associated with social preferences for scientific research, even when the use of a discovery is unknown. In analogy with the valuation of environmental and cultural goods, the empirical approach to non-use value aims at eliciting the willingness to pay of citizens for the scientific knowledge that is created by an RI. This can be done by well-designed contingent valuation surveys. While some socio-economic impact studies of RIs in physics have been available since the 1980s, the intangible nature of some benefits and the uncertainty associated with scientific discoveries have limited the diffusion of CBA in this field until recently. Nevertheless, recent studies have explored the application of CBA to RIs in physics. Moreover, the European Commission, the European Strategy Forum on Research Infrastructures, the European Investment Bank, and some national authorities suggest that the study of social benefits and costs of RIs should be part of the process leading to funding decisions.
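
As a minimal sketch of the net present value indicator mentioned above (all numbers, the currency, and the 3% social discount rate are hypothetical illustrations, not figures from the article), the calculation amounts to discounting the stream of benefits minus costs over the RI lifetime.

```python
# Illustrative NPV sketch for a hypothetical research infrastructure.
def npv(benefits, costs, discount_rate):
    """Sum of (B_t - C_t) / (1 + r)^t over the project horizon (t = 0, 1, ...)."""
    return sum((b - c) / (1.0 + discount_rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

# Hypothetical 30-year profile: heavy capital expenditure up front,
# operating costs thereafter, benefits (use and non-use values) building up.
years = 30
costs = [120.0, 120.0, 120.0] + [25.0] * (years - 3)      # illustrative, M EUR
benefits = [0.0, 5.0, 10.0] + [45.0] * (years - 3)        # illustrative, M EUR

print(f"NPV at 3% social discount rate: {npv(benefits, costs, 0.03):.1f} M EUR")
```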

Article

Large-scale U.S. government support of scientific research began in World War II with physics, and rapidly expanded in the postwar era to contribute strongly to the United States’ emergence as the world’s leading scientific and economic superpower in the latter half of the 20th century. Vannevar Bush, who directed President Franklin Roosevelt’s World War II science efforts, in the closing days of the War advocated forcefully for U.S. government funding of scientific research to continue even in peacetime to support three important government missions of national security, health, and the economy. He also argued forcefully for the importance of basic research supported by the federal government but steered and guided by the scientific community. This vision guided an expanding role for the U.S. government in supporting research not only at government laboratories but also in non-government institutions, especially universities. Although internationally comparable data are difficult to obtain, the U.S. government appears to be the single largest national funder of physics research. The U.S. government support of physics research comes from many different federal departments and agencies. Federal agencies also invest in experimental development based on research discoveries of physics. The Department of Energy’s (DOE) Office of Science is by far the dominant supporter of physics research in the United States, and DOE’s national laboratories are the dominant performers of U.S. government-supported physics research. Since the 1970s, U.S. government support of physics research has been stagnant with the greatest growth in U.S. government research support having shifted since the 1990s to the life sciences and computer sciences.

Article

Thomas Wiegelmann

Magnetohydrodynamic equilibria are time-independent solutions of the full magnetohydrodynamic (MHD) equations. An important class are static equilibria without plasma flow. They are described by the magnetohydrostatic equations j × B = ∇p + ρ∇Ψ, ∇ × B = μ0 j, ∇ · B = 0. B is the magnetic field, j the electric current density, p the plasma pressure, ρ the mass density, Ψ the gravitational potential, and μ0 the permeability of free space. Under equilibrium conditions, the Lorentz force j × B is compensated by the plasma pressure gradient force and the gravity force. Despite the apparent simplicity of these equations, it is extremely difficult to find exact solutions due to their intrinsic nonlinearity. The problem is greatly simplified for effectively two-dimensional configurations with a translational or axial symmetry. The magnetohydrostatic (MHS) equations can then be transformed into a single nonlinear partial differential equation, the Grad–Shafranov equation. This approach is popular as a first approximation to model, for example, planetary magnetospheres, solar and stellar coronae, and astrophysical and fusion plasmas. For systems without symmetry, one has to solve the full equations in three dimensions, which requires numerically expensive computer programs. Boundary conditions for these systems can often be deduced from measurements. In several astrophysical plasmas (e.g., the solar corona), the magnetic pressure is orders of magnitude higher than the plasma pressure, which allows a neglect of the plasma pressure in lowest order. If gravity is also negligible, Equation 1 then implies a force-free equilibrium in which the Lorentz force vanishes. Generalizations of MHS equilibria are stationary equilibria including a stationary plasma flow (e.g., stellar winds in astrophysics). It is also possible to compute MHD equilibria in rotating systems (e.g., rotating magnetospheres, rotating stellar coronae) by incorporating the centrifugal force. MHD equilibrium theory is useful for studying physical systems that slowly evolve in time. In this case, while one has an equilibrium at each time step, the configuration changes, often in response to temporal changes of the measured boundary conditions (e.g., the magnetic field of the Sun for modeling the corona) or of external sources (e.g., mass loading in planetary magnetospheres). Finally, MHD equilibria can be used as initial conditions for time-dependent MHD simulations. This article reviews the various analytical solutions and numerical techniques to compute MHD equilibria, as well as applications to the Sun, planetary magnetospheres, space, and laboratory plasmas.
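
For concreteness, a standard textbook form of the Grad–Shafranov equation mentioned above (not reproduced from the article) can be written for translational symmetry along z, neglecting gravity; here A(x, y) is the flux function and p(A) and B_z(A) are free profile functions.

```latex
\[
  \mathbf{B} = \nabla A \times \hat{\mathbf{e}}_z + B_z(A)\,\hat{\mathbf{e}}_z ,
  \qquad
  \nabla^2 A + \mu_0 \frac{dp}{dA} + B_z \frac{dB_z}{dA} = 0 .
\]
```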

Article

Magnetic reconnection is a fundamental process that is important for the dynamical evolution of highly conducting plasmas throughout the Universe. In such highly conducting plasmas the magnetic topology is preserved as the plasma evolves, an idea encapsulated by Alfvén’s frozen flux theorem. In this context, “magnetic topology” is defined by the connectivity and linkage of magnetic field lines (streamlines of the magnetic induction) within the domain of interest, together with the connectivity of field lines between points on the domain boundary. The conservation of magnetic topology therefore implies that magnetic field lines cannot break or merge, but evolve only according to smooth deformations. In any real plasma the conductivity is finite, so that the magnetic topology is not preserved everywhere: magnetic reconnection is the process by which the field lines break and recombine, permitting a reconfiguration of the magnetic field. Due to the high conductivity, reconnection may occur only in small dissipation regions where the electric current density reaches extreme values. In many applications of interest, the change of magnetic topology facilitates a rapid conversion of stored magnetic energy into plasma thermal energy, bulk-kinetic energy, and energy of non-thermally accelerated particles. This energy conversion is associated with dynamic phenomena in plasmas throughout the Universe. Examples include flares and other energetic phenomena in the atmosphere of stars including the Sun, substorms in planetary magnetospheres, and disruptions that limit the magnetic confinement time of plasma in nuclear fusion devices. One of the major challenges in understanding reconnection is the extreme separation between the global system scale and the scale of the dissipation region within which the reconnection process itself takes place. Current understanding of reconnection has developed through mathematical and computational modeling as well as dedicated experiments in both the laboratory and space. Magnetohydrodynamic (MHD) reconnection is studied in the framework of magnetohydrodynamics, which is used to study plasmas (and liquid metals) in the continuum approximation.
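
The role of finite conductivity described above is conveniently summarized by the resistive MHD induction equation (for uniform resistivity) and the magnetic Reynolds number; these are standard relations included here for orientation rather than taken from the article. Frozen-in behavior corresponds to the limit R_m ≫ 1, with reconnection confined to thin layers where steep gradients make the diffusive term locally important.

```latex
\[
  \frac{\partial \mathbf{B}}{\partial t}
  = \nabla \times (\mathbf{v} \times \mathbf{B}) + \eta \nabla^{2} \mathbf{B},
  \qquad
  \eta = \frac{1}{\mu_0 \sigma},
  \qquad
  R_m = \frac{L V}{\eta}.
\]
```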

Article

Magnetohydrodynamics is sometimes called magneto-fluid dynamics or hydromagnetics and is referred to as MHD for short. It is the unification of two fields that were completely independent in the 19th, and first half of the 20th, century, namely, electromagnetism and fluid mechanics. It describes the subtle and complex nonlinear interaction between magnetic fields and electrically conducting fluids, which include liquid metals as well as the ionized gases or plasmas that comprise most of the universe. In places such as the Earth’s magnetosphere or the Sun’s outer atmosphere (the corona) where the magnetic field provides an important component of the free energy, MHD effects are responsible for much of the observed dynamic behavior, such as geomagnetic substorms, solar flares and huge eruptions from the Sun that dominate the Earth’s space weather. However, MHD is also of great importance in astrophysics, since many of the MHD processes that are observed in the laboratory or in the Sun and the magnetosphere also take place under different parameter regimes in more exotic cosmical objects such as active stars, accretion discs, and black holes. The different aspects of MHD include determining the nature of: magnetic equilibria under a balance between magnetic forces, pressure gradients and gravity; MHD wave motions; magnetic instabilities; and the important process of magnetic reconnection for converting magnetic energy into other forms. In turn, these aspects play key roles in the fundamental astrophysical processes of magnetoconvection, magnetic flux emergence, star spots, plasma heating, stellar wind acceleration, stellar flares and eruptions, and the generation of magnetic fields by dynamo action.

Article

V.M. Nakariakov

Magnetohydrodynamic (MHD) waves represent one of the macroscopic processes responsible for the transfer of energy and information in plasmas. The existence of MHD waves is due to the elastic and compressible nature of the plasma, and to the effect of the frozen-in magnetic field. Basic properties of MHD waves are examined in the ideal MHD approximation, including effects of plasma nonuniformity and nonlinearity. In a uniform medium, there are four types of MHD wave or mode: the incompressive Alfvén wave, compressive fast and slow magnetoacoustic waves, and non-propagating entropy waves. MHD waves are essentially anisotropic, with the properties highly dependent on the direction of the wave vector with respect to the equilibrium magnetic field. All of these waves are dispersionless. A nonuniformity of the plasma may act as an MHD waveguide, which is exemplified by a field-aligned plasma cylinder that has a number of dispersive MHD modes with different properties. In addition, a smooth nonuniformity of the Alfvén speed across the field leads to mode coupling, the appearance of the Alfvén continuum, and Alfvén wave phase mixing. Interaction and self-interaction of weakly nonlinear MHD waves are discussed in terms of evolutionary equations. Applications of MHD wave theory are illustrated by kink and longitudinal waves in the corona of the Sun.
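
For reference, the standard ideal-MHD phase speeds in a uniform medium take the following textbook form (included as a reminder rather than quoted from the article), where θ is the angle between the wave vector and the equilibrium magnetic field, v_A is the Alfvén speed, and c_s the sound speed.

```latex
\[
  v_A = \frac{B_0}{\sqrt{\mu_0 \rho_0}}, \qquad
  \omega_{\mathrm{Alfv\acute{e}n}} = k\, v_A \cos\theta,
\]
\[
  \left(\frac{\omega}{k}\right)^{2}_{\mathrm{fast,\,slow}}
  = \tfrac{1}{2}\left[(v_A^{2} + c_s^{2})
  \pm \sqrt{(v_A^{2} + c_s^{2})^{2} - 4\, v_A^{2} c_s^{2} \cos^{2}\theta}\,\right].
\]
```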

Article

Multi-fluid magnetohydrodynamics is an extension of classical magnetohydrodynamics that allows a simplified treatment of plasmas with complex chemical mixtures. The types of plasma susceptible to multi-fluid effects are those containing particles with properties significantly different from those of the rest of the plasma in either mass or electric charge, such as neutral particles, molecules, or dust grains. In astrophysics, multi-fluid magnetohydrodynamics is relevant for planetary ionospheres and magnetospheres, the interstellar medium, and the formation of stars and planets, as well as in the atmospheres of cool stars such as the Sun. Traditionally, magnetohydrodynamics has been a classical approximation in many astrophysical and physical applications. Magnetohydrodynamics works well in dense plasmas where the typical plasma scales (e.g., cyclotron frequencies, Larmor radius) are significantly smaller than the scales of the processes under study. Nevertheless, when plasma components are not well coupled by collisions it is necessary to replace single-fluid magnetohydrodynamics by multi-fluid theory. The present article provides a description of environments in which a multi-fluid treatment is necessary and describes modifications to the magnetohydrodynamic equations that are necessary to treat non-ideal plasmas. It also summarizes the physical consequences of major multi-fluid non-ideal magnetohydrodynamic effects including ambipolar diffusion, the Hall effect, the battery effect, and other intrinsically multi-fluid effects. Multi-fluid theory is an intermediate step between magnetohydrodynamics dealing with the collective behaviour of an ensemble of particles, and a kinetic approach where the statistics of particle distributions are studied. The main assumption of multi-fluid theory is that each individual ensemble of particles behaves like a fluid, interacting via collisions with other particle ensembles, such as those belonging to different chemical species or ionization states. Collisional interaction creates a relative macroscopic motion between different plasma components, which, on larger scales, results in the non-ideal behaviour of such plasmas. The non-ideal effects discussed here manifest themselves in plasmas at relatively low temperatures and low densities.
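
One schematic way to organize the non-ideal terms named above is through a generalized Ohm's law; the form below is a common textbook arrangement and is not taken from the article (sign and coefficient conventions vary between authors, and the diffusivities depend on the ionization fraction and collision frequencies). Here η_O, η_H, and η_AD denote the Ohmic, Hall, and ambipolar diffusivities, b̂ is the unit vector along B, and the ∇p_e term gives rise to the battery effect.

```latex
\[
  \mathbf{E} + \mathbf{v} \times \mathbf{B}
  \;\simeq\;
  \eta_{O}\, \mathbf{J}
  + \eta_{H}\, \mathbf{J} \times \hat{\mathbf{b}}
  + \eta_{AD}\, \mathbf{J}_{\perp}
  - \frac{\nabla p_e}{e\, n_e},
  \qquad
  \mathbf{J}_{\perp} = \hat{\mathbf{b}} \times (\mathbf{J} \times \hat{\mathbf{b}}).
\]
```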

Article

The development of physics over the past few centuries has increasingly enabled the development of numerous technologies that have revolutionized society. In the 17th century, Newton built on the results of Galileo and Descartes to start the quantitative science of mechanics. The fields of thermodynamics and electromagnetism were developed more gradually in the 18th and 19th centuries. Of the big physics breakthroughs in the 20th century, quantum mechanics has most clearly led to the widest range of new technologies. New scientific discovery and its conversion to technology, enabling new products, is typically a complex process. From an industry perspective, it is addressed through various R&D strategies, particularly those focused on optimization of return on investment (ROI) and the associated risk management. The evolution of such strategies has been driven by many diverse factors and related trends, including international markets, government policies, and scientific breakthroughs. As a result, many technology-creation initiatives have been based on various types of partnerships between industry, academia, and/or governments. Specific strategies guiding such partnerships are best understood in terms of how they have been developed and implemented within a particular industry. As a consequence, it is useful to consider case studies of strategic R&D partnerships involving the semiconductor industry, which provides a number of instructive examples illustrating strategies that have been successful over decades. There is a large quantity of literature on this subject, in books, journal articles, and online.

Article

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Physics. Please check back later for the full article. The theory of quantum mechanics provides an accurate description of nature at the fundamental level of elementary particles, such as photons, electrons, and larger objects like atoms, molecules, and more macroscopic systems. Any such physical system with two distinct energy levels can be used to represent a quantum bit, or qubit, which provides the equivalent of a classical bit within the context of quantum mechanics. As such, a qubit can be in a well-defined physical state representing one “classical bit” of information. Yet, it also allows for fundamental quantum phenomena such as superposition and mutual entanglement, making these effects available as a resource. Quantum information processing aims to use qubits and quantum effects to attain an advantage in computation and simulation, communication, or the measurement of physical parameters. Much like the classical bits realized by transistors in silicon are at the foundation of many modern devices, quantum bits form the building blocks out of which quantum devices can be constructed that allow for the use of qubits as a resource. Since the 1990s, many physical systems have been investigated and prototyped as quantum bits, leading to implementations that range from photonics to atoms and ions, as well as solid state devices in the form of tailored impurities in a material or superconducting electrical circuits. Each physical approach differs in how the quantum bits are stored, how they are being manipulated, and how quantum states are read out. Research in this area is often cross-cutting between different areas of physics, often covering atomic, optical, and solid state physics and combining fundamental with applied science and engineering. Tying these efforts together is a joint set of metrics that describes the qubits’ ability to retain a quantum mechanical state and the ability to manipulate and read out this state. Examples are phase coherence and fidelity of measurement and operations. Further aspects include the scalability with respect to current technological capabilities, speed, and amenability to error correction.
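
As a brief reminder of the notation behind these statements (standard quantum-information conventions, not specific to the article), a single-qubit superposition state and a maximally entangled two-qubit state can be written as follows.

```latex
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^{2} + |\beta|^{2} = 1,
  \qquad
  |\Phi^{+}\rangle = \frac{|00\rangle + |11\rangle}{\sqrt{2}}.
\]
```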

Article

Joel Wallman, Steven Flammia, and Ian Hincks

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Physics. Please check back later for the full article. Quantum systems may outperform current digital technologies at various information processing tasks, such as simulating the dynamics of quantum systems and integer factorization. Quantum Characterization, Verification, and Validation (QCVV) is the procedure for estimating the quality of physical quantum systems for use as information processors. QCVV consists of three components. Characterization means determining the effect of control operations on a quantum system, and the nature of external noise acting on the quantum system. The first characterization experiments (Rabi, Ramsey, and Hahn-echo) were developed in the context of nuclear magnetic resonance. As other effective two-level systems with varying noise models have been identified and couplings become more complex, additional techniques such as tomography and randomized benchmarking have been developed specifically for quantum information processing. Verification involves verifying that a control operation implements a desired ideal operation to within a specified precision. Often, these targets are set by the requirements for quantum error correction and fault-tolerant quantum computation in specific architectures. Validation is demonstrating that a quantum information processor can solve specific problems. For problems whose solution can be efficiently verified (e.g., prime factorization), validation may involve running a corresponding quantum algorithm (e.g., Shor’s algorithm) and analyzing the time taken to produce the correct solution. For problems whose solution cannot be efficiently verified, for example, quantum simulation, developing adequate techniques is an active area of research. The essential features that make a device useful as a quantum information processor also create difficulties for QCVV, and specialized techniques have been developed to surmount these difficulties. The field is now entering a mature phase where a broad range of techniques can address all three tasks. As quantum information processors continue to scale up and improve, these three tasks look to become increasingly relevant, and many challenges remain.
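
As a concrete, hedged illustration of one characterization technique mentioned above (randomized benchmarking), the sketch below fits the standard zeroth-order decay model F(m) = A p^m + B to synthetic single-qubit data; the data values, parameter choices, and function names are invented for illustration and are not from the article (numpy and scipy are assumed available).

```python
# Illustrative randomized-benchmarking (RB) fit on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def rb_decay(m, A, p, B):
    """Zeroth-order RB model: survival probability vs sequence length m."""
    return A * p**m + B

# Hypothetical survival probabilities for a single qubit (d = 2).
seq_lengths = np.array([1, 5, 10, 25, 50, 100, 200])
survival = rb_decay(seq_lengths, 0.48, 0.995, 0.5) + rng.normal(0, 0.005, seq_lengths.size)

(A, p, B), _ = curve_fit(rb_decay, seq_lengths, survival, p0=[0.5, 0.99, 0.5])

d = 2                        # single-qubit Hilbert-space dimension
r = (d - 1) / d * (1 - p)    # average error rate per randomized operation
print(f"decay parameter p = {p:.4f}, average error rate r = {r:.2e}")
```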

Article

Todd A. Brun

Quantum error correction is a set of methods to protect quantum information—that is, quantum states—from unwanted environmental interactions (decoherence) and other forms of noise. The information is stored in a quantum error-correcting code, which is a subspace in a larger Hilbert space. This code is designed so that the most common errors move the state into an error space orthogonal to the original code space while preserving the information in the state. It is possible to determine whether an error has occurred by a suitable measurement and to apply a unitary correction that returns the state to the code space without measuring (and hence disturbing) the protected state itself. In general, codewords of a quantum code are entangled states. No code that stores information can protect against all possible errors; instead, codes are designed to correct a specific error set, which should be chosen to match the most likely types of noise. An error set is represented by a set of operators that can multiply the codeword state. Most work on quantum error correction has focused on systems of quantum bits, or qubits, which are two-level quantum systems. These can be physically realized by the states of a spin-1/2 particle, the polarization of a single photon, two distinguished levels of a trapped atom or ion, the current states of a microscopic superconducting loop, or many other physical systems. The most widely used codes are the stabilizer codes, which are closely related to classical linear codes. The code space is the joint +1 eigenspace of a set of commuting Pauli operators on n qubits, called stabilizer generators; the error syndrome is determined by measuring these operators, which allows errors to be diagnosed and corrected. A stabilizer code is characterized by three parameters [[n, k, d]], where n is the number of physical qubits, k is the number of encoded logical qubits, and d is the minimum distance of the code (the smallest number of simultaneous qubit errors that can transform one valid codeword into another). Every useful code has n > k; this physical redundancy is necessary to detect and correct errors without disturbing the logical state. Quantum error correction is used to protect information in quantum communication (where quantum states pass through noisy channels) and quantum computation (where quantum states are transformed through a sequence of imperfect computational steps in the presence of environmental decoherence to solve a computational problem). In quantum computation, error correction is just one component of fault-tolerant design. Other approaches to error mitigation in quantum systems include decoherence-free subspaces, noiseless subsystems, and dynamical decoupling.
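
A minimal sketch of the stabilizer idea, using the three-qubit bit-flip (repetition) code as the simplest example; this is a classical caricature in which the stabilizer generators Z1Z2 and Z2Z3 become parity checks, included as an assumed illustration rather than a construction taken from the article (the quantum version measures these parities without reading out the individual qubits).

```python
# Illustrative three-qubit bit-flip code: syndrome extraction and correction.
import numpy as np

def encode(bit):
    """Logical 0 -> 000, logical 1 -> 111 (repetition encoding)."""
    return np.array([bit, bit, bit])

def syndrome(codeword):
    """Parities corresponding to the stabilizer generators Z1Z2 and Z2Z3."""
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

# Syndrome -> position of the single bit-flip error (None means no error).
LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

for logical in (0, 1):
    for error_pos in (None, 0, 1, 2):
        word = encode(logical)
        if error_pos is not None:
            word[error_pos] ^= 1           # apply a single X (bit-flip) error
        flip = LOOKUP[syndrome(word)]      # diagnose from the syndrome alone
        if flip is not None:
            word[flip] ^= 1                # apply the correction
        assert list(word) == list(encode(logical))
print("all single bit-flip errors corrected")
```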

Article

The strong force that binds atomic nuclei is governed by the rules of Quantum Chromodynamics. Here we consider the suggestion that the internal quark structure of a nucleon will adjust self-consistently to the local mean scalar field in a nuclear medium and that this may play a profound role in nuclear structure. We show that one can derive an energy density functional based on this idea, which successfully describes the properties of atomic nuclei across the periodic table in terms of a small number of physically motivated parameters. Because this approach amounts to a new paradigm for nuclear theory, it is vital to find ways to test it experimentally, and we review a number of the most promising possibilities.

Article

Shahin Jafarzadeh

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Physics. Please check back later for the full article. The solar chromosphere (color sphere) is a strongly structured and highly dynamic region (layer) of the Sun’s atmosphere, located above the bright, visible photosphere. It is optically thin in the near-ultraviolet to near-infrared spectral range, but optically thick in the millimeter range and in strong spectral lines. Particularly important is the departure from the local thermodynamic equilibrium as one moves from the photosphere to the chromosphere. In a plane-parallel model, the temperature gradually rises from the low chromosphere outwards (radially from the center of the Sun), against the rapid decrease in both gas density and pressure with height throughout the entire solar atmosphere. In this classical picture, the chromosphere is sandwiched between the so-called temperature minimum (i.e., the minimum average temperature in the solar atmosphere; about 4000 K) and the hot transition region (with a few tens of thousands of kelvin at its lower boundary), above which the temperature drastically increases outwards, reaching millions of degrees in the solar corona (i.e., the outermost layer of the Sun’s atmosphere). In reality, however, this standard (simple) model does not properly account for the many faces of the non-uniform and dynamic chromosphere. For instance, there also exists extremely cool gas in this highly dynamical region. A variety of heating mechanisms has been suggested to contribute to the energetics of the solar chromosphere. These particularly include propagating waves (of various kinds) often generated in the low photosphere, as well as jets, flares, and explosive events as a result of, for example, magnetic reconnection. However, observations of energy deposition in the chromosphere (particularly from waves) have been rare. The solar chromosphere is dominated by the magnetic fields (where the gas density drops by more than four orders of magnitude compared to the underlying photosphere; hence, magnetic pressure dominates that of gas), featuring a variety of phenomena including sunspots, plages, eruptions, and elongated structures of different physical properties and/or appearances. The latter have been given different names in the literature, such as fibrils, spicules, filaments, prominences, straws, mottles, surges, or rosettes, within which various sub-categories have also been introduced. Some of these thread-like structures share the same properties, some are speculated to represent the same or completely different phenomena at different atmospheric heights, and some manifest themselves differently in intensity images, depending on properties of the sampling spectral lines. Their origins and relationships to each other are poorly understood. The elongated structures have been suggested to map the magnetic fields in the solar chromosphere; however, this involves the challenges of measuring or approximating the chromospheric magnetic fields (particularly in the quiet regions), as well as of estimating the exact heights of formation of the fibrillar structures. The solar chromosphere may thus be described as a challenging, complex plasma-physics lab, in which many of the observed phenomena and physical processes have not yet been fully understood.

Article

Lidia van Driel-Gesztelyi and Mathew J. Owens

The Sun’s magnetic field drives the solar wind and produces space weather. It also acts as the prototype for an understanding of other stars and their planetary environments. Plasma motions in the solar interior provide the dynamo action that generates the solar magnetic field. At the solar surface, this is evident as an approximately 11-year cycle in the number and position of visible sunspots. This solar cycle is manifest in virtually all observable solar parameters, from the occurrence of the smallest detected magnetic features on the Sun to the size of the bubble in interstellar space that is carved out by the solar wind. Moderate to severe space-weather effects show a strong solar cycle variation. However, it is a matter of debate whether extreme space weather follows the 11-year cycle. Each 11-year solar cycle is actually only half of a solar magnetic “Hale” cycle, with the configuration of the Sun’s large-scale magnetic field taking approximately 22 years to repeat. At the start of a new solar cycle, sunspots emerge at mid-latitude regions with an orientation that opposes the dominant large-scale field, leading to an erosion of the polar fields. As the cycle progresses, sunspots emerge at lower latitudes. Around solar maximum, the polar field polarity reverses, but the sunspot orientation remains the same, leading to a build-up of polar field strength that peaks at the start of the next cycle. Similar magnetic cyclicity has recently been inferred at other stars.

Article

Robert Cameron

The solar dynamo is the action of flows inside the Sun to maintain its magnetic field against Ohmic decay. On small scales the magnetic field is seen at the solar surface as a ubiquitous “salt-and-pepper” disorganized field that may be generated directly by the turbulent convection. On large scales, the magnetic field is remarkably organized, with an 11-year activity cycle. During each cycle the field emerging in each hemisphere has a specific East–West alignment (known as Hale’s law) that alternates from cycle to cycle, and a statistical tendency for a North-South alignment (Joy’s law). The polar fields reverse sign during the period of maximum activity of each cycle. The relevant flows for the large-scale dynamo are those of convection, the bulk rotation of the Sun, and motions driven by magnetic fields, as well as flows produced by the interaction of these. Particularly important are the Sun’s large-scale differential rotation (for example, the equator rotates faster than the poles), and small-scale helical motions resulting from the Coriolis force acting on convective motions or on the motions associated with buoyantly rising magnetic flux. These two types of motions result in a magnetic cycle. In one phase of the cycle, differential rotation winds up a poloidal magnetic field to produce a toroidal field. Subsequently, helical motions are thought to bend the toroidal field to create new poloidal magnetic flux that reverses and replaces the poloidal field that was present at the start of the cycle. It is now clear that both small- and large-scale dynamo action are in principle possible, and the challenge is to understand which combination of flows and driving mechanisms is responsible for the time-dependent magnetic fields seen on the Sun.

Article

Dana Longcope

A solar flare is a transient increase in solar brightness powered by the release of magnetic energy stored in the Sun’s corona. Flares are observed in all wavelengths of the electromagnetic spectrum. The released magnetic energy heats coronal plasma to temperatures exceeding ten million kelvin, leading to a significant increase in solar brightness at X-ray and extreme ultraviolet wavelengths. The Sun’s overall brightness is normally low at these wavelengths, and a flare can increase it by two or more orders of magnitude. The size of a given flare is traditionally characterized by its peak brightness in a soft X-ray wavelength. Flares occur with frequency inversely related to this measure of size, with those of greatest size occurring less than once per year. Images and light curves from different parts of the spectrum from many different flares have led to an accepted model framework for explaining the typical solar flare. According to this model, a sheet of electric current (a current sheet) is first formed in the corona, perhaps by a coronal mass ejection. Magnetic reconnection at this current sheet allows stored magnetic energy to be converted into bulk flow energy, heat, radiation, and a population of non-thermal electrons and ions. Some of this energy is transmitted downward to cooler layers, which are then evaporated (or ablated) upward to fill the corona with hot, dense plasma. Much of the flare’s bright emission comes from this newly heated plasma. Theoretical models have been proposed to describe each step in this process.

Article

L. P. Chitta, H. N. Smitha, and S. K. Solanki

The Sun is a G2V star with an effective temperature of 5780 K. As the nearest star to Earth and the biggest object in the solar system, it serves as a reference for fundamental astronomical parameters such as stellar mass, luminosity, and elemental abundances. It also serves as a plasma physics laboratory. A great deal of researchers’ understanding of the Sun comes from its electromagnetic radiation, which is close to that of a blackbody whose emission peaks at a wavelength of around 5,000 Å and extends into the near UV and infrared. The bulk of this radiation escapes from the solar surface, from a layer that is a mere 100 km thick. This surface from where the photons escape into the heliosphere and beyond, together with the roughly 400–500 km thick atmospheric layer immediately above it (where the temperature falls off monotonically with distance from the Sun), is termed the solar photosphere. Observations of the solar photosphere have led to some important discoveries in modern-day astronomy and astrophysics. At low spatial resolution, the photosphere is nearly featureless. However, naked-eye solar observations, the oldest of which can plausibly be dated back to 800 bc, have shown there to be occasional blemishes or spots. Systematic observations made with telescopes from the early 1600s onward have provided further information on the evolution of these sunspots whose typical spatial extent is 10,000 km at the solar surface. Continued observations of these sunspots later revealed that they increase and decrease in number with a period of about 11 years and that they actually are a manifestation of the Sun’s magnetic field (representing the first observation of an extraterrestrial magnetic field). This established the presence of magnetic cycles on the Sun responsible for the observed cyclic behavior of solar activity. Such magnetic activity is now known to exist in other stars as well. Superimposed on the solar blackbody spectrum are numerous spectral lines from different atomic species that arise due to the absorption of photons at certain wavelengths by those atoms, in the cooler photospheric plasma overlying the solar surface. These spectral lines provide diagnostics of the properties and dynamics of the underlying plasma (e.g., the granulation due to convection and the solar p-mode oscillations) and of the solar magnetic field. Since the early 20th century, researchers have used these spectral lines and the accompanying polarimetric signals to decode the physics of the solar photosphere and its magnetic structures, including sunspots. Modern observations with high spatial (0.15 arcsec, corresponding to 100 km on the solar surface) and spectral (10 mÅ) resolutions reveal a tapestry of the magnetized plasma with structures down to tens of kilometers at the photosphere (three orders of magnitude smaller than sunspots). Such observations, combined with advanced numerical models, provide further clues to the very important role of the magnetic field in solar and stellar structures and the variability in their brightness. Being the lowest directly observable layer of the Sun, the photosphere is also a window into the solar interior by means of helioseismology, which makes use of the p-mode oscillations. 
Furthermore, being the lowest layer of the solar atmosphere, the photosphere provides key insights into another long-standing mystery, that above the temperature-minimum (~500 km above the surface at ~4000 K), the plasma in the extended corona (invisible to the naked eye except during a total solar eclipse) is heated to temperatures up to 1,000 times higher than at the visible surface. The physics of the solar photosphere is thus central to the understanding of many solar and stellar phenomena.
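
As a quick consistency check on the figures quoted above (a standard application of Wien's displacement law, not a calculation taken from the article), an effective temperature of about 5780 K indeed corresponds to a blackbody peak near 5,000 Å.

```latex
\[
  \lambda_{\max} \approx \frac{b}{T_{\mathrm{eff}}}
  = \frac{2.898 \times 10^{-3}\ \mathrm{m\,K}}{5780\ \mathrm{K}}
  \approx 5.0 \times 10^{-7}\ \mathrm{m} \approx 5000\ \text{\AA}.
\]
```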

Article

E.R. Priest

Solar physics is one of the liveliest branches of astrophysics at the current time, with many major advances that have been stimulated by observations from a series of space satellites and ground-based telescopes as well as theoretical models and sophisticated computational experiments. Studying the Sun is of key importance in physics for two principal reasons. Firstly, the Sun has major effects on the Earth and on its climate and space weather, as well as other planets of the solar system. Secondly, it represents a Rosetta stone, where fundamental astrophysical processes can be investigated in great detail. Yet, there are still major unanswered questions in solar physics, such as how the magnetic field is generated in the interior by dynamo action, how magnetic flux emerges through the solar surface and interacts with the overlying atmosphere, how the chromosphere and corona are heated, how the solar wind is accelerated, how coronal mass ejections are initiated and how energy is released in solar flares and high-energy particles are accelerated. Huge progress has been made on each of these topics since the year 2000, but there is as yet no definitive answer to any of them. When the answers to such puzzles are found, they will have huge implications for similar processes elsewhere in the cosmos but under different parameter regimes.