Erica Grant and Travis Humble
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Physics. Please check back later for the full article.
Adiabatic quantum computing (AQC) is a model of computation that uses quantum mechanical processes operating under adiabatic conditions. As a form of universal quantum computation, AQC employs the principles of superposition, tunneling, and entanglement that manifest in quantum physical systems. The AQC model of quantum computing is distinguished by dynamical evolution that is slow with respect to the time and energy scales of the underlying physical systems. This adiabatic condition enforces the promise that the quantum computational state will remain well defined and controllable, thus enabling the development of new algorithmic principles.
Several notable algorithms developed within the AQC model include methods for solving unstructured search and combinatorial optimization problems. In an idealized setting, the asymptotic complexity analyses of these algorithms indicate that computational speedups may be possible relative to state-of-the-art conventional methods. However, the presence of non-ideal conditions, including non-adiabatic dynamics, non-zero temperature, and physical noise, complicates the assessment of the potential computational performance. A relaxation of the adiabatic condition is captured in the complementary computational heuristic of quantum annealing, which accommodates physical systems operating at finite temperature and in open environments. While quantum annealing (QA) provides a more accurate model for the behavior of actual quantum physical systems, the possibility of non-adiabatic effects obscures a clear separation with conventional computing complexity.
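The interpolation at the heart of AQC and QA can be illustrated with a small numerical sketch. The two-qubit Ising problem Hamiltonian below is an arbitrary toy instance (not taken from the article); the minimum spectral gap it tracks is the quantity that governs how slowly the adiabatic evolution must proceed:

```python
import numpy as np

# Pauli matrices and a Kronecker-product helper.
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Driver H_B: transverse field; problem H_P: a toy diagonal Ising cost.
H_B = -(kron(X, I) + kron(I, X))
H_P = kron(Z, Z) + 0.5 * kron(Z, I)

# Interpolate H(s) = (1 - s) H_B + s H_P and track the spectral gap;
# the adiabatic runtime scales inversely with the square of the minimum gap.
gaps = []
for s in np.linspace(0.0, 1.0, 201):
    evals = np.linalg.eigvalsh((1 - s) * H_B + s * H_P)
    gaps.append(evals[1] - evals[0])

print(f"minimum spectral gap along the schedule: {min(gaps):.3f}")
```

For this toy instance the gap stays bounded away from zero; hard optimization instances are precisely those whose minimum gap closes rapidly as the problem size grows.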
A series of technological advances in the control of quantum physical systems have enabled experimental realizations of AQC and QA. Prominent examples include demonstrations using superconducting electronics, which encode quantum information in the magnetic flux induced by very weak current operating at cryogenic temperatures. A family of devices specialized for unconstrained optimization problems have been applied to solving a variety of specific domains including logistics, finance, science, and numerical analysis. An accompanying infrastructure has also developed to support these experimental demonstrations and to enable access to a broader community of users. Although AQC is most commonly applied in superconducting technologies, alternative approaches include optically trapped neutral atoms and ion-trapped systems.
The significant progress in our understanding of AQC has revealed several open topics that continue to motivate research into this model of quantum computation. Foremost is the development of methods for fault-tolerant operation that will ensure the scalability of AQC for solving large-scale problems. In addition, unequivocal experimental demonstrations are needed that differentiate the computational power of AQC and its variants from conventional computing approaches. This will also require advances in the fabrication and control of quantum physical systems under the adiabatic restrictions.
Massimo Florio and Chiara Pancotti
In economics, infrastructure is a long-term investment aimed at the delivery of essential services to a large number of users, such as those in the field of transport, energy, or telecommunications. A research infrastructure (RI) is a single-sited, distributed, virtual, or mobile facility, designed to deliver scientific services to communities of scientists. In physical sciences (including astronomy and astrophysics, particle and nuclear physics, analytical physics, medical physics), the RI paradigm has found several large-scale applications, such as radio telescopes, neutrino detectors, gravitational wave interferometers, particle colliders and heavy ion beams, high intensity lasers, synchrotron light sources, spallation neutron sources, and hadrontherapy facilities.
These RIs require substantial capital and operation expenditures and are ultimately funded by taxpayers. In social cost–benefit analysis (CBA), the impact of an investment project is measured by the intertemporal difference of benefits and costs accruing to different agents. Benefits and costs are quantified and valued through a common metric and using the marginal social opportunity costs of goods (or shadow price) that may differ from the market price, as markets are often incomplete or imperfect. The key strength of CBA is that it produces information about the project’s net contribution to society that is summarized in simple numerical indicators, such as the net present value of a project.
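The net-present-value indicator mentioned above reduces to a short computation. The cash flows and the 3% social discount rate below are hypothetical illustrations, not values from any actual RI appraisal:

```python
# Minimal sketch of the net-present-value (NPV) indicator used in social
# cost-benefit analysis: discount each year's (benefits - costs) back to year 0.
def npv(flows, rate):
    """Discounted sum of net flows, where flows[t] is the net flow in year t."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Year 0: heavy capital expenditure; later years: net social benefits.
net_flows = [-100.0, 10.0, 20.0, 30.0, 40.0, 50.0]
result = npv(net_flows, rate=0.03)
print(f"Net present value: {result:.2f}")  # positive => net contribution to society
```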
For any RIs, consolidated cost accounting should include intertemporal capital and operational expenditure both for the main managing body and for experimental collaborations or other external teams, including in-kind contributions. As far as social intertemporal benefits are concerned, it is convenient to divide them into two broad classes. The first class of benefits accrues to different categories of direct and indirect users of infrastructure services: scientists, students, firms benefiting from technological spillovers, consumers of innovative services and products, and citizens who are involved in outreach activities. The empirical estimation of the use value of an RI depends on the scientific specificities of each project, as different social groups are involved to different degrees. Second, there are benefits for the general public of non-users: these benefits are associated with social preferences for scientific research, even when the use of a discovery is unknown. In analogy with the valuation of environmental and cultural goods, the empirical approach to non-use value aims at eliciting the willingness to pay of citizens for the scientific knowledge that is created by an RI. This can be done by well-designed contingent valuation surveys.
While some socio-economic impact studies of RIs in physics have been available since the 1980s, the intangible nature of some benefits and the uncertainty associated with scientific discoveries have limited the diffusion of CBA in this field until recently. Nevertheless, recent studies have explored the application of CBA to RIs in physics. Moreover, the European Commission, the European Strategy Forum on Research Infrastructures, the European Investment Bank, and some national authorities suggest that the study of social benefits and costs of RIs should be part of the process leading to funding decisions.
Large-scale U.S. government support of scientific research began in World War II with physics, and rapidly expanded in the postwar era to contribute strongly to the United States’ emergence as the world’s leading scientific and economic superpower in the latter half of the 20th century. Vannevar Bush, who directed President Franklin Roosevelt’s World War II science efforts, in the closing days of the War advocated forcefully for U.S. government funding of scientific research to continue even in peacetime to support three important government missions of national security, health, and the economy. He also argued forcefully for the importance of basic research supported by the federal government but steered and guided by the scientific community. This vision guided an expanding role for the U.S. government in supporting research not only at government laboratories but also in non-government institutions, especially universities.
Although internationally comparable data are difficult to obtain, the U.S. government appears to be the single largest national funder of physics research. The U.S. government support of physics research comes from many different federal departments and agencies. Federal agencies also invest in experimental development based on research discoveries of physics. The Department of Energy’s (DOE) Office of Science is by far the dominant supporter of physics research in the United States, and DOE’s national laboratories are the dominant performers of U.S. government-supported physics research. Since the 1970s, U.S. government support of physics research has been stagnant, with the greatest growth in U.S. government research support having shifted since the 1990s to the life sciences and computer sciences.
Magnetohydrodynamic equilibria are time-independent solutions of the full magnetohydrodynamic (MHD) equations. An important class are static equilibria without plasma flow. They are described by the magnetohydrostatic equations

$$\mathbf{j} \times \mathbf{B} - \nabla p - \rho \nabla \Phi = 0, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{j}, \qquad \nabla \cdot \mathbf{B} = 0,$$

where $\mathbf{B}$ is the magnetic field, $\mathbf{j}$ the electric current density, $p$ the plasma pressure, $\rho$ the mass density, $\Phi$ the gravitational potential, and $\mu_0$ the permeability of free space. Under equilibrium conditions, the Lorentz force is compensated by the plasma pressure gradient force and the gravity force.
Despite the apparent simplicity of these equations, it is extremely difficult to find exact solutions due to their intrinsic nonlinearity. The problem is greatly simplified for effectively two-dimensional configurations with a translational or axial symmetry. The magnetohydrostatic (MHS) equations can then be transformed into a single nonlinear partial differential equation, the Grad–Shafranov equation. This approach is popular as a first approximation to model, for example, planetary magnetospheres, solar and stellar coronae, and astrophysical and fusion plasmas.
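For reference, in cylindrical coordinates $(R, \phi, Z)$ with axial symmetry and gravity neglected, the Grad–Shafranov equation takes its conventional form (the notation below is the standard one, assumed here rather than quoted from the full article):

```latex
\Delta^{*}\psi \equiv
R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\,\frac{\partial \psi}{\partial R}\right)
+ \frac{\partial^{2}\psi}{\partial Z^{2}}
= -\mu_{0} R^{2}\,\frac{dp}{d\psi} - F\,\frac{dF}{d\psi},
```

where $\psi(R, Z)$ is the poloidal magnetic flux function, $p(\psi)$ the pressure, and $F(\psi) = R B_{\phi}$ describes the toroidal field; the nonlinearity enters through the two free flux functions on the right-hand side.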
For systems without symmetry, one has to solve the full equations in three dimensions, which requires numerically expensive computer programs. Boundary conditions for these systems can often be deduced from measurements. In several astrophysical plasmas (e.g., the solar corona), the magnetic pressure is orders of magnitude higher than the plasma pressure, which allows the plasma pressure to be neglected to lowest order. If gravity is also negligible, Equation 1 then implies a force-free equilibrium in which the Lorentz force vanishes.
Generalizations of MHS equilibria are stationary equilibria including a stationary plasma flow (e.g., stellar winds in astrophysics). It is also possible to compute MHD equilibria in rotating systems (e.g., rotating magnetospheres, rotating stellar coronae) by incorporating the centrifugal force. MHD equilibrium theory is useful for studying physical systems that slowly evolve in time. In this case, while one has an equilibrium at each time step, the configuration changes, often in response to temporal changes of the measured boundary conditions (e.g., the magnetic field of the Sun for modeling the corona) or of external sources (e.g., mass loading in planetary magnetospheres). Finally, MHD equilibria can be used as initial conditions for time-dependent MHD simulations. This article reviews the various analytical solutions and numerical techniques to compute MHD equilibria, as well as applications to the Sun, planetary magnetospheres, space, and laboratory plasmas.
D. I. Pontin
Magnetic reconnection is a fundamental process that is important for the dynamical evolution of highly conducting plasmas throughout the Universe. In such highly conducting plasmas the magnetic topology is preserved as the plasma evolves, an idea encapsulated by Alfvén’s frozen flux theorem. In this context, “magnetic topology” is defined by the connectivity and linkage of magnetic field lines (streamlines of the magnetic induction) within the domain of interest, together with the connectivity of field lines between points on the domain boundary. The conservation of magnetic topology therefore implies that magnetic field lines cannot break or merge, but evolve only according to smooth deformations. In any real plasma the conductivity is finite, so that the magnetic topology is not preserved everywhere: magnetic reconnection is the process by which the field lines break and recombine, permitting a reconfiguration of the magnetic field. Due to the high conductivity, reconnection may occur only in small dissipation regions where the electric current density reaches extreme values. In many applications of interest, the change of magnetic topology facilitates a rapid conversion of stored magnetic energy into plasma thermal energy, bulk-kinetic energy, and energy of non-thermally accelerated particles. This energy conversion is associated with dynamic phenomena in plasmas throughout the Universe. Examples include flares and other energetic phenomena in the atmosphere of stars including the Sun, substorms in planetary magnetospheres, and disruptions that limit the magnetic confinement time of plasma in nuclear fusion devices. One of the major challenges in understanding reconnection is the extreme separation between the global system scale and the scale of the dissipation region within which the reconnection process itself takes place. 
Current understanding of reconnection has developed through mathematical and computational modeling as well as dedicated experiments in both the laboratory and space. Magnetohydrodynamic (MHD) reconnection is studied in the framework of magnetohydrodynamics, which is used to study plasmas (and liquid metals) in the continuum approximation.
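The scale-separation challenge mentioned above can be made concrete with the classical Sweet–Parker scaling from resistive MHD, in which the dimensionless reconnection rate falls as the inverse square root of the Lundquist number $S$. The coronal value $S \sim 10^{12}$ used below is a commonly quoted order of magnitude, not a figure from this article:

```python
# Sweet-Parker scaling: in resistive MHD the dimensionless reconnection
# rate goes as S**-0.5, where S is the Lundquist number (the magnetic
# Reynolds number based on the Alfven speed).
def sweet_parker_rate(lundquist_number):
    return lundquist_number ** -0.5

for S in (1e4, 1e8, 1e12):
    print(f"S = {S:.0e}: reconnection rate ~ {sweet_parker_rate(S):.1e}")
```

For coronal values this rate is far too slow to explain observed flare timescales, which is one way to see why reconnection at extreme scale separations remains a major challenge.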
The development of physics over the past few centuries has increasingly enabled the development of numerous technologies that have revolutionized society. In the 17th century, Newton built on the results of Galileo and Descartes to start the quantitative science of mechanics. The fields of thermodynamics and electromagnetism were developed more gradually in the 18th and 19th centuries. Of the big physics breakthroughs in the 20th century, quantum mechanics has most clearly led to the widest range of new technologies. New scientific discovery and its conversion to technology, enabling new products, is typically a complex process. From an industry perspective, it is addressed through various R&D strategies, particularly those focused on optimization of return on investment (ROI) and the associated risk management. The evolution of such strategies has been driven by many diverse factors and related trends, including international markets, government policies, and scientific breakthroughs. As a result, many technology-creation initiatives have been based on various types of partnerships between industry, academia, and/or governments. Specific strategies guiding such partnerships are best understood in terms of how they have been developed and implemented within a particular industry. As a consequence, it is useful to consider case studies of strategic R&D partnerships involving the semiconductor industry, which provides a number of instructive examples illustrating strategies that have been successful over decades. There is a large quantity of literature on this subject, in books, journal articles, and online.
The theory of quantum mechanics provides an accurate description of nature at the fundamental level of elementary particles, such as photons, electrons, and larger objects like atoms, molecules, and more macroscopic systems. Any such physical system with two distinct energy levels can be used to represent a quantum bit, or qubit, which provides the equivalent to a classical bit within the context of quantum mechanics. As such, a qubit can be in a well-defined physical state representing one “classical bit” of information. Yet, it also allows for fundamental quantum phenomena such as superposition and mutual entanglement, making these effects available as a resource. Quantum information processing aims to use qubits and quantum effects to attain an advantage in computation and simulation, communication, or the measurement of physical parameters.
Much like the classical bits realized by transistors in silicon are at the foundation of many modern devices, quantum bits form the building blocks out of which quantum devices can be constructed that allow for the use of qubits as a resource. Since the 1990s, many physical systems have been investigated and prototyped as quantum bits, leading to implementations that range from photonics to atoms and ions, as well as solid state devices in the form of tailored impurities in a material or superconducting electrical circuits. Each physical approach differs in how the quantum bits are stored, how they are manipulated, and how quantum states are read out. Research in this area is often cross-cutting between different areas of physics, often covering atomic, optical, and solid state physics and combining fundamental with applied science and engineering. Tying these efforts together is a joint set of metrics that describes the qubits’ ability to retain a quantum mechanical state and the ability to manipulate and read out this state. Examples are phase coherence and fidelity of measurement and operations. Further aspects include the scalability with respect to current technological capabilities, speed, and amenability to error correction.
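A minimal numerical sketch of the qubit concepts above (the amplitudes chosen below are arbitrary illustrations, not tied to any particular physical platform):

```python
import numpy as np

# A single qubit |psi> = alpha|0> + beta|1>; measurement in the
# computational basis yields 0 or 1 with probabilities |alpha|^2, |beta|^2.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = np.array([alpha, beta])

assert np.isclose(np.vdot(psi, psi).real, 1.0)   # normalization
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")

# A two-qubit entangled (Bell) state: it cannot be written as a product
# of two single-qubit states, which is what makes it a genuine resource.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>) / sqrt(2)
```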
Joel Wallman, Steven Flammia, and Ian Hincks
Quantum systems may outperform current digital technologies at various information processing tasks, such as simulating the dynamics of quantum systems and integer factorization. Quantum Characterization, Verification, and Validation (QCVV) is the procedure for estimating the quality of physical quantum systems for use as information processors. QCVV consists of three components.
Characterization means determining the effect of control operations on a quantum system, and the nature of external noise acting on the quantum system. The first characterization experiments (Rabi, Ramsey, and Hahn-echo) were developed in the context of nuclear magnetic resonance. As other effective two-level systems with varying noise models have been identified and couplings become more complex, additional techniques such as tomography and randomized benchmarking have been developed specifically for quantum information processing.
Verification involves verifying that a control operation implements a desired ideal operation to within a specified precision. Often, these targets are set by the requirements for quantum error correction and fault-tolerant quantum computation in specific architectures.
Validation is demonstrating that a quantum information processor can solve specific problems. For problems whose solution can be efficiently verified (e.g., prime factorization), validation may involve running a corresponding quantum algorithm (e.g., Shor’s algorithm) and analyzing the time taken to produce the correct solution. For problems whose solution cannot be efficiently verified, for example, quantum simulation, developing adequate techniques is an active area of research.
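The distinction between solving and verifying can be made concrete: checking a claimed factorization requires only multiplication, while producing the factors is believed to be classically hard. The semiprime below is a toy value chosen for illustration:

```python
# Efficient verification of a hard-to-solve problem: accept a claimed
# factorization iff its nontrivial factors multiply back to n.
def verify_factorization(n, factors):
    product = 1
    for f in factors:
        if f <= 1 or f >= n:
            return False         # reject trivial "factors" like 1 or n itself
        product *= f
    return product == n

N = 15485863 * 32452843          # toy semiprime (product of two primes)
claimed = (15485863, 32452843)
print(verify_factorization(N, claimed))   # checking is a single multiplication
```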
The essential features that make a device useful as a quantum information processor also create difficulties for QCVV, and specialized techniques have been developed to surmount these difficulties. The field is now entering a mature phase where a broad range of techniques can address all three tasks. As quantum information processors continue to scale up and improve, these three tasks look to become increasingly relevant, and many challenges remain.
Todd A. Brun
Quantum error correction is a set of methods to protect quantum information—that is, quantum states—from unwanted environmental interactions (decoherence) and other forms of noise. The information is stored in a quantum error-correcting code, which is a subspace in a larger Hilbert space. This code is designed so that the most common errors move the state into an error space orthogonal to the original code space while preserving the information in the state. It is possible to determine whether an error has occurred by a suitable measurement and to apply a unitary correction that returns the state to the code space without measuring (and hence disturbing) the protected state itself. In general, codewords of a quantum code are entangled states. No code that stores information can protect against all possible errors; instead, codes are designed to correct a specific error set, which should be chosen to match the most likely types of noise. An error set is represented by a set of operators that can multiply the codeword state.
Most work on quantum error correction has focused on systems of quantum bits, or qubits, which are two-level quantum systems. These can be physically realized by the states of a spin-1/2 particle, the polarization of a single photon, two distinguished levels of a trapped atom or ion, the current states of a microscopic superconducting loop, or many other physical systems. The most widely used codes are the stabilizer codes, which are closely related to classical linear codes. The code space is the joint +1 eigenspace of a set of commuting Pauli operators on $n$ qubits, called stabilizer generators; the error syndrome is determined by measuring these operators, which allows errors to be diagnosed and corrected. A stabilizer code is characterized by three parameters $[[n, k, d]]$, where $n$ is the number of physical qubits, $k$ is the number of encoded logical qubits, and $d$ is the minimum distance of the code (the smallest number of simultaneous qubit errors that can transform one valid codeword into another). Every useful code has $n > k$; this physical redundancy is necessary to detect and correct errors without disturbing the logical state.
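As a minimal illustration of syndrome measurement, consider the three-qubit bit-flip code, which protects against a single bit-flip (X) error and whose stabilizer generators are $Z_1 Z_2$ and $Z_2 Z_3$. The logical amplitudes below are an arbitrary choice:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op(*factors):
    out = np.array([[1.0]])
    for f in factors:
        out = np.kron(out, f)
    return out

# Stabilizer generators of the bit-flip code.
Z1Z2, Z2Z3 = op(Z, Z, I), op(I, Z, Z)

# Encode a|0> + b|1> as a|000> + b|111> (here a = 0.6, b = 0.8, arbitrary).
logical = np.zeros(8)
logical[0], logical[7] = 0.6, 0.8

def syndrome(state):
    # Codewords and singly-flipped codewords are exact eigenstates of both
    # stabilizers, so each expectation value is +1 or -1 and the pair of
    # outcomes identifies which qubit, if any, was flipped.
    return (round(state @ Z1Z2 @ state), round(state @ Z2Z3 @ state))

print(syndrome(logical))                 # (1, 1): no error detected
flipped = op(X, I, I) @ logical          # bit flip on qubit 1
print(syndrome(flipped))                 # (-1, 1): error located on qubit 1
```

Note that the syndrome reveals only the error location, not the amplitudes a and b, so the encoded information is left undisturbed.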
Quantum error correction is used to protect information in quantum communication (where quantum states pass through noisy channels) and quantum computation (where quantum states are transformed through a sequence of imperfect computational steps in the presence of environmental decoherence to solve a computational problem). In quantum computation, error correction is just one component of fault-tolerant design. Other approaches to error mitigation in quantum systems include decoherence-free subspaces, noiseless subsystems, and dynamical decoupling.
The solar chromosphere (color sphere) is a strongly structured and highly dynamic region (layer) of the Sun’s atmosphere, located above the bright, visible photosphere. It is optically thin in the near-ultraviolet to near-infrared spectral range, but optically thick in the millimeter range and in strong spectral lines. Particularly important is the departure from local thermodynamic equilibrium as one moves from the photosphere to the chromosphere. In a plane-parallel model, the temperature gradually rises from the low chromosphere outwards (radially from the center of the Sun), against the rapid decrease in both gas density and pressure with height throughout the entire solar atmosphere. In this classical picture, the chromosphere is sandwiched between the so-called temperature minimum (i.e., the minimum average temperature in the solar atmosphere; about 4000 K) and the hot transition region (with a few tens of thousands of kelvins at its lower boundary), above which the temperature drastically increases outwards, reaching millions of degrees in the solar corona (i.e., the outermost layer of the Sun’s atmosphere). In reality, however, this standard (simple) model does not properly account for the many faces of the non-uniform and dynamic chromosphere. For instance, there also exists extremely cool gas in this highly dynamical region.
A variety of heating mechanisms have been suggested to contribute to the energetics of the solar chromosphere. These particularly include propagating waves (of various kinds) often generated in the low photosphere, as well as jets, flares, and explosive events as a result of, for example, magnetic reconnection. However, observations of energy deposition in the chromosphere (particularly from waves) have been rare.
The solar chromosphere is dominated by the magnetic fields (where the gas density reduces by more than four orders of magnitude compared to the underlying photosphere; hence, magnetic pressure dominates that of gas) featuring a variety of phenomena including sunspots, plages, eruptions, and elongated structures of different physical properties and/or appearances. The latter have been given different names in the literature, such as fibrils, spicules, filaments, prominences, straws, mottles, surges, or rosettes, within which various sub-categories have also been introduced. Some of these thread-like structures share the same properties, some are speculated to represent the same or completely different phenomena at different atmospheric heights, and some manifest themselves differently in intensity images, depending on properties of the sampling spectral lines. Their origins and relationships to each other are poorly understood. The elongated structures have been suggested to map the magnetic fields in the solar chromosphere; however, that includes challenges of measuring/approximating the chromospheric magnetic fields (particularly in the quiet regions), as well as of estimating the exact heights of formation of the fibrillar structures.
The solar chromosphere may thus be described as a challenging, complex plasma-physics lab, in which many of the observed phenomena and physical processes have not yet been fully understood.
The solar dynamo is the action of flows inside the Sun to maintain its magnetic field against Ohmic decay. On small scales the magnetic field is seen at the solar surface as a ubiquitous “salt-and-pepper” disorganized field that may be generated directly by the turbulent convection. On large scales, the magnetic field is remarkably organized, with an 11-year activity cycle. During each cycle the field emerging in each hemisphere has a specific East–West alignment (known as Hale’s law) that alternates from cycle to cycle, and a statistical tendency for a North-South alignment (Joy’s law). The polar fields reverse sign during the period of maximum activity of each cycle.
The relevant flows for the large-scale dynamo are those of convection, the bulk rotation of the Sun, and motions driven by magnetic fields, as well as flows produced by the interaction of these. Particularly important are the Sun’s large-scale differential rotation (for example, the equator rotates faster than the poles), and small-scale helical motions resulting from the Coriolis force acting on convective motions or on the motions associated with buoyantly rising magnetic flux. These two types of motions result in a magnetic cycle. In one phase of the cycle, differential rotation winds up a poloidal magnetic field to produce a toroidal field. Subsequently, helical motions are thought to bend the toroidal field to create new poloidal magnetic flux that reverses and replaces the poloidal field that was present at the start of the cycle.
It is now clear that both small- and large-scale dynamo action are in principle possible, and the challenge is to understand which combination of flows and driving mechanisms are responsible for the time-dependent magnetic fields seen on the Sun.
A solar flare is a transient increase in solar brightness powered by the release of magnetic energy stored in the Sun’s corona. Flares are observed in all wavelengths of the electromagnetic spectrum. The released magnetic energy heats coronal plasma to temperatures exceeding ten million Kelvins, leading to a significant increase in solar brightness at X-ray and extreme ultraviolet wavelengths. The Sun’s overall brightness is normally low at these wavelengths, and a flare can increase it by two or more orders of magnitude. The size of a given flare is traditionally characterized by its peak brightness in a soft X-ray wavelength. Flares occur with a frequency inversely related to this measure of size, with those of the greatest size occurring less than once per year. Images and light curves from different parts of the spectrum from many different flares have led to an accepted model framework for explaining the typical solar flare. According to this model, a sheet of electric current (a current sheet) is first formed in the corona, perhaps by a coronal mass ejection. Magnetic reconnection at this current sheet allows stored magnetic energy to be converted into bulk flow energy, heat, radiation, and a population of non-thermal electrons and ions. Some of this energy is transmitted downward to cooler layers, which are then evaporated (or ablated) upward to fill the corona with hot dense plasma. Much of the flare’s bright emission comes from this newly heated plasma. Theoretical models have been proposed to describe each step in this process.
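The soft X-ray size measure mentioned above is conventionally reported as a GOES class, a logarithmic letter-and-number scale based on the peak 1–8 Å flux at Earth. The thresholds below follow that common convention (background knowledge, not figures from this article):

```python
# Conventional GOES soft X-ray flare classification: each letter band spans
# one decade of peak 1-8 Angstrom flux at Earth (thresholds in W/m^2), which
# matches the logarithmic size distribution described in the text.
def goes_class(peak_flux):
    for letter, threshold in (("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7)):
        if peak_flux >= threshold:
            return f"{letter}{peak_flux / threshold:.1f}"
    return f"A{peak_flux / 1e-8:.1f}"

print(goes_class(5.4e-5))   # M5.4
print(goes_class(2.0e-4))   # X2.0
```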
L. P. Chitta, H. N. Smitha, and S. K. Solanki
The Sun is a G2V star with an effective temperature of 5780 K. As the nearest star to Earth and the biggest object in the solar system, it serves as a reference for fundamental astronomical parameters such as stellar mass, luminosity, and elemental abundances. It also serves as a plasma physics laboratory. A great deal of researchers’ understanding of the Sun comes from its electromagnetic radiation, which is close to that of a blackbody whose emission peaks at a wavelength of around 5,000 Å and extends into the near UV and infrared. The bulk of this radiation escapes from the solar surface, from a layer that is a mere 100 km thick. This surface from where the photons escape into the heliosphere and beyond, together with the roughly 400–500 km thick atmospheric layer immediately above it (where the temperature falls off monotonically with distance from the Sun), is termed the solar photosphere.
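The quoted emission peak can be checked against Wien's displacement law, $\lambda_{\max} = b/T$; this is a standard consistency check using the effective temperature given above, not a computation from the article:

```python
# Wien's displacement law: lambda_peak = b / T.
WIEN_B = 2.898e-3          # Wien displacement constant, m K
T_SUN = 5780.0             # K, effective temperature quoted in the text

lambda_peak_m = WIEN_B / T_SUN
lambda_peak_angstrom = lambda_peak_m * 1e10
print(f"peak wavelength ~ {lambda_peak_angstrom:.0f} Angstrom")
```

The result is close to 5,000 Å, consistent with the blackbody peak stated in the text.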
Observations of the solar photosphere have led to some important discoveries in modern-day astronomy and astrophysics. At low spatial resolution, the photosphere is nearly featureless. However, naked-eye solar observations, the oldest of which can plausibly be dated back to 800
Superimposed on the solar blackbody spectrum are numerous spectral lines from different atomic species that arise due to the absorption of photons at certain wavelengths by those atoms, in the cooler photospheric plasma overlying the solar surface. These spectral lines provide diagnostics of the properties and dynamics of the underlying plasma (e.g., the granulation due to convection and the solar p-mode oscillations) and of the solar magnetic field. Since the early 20th century, researchers have used these spectral lines and the accompanying polarimetric signals to decode the physics of the solar photosphere and its magnetic structures, including sunspots. Modern observations with high spatial (0.15 arcsec, corresponding to 100 km on the solar surface) and spectral (10 mÅ) resolutions reveal a tapestry of the magnetized plasma with structures down to tens of kilometers at the photosphere (three orders of magnitude smaller than sunspots). Such observations, combined with advanced numerical models, provide further clues to the very important role of the magnetic field in solar and stellar structures and the variability in their brightness. Being the lowest directly observable layer of the Sun, the photosphere is also a window into the solar interior by means of helioseismology, which makes use of the p-mode oscillations. Furthermore, being the lowest layer of the solar atmosphere, the photosphere provides key insights into another long-standing mystery: that above the temperature minimum (~500 km above the surface at ~4000 K), the plasma in the extended corona (invisible to the naked eye except during a total solar eclipse) is heated to temperatures up to 1,000 times higher than at the visible surface. The physics of the solar photosphere is thus central to the understanding of many solar and stellar phenomena.
Steven R. Cranmer
The Sun continuously expels a fraction of its own mass in the form of a steadily accelerating outflow of ionized gas called the “solar wind.” The solar wind is the extension of the Sun’s hot (million-degree Kelvin) outer atmosphere that is visible during solar eclipses as the bright and wispy corona. In 1958, Eugene Parker theorized that a hot corona could not exist for very long without beginning to accelerate some of its gas into interplanetary space. After more than half a century, Parker’s idea of a gas-pressure-driven solar wind still is largely accepted, although many questions remain unanswered. Specifically, the physical processes that heat the corona have not yet been identified conclusively, and the importance of additional wind-acceleration mechanisms continues to be investigated. Variability in the solar wind also gives rise to a number of practical “space weather” effects on human life and technology, and there is still a need for more accurate forecasting. Fortunately, recent improvements in both observations (with telescopes and via direct sampling by space probes) and theory (with the help of ever more sophisticated computers) are leading to new generations of predictive and self-consistent simulations. Attempts to model the origin of the solar wind are also leading to new insights into long-standing mysteries about turbulent flows, magnetic reconnection, and kinetic wave-particle resonances.
The hot solar atmosphere continually expands out into space to form the solar wind, which drags with it the Sun’s magnetic field. This creates a cavity in the interstellar medium, extending far past the outer planets, within which the solar magnetic field dominates. While the physical mechanisms by which the solar atmosphere is heated are still debated, the resulting solar wind can be readily understood in terms of the pressure difference between the hot, dense solar atmosphere and the cold, tenuous interstellar medium. This results in an accelerating solar-wind profile which becomes supersonic long before it reaches Earth orbit. The large-scale structure of the magnetic field carried by the solar wind is that of an Archimedean spiral, owing to the radial solar-wind flow away from the Sun and the rotation of the magnetic footpoints with the solar surface. Within this relatively simple picture, however, is a range of substructure, on all observable time and spatial scales. Solar-wind flows are largely bimodal in character. “Fast” wind comes from open magnetic-field regions, which have a single connection to the solar surface. “Slow” wind, on the other hand, appears to come from the vicinity of closed magnetic field regions, which have both ends connected to the Sun. Interaction of fast and slow wind leads to patterns of solar-wind compression and expansion which sweep past Earth. Within this relatively stable structure of flows, huge episodic eruptions of solar material further perturb conditions. At the smaller scales, turbulent eddies create unpredictable variations in solar-wind conditions. These solar-wind structures are of great interest as they give rise to space weather that can adversely affect space- and ground-based technologies, as well as pose a threat to humans in space.
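The Archimedean (Parker) spiral geometry described above follows from a simple relation, $\tan\varphi = \Omega r / v_{sw}$, where $\Omega$ is the solar rotation rate and $v_{sw}$ the radial wind speed. With typical values (assumed here for illustration, not quoted from the article), the field at Earth orbit is inclined roughly 45° to the radial direction:

```python
import math

# Parker spiral angle: tan(phi) = Omega * r / v_sw for a radially
# flowing wind dragging out a rotating magnetic footpoint.
OMEGA_SUN = 2.7e-6        # rad/s, typical solar sidereal rotation rate
AU = 1.496e11             # m, mean Sun-Earth distance

def spiral_angle_deg(r_m, v_sw_m_per_s):
    return math.degrees(math.atan(OMEGA_SUN * r_m / v_sw_m_per_s))

# Typical slow-wind speed of ~400 km/s gives a spiral angle near 45 degrees.
print(f"spiral angle at 1 AU: {spiral_angle_deg(AU, 4.0e5):.0f} deg")
```

Faster wind produces a more radial (tightly wound-out) field, which is one reason fast and slow streams interact as they sweep past Earth.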