Article
Adiabatic Quantum Computing and Quantum Annealing
Erica K. Grant and Travis S. Humble
Adiabatic quantum computing (AQC) is a model of computation that uses quantum mechanical processes operating under adiabatic conditions. As a form of universal quantum computation, AQC employs the principles of superposition, tunneling, and entanglement that manifest in quantum physical systems. The AQC model of quantum computing is distinguished by the use of dynamical evolution that is slow with respect to the time and energy scales of the underlying physical systems. This adiabatic condition enforces the promise that the quantum computational state will remain well-defined and controllable, thus enabling the development of new algorithmic approaches.
Several notable algorithms developed within the AQC model include methods for solving unstructured search and combinatorial optimization problems. In an idealized setting, the asymptotic complexity analyses of these algorithms indicate that computational speed-ups may be possible relative to state-of-the-art conventional methods. However, the presence of non-ideal conditions, including non-adiabatic dynamics, residual thermal excitations, and physical noise, complicates the assessment of the potential computational performance. A relaxation of the adiabatic condition is captured in the complementary computational heuristic of quantum annealing, which accommodates physical systems operating at finite temperature and in open environments. While quantum annealing (QA) provides a more accurate model for the behavior of actual quantum physical systems, the possibility of non-adiabatic effects obscures a clear separation from conventional computational complexity.
A series of technological advances in the control of quantum physical systems have enabled experimental AQC and QA. Prominent examples include demonstrations using superconducting electronics, which encode quantum information in the magnetic flux induced by a weak current operating at cryogenic temperatures. A family of devices developed specifically for unconstrained optimization problems has been applied to solve problems in specific domains including logistics, finance, materials science, machine learning, and numerical analysis. An accompanying infrastructure has also developed to support these experimental demonstrations and to enable access by a broader community of users. Although AQC is most commonly implemented in superconducting technologies, alternative approaches include optically trapped neutral atoms and ion-trap systems.
The significant progress in the understanding of AQC has revealed several open topics that continue to motivate research into this model of quantum computation. Foremost is the development of methods for fault-tolerant operation that will ensure the scalability of AQC for solving large-scale problems. In addition, unequivocal experimental demonstrations that differentiate the computational power of AQC and its variants from conventional computing approaches are needed. This will also require advances in the fabrication and control of quantum physical systems under the adiabatic restrictions.
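A minimal numerical sketch of the adiabatic idea described above (using a hypothetical three-qubit Ising cost function chosen purely for illustration): interpolate between a transverse-field driver Hamiltonian and a diagonal problem Hamiltonian, and track the spectral gap that the adiabatic condition ties to how slowly the sweep must proceed.

```python
import numpy as np

# Pauli matrices and identity.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I = np.eye(2)

def op_on(qubit, op, n):
    """Embed a single-qubit operator 'op' acting on 'qubit' in an n-qubit register."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == qubit else I)
    return out

n = 3
# Driver Hamiltonian: transverse field; its ground state is the uniform superposition.
H_driver = -sum(op_on(k, X, n) for k in range(n))
# Problem Hamiltonian: a small, made-up Ising cost function (diagonal in the
# computational basis) whose ground state encodes the optimization answer.
H_problem = (op_on(0, Z, n) @ op_on(1, Z, n)
             - op_on(1, Z, n) @ op_on(2, Z, n)
             - 0.5 * op_on(0, Z, n))

# Interpolate H(s) = (1 - s) * H_driver + s * H_problem and track the spectral gap;
# the adiabatic condition ties the allowed sweep rate to this gap.
gaps = []
for s in np.linspace(0.0, 1.0, 101):
    evals = np.linalg.eigvalsh((1.0 - s) * H_driver + s * H_problem)
    gaps.append(evals[1] - evals[0])

print(f"minimum spectral gap along the sweep: {min(gaps):.3f}")
```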
Article
AdS3 Gravity and Holography
Per Kraus
General relativity in three spacetime dimensions is a simplified model of gravity, possessing no local degrees of freedom, yet rich enough to admit black-hole solutions and other phenomena of interest. In the presence of a negative cosmological constant, the asymptotically anti–de Sitter (AdS) solutions admit a symmetry algebra consisting of two copies of the Virasoro algebra, with central charge inversely proportional to Newton’s constant. The study of this theory is greatly enriched by the AdS/CFT correspondence, which in this case implies a relationship to two-dimensional conformal field theory. General aspects of this theory can be understood by focusing on universal properties such as symmetries. The best understood examples of the AdS3/CFT2 correspondence arise from string theory constructions, in which case the gravity sector is accompanied by other propagating degrees of freedom. A question of recent interest is whether pure gravity can be made sense of as a quantum theory of gravity with a holographic dual. Attempting to answer this question requires making sense of the path integral over asymptotically AdS3 geometries.
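The central charge referred to above is the Brown–Henneaux value; with $\ell$ the AdS$_3$ radius and $G$ the three-dimensional Newton constant,

$$ c = \frac{3\ell}{2G}. $$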
Article
Calabi-Yau Spaces in the String Landscape
Yang-Hui He
Calabi-Yau spaces, or Kähler spaces admitting zero Ricci curvature, have played a pivotal role in theoretical physics and pure mathematics for the last half century. In physics, they constituted the first and natural solution to the compactification of superstring theory to our 4-dimensional universe, primarily because one of their equivalent definitions is that they admit covariantly constant spinors.
Since the mid-1980s, physicists and mathematicians have joined forces in creating explicit examples of Calabi-Yau spaces, compiling databases of formidable size, including the complete intersection (CICY) data set, the weighted hypersurfaces data set, the elliptic-fibration data set, the Kreuzer-Skarke toric hypersurface data set, generalized CICYs, etc., totaling at least on the order of 10^10 manifolds. These all contribute to the vast string landscape, the multitude of possible vacuum solutions to string compactification.
More recently, this collaboration has been enriched by computer science and data science, the former in benchmarking the complexity of the algorithms for computing geometric quantities, and the latter in applying techniques such as machine learning to extract unexpected information. These endeavours, inspired by the physics of the string landscape, have rendered the investigation of Calabi-Yau spaces one of the most exciting and interdisciplinary fields.
Article
Circuit Model of Quantum Computation
James Wootton
Quantum circuits are an abstract framework to represent quantum dynamics. They are used to formally describe and reason about processes within quantum information technology. They are primarily used in quantum computation, quantum communication, and quantum cryptography—for which they provide a machine code–level description of quantum algorithms and protocols. The quantum circuit model is an abstract representation of these technologies based on the use of quantum circuits, with which algorithms and protocols can be concretely developed and studied.
Quantum circuits are typically based on the concept of qubits: two-level quantum systems that serve as a fundamental unit of quantum hardware. In their simplest form, circuits take a set of qubits initialized in a simple known state, apply a set of discrete single- and two-qubit evolutions known as “gates,” and then finally measure all qubits. Any quantum computation can be expressed in this form through a suitable choice of gates, in a quantum analogy of the Boolean circuit model of conventional digital computation.
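A minimal sketch of this initialize–gate–measure structure, written in plain NumPy rather than any particular quantum software framework: prepare |00>, apply a Hadamard and a CNOT, and read off the outcome probabilities of the resulting Bell state.

```python
import numpy as np

# Single-qubit Hadamard, identity, and a CNOT with qubit 0 as control.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Initialize two qubits in the simple known state |00>.
state = np.zeros(4)
state[0] = 1.0

# Apply the gates: Hadamard on qubit 0, then CNOT.
state = np.kron(H, I) @ state
state = CNOT @ state

# "Measure" all qubits: outcome probabilities are the squared amplitudes.
probs = np.abs(state) ** 2
for outcome, p in zip(["00", "01", "10", "11"], probs):
    print(outcome, round(float(p), 3))   # ~0.5 each for 00 and 11: a Bell state
```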
More complex versions of quantum circuits can include features such as qudits, which are higher-dimensional quantum systems, as well as the ability to reset and measure qubits or qudits throughout the circuit. However, even the simplest form of the model can be used to emulate such behavior, making it fully sufficient to describe quantum information technology. It is also possible to use the quantum circuit model to emulate other models of quantum computing, such as the adiabatic and measurement-based models, which formalize quantum algorithms in very different ways.
As well as being a theoretical model to reason about quantum information technology, quantum circuits can also provide a blueprint for quantum hardware development. Corresponding hardware is based on the concept of building physical systems that can be controlled in the way required for qubits or qudits, including applying gates on them in sequence and performing measurements.
Article
Dark Matter
Timothy Sumner
Dark matter is one of the most fundamental and perplexing issues of modern physics. Its presence is deduced from a straightforward application of Newton’s theory of gravity to astronomical systems whose dynamical motion should be simple to understand.
The success of Newton’s theory in describing the behavior of the solar system was one of the greatest achievements of the 18th century. Its subsequent use to deduce the presence of a previously unknown planet, Neptune, discovered in 1846, was the first demonstration of how minor departures from its predictions indicated additional mass.
The expectation in the early 20th century, as astronomical observations allowed more distant and larger celestial systems to be studied, was that galaxies and collections of galaxies should behave like larger solar systems, albeit more complicated ones. However, the reality was quite different. The discrepancy is not minor, like the one that led to the discovery of Neptune; it is extreme. The stars at the edges of galaxies do not behave at all like Pluto at the edge of the solar system. Instead of having a slower orbital speed, as expected and as Pluto shows, they have the same speed as stars much farther in. If Newton's law is to be retained, there must be much more mass in the galaxy than can be seen, and it must be distributed out to large distances, beyond the visible extent of the galaxy. This unseen mass is called "dark matter," and its presence was becoming widely accepted by the 1970s. Subsequently, many other types of astrophysical observations, covering many other types of object, came to the same conclusions.
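A back-of-the-envelope sketch of why flat rotation curves imply unseen mass (all numbers are illustrative assumptions, not measured values): a central visible mass gives a Keplerian speed v(r) = sqrt(GM/r) that falls with radius, so sustaining a flat curve requires the enclosed mass to keep growing with r.

```python
import numpy as np

G = 6.674e-11            # gravitational constant, SI units
M_visible = 2e41         # ~1e11 solar masses, a rough stand-in for a galaxy's stars (illustrative)

radii_kpc = np.array([2, 5, 10, 20, 40])   # galactocentric radii
radii_m = radii_kpc * 3.086e19              # kpc -> metres

# Keplerian expectation if all the mass were the visible, central component.
v_kepler = np.sqrt(G * M_visible / radii_m) / 1e3   # km/s

# Observed galactic curves are roughly flat; take ~200 km/s as a typical value.
v_flat = 200.0
# Mass needed inside radius r to sustain a flat curve: M(r) = v^2 r / G.
M_needed = (v_flat * 1e3) ** 2 * radii_m / G

for r, vk, M in zip(radii_kpc, v_kepler, M_needed):
    print(f"r = {r:>3} kpc: Keplerian v = {vk:5.0f} km/s, "
          f"mass needed for flat {v_flat:.0f} km/s curve = {M / M_visible:4.1f} x visible")
```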
The ultimate realization was that the universe itself requires dark matter to explain how it developed the structures observed within it today. The current consensus is that about one-fourth of the universe is dark matter, whereas only about one-twentieth is normal matter. This leaves the majority in some other form, and therein lies another mystery: "dark energy."
The modern form of Newton's laws is general relativity, due to Albert Einstein. This offers no help in solving the problem of dark matter, because most of the systems involved are nonrelativistic and the solutions of the general theory of relativity (GR) reproduce Newtonian behavior. However, it would not be right to avoid mentioning the possibility of modifying Newton's laws (and hence GR) in such a way as to change the nonrelativistic behavior to explain the way galaxies behave, but without changing solar-system dynamics. Although this is a minority view, it nonetheless survives within the scientific community as an idea.
Understanding the nature of dark matter is one of the most intensely competitive research areas, and the solution will be of profound importance to astrophysics, cosmology, and fundamental physics. There is thus a huge "industry" of direct detection experiments predicated on the premise that there is a new particle species, yet to be found, that pervades the universe. There are also experiments searching for evidence of the particles through their decay or annihilation products, and, finally, there are intense searches for newly formed unknown particles in collider experiments.
Article
The Economics of Physics: The Social Cost-Benefit Analysis of Large Research Infrastructures
Massimo Florio and Chiara Pancotti
In economics, infrastructure is a long-term investment aimed at the delivery of essential services to a large number of users, such as those in the field of transport, energy, or telecommunications. A research infrastructure (RI) is a single-sited, distributed, virtual, or mobile facility, designed to deliver scientific services to communities of scientists. In physical sciences (including astronomy and astrophysics, particle and nuclear physics, analytical physics, medical physics), the RI paradigm has found several large-scale applications, such as radio telescopes, neutrino detectors, gravitational wave interferometers, particle colliders and heavy ion beams, high intensity lasers, synchrotron light sources, spallation neutron sources, and hadrontherapy facilities.
These RIs require substantial capital and operating expenditures and are ultimately funded by taxpayers. In social cost–benefit analysis (CBA), the impact of an investment project is measured by the intertemporal difference of the benefits and costs accruing to different agents. Benefits and costs are quantified and valued through a common metric, using the marginal social opportunity costs of goods (or shadow prices), which may differ from market prices, as markets are often incomplete or imperfect. The key strength of CBA is that it produces information about the project's net contribution to society, summarized in simple numerical indicators such as the net present value of a project.
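A minimal sketch of the discounting logic described above (the cash-flow profile and the 3% social discount rate are illustrative assumptions, not data from any actual RI):

```python
# Discount each year's social benefits and costs to the present and sum them.

def net_present_value(benefits, costs, discount_rate):
    """NPV = sum_t (B_t - C_t) / (1 + r)^t over the appraisal horizon."""
    return sum((b - c) / (1.0 + discount_rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

# Hypothetical 5-year profile for a small research infrastructure (M EUR):
benefits = [0, 20, 40, 55, 60]    # user benefits + non-use value, valued at shadow prices
costs    = [80, 20, 15, 15, 15]   # capital and operating expenditure, incl. in-kind

npv = net_present_value(benefits, costs, discount_rate=0.03)
print(f"Social NPV: {npv:.1f} M EUR")  # > 0 indicates a net contribution to society
```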
For any RI, consolidated cost accounting should include intertemporal capital and operational expenditure both for the main managing body and for experimental collaborations or other external teams, including in-kind contributions. As far as social intertemporal benefits are concerned, it is convenient to divide them into two broad classes. The first class of benefits accrues to different categories of direct and indirect users of infrastructure services: scientists, students, firms benefiting from technological spillovers, consumers of innovative services and products, and citizens who are involved in outreach activities. The empirical estimation of the use value of an RI depends on the scientific specificities of each project, as different social groups are involved to different degrees. Second, there are benefits to the general public of non-users: these benefits are associated with social preferences for scientific research, even when the use of a discovery is unknown. In analogy with the valuation of environmental and cultural goods, the empirical approach to non-use value aims at eliciting citizens' willingness to pay for the scientific knowledge created by an RI. This can be done by well-designed contingent valuation surveys.
While some socio-economic impact studies of RIs in physics have been available since the 1980s, the intangible nature of some benefits and the uncertainty associated with scientific discoveries have limited the diffusion of CBA in this field until recently. Nevertheless, recent studies have explored the application of CBA to RIs in physics. Moreover, the European Commission, the European Strategy Forum on Research Infrastructures, the European Investment Bank, and some national authorities suggest that the study of social benefits and costs of RIs should be part of the process leading to funding decisions.
Article
Electromagnetism and Electrodynamics in the 19th Century
Chen-Pang Yeang
Electromagnetism and electrodynamics—studies of electricity, magnetism, and their interactions—are viewed as a pillar of classical physics. In the 1820s and 1830s, Ampère founded electrodynamics as the science of mechanical forces associated with electric currents, and Faraday discovered electromagnetic induction. By the mid-19th century, Neumann, Weber, and others in Germany had established an electrical science that integrated precision measurements with a unified theory based on mathematical potential or forces between electrical corpuscles. Meanwhile, based on Faraday’s findings in electrolysis, dielectrics, diamagnetism, and magneto-optic rotation, Faraday and Thomson in Britain explored a theory of the electromagnetic field. In the 1850s and 1860s, Maxwell further developed the Faraday–Thomson field theory, introduced the displacement current, and predicted the existence of electromagnetic waves. Helmholtz’s reworking of these Maxwellian insights led to Hertz’s discovery of electric waves in 1887.
Article
Electroweak Interactions and W, Z Boson Properties
Maarten Boonekamp and Matthias Schott
With the huge success of quantum electrodynamics (QED) to describe electromagnetic interactions in nature, several attempts have been made to extend the concept of gauge theories to the other known fundamental interactions. It was realized in the late 1960s that electromagnetic and weak interactions can be described by a single unified gauge theory. In addition to the photon, the single mediator of the electromagnetic interaction, this theory predicted new, heavy particles responsible for the weak interaction, namely the W and the Z bosons. A scalar field, the Higgs field, was introduced to generate their mass.
The discovery of the mediators of the weak interaction in 1983, at the European Organization for Nuclear Research (CERN), marked a breakthrough in fundamental physics and opened the door to more precise tests of the Standard Model. Subsequent measurements of the weak boson properties allowed the masses of the top quark and of the Higgs boson to be predicted before their discovery. Nowadays, these measurements are used to further probe the consistency of the Standard Model and to place constraints on theories attempting to answer questions still open in physics, such as the presence of dark matter in the universe or the unification of the electroweak and strong interactions with gravity.
Article
The Emergence of Modern Cosmology
Helge Kragh
The term modern cosmology primarily refers to the developments concerned with the expansion of the universe, its origin billions of years ago, and the concept of dark matter. Similar to the history of any other area of science, the history of cosmology is rich in wrong theories and false trials. According to the simplest version of Brandon Carter’s anthropic principle, carbon-based life could not have originated in a universe evolving just slightly differently from the one observed. The present debate concerning the anthropic principle and its consequences is in some ways strikingly similar to the cosmological controversy of the past between the steady-state theory and relativistic evolution theories.
Article
Energy-Efficient Particle Accelerators for Research
M. Seidel
Particle accelerators are the drivers of large-scale research infrastructures for particle physics, but also for many branches of condensed matter research. The types of accelerator-driven research infrastructures include particle colliders; neutron, muon, or neutrino sources; synchrotron light sources and free-electron lasers; as well as medical applications. These facilities are often large and complex and have a significant carbon footprint, both in construction and in operation. In all facilities, grid power is converted to beam power and ultimately to the desired type of radiation for research. The energy efficiency of this conversion process can be optimized using efficient technologies, but also with optimal concepts for entire facilities.
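As a toy illustration of the conversion chain just described (the stage efficiencies below are made-up, illustrative values, not measurements from any facility), the overall grid-to-radiation efficiency is the product of the stage efficiencies, so both better technologies and better facility concepts enter multiplicatively:

```python
# Illustrative, assumed stage efficiencies for a generic accelerator facility;
# real values depend strongly on the technology and the facility concept.
stages = {
    "grid -> RF power":         0.55,
    "RF power -> beam power":   0.50,
    "beam -> usable radiation": 0.10,
}

overall = 1.0
for name, eff in stages.items():
    overall *= eff
    print(f"{name:26s} {eff:5.0%}  (cumulative {overall:5.1%})")
# Improving any single stage, or the facility concept as a whole, raises the product.
```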
Article
The Evolution of Public Funding of Science in the United States From World War II to the Present
Kei Koizumi
Large-scale U.S. government support of scientific research began in World War II with physics, and rapidly expanded in the postwar era to contribute strongly to the United States’ emergence as the world’s leading scientific and economic superpower in the latter half of the 20th century. Vannevar Bush, who directed President Franklin Roosevelt’s World War II science efforts, in the closing days of the War advocated forcefully for U.S. government funding of scientific research to continue even in peacetime to support three important government missions of national security, health, and the economy. He also argued forcefully for the importance of basic research supported by the federal government but steered and guided by the scientific community. This vision guided an expanding role for the U.S. government in supporting research not only at government laboratories but also in non-government institutions, especially universities.
Although internationally comparable data are difficult to obtain, the U.S. government appears to be the single largest national funder of physics research. U.S. government support of physics research comes from many different federal departments and agencies. Federal agencies also invest in experimental development based on research discoveries in physics. The Department of Energy's (DOE) Office of Science is by far the dominant supporter of physics research in the United States, and DOE's national laboratories are the dominant performers of U.S. government-supported physics research. Since the 1970s, U.S. government support of physics research has been stagnant, with the greatest growth in U.S. government research support shifting since the 1990s to the life sciences and computer sciences.
Article
Experimentation in Physics in the 20th and 21st Centuries
Allan Franklin and Ronald Laymon
What is the general notion of pursuit that applies to scientific experiments in physics? What are the roles of experiments in physics, the epistemology of experiment, the arguments around credibility, and the experimental investigations of the 20th and 21st centuries? The experimental enterprise is a complex and interdependent activity in which experiments yield results that form the basis for answers to the questions posed by the many uses of experiment. It is worth examining the significance of exploratory experiments and of testing theories, as experimenters often apply several strategies in arguing for the correctness of their results.
Article
Fluid–Gravity Correspondence
Mukund Rangamani
The fluid–gravity correspondence establishes a detailed connection between solutions of relativistic dissipative hydrodynamics and black hole spacetimes that solve Einstein’s equations in a spacetime with negative cosmological constant. The correspondence can be seen as a natural corollary of the holographic anti–de Sitter (AdS)/conformal field theory (CFT) correspondence, which arises from string theory. The latter posits a quantum duality between gravitational dynamics in AdS spacetimes and that of a CFT in one dimension less. The fluid–gravity correspondence applies in the statistical thermodynamic limit of the CFT but can be viewed as an independent statement of a relation between two classic equations of physics: the relativistic Navier–Stokes equations and Einstein’s equations. The general structure of relativistic fluid dynamics is formulated in terms of conservation equations of energy–momentum and charges, supplemented with constitutive relations for the corresponding current densities. One can view this construction as an effective field theory for these conserved currents. This intuition applied to the gravitational equations of motion allows the solutions of relativistic hydrodynamics to be embedded as inhomogeneous, dynamical black holes in AdS spacetime.
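In formulas (in a common convention; $\varepsilon$, $p$, and $u^\mu$ are the local energy density, pressure, and velocity, $\eta$ and $\zeta$ the shear and bulk viscosities, and the ellipsis denotes higher-order gradient corrections), the hydrodynamic side of the correspondence is

$$ \nabla_\mu T^{\mu\nu} = 0, \qquad T^{\mu\nu} = (\varepsilon + p)\, u^\mu u^\nu + p\, g^{\mu\nu} - 2\eta\, \sigma^{\mu\nu} - \zeta\, \theta\, \Delta^{\mu\nu} + \cdots, $$

with $\Delta^{\mu\nu} = g^{\mu\nu} + u^\mu u^\nu$, $\theta = \nabla_\mu u^\mu$, and $\sigma^{\mu\nu}$ the transverse traceless shear tensor; for a conformal fluid the bulk viscosity vanishes.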
Article
Free Electron Lasers
Zhirong Huang
Free electron lasers (FELs) are coherent radiation sources based on radiation from “free” relativistic electrons rather than electrons bound in atomic and molecular systems. FELs can in principle operate at any arbitrary wavelength, limited only by the energy and quality of the electron beam that is produced by accelerators. Therefore, FELs can be used to fill gaps in regions of the electromagnetic spectrum where no other coherent sources exist and can provide radiation of very high power and extreme brightness. More than 50 FELs have been built around the world, serving a diverse array of scientific fields and applications.
FELs are based on the resonant interaction of a high-quality electron beam with the radiation in a periodic magnetic device called an "undulator" and can have several operating modes. FEL oscillators use optical cavities to trap the radiation so that the field is built up over many amplification passes through the undulator. FELs can also act as linear amplifiers that magnify external radiation whose central frequency is close to the undulator resonance condition. Without any seed signal, self-amplified spontaneous emission (SASE) can be used to generate intense coherent radiation starting from electron shot noise and is the most common approach for X-ray FELs. SASE has limited temporal coherence and pulse stability due to its noisy startup but is very flexible in generating ultrashort X-ray pulses from hundreds of femtoseconds down to hundreds of attoseconds in duration. Various advanced schemes aiming at fully coherent, stable X-ray pulses have been proposed and are actively being investigated and developed.
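For reference, the on-axis resonance condition of a planar undulator, which sets the radiation wavelength $\lambda_r$ in terms of the undulator period $\lambda_u$, the electron Lorentz factor $\gamma$, and the dimensionless undulator strength parameter $K$, is

$$ \lambda_r = \frac{\lambda_u}{2\gamma^2}\left(1 + \frac{K^2}{2}\right). $$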
Article
From the Interpretation of Quantum Mechanics to Quantum Technologies
Olival Freire Junior
Quantum mechanics emerged laden with issues and doubts about its foundations and interpretation. However, nobody in the 1920s and 1930s dared to conjecture that research on such issues would open the doors to developments so huge as to require the term second quantum revolution to describe them. On the one hand, the new theory saw its scope of applications widen in various domains including atoms, molecules, light, the interaction between light and matter, relativistic effects, field quantization, nuclear physics, and solid state and particle physics. On the other hand, there were debates on alternative interpretations, the status of statistical predictions, the completeness of the theory, the underlying logic, mathematical structures, the understanding of measurements, and the transition from the quantum to the classical description. Until the early 1960s, there seemed to be a coexistence between these two orders of issues, without any interaction between them.
From the late 1960s on, however, this landscape underwent dramatic changes. The main factor of change was Bell's theorem, which implied a conflict between quantum mechanics predictions for certain systems that are spatially separated and the assumption of local realism. Experimental tests of this theorem led to the corroboration of quantum predictions and the understanding of quantum entanglement as a physical feature, a result that justified the 2022 Nobel Prize. Another theoretical breakthrough was the understanding and calculation of the interaction of a quantum system with its environment, leading to the transition from pure to mixed states, a feature now known as decoherence. Entanglement and decoherence both resulted from the dialogue between research on the foundations and quantum predictions. In addition, research on quantum optics and quantum gravity benefitted debates on the foundations. From the early 1980s on, another major change occurred, now in terms of experimental techniques, allowing physicists to manipulate single quantum systems and taking the thought experiments of the founders of quantum mechanics into the labs. Lastly, the insight that quantum systems may be used in computing opened the doors to the first quantum algorithms. Altogether, these developments have produced a new field of research, quantum information, which has quantum computers as its holy grail. The term second quantum revolution distinguishes these new achievements from the first spin-offs of quantum mechanics, for example, transistors, electron microscopes, magnetic resonance imaging, and lasers. Nowadays the applications of this second revolution have gone beyond computing to include sensors and metrology, for instance, and thus are better labeled as quantum technologies.
Article
Gravity and Quantum Entanglement
Mukund Rangamani and Veronika Hubeny
The holographic entanglement entropy proposals give an explicit geometric encoding of spatially ordered quantum entanglement in continuum quantum field theory. These proposals have been developed in the context of the AdS/CFT correspondence, which posits a quantum duality between gravitational dynamics in anti-de Sitter (AdS) spacetimes and that of a conformal field theory (CFT) in one fewer dimension. The von Neumann entropy of a spatial region of the CFT is given by the area of a particular extremal surface in the dual geometry. This surprising connection between a fundamental quantum mechanical concept and a simple geometric construct has given deep insights into the nature of the holographic map and potentially holds an important clue to unraveling the mysteries of quantum gravity.
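In formulas, this is the Ryu–Takayanagi (and, covariantly, Hubeny–Rangamani–Takayanagi) prescription: for a boundary spatial region $A$ with bulk extremal surface $\gamma_A$ anchored on $\partial A$,

$$ S_A = \frac{\mathrm{Area}(\gamma_A)}{4 G_N}, $$

with $G_N$ the bulk Newton constant.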
Article
Impedance-Induced Beam Instabilities
Mauro Migliorati
Modern particle accelerators require ever higher currents to meet user demands, both for high-energy physics experiments and for medical and industrial applications. These high currents, interacting with the accelerators’ environment, produce strong self-induced electromagnetic fields that perturb the external fields that guide and accelerate the charged particles.
Under certain conditions, these perturbations can be so large as to limit the accelerators' performance, giving rise to unwanted effects such as uncontrolled beam oscillations or instabilities. The self-induced fields are described in terms of so-called wakefields and beam coupling impedances, two quantities that are used to evaluate their impact on beam dynamics and on instability thresholds.
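Schematically, the two quantities are a Fourier-transform pair; up to the sign and normalization conventions adopted in a given treatment, the longitudinal beam coupling impedance is obtained from the wake function as

$$ Z_\parallel(\omega) = \int_{-\infty}^{\infty} W_\parallel(t)\, e^{-i\omega t}\, dt. $$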
The determination of wakefields and beam coupling impedances generated by the interaction of the beam with the different machine devices, and of the corresponding induced instabilities, is therefore very important, particularly for high currents. This is carried out with analytical approaches, through the use of simplified models, or, more rigorously and realistically, through simulation codes. The first step in this study is generally represented by a complete electromagnetic characterization of the different accelerator devices and the search for possible minimization of wakefields and beam coupling impedances. Once these quantities are known, their effect on beam dynamics can be evaluated, with both simulations and analytical methods, and a proper machine working point, far away from any impedance-induced beam instabilities, can be determined.
As machine performance is pushed increasingly higher, new effects, produced by wakefields and beam coupling impedances, are found that are related, in many cases, to the coupling with other mechanisms (e.g., with beam–beam). All these effects can no longer be studied separately. Finally, mitigation solutions, such as beam coupling impedance optimization, feedback systems, the use of nonlinearities, and other techniques, must also be investigated so that different tools will be available to counteract unwanted beam-induced instabilities.
Article
Investigation of the Quark-Gluon Plasma With the ALICE Experiment
Luisa Cifarelli and Francesca Bellini
The quark-gluon plasma, or QGP, is a state of matter in which quarks and gluons, the elementary building blocks of ordinary baryonic matter (such as protons and neutrons), are no longer confined into hadrons by the strong force. A phase transition from ordinary nuclear matter to a QGP is expected to occur in extreme conditions of high baryon density and temperature, as is thought to have characterized the Universe about 1–10 μs after the Big Bang or as can be reached in the dense cores of neutron stars. In the laboratory, the conditions of high energy density necessary to form a QGP can be obtained by colliding heavy ions at velocities close to the speed of light.
The experimental investigation of the QGP provides a test of Quantum Chromodynamics (QCD), the quantum field theory within the Standard Model of elementary particles describing the interaction among color charges. A QGP is an extended many-body system of color charges whose characteristics emerge from the fundamental properties of the strong interaction at high energy densities. Understanding the phenomenology of this state of matter is therefore an important step in the understanding of the strong interaction itself and of QCD.
Following the first theoretical speculations about the existence of the QGP dating back to the 1970s, the field of experimental heavy-ion physics was established through the 1970s–1980s, first at the Bevalac at the Lawrence Berkeley National Laboratory (USA), then at the Alternating Gradient Synchrotron at the Brookhaven National Laboratory (BNL, USA) and the Super Proton Synchrotron (SPS) at CERN, Switzerland. In the year 2000, the discovery of the QGP was announced at the CERN SPS. Since then, the major particle and nuclear physics laboratories around the world have been running or planning heavy-ion experimental programs, covering a broad range of collision energies. The longest and most intensive heavy-ion studies have been pursued at the BNL Relativistic Heavy Ion Collider and at the CERN Large Hadron Collider (LHC), where they are still ongoing. In parallel, numerous advances on the theoretical side, partly helped by the increase in computing power over time, have provided more and more sophisticated tools to describe the phenomenology of heavy-ion collisions and the properties of the QGP.
Even if a QGP can be produced in the laboratory under appropriate conditions, its direct observation is not possible, because the matter created in the collision exists in a deconfined state for a time on the order of 10^-23 s, after which it transitions to a system of hadrons. This represents a major challenge for the characterization of a QGP: one must rely on the measurement of final-state hadronic observables and on selected probes that are sensitive to the QGP properties of interest at different stages of its evolution. This requires the capability to disentangle the effects due to the presence of a QGP medium from many others, including those due to the presence of a nuclear environment in the target, or from reinteractions in the final hadronic stage.
At the CERN LHC in Geneva, in an underground tunnel across the border between France and Switzerland, protons and fully ionized heavy ions are accelerated and collide at energies of a few TeV per nucleon pair, the largest ever reached in a particle accelerator, recreating the conditions present in the early Universe. A Large Ion Collider Experiment (ALICE) is the experiment specifically designed to study the QGP produced at the LHC. Operating at the energy frontier since 2009, ALICE has carried out a successful physics program that has enabled a quantitative assessment of the properties of the QGP produced in heavy-ion collisions at the LHC and led to some new discoveries. The results of ALICE and the other LHC experiments have also posed new questions related to the limits of QGP formation in different collision systems, thus prompting new advances in the theoretical field.
Article
Ions for the Treatment of Tumors
Sandro Rossi
Physics and medicine are distinct fields with different objectives, standards, and practices, but with many common points and mutually enriching activities. Hadron therapy, a technique that uses charged particles that also feel the strong interaction, is an area in which scientific insight and technological advancement work hand in hand in an inspirational fashion to leverage their benefits on behalf of patients.
The oncological treatment of patients has become a multidisciplinary effort, in which the contribution of specialists from manifold backgrounds is essential, and success can only be achieved by means of a transdisciplinary “fusion,” an integration and overlap across relevant disciplines.
Article
Magnetohydrodynamic Equilibria
Thomas Wiegelmann
Magnetohydrodynamic equilibria are time-independent solutions of the full magnetohydrodynamic (MHD) equations. An important class are static equilibria without plasma flow. They are described by the magnetohydrostatic equations

$$ \mathbf{j} \times \mathbf{B} = \nabla p + \rho \nabla \Psi, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{j}, \qquad \nabla \cdot \mathbf{B} = 0, \tag{1} $$

where $\mathbf{B}$ is the magnetic field, $\mathbf{j}$ the electric current density, $p$ the plasma pressure, $\rho$ the mass density, $\Psi$ the gravitational potential, and $\mu_0$ the permeability of free space. Under equilibrium conditions, the Lorentz force $\mathbf{j} \times \mathbf{B}$ is compensated by the plasma pressure gradient force and the gravity force.
Despite the apparent simplicity of these equations, it is extremely difficult to find exact solutions due to their intrinsic nonlinearity. The problem is greatly simplified for effectively two-dimensional configurations with a translational or axial symmetry. The magnetohydrostatic (MHS) equations can then be transformed into a single nonlinear partial differential equation, the Grad–Shafranov equation. This approach is popular as a first approximation to model, for example, planetary magnetospheres, solar and stellar coronae, and astrophysical and fusion plasmas.
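For orientation, in one common axisymmetric form, written without the gravity term and with $\psi$ the poloidal flux function and $F = R B_\phi$, the Grad–Shafranov equation reads

$$ R \frac{\partial}{\partial R}\!\left(\frac{1}{R}\frac{\partial \psi}{\partial R}\right) + \frac{\partial^2 \psi}{\partial Z^2} = -\mu_0 R^2 \frac{\mathrm{d}p}{\mathrm{d}\psi} - F \frac{\mathrm{d}F}{\mathrm{d}\psi}. $$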
For systems without symmetry, one has to solve the full equations in three dimensions, which requires numerically expensive computer programs. Boundary conditions for these systems can often be deduced from measurements. In several astrophysical plasmas (e.g., the solar corona), the magnetic pressure is orders of magnitude higher than the plasma pressure, which allows the plasma pressure to be neglected to lowest order. If gravity is also negligible, Equation 1 then implies a force-free equilibrium in which the Lorentz force vanishes.
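Written out, this force-free limit is

$$ \mathbf{j} \times \mathbf{B} = 0 \quad \Longrightarrow \quad \nabla \times \mathbf{B} = \alpha \mathbf{B}, \qquad \mathbf{B} \cdot \nabla \alpha = 0, $$

where the scalar function $\alpha$ is constant along each field line (and a global constant for linear force-free fields).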
Generalizations of MHS equilibria are stationary equilibria including a stationary plasma flow (e.g., stellar winds in astrophysics). It is also possible to compute MHD equilibria in rotating systems (e.g., rotating magnetospheres, rotating stellar coronae) by incorporating the centrifugal force. MHD equilibrium theory is useful for studying physical systems that slowly evolve in time. In this case, while one has an equilibrium at each time step, the configuration changes, often in response to temporal changes of the measured boundary conditions (e.g., the magnetic field of the Sun for modeling the corona) or of external sources (e.g., mass loading in planetary magnetospheres). Finally, MHD equilibria can be used as initial conditions for time-dependent MHD simulations. This article reviews the various analytical solutions and numerical techniques to compute MHD equilibria, as well as applications to the Sun, planetary magnetospheres, space, and laboratory plasmas.