
Article

Magnetohydrodynamic Reconnection  

D. I. Pontin

Magnetic reconnection is a fundamental process that is important for the dynamical evolution of highly conducting plasmas throughout the Universe. In such plasmas, the magnetic topology is preserved as the plasma evolves, an idea encapsulated by Alfvén’s frozen flux theorem. In this context, “magnetic topology” is defined by the connectivity and linkage of magnetic field lines (streamlines of the magnetic induction) within the domain of interest, together with the connectivity of field lines between points on the domain boundary. The conservation of magnetic topology therefore implies that magnetic field lines cannot break or merge, but evolve only according to smooth deformations. In any real plasma the conductivity is finite, so that the magnetic topology is not preserved everywhere: magnetic reconnection is the process by which the field lines break and recombine, permitting a reconfiguration of the magnetic field. Due to the high conductivity, reconnection may occur only in small dissipation regions where the electric current density reaches extreme values. In many applications of interest, the change of magnetic topology facilitates a rapid conversion of stored magnetic energy into plasma thermal energy, bulk-kinetic energy, and energy of non-thermally accelerated particles. This energy conversion is associated with dynamic phenomena in plasmas throughout the Universe. Examples include flares and other energetic phenomena in the atmospheres of stars including the Sun, substorms in planetary magnetospheres, and disruptions that limit the magnetic confinement time of plasma in nuclear fusion devices. One of the major challenges in understanding reconnection is the extreme separation between the global system scale and the scale of the dissipation region within which the reconnection process itself takes place. Current understanding of reconnection has developed through mathematical and computational modeling as well as dedicated experiments in both the laboratory and space. Magnetohydrodynamic (MHD) reconnection is reconnection treated within the framework of magnetohydrodynamics, the continuum approximation used to describe plasmas (and liquid metals).
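
As a brief aside for orientation (standard ideal-MHD relations, not drawn from the abstract itself): the frozen-flux theorem follows from the induction equation in the perfectly conducting limit.

```latex
% Induction equation for uniform magnetic diffusivity \eta = 1/(\mu_0 \sigma):
\frac{\partial \mathbf{B}}{\partial t}
  = \nabla \times (\mathbf{v} \times \mathbf{B}) + \eta \nabla^{2} \mathbf{B}
% In the ideal limit \eta \to 0 (infinite conductivity \sigma), the flux
% through any surface S(t) advected with the plasma is conserved
% (Alfvén's theorem), so field lines cannot break or merge:
\frac{\mathrm{d}}{\mathrm{d}t} \int_{S(t)} \mathbf{B} \cdot \mathrm{d}\mathbf{S} = 0
% Finite \eta permits reconnection, but only in thin layers where the
% current density, and hence \eta \nabla^{2}\mathbf{B}, becomes large.
```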

Article

Magnetohydrodynamics: Overview  

E.R. Priest

Magnetohydrodynamics is sometimes called magneto-fluid dynamics or hydromagnetics and is referred to as MHD for short. It is the unification of two fields that were completely independent in the 19th and the first half of the 20th century, namely, electromagnetism and fluid mechanics. It describes the subtle and complex nonlinear interaction between magnetic fields and electrically conducting fluids, which include liquid metals as well as the ionized gases or plasmas that comprise most of the universe. In places such as the Earth’s magnetosphere or the Sun’s outer atmosphere (the corona), where the magnetic field provides an important component of the free energy, MHD effects are responsible for much of the observed dynamic behavior, such as geomagnetic substorms, solar flares, and huge eruptions from the Sun that dominate the Earth’s space weather. However, MHD is also of great importance in astrophysics, since many of the MHD processes that are observed in the laboratory or in the Sun and the magnetosphere also take place under different parameter regimes in more exotic cosmic objects such as active stars, accretion discs, and black holes. The different aspects of MHD include determining the nature of magnetic equilibria under a balance between magnetic forces, pressure gradients, and gravity; of MHD wave motions; of magnetic instabilities; and of the important process of magnetic reconnection for converting magnetic energy into other forms. In turn, these aspects play key roles in the fundamental astrophysical processes of magnetoconvection, magnetic flux emergence, star spots, plasma heating, stellar wind acceleration, stellar flares and eruptions, and the generation of magnetic fields by dynamo action.
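
For orientation, the magnetic equilibria referred to above satisfy the magnetostatic force balance (standard relations, quoted here as an illustration):

```latex
% Balance between the pressure gradient, the Lorentz force, and gravity:
\mathbf{0} = -\nabla p + \mathbf{j} \times \mathbf{B} + \rho \mathbf{g},
\qquad
\mathbf{j} = \frac{\nabla \times \mathbf{B}}{\mu_0}
% The Lorentz force splits into magnetic pressure and magnetic tension:
\mathbf{j} \times \mathbf{B}
  = -\nabla\!\left(\frac{B^{2}}{2\mu_0}\right)
    + \frac{(\mathbf{B} \cdot \nabla)\mathbf{B}}{\mu_0}
```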

Article

Magnetohydrodynamic Waves  

V.M. Nakariakov

Magnetohydrodynamic (MHD) waves represent one of the macroscopic processes responsible for the transfer of energy and information in plasmas. The existence of MHD waves is due to the elastic and compressible nature of the plasma and to the effect of the frozen-in magnetic field. Basic properties of MHD waves are examined in the ideal MHD approximation, including effects of plasma nonuniformity and nonlinearity. In a uniform medium, there are four types of MHD wave or mode: the incompressive Alfvén wave, compressive fast and slow magnetoacoustic waves, and non-propagating entropy waves. MHD waves are essentially anisotropic, with properties that depend strongly on the direction of the wave vector with respect to the equilibrium magnetic field. All of these waves are dispersionless. A nonuniformity of the plasma may act as an MHD waveguide, which is exemplified by a field-aligned plasma cylinder that has a number of dispersive MHD modes with different properties. In addition, a smooth nonuniformity of the Alfvén speed across the field leads to mode coupling, the appearance of the Alfvén continuum, and Alfvén wave phase mixing. Interaction and self-interaction of weakly nonlinear MHD waves are discussed in terms of evolutionary equations. Applications of MHD wave theory are illustrated by kink and longitudinal waves in the corona of the Sun.
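
The uniform-medium modes listed above have compact dispersion relations (standard results of ideal MHD, given here for reference):

```latex
% Alfvén speed and the Alfvén-wave dispersion relation:
V_\mathrm{A} = \frac{B_0}{\sqrt{\mu_0 \rho_0}},
\qquad
\omega_\mathrm{A}^{2} = k_\parallel^{2} V_\mathrm{A}^{2}
% Fast (+) and slow (-) magnetoacoustic phase speeds, with c_s the sound
% speed and \theta the angle between the wave vector and the field:
\frac{\omega^{2}}{k^{2}}
  = \frac{1}{2}\!\left[ (V_\mathrm{A}^{2} + c_s^{2})
    \pm \sqrt{(V_\mathrm{A}^{2} + c_s^{2})^{2}
      - 4 V_\mathrm{A}^{2} c_s^{2} \cos^{2}\theta} \right]
% The phase speeds depend on \theta but not on |k|: the waves are
% anisotropic yet dispersionless.
```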

Article

Measurement-Based Quantum Computation  

Tzu-Chieh Wei

Measurement-based quantum computation is a framework of quantum computation where entanglement is used as a resource and local measurements on qubits are used to drive the computation. It originates from the one-way quantum computer of Raussendorf and Briegel, who introduced the so-called cluster state as the underlying entangled resource state and showed that any quantum circuit can be executed by performing only local measurements on individual qubits. The randomness in the measurement outcomes can be dealt with by adapting future measurement axes so that the computation is deterministic. Subsequent works have expanded the discussion of measurement-based quantum computation to various subjects, including the quantification of entanglement for such a measurement-based scheme, the search for resource states beyond cluster states, and computational phases of matter. In addition, the measurement-based framework provides useful connections to the emergence of time ordering, computational complexity and classical spin models, blind quantum computation, and so on, and has given an alternative, resource-efficient approach to implementing the original linear-optical quantum computation of Knill, Laflamme, and Milburn. Cluster states and a few other resource states have been created experimentally in various physical systems, and the measurement-based approach offers a potential alternative to the standard circuit approach to realizing a practical quantum computer.
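
The elementary measurement-based step can be checked numerically in a few lines. The sketch below (illustrative only, not from the article) entangles an input qubit with a |+⟩ ancilla via a CZ gate and measures the input in the rotated basis (|0⟩ ± e^(−iθ)|1⟩)/√2; the ancilla is then left in X^s·H·Rz(θ)|ψ⟩, so the random outcome s costs only a known Pauli correction.

```python
import numpy as np

# One MBQC step on a two-qubit cluster-type state: measure qubit 1 of
# CZ(|psi> ⊗ |+>) in a rotated basis; qubit 2 ends up in X^s H Rz(theta)|psi>.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def Rz(theta):
    # Rz(theta) = diag(1, e^{i theta}), rotation about the z axis
    return np.diag([1, np.exp(1j * theta)])

rng = np.random.default_rng(0)
theta = 0.7
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                       # random input state |psi>
plus = np.ones(2, dtype=complex) / np.sqrt(2)    # ancilla |+>

state = CZ @ np.kron(psi, plus)                  # entangle input with ancilla

for s in (0, 1):                                 # both measurement outcomes
    sign = 1 - 2 * s                             # +1 for s=0, -1 for s=1
    # row form of <m_s|, where |m_s> = (|0> + sign e^{-i theta}|1>)/sqrt(2)
    bra = np.array([1, sign * np.exp(1j * theta)], dtype=complex) / np.sqrt(2)
    out = np.kron(bra, np.eye(2)) @ state        # project qubit 1, keep qubit 2
    out /= np.linalg.norm(out)
    target = np.linalg.matrix_power(X, s) @ H @ Rz(theta) @ psi
    # overlap is 1 up to a global phase, confirming the teleported gate
    print(f"outcome s={s}: |<target|out>| = {abs(np.vdot(target, out)):.6f}")
```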

Article

Mechanobiology  

Julia M. Yeomans

Growth, motion, morphogenesis, and self-organization are features common to all biological systems. Harnessing chemical energy allows biological cells to function out of thermodynamic equilibrium and to alter their number, size, shape, and location. For example, the zygote that results when a mammalian egg and sperm cell fuse divides to form a ball of cells, the blastocyst. Collective cell migrations and remodeling then drive the tissue folding that determines how cells are positioned before they differentiate to grow into the stunning diversity of different living creatures. The development of organoids and tumors is controlled by the confining properties of the viscous extracellular matrix that surrounds tissues; wounds heal through the collective motion of cell layers; and the escape of cells from a surface layer into the third dimension determines the growth of biofilms. The relevance of stresses, forces, and flows in these processes is clear and forms the basis of the interdisciplinary science of mechanobiology, which draws on knowledge from physics, engineering, and biochemistry to ask how cells organize their internal components, how they move, grow, and divide, and how they interact mechanically with each other and with their surroundings. This approach to biological processes is particularly timely, both because of experimental advances exploiting soft lithography and enhanced imaging techniques, and because of progress in the theories of active matter, which is leading to new ways to describe the collective dynamical behavior of systems out of thermodynamic equilibrium. Identifying stresses, forces, and flows, and describing how they act, may be key to unifying research on the underlying molecular mechanisms and to interpreting a wealth of disparate data to understand biological self-organization from molecular to tissue scales.

Article

Monopole Excitation and Nuclear Compressibility: Present and Future Perspectives  

Juan Carlos Zamora and Simon Giraud

Isoscalar giant resonances are nuclear collective excitations associated with the in-phase oscillation of protons and neutrons with a certain multipolarity L. In particular, the isoscalar giant monopole resonance (L = 0) is the strongest nuclear compression mode, and its excitation energy is directly related to the compression modulus for finite nuclei. Typically, microscopic calculations are utilized to establish a relationship between the experimental compression modulus and the nuclear incompressibility, which is a crucial parameter of the equation of state (EOS) for nuclear matter. The incompressibility of nuclear matter has been determined with an accuracy of 10–20% using relativistic and non-relativistic microscopic models to describe the monopole distributions in 208Pb and 90Zr. However, the same theoretical models are not able to describe data for open-shell nuclei, such as tin and cadmium isotopes. In fact, only effective interactions with a softer nuclear-matter incompressibility are able to predict the centroid energy of monopole distributions for open-shell nuclei. A unified description of the monopole resonance in 208Pb and in open-shell nuclei remains an unsolved problem on the theory side. Most of this uncertainty is due to poor knowledge of the symmetry energy, which is another essential component of the EOS of nuclear matter. Therefore, experimental data on isotopic chains covering a wide range of N:Z ratios, including neutron-deficient and neutron-rich nuclei, are of paramount importance for determining both the nuclear-matter incompressibility and the symmetry energy more precisely. Novel approaches using inverse kinematics have been developed to enable giant resonance experiments with unstable nuclei. Active targets and storage rings are potentially the most feasible methods for measuring giant resonances in nuclei far from stability. The combination of these techniques with high-intensity radioactive beams at new accelerator facilities will provide the means to explore the nuclear-matter properties of the most exotic nuclei.
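
For orientation (the standard scaling estimate, not specific to this article): the centroid energy of the isoscalar giant monopole resonance is tied to the compression modulus K_A of the finite nucleus by

```latex
E_\mathrm{ISGMR} \approx \hbar \sqrt{\frac{K_A}{m \langle r^{2} \rangle}}
% m: nucleon mass; <r^2>: ground-state mean-square radius.
% Microscopic models are then used to extrapolate K_A to the
% incompressibility K_\infty of infinite nuclear matter, which
% enters the EOS.
```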

Article

Multi-Fluid Effects in Magnetohydrodynamics  

Elena Khomenko

Multi-fluid magnetohydrodynamics is an extension of classical magnetohydrodynamics that allows a simplified treatment of plasmas with complex chemical mixtures. The types of plasma susceptible to multi-fluid effects are those containing particles with properties significantly different from those of the rest of the plasma in either mass or electric charge, such as neutral particles, molecules, or dust grains. In astrophysics, multi-fluid magnetohydrodynamics is relevant for planetary ionospheres and magnetospheres, the interstellar medium, and the formation of stars and planets, as well as for the atmospheres of cool stars such as the Sun. Traditionally, magnetohydrodynamics has been the classical approximation in many astrophysical and physical applications. Magnetohydrodynamics works well in dense plasmas where the typical plasma temporal and spatial scales (e.g., inverse cyclotron frequencies and Larmor radii) are significantly smaller than the scales of the processes under study. Nevertheless, when plasma components are not well coupled by collisions, it is necessary to replace single-fluid magnetohydrodynamics by multi-fluid theory. The present article provides a description of environments in which a multi-fluid treatment is necessary and describes the modifications to the magnetohydrodynamic equations that are required to treat non-ideal plasmas. It also summarizes the physical consequences of the major multi-fluid non-ideal magnetohydrodynamic effects, including ambipolar diffusion, the Hall effect, the battery effect, and other intrinsically multi-fluid effects. Multi-fluid theory is an intermediate step between magnetohydrodynamics, which deals with the collective behaviour of an ensemble of particles, and a kinetic approach, in which the statistics of particle distributions are studied. The main assumption of multi-fluid theory is that each individual ensemble of particles behaves like a fluid, interacting via collisions with other particle ensembles, such as those belonging to different chemical species or ionization states. Collisional interaction creates a relative macroscopic motion between different plasma components, which, on larger scales, results in the non-ideal behaviour of such plasmas. The non-ideal effects discussed here manifest themselves in plasmas at relatively low temperatures and low densities.
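
A schematic generalized Ohm’s law collects the non-ideal terms named above (a standard form, shown for orientation; notation and coefficient conventions vary between formulations):

```latex
% eta: Ohmic resistivity (1/sigma); n_e: electron density; p_e: electron
% pressure; xi_n: neutral mass fraction; alpha_n: ion-neutral friction
% coefficient.
\mathbf{E} + \mathbf{v} \times \mathbf{B}
  = \underbrace{\eta\, \mathbf{j}}_{\text{Ohmic}}
  + \underbrace{\frac{\mathbf{j} \times \mathbf{B}}{e n_e}}_{\text{Hall}}
  - \underbrace{\frac{\nabla p_e}{e n_e}}_{\text{battery}}
  - \underbrace{\frac{\xi_n^{2}}{\alpha_n}
      \,(\mathbf{j} \times \mathbf{B}) \times \mathbf{B}}_{\text{ambipolar}}
% The ambipolar term equals \eta_A \mathbf{j}_\perp with
% \eta_A = \xi_n^2 B^2/\alpha_n: an enhanced resistivity for currents
% perpendicular to the field, important in weakly ionized plasmas.
```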

Article

Newton and Newtonianism  

Jip Van Besouw and Cornelis Schilt

Isaac Newton’s famous works on mechanics, astronomy, mathematics, and optics have been widely studied in the history of physics. Over the last half century, however, historians have also come to grasp Newton’s extensive studies in fields now no longer considered scientific, such as alchemy, prophecy, and church history. Whereas earlier biographers of Newton believed they could distinguish between a young, scientific Newton and an old, non-scientific Newton, contemporary historians have shown that Newton started cultivating his wide interests well before authoring his epoch-making Principia. Newton never published a coherent account of his natural philosophy. Because of that, the relations between Newton’s different investigations were obscured to his contemporaries. Nevertheless, many were inspired by Newton. This has led previous generations of historians to describe 18th-century physics as the unfolding of ‘Newtonianism.’ Current historical research has now largely rejected that idea. Instead, recent research focuses on how Newton’s work was actually read and interpreted throughout the 18th century for purposes and developments that were not Newton’s own. Much 18th-century engagement with Newton had to do with constructing views of what proper natural philosophy was supposed to look like, and with the separation of physics and metaphysics.

Article

The Nuclear Physics of Neutron Stars  

Jorge Piekarewicz

Neutron stars—compact objects with masses similar to that of our Sun but radii comparable to the size of a city—contain the densest form of matter in the universe that can be probed in terrestrial laboratories as well as in earth- and space-based observatories. The historic detection of gravitational waves from a binary neutron star merger has opened the new era of multimessenger astronomy and has propelled neutron stars to the center of a variety of disciplines, such as astrophysics, general relativity, nuclear physics, and particle physics. The main input required to study the structure of neutron stars is the pressure support generated by their constituents against gravitational collapse. These constituents include neutrons, protons, electrons, and perhaps even more exotic particles. As such, nuclear physics plays a prominent role in elucidating the fascinating structure, dynamics, and composition of neutron stars.
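
The “pressure support against gravitational collapse” statement has a precise form: for a static, spherically symmetric star it is the Tolman–Oppenheimer–Volkoff equation (a standard result, quoted here for orientation), with nuclear physics entering through the equation of state p(ε):

```latex
% TOV structure equations (units with c = 1); \varepsilon(r) is the energy
% density and p(\varepsilon) the equation of state from nuclear theory.
\frac{\mathrm{d}p}{\mathrm{d}r}
  = -\,\frac{G \left[\varepsilon(r) + p(r)\right]
        \left[m(r) + 4\pi r^{3} p(r)\right]}
       {r^{2} \left[1 - 2 G m(r)/r\right]},
\qquad
\frac{\mathrm{d}m}{\mathrm{d}r} = 4\pi r^{2} \varepsilon(r)
% Integrating from a chosen central pressure out to p(R) = 0 yields the
% mass-radius relation M(R) predicted by a given EOS.
```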

Article

Nucleon Clustering in Light Nuclei  

Martin Freer

The ability to model the nature of the strong interaction at the nuclear scale using ab initio approaches, together with the development of high-performance computing, is allowing a greater understanding of the details of the structure of light nuclei. The nature of the nucleon–nucleon interaction is such that it promotes the creation of clusters, mainly α-particles, inside the nuclear medium. Understanding the emergence of these clusters and the structures they create has been a long-standing area of study. At low excitation energies, close to the ground state, there is a strong connection between symmetries associated with mean-field, single-particle behavior and the geometric arrangement of the clusters, while at higher excitation energies, when the cluster decay threshold is reached, there is a transition to a more gas-like cluster behavior. State-of-the-art calculations now guide the thinking in these two regimes, but they reflect some key underpinning principles. Building from these simple ideas to the state of the art creates a thread by which the more complex calculations have a foundation, developing a description of the evolution of clustering from α-particle to 16O clusters.

Article

Philosophical Issues in Early Universe Cosmology  

Adam Koberinski and Chris Smeenk

There are many interesting foundational and philosophical issues that become salient in early universe cosmology. One major focus is on issues that arise at the boundaries of distinct theories or frameworks when trying to merge them for describing the early universe. These include issues at the boundary of gravity and statistical physics, as well as of gravity and quantum field theory. Such foundational issues arise in trying to unify distinct domains of physics. Another major theme of early universe cosmology is the methodological goal of finding dynamical explanations for striking features of the universe. Examples of such a methodology include accounts of the cosmic arrow of time, posits of a Past Hypothesis for the initial state of the universe, inflation, baryogenesis, and the emergence of spacetime. There is much philosophical debate about the prospects for success of such a methodology; these debates are surveyed below.

Article

Philosophical Issues in Thermal Physics  

Wayne C. Myrvold

Thermodynamics gives rise to a number of conceptual issues that have been explored by both physicists and philosophers. One source of contention is the nature of thermodynamics itself. Is it what physicists these days would call a resource theory, that is, a theory about how agents with limited means of manipulating a physical system can exploit its physical properties to achieve desired ends, or is it a theory of the basic properties of matter, independent of considerations of manipulation and control? Another source of contention is the relation between thermodynamics and statistical mechanics. It has been recognized since the 1870s that the laws of thermodynamics, as originally conceived, cannot be strictly correct. Because of fluctuations at the molecular level, processes forbidden by the original version of the second law of thermodynamics are continually occurring. The original version of the second law is to be replaced with a probabilistic version, according to which large-scale violations of the original second law are not impossible but merely highly improbable, while small-scale violations are unpredictable and unable to be harnessed to produce useful work systematically. The introduction of probability talk raises the question of how we should conceive of probabilities in the context of deterministic physical laws.

Article

Philosophy of Quantum Mechanics  

David Wallace

If the philosophy of physics has a central problem, it is the quantum measurement problem: the problem of how to interpret, make sense of, and perhaps even fix quantum mechanics. Other theories in physics challenge people’s intuitions and everyday assumptions, but only quantum theory forces people to take seriously the idea that there is no objective world at all beyond their observations—or, perhaps, that there are many. Other theories in physics leave people puzzled about aspects of how they are to be understood, but only quantum theory raises paradoxes so severe that leading physicists and leading philosophers of physics seriously consider tearing it down and rebuilding it anew. Quantum theory is both the conceptual and mathematical core of 21st-century physics and the gaping void in the attempt to understand the worldview given by 21st-century physics. Unsurprisingly, then, the philosophy of quantum mechanics is dominated by the quantum measurement problem, and to a lesser extent by the related problem of quantum non-locality, and in this article, an introduction to each is given. In Section 1, I review the formalism of quantum mechanics and the quantum measurement problem. In Sections 2–4, I discuss the three main classes of solution to the measurement problem: treat the formalism as representing the objective state of the system; treat it as representing only probabilities of something else; modify it or replace it entirely. In Section 5, I review Bell’s inequality and the issue of non-locality in quantum mechanics, and relate it to the interpretations discussed in Sections 2–4. I make some brief concluding remarks in Section 6. A note on terminology: I use “quantum theory” and “quantum mechanics” interchangeably to refer to the overall framework of quantum physics (containing quantum theories as simple as the qubit or harmonic oscillator and as complicated as the Standard Model of particle physics). I do not adopt the older convention (still somewhat common in philosophy of physics) that “quantum mechanics” means only the quantum theory of particles, or perhaps even non-relativistic particles: when I want to refer to non-relativistic quantum particle mechanics I will do so explicitly.

Article

Philosophy of Quantum Mechanics: Dynamical Collapse Theories  

Angelo Bassi

Quantum mechanics is one of the most successful theories of nature. It accounts for all known properties of matter and light, and it does so with an unprecedented level of accuracy. On top of this, it has generated many new technologies that are now part of daily life. In many ways, it can be said that we live in a quantum world. Yet quantum theory is subject to an intense debate about its meaning as a theory of nature, a debate that started at the very beginning and has never ended. The essence was captured by Schrödinger with the cat paradox: why do cats behave classically instead of being quantum, like the one he imagined? Answering this question digs deep into the foundations of quantum mechanics. A possible answer is given by dynamical collapse theories. Their fundamental assumption is that the Schrödinger equation, which is supposed to govern all quantum phenomena (at the non-relativistic level), is only approximately correct. It is an approximation of a nonlinear and stochastic dynamics, according to which the wave functions of microscopic objects can be in a superposition of different states, because the nonlinear effects are negligible, while those of macroscopic objects are always very well localized in space, because the nonlinear effects dominate for increasingly massive systems. Thus, microscopic systems behave quantum mechanically, while macroscopic ones such as Schrödinger’s cat behave classically simply because the (newly postulated) laws of nature say so. By changing the dynamics, collapse theories make predictions that differ from quantum-mechanical predictions, so it becomes interesting to test the various collapse models that have been proposed. Experimental effort is increasing worldwide; since no collapse signal has been detected so far, these experiments place limits on the values of the parameters that quantify the collapse, but in the future they may find such a signal, opening up a window beyond quantum theory.
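
To see why collapse dynamics singles out macroscopic objects, the standard parameter values of the GRW model are instructive (quoted for illustration; other collapse models, such as CSL, use analogous parameters):

```latex
% GRW collapse rate per nucleon, and its amplification for a macroscopic
% superposition of N particles:
\lambda \sim 10^{-16}\ \mathrm{s}^{-1},
\qquad
\Gamma_\mathrm{macro} \sim N \lambda
  \sim 10^{23} \times 10^{-16}\ \mathrm{s}^{-1}
  \sim 10^{7}\ \mathrm{s}^{-1}
% A single particle therefore localizes spontaneously about once in
% ~10^8 years, while a cat-sized superposition of ~10^23 particles
% collapses in ~100 ns.
```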

Article

Physics-to-Technology Partnerships in the Semiconductor Industry  

Robert Doering

The development of physics over the past few centuries has increasingly enabled the development of numerous technologies that have revolutionized society. In the 17th century, Newton built on the results of Galileo and Descartes to start the quantitative science of mechanics. The fields of thermodynamics and electromagnetism were developed more gradually in the 18th and 19th centuries. Of the big physics breakthroughs in the 20th century, quantum mechanics has most clearly led to the widest range of new technologies. New scientific discovery and its conversion to technology, enabling new products, is typically a complex process. From an industry perspective, it is addressed through various R&D strategies, particularly those focused on optimization of return on investment (ROI) and the associated risk management. The evolution of such strategies has been driven by many diverse factors and related trends, including international markets, government policies, and scientific breakthroughs. As a result, many technology-creation initiatives have been based on various types of partnerships between industry, academia, and/or governments. Specific strategies guiding such partnerships are best understood in terms of how they have been developed and implemented within a particular industry. As a consequence, it is useful to consider case studies of strategic R&D partnerships involving the semiconductor industry, which provides a number of instructive examples illustrating strategies that have been successful over decades. There is a large quantity of literature on this subject, in books, journal articles, and online.

Article

Progress in Gamma Detection for Basic Nuclear Science and Applications  

J. Simpson and A. J. Boston

The atomic nucleus, consisting of protons and neutrons, is a unique strongly interacting quantum mechanical system that makes up 99.9% of all visible matter. From the inception of gamma-ray detectors to the early 21st century, advances in gamma detection have allowed researchers to broaden their understanding of the fundamental properties of all nuclei and their interactions. Key technical advances have enabled the development of state-of-the-art instruments that are expected to address a wide range of nuclear science at the extremes of the nuclear landscape, excitation energy, spin, stability, and mass. The realisation of efficient gamma detection systems has impact in many applications, such as medical imaging, environmental radiation monitoring, and security. Even though the technical advances made so far are remarkable, further improvements are continually being implemented or planned.

Article

Quantum Dots/Spin Qubits  

Shannon P. Harvey

Spin qubits in semiconductor quantum dots represent a prominent family of solid-state qubits in the effort to build a quantum computer. They are formed when electrons or holes are confined in a static potential well in a semiconductor, giving them a quantized energy spectrum. The simplest spin qubit is a single electron spin located in a quantum dot, but many additional varieties have been developed, some containing multiple spins in multiple quantum dots, each of which has different benefits and drawbacks. Although these spins act as simple quantum systems in many ways, they also experience complex effects due to their semiconductor environment. They can be controlled by both magnetic and electric fields depending on their configuration and are therefore dephased by magnetic and electric field noise, with different types of spin qubits having different control mechanisms and noise susceptibilities. Initial experiments were primarily performed in gallium arsenide–based materials, but silicon qubits have developed substantially, and research on qubits in silicon metal-oxide-semiconductor structures, silicon/silicon germanium heterostructures, and donors in silicon is also being pursued. An increasing number of spin qubit varieties have attained error rates low enough to be compatible with quantum error correction for single-qubit gates, and two-qubit gates have been performed in several varieties with success rates, or fidelities, of 90–95%.
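
The dephasing mechanism mentioned above can be illustrated with a minimal simulation (hypothetical parameters; a sketch, not a description of any particular experiment): averaging a Ramsey fringe over shot-to-shot Gaussian detuning noise gives the characteristic Gaussian coherence decay, with T2* = √2/σ.

```python
import numpy as np

# Ramsey dephasing of a single spin qubit under quasi-static field noise.
# Each shot draws a random detuning delta_omega; the accumulated phase after
# time t is delta_omega * t, and averaging cos(phase) over shots gives the
# decaying coherence <cos(d t)> = exp(-(sigma t)^2 / 2).

rng = np.random.default_rng(1)
sigma = 2 * np.pi * 1e6            # detuning spread in rad/s (hypothetical)
times = np.linspace(0, 2e-6, 50)   # free-evolution times (s)
shots = 20000

delta_omega = rng.normal(0.0, sigma, size=shots)   # one detuning per shot
coherence = np.cos(np.outer(times, delta_omega)).mean(axis=1)

analytic = np.exp(-0.5 * (sigma * times) ** 2)     # Gaussian decay envelope
t2_star = np.sqrt(2) / sigma                       # 1/e coherence time
print(f"T2* = {t2_star * 1e9:.0f} ns")
# agreement is limited only by sampling noise (~1/sqrt(shots))
print("max |simulated - analytic| =", np.abs(coherence - analytic).max())
```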

Article

Quantum Error Correction  

Todd A. Brun

Quantum error correction is a set of methods to protect quantum information—that is, quantum states—from unwanted environmental interactions (decoherence) and other forms of noise. The information is stored in a quantum error-correcting code, which is a subspace in a larger Hilbert space. This code is designed so that the most common errors move the state into an error space orthogonal to the original code space while preserving the information in the state. It is possible to determine whether an error has occurred by a suitable measurement and to apply a unitary correction that returns the state to the code space without measuring (and hence disturbing) the protected state itself. In general, codewords of a quantum code are entangled states. No code that stores information can protect against all possible errors; instead, codes are designed to correct a specific error set, which should be chosen to match the most likely types of noise. An error set is represented by a set of operators that can multiply the codeword state. Most work on quantum error correction has focused on systems of quantum bits, or qubits, which are two-level quantum systems. These can be physically realized by the states of a spin-1/2 particle, the polarization of a single photon, two distinguished levels of a trapped atom or ion, the current states of a microscopic superconducting loop, or many other physical systems. The most widely used codes are the stabilizer codes, which are closely related to classical linear codes. The code space is the joint +1 eigenspace of a set of commuting Pauli operators on n qubits, called stabilizer generators; the error syndrome is determined by measuring these operators, which allows errors to be diagnosed and corrected. A stabilizer code is characterized by three parameters [[n, k, d]], where n is the number of physical qubits, k is the number of encoded logical qubits, and d is the minimum distance of the code (the smallest number of simultaneous qubit errors that can transform one valid codeword into another). Every useful code has n > k; this physical redundancy is necessary to detect and correct errors without disturbing the logical state. Quantum error correction is used to protect information in quantum communication (where quantum states pass through noisy channels) and quantum computation (where quantum states are transformed through a sequence of imperfect computational steps in the presence of environmental decoherence to solve a computational problem). In quantum computation, error correction is just one component of fault-tolerant design. Other approaches to error mitigation in quantum systems include decoherence-free subspaces, noiseless subsystems, and dynamical decoupling.
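
A minimal illustration of syndrome-based correction (the textbook three-qubit bit-flip code, which corrects any single X error but no phase errors; a sketch, not drawn from the article): measuring the parities Z1Z2 and Z2Z3 locates a flipped qubit without revealing, or disturbing, the encoded amplitudes.

```python
import numpy as np
from functools import reduce

# Three-qubit bit-flip code: |psi> = a|0> + b|1>  ->  a|000> + b|111>.
# The commuting parity checks Z1Z2 and Z2Z3 play the role of stabilizer
# generators; their +/-1 eigenvalues (the syndrome) locate a single X error.

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def kron(*ops):
    return reduce(np.kron, ops)

def syndrome(state):
    # Codewords (with at most one X error) are eigenstates of Z1Z2 and Z2Z3,
    # so each eigenvalue can be read off as an exact expectation value.
    s1 = np.vdot(state, kron(Z, Z, I) @ state).real
    s2 = np.vdot(state, kron(I, Z, Z) @ state).real
    return round(s1), round(s2)

# syndrome -> index of the qubit to flip back (None = no error)
correction = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}

rng = np.random.default_rng(2)
a, b = 0.6, 0.8j                                   # arbitrary logical amplitudes
logical = np.zeros(8, dtype=complex)
logical[0b000], logical[0b111] = a, b              # encoded state a|000> + b|111>

qubit = rng.integers(3)                            # random single bit-flip error
corrupted = kron(*[X if i == qubit else I for i in range(3)]) @ logical

s = syndrome(corrupted)
loc = correction[s]
recovered = corrupted if loc is None else \
    kron(*[X if i == loc else I for i in range(3)]) @ corrupted

print(f"error on qubit {qubit}, syndrome {s}, corrected qubit {loc}")
print("recovered == encoded:", np.allclose(recovered, logical))
```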

Article

Quantum Quench and Universal Scaling  

Sumit R. Das

A quantum quench is a process in which a parameter of a many-body system or quantum field theory is changed in time, taking an initial stationary state into a complicated excited state. Traditionally, “quench” refers to a process where this time dependence is fast compared to all scales in the problem. However, in recent years the terminology has been generalized to include smooth changes that are slow compared to initial scales in the problem but become fast compared to the physical scales at some later time, leading to a breakdown of adiabatic evolution. Quantum quenches have recently been used as a theoretical tool to study many aspects of nonequilibrium physics, such as thermalization and universal aspects of critical dynamics. Relatively recent experiments in cold atom systems have implemented such quench protocols, which explore dynamical passages through critical points, and study in detail the process of relaxation to a steady state. On the other hand, quenches that remain adiabatic have been explored as a useful technique in quantum computation.
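
The classic example of such universal behavior is Kibble–Zurek scaling (a standard result, sketched here for orientation): during a ramp through a critical point over a quench time τ_Q, adiabaticity fails when the time remaining to the transition equals the diverging relaxation time, freezing in a correlation length that sets the defect density.

```latex
% Kibble-Zurek scaling for a linear ramp through a critical point:
% \nu and z are the equilibrium correlation-length and dynamical critical
% exponents, d the spatial dimension, \tau_Q the quench time.
\hat{\xi} \sim \tau_Q^{\,\nu/(1 + z\nu)},
\qquad
n_\mathrm{defects} \sim \hat{\xi}^{-d} \sim \tau_Q^{-d\nu/(1 + z\nu)}
% Slower quenches (larger \tau_Q) leave fewer defects, with a universal
% power law fixed entirely by equilibrium critical exponents.
```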

Article

Quantum Simulation With Trapped Ions  

D. Luo and N. M. Linke

Simulating quantum systems on classical computers encounters inherent challenges due to the exponential scaling of the required resources with system size. To overcome this challenge, quantum simulation uses a well-controlled quantum system to simulate another, less controllable, system. Over the last 20 years, many physical platforms have emerged as quantum simulators, such as ultracold atoms, Rydberg atom arrays, trapped ions, nuclear spins, superconducting circuits, and integrated photonics. Trapped ions, with induced spin interactions and universal quantum gates, have demonstrated remarkable versatility, being capable of both analog and digital quantum simulation. Recent experimental results, covering a range of research areas including condensed matter physics, quantum thermodynamics, high-energy physics, and quantum chemistry, guide this introductory review of the growing field of quantum simulation.
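
The “induced spin interactions” underlying analog trapped-ion simulation take a standard form (shown for orientation; the exact coefficients depend on the specific experiment): laser fields coupling the ions’ internal states to shared motional modes generate an effective long-range Ising model.

```latex
% Effective transverse-field Ising Hamiltonian for a chain of trapped ions:
H = \sum_{i<j} J_{ij}\, \sigma_x^{(i)} \sigma_x^{(j)}
    + B \sum_i \sigma_z^{(i)},
\qquad
J_{ij} \approx \frac{J_0}{|i - j|^{\alpha}}
% The power-law range 0 < \alpha < 3 is tunable via the laser detuning from
% the motional modes; combined with single-ion addressing and universal
% gates, this supports both analog and digital simulation.
```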