
Article

Measurement-Based Quantum Computation  

Tzu-Chieh Wei

Measurement-based quantum computation is a framework of quantum computation in which entanglement is used as a resource and local measurements on qubits drive the computation. It originates from the one-way quantum computer of Raussendorf and Briegel, who introduced the so-called cluster state as the underlying entangled resource state and showed that any quantum circuit can be executed by performing only local measurements on individual qubits. The randomness in the measurement outcomes can be dealt with by adapting future measurement axes so that the computation is deterministic. Subsequent works have expanded the discussion of measurement-based quantum computation to various subjects, including the quantification of entanglement for such a measurement-based scheme, the search for resource states beyond cluster states, and computational phases of matter. In addition, the measurement-based framework provides useful connections to the emergence of time ordering, computational complexity and classical spin models, blind quantum computation, and so on, and it has given an alternative, resource-efficient approach to implementing the original linear-optical quantum computation of Knill, Laflamme, and Milburn. Cluster states and a few other resource states have been created experimentally in various physical systems, and the measurement-based approach offers a potential alternative to the standard circuit approach for realizing a practical quantum computer.
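For concreteness, the following short numpy simulation (an illustrative sketch, not taken from the article; the rotation convention and variable names are choices made here) shows the elementary step underlying the scheme: an input qubit is entangled with a |+⟩ qubit by a controlled-Z gate, the first qubit is measured in a rotated basis, and the random outcome is compensated by a Pauli-X byproduct correction, leaving the second qubit in the state H Rz(θ) applied to the input.

```python
# Illustrative sketch, not from the article: one elementary step of
# measurement-based computation on a two-qubit cluster, simulated with numpy.
import numpy as np

rng = np.random.default_rng(7)

# Single-qubit gates (one common convention; signs/phases are a choice here).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
def Rz(theta):
    return np.diag([1.0, np.exp(-1j * theta)])

CZ = np.diag([1, 1, 1, -1]).astype(complex)    # controlled-Z on two qubits

# Random input state |psi> on qubit 1, |+> on qubit 2, then entangle with CZ.
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
plus = np.array([1, 1]) / np.sqrt(2)
state = CZ @ np.kron(psi, plus)

# Measure qubit 1 in the rotated basis (|0> ± e^{i theta}|1>)/sqrt(2).
theta = 0.37
m = rng.integers(2)                            # simulated random outcome
sign = 1 if m == 0 else -1
bra = np.array([1, sign * np.exp(1j * theta)]).conj() / np.sqrt(2)
out = np.kron(bra, np.eye(2)) @ state          # unnormalized post-measurement state
out /= np.linalg.norm(out)

# The residual randomness is a known Pauli byproduct: applying X^m leaves
# qubit 2 in H Rz(theta)|psi> (up to a global phase), whatever m was.
corrected = np.linalg.matrix_power(X, m) @ out
target = H @ Rz(theta) @ psi
print(f"outcome m={m}, |<target|corrected>| = {abs(np.vdot(target, corrected)):.6f}")  # ~1.0
```

In a full cluster-state computation, the same outcome-dependent bookkeeping is what allows later measurement axes to be adapted so that the overall computation is deterministic.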

Article

Mechanobiology  

Julia M. Yeomans

Growth, motion, morphogenesis, and self-organization are features common to all biological systems. Harnessing chemical energy allows biological cells to function out of thermodynamic equilibrium and to alter their number, size, shape, and location. For example, the zygote that results when a mammalian egg and sperm cell fuse divides to form a ball of cells, the blastocyst. Collective cell migrations and remodeling then drive the tissue folding that determines how cells are positioned before they differentiate to grow into the stunning diversity of different living creatures. The development of organoids and tumors is controlled by the confining properties of the viscous extracellular matrix that surrounds tissues; wounds heal through the collective motion of cell layers; and escape from a surface layer into the third dimension determines the growth of biofilms. The relevance of stresses, forces, and flows in these processes is clear and forms the basis of the interdisciplinary science of mechanobiology, which draws on knowledge from physics, engineering, and biochemistry to ask how cells organize their internal components, how they move, grow and divide, and how they interact mechanically with each other and with their surroundings. This approach to biological processes is particularly timely, both because of experimental advances exploiting soft lithography and enhanced imaging techniques, and because of progress in the theories of active matter, which is leading to new ways to describe the collective dynamical behavior of systems out of thermodynamic equilibrium. Identifying stresses, forces, and flows, and describing how they act, may be key to unifying research on the underlying molecular mechanisms and to interpreting a wealth of disparate data to understand biological self-organization from molecular to tissue scales.

Article

Multi-Fluid Effects in Magnetohydrodynamics  

Elena Khomenko

Multi-fluid magnetohydrodynamics is an extension of classical magnetohydrodynamics that allows a simplified treatment of plasmas with complex chemical mixtures. The types of plasma susceptible to multi-fluid effects are those containing particles with properties significantly different from those of the rest of the plasma in either mass or electric charge, such as neutral particles, molecules, or dust grains. In astrophysics, multi-fluid magnetohydrodynamics is relevant for planetary ionospheres and magnetospheres, the interstellar medium, and the formation of stars and planets, as well as in the atmospheres of cool stars such as the Sun. Traditionally, magnetohydrodynamics has been a classical approximation in many astrophysical and physical applications. Magnetohydrodynamics works well in dense plasmas where the typical plasma scales (e.g., cyclotron periods and Larmor radii) are significantly smaller than the temporal and spatial scales of the processes under study. Nevertheless, when plasma components are not well coupled by collisions, it is necessary to replace single-fluid magnetohydrodynamics with multi-fluid theory. The present article describes environments in which a multi-fluid treatment is necessary, along with the modifications to the magnetohydrodynamic equations required to treat non-ideal plasmas. It also summarizes the physical consequences of major multi-fluid non-ideal magnetohydrodynamic effects, including ambipolar diffusion, the Hall effect, the battery effect, and other intrinsically multi-fluid effects. Multi-fluid theory is an intermediate step between magnetohydrodynamics, which deals with the collective behaviour of an ensemble of particles, and a kinetic approach, in which the statistics of particle distributions are studied. The main assumption of multi-fluid theory is that each individual ensemble of particles behaves like a fluid, interacting via collisions with other particle ensembles, such as those belonging to different chemical species or ionization states. When collisional coupling is incomplete, relative macroscopic motion arises between different plasma components, which, on larger scales, results in the non-ideal behaviour of such plasmas. The non-ideal effects discussed here manifest themselves in plasmas at relatively low temperatures and low densities.
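As a schematic illustration (a common single-fluid formulation quoted as general background, not reproduced from the article; notation and normalizations vary between formulations in the literature), the non-ideal effects listed above enter through a generalized Ohm's law:

```latex
% Schematic generalized Ohm's law; eta is the Ohmic resistivity, eta_A the
% ambipolar coefficient, n_e the electron density, p_e the electron pressure.
\mathbf{E} + \mathbf{v}\times\mathbf{B}
  \;=\; \eta\,\mathbf{J}
  \;+\; \frac{\mathbf{J}\times\mathbf{B}}{e\,n_e}
  \;-\; \frac{\nabla p_e}{e\,n_e}
  \;-\; \eta_A\,\frac{(\mathbf{J}\times\mathbf{B})\times\mathbf{B}}{|\mathbf{B}|^{2}}
```

Here the successive terms on the right are the Ohmic, Hall, battery (Biermann), and ambipolar contributions; the last is equivalent to η_A J⊥ and therefore acts as an extra resistivity for currents perpendicular to the magnetic field.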

Article

The Nuclear Physics of Neutron Stars  

Jorge Piekarewicz

Neutron stars—compact objects with masses similar to that of our Sun but radii comparable to the size of a city—contain the densest form of matter in the universe that can be probed in terrestrial laboratories as well as in Earth- and space-based observatories. The historical detection of gravitational waves from a binary neutron star merger has opened the new era of multimessenger astronomy and has propelled neutron stars to the center of a variety of disciplines, such as astrophysics, general relativity, nuclear physics, and particle physics. The main input required to study the structure of neutron stars is the pressure support generated by their constituents against gravitational collapse. These constituents include neutrons, protons, electrons, and perhaps even more exotic particles. As such, nuclear physics plays a prominent role in elucidating the fascinating structure, dynamics, and composition of neutron stars.
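As standard background (not a result specific to this article), the role of that pressure support can be made explicit with the Tolman–Oppenheimer–Volkoff equations of relativistic hydrostatic equilibrium, for which the nuclear-physics input is the equation of state P(ε):

```latex
% Tolman--Oppenheimer--Volkoff equations; epsilon(r) is the energy density,
% m(r) the enclosed mass, and P(epsilon) the nuclear equation of state.
\frac{dP}{dr} \;=\; -\,\frac{G\,\bigl[\varepsilon(r) + P(r)\bigr]\,\bigl[m(r) + 4\pi r^{3} P(r)/c^{2}\bigr]}
                          {c^{2} r^{2}\,\bigl[1 - 2Gm(r)/(r c^{2})\bigr]},
\qquad
\frac{dm}{dr} \;=\; \frac{4\pi r^{2}\,\varepsilon(r)}{c^{2}} .
```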

Article

Nucleon Clustering in Light Nuclei  

Martin Freer

The ability to model the nature of the strong interaction at the nuclear scale using ab initio approaches, together with the development of high-performance computing, is allowing a greater understanding of the details of the structure of light nuclei. The nature of the nucleon–nucleon interaction is such that it promotes the creation of clusters, mainly α-particles, inside the nuclear medium. Understanding the emergence of these clusters and the resultant structures they create has been a long-standing area of study. At low excitation energies, close to the ground state, there is a strong connection between symmetries associated with mean-field, single-particle behavior and the geometric arrangement of the clusters, while at higher excitation energies, when the cluster decay threshold is reached, there is a transition to a more gas-like cluster behavior. State-of-the-art calculations now guide the thinking in these two regimes, but there are some key underpinning principles that they reflect. Building from simple ideas to the state of the art creates a thread by which the more complex calculations have a foundation, developing a description of the evolution of clustering from α-particle to 16O clusters.

Article

Philosophical Issues in Early Universe Cosmology  

Adam Koberinski and Chris Smeenk

There are many interesting foundational and philosophical issues that become salient in early universe cosmology. One major focus is on issues that arise at the boundaries of distinct theories or frameworks when trying to merge them to describe the early universe. These include issues at the boundary of gravity and statistical physics, as well as gravity and quantum field theory. Such foundational issues arise in trying to unify distinct domains of physics. Another major theme of early universe cosmology is the methodological goal of finding dynamical explanations for striking features in the universe. Examples of such a methodology include the cosmic arrow of time, posits of a Past Hypothesis for the initial state of the universe, inflation, baryogenesis, and the emergence of spacetime. There is much philosophical debate about the prospects for success of such a methodology; these debates are surveyed in the article.

Article

Philosophical Issues in Thermal Physics  

Wayne C. Myrvold

Thermodynamics gives rise to a number of conceptual issues that have been explored by both physicists and philosophers. One source of contention is the nature of thermodynamics itself. Is it what physicists these days would call a resource theory, that is, a theory about how agents with limited means of manipulating a physical system can exploit its physical properties to achieve desired ends, or is it a theory of the basic properties of matter, independent of considerations of manipulation and control? Another source of contention is the relation between thermodynamics and statistical mechanics. It has been recognized since the 1870s that the laws of thermodynamics, as originally conceived, cannot be strictly correct. Because of fluctuations at the molecular level, processes forbidden by the original version of the second law of thermodynamics are continually occurring. The original version of the second law is to be replaced with a probabilistic version, according to which large-scale violations of the original second law are not impossible but merely highly improbable, and small-scale violations are unpredictable and cannot be harnessed to systematically produce useful work. The introduction of probability talk raises the question of how we should conceive of probabilities in the context of deterministic physical laws.

Article

Philosophy of Quantum Mechanics  

David Wallace

If the philosophy of physics has a central problem, it is the quantum measurement problem: the problem of how to interpret, make sense of, and perhaps even fix quantum mechanics. Other theories in physics challenge people’s intuitions and everyday assumptions, but only quantum theory forces people to take seriously the idea that there is no objective world at all beyond their observations—or, perhaps, that there are many. Other theories in physics leave people puzzled about aspects of how they are to be understood, but only quantum theory raises paradoxes so severe that leading physicists and leading philosophers of physics seriously consider tearing it down and rebuilding it anew. Quantum theory is both the conceptual and mathematical core of 21st-century physics and the gaping void in the attempt to understand the worldview given by 21st-century physics. Unsurprisingly, then, the philosophy of quantum mechanics is dominated by the quantum measurement problem, and to a lesser extent by the related problem of quantum non-locality, and in this article, an introduction to each is given. In Section 1, I review the formalism of quantum mechanics and the quantum measurement problem. In Sections 2–4 I discuss the three main classes of solution to the measurement problem: treat the formalism as representing the objective state of the system; treat it as representing only probabilities of something else; modify it or replace it entirely. In Section 5 I review Bell’s inequality and the issue of non-locality in quantum mechanics, and relate it to the interpretations discussed in Sections 2–4. I make some brief concluding remarks in Section 6. A note on terminology: I use “quantum theory” and “quantum mechanics” interchangeably to refer to the overall framework of quantum physics (containing quantum theories as simple as the qubit or harmonic oscillator and as complicated as the Standard Model of particle physics). I do not adopt the older convention (still somewhat common in philosophy of physics) that “quantum mechanics” means only the quantum theory of particles, or perhaps even non-relativistic particles: when I want to refer to non-relativistic quantum particle mechanics I will do so explicitly.

Article

Philosophy of Quantum Mechanics: Dynamical Collapse Theories  

Angelo Bassi

Quantum Mechanics is one of the most successful theories of nature. It accounts for all known properties of matter and light, and it does so with an unprecedented level of accuracy. On top of this, it has generated many new technologies that are now part of daily life. In many ways, it can be said that we live in a quantum world. Yet, quantum theory is subject to an intense debate about its meaning as a theory of nature, which started from the very beginning and has never ended. The essence was captured by Schrödinger with the cat paradox: why do cats behave classically instead of being quantum like the one imagined by Schrödinger? Answering this question digs deep into the foundations of quantum mechanics. A possible answer is Dynamical Collapse Theories. The fundamental assumption is that the Schrödinger equation, which is supposed to govern all quantum phenomena (at the non-relativistic level), is only approximately correct. It is an approximation of a nonlinear and stochastic dynamics, according to which the wave functions of microscopic objects can be in a superposition of different states because the nonlinear effects are negligible, while those of macroscopic objects are always very well localized in space because the nonlinear effects dominate for increasingly massive systems. Then, microscopic systems behave quantum mechanically, while macroscopic ones such as Schrödinger’s cat behave classically simply because the (newly postulated) laws of nature say so. By changing the dynamics, collapse theories make predictions that differ from quantum-mechanical predictions, so it becomes interesting to test the various collapse models that have been proposed. Experimental efforts are increasing worldwide; so far, since no collapse signal has been detected, they place limits on the values of the theory’s parameters quantifying the collapse, but in the future they may find such a signal and open up a window beyond quantum theory.
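For orientation, the amplification mechanism alluded to above can be made quantitative with the textbook numbers of the original GRW model (quoted here as general background, not as content of this article): each constituent undergoes spontaneous localizations of width r_C at a tiny rate λ, and for the centre of mass of a rigid body of N constituents the effective rate is amplified to roughly Nλ.

```latex
% GRW-type amplification with the commonly quoted order-of-magnitude values;
% microscopic collapses are unobservably rare, macroscopic ones essentially instantaneous.
\Lambda_{\mathrm{macro}} \simeq N\,\lambda,
\qquad \lambda \sim 10^{-16}\ \mathrm{s}^{-1},
\qquad r_C \sim 10^{-7}\ \mathrm{m}.
```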

Article

Physics-to-Technology Partnerships in the Semiconductor Industry  

Robert Doering

The development of physics over the past few centuries has increasingly enabled the development of numerous technologies that have revolutionized society. In the 17th century, Newton built on the results of Galileo and Descartes to start the quantitative science of mechanics. The fields of thermodynamics and electromagnetism were developed more gradually in the 18th and 19th centuries. Of the big physics breakthroughs in the 20th century, quantum mechanics has most clearly led to the widest range of new technologies. New scientific discovery and its conversion to technology, enabling new products, is typically a complex process. From an industry perspective, it is addressed through various R&D strategies, particularly those focused on optimization of return on investment (ROI) and the associated risk management. The evolution of such strategies has been driven by many diverse factors and related trends, including international markets, government policies, and scientific breakthroughs. As a result, many technology-creation initiatives have been based on various types of partnerships between industry, academia, and/or governments. Specific strategies guiding such partnerships are best understood in terms of how they have been developed and implemented within a particular industry. As a consequence, it is useful to consider case studies of strategic R&D partnerships involving the semiconductor industry, which provides a number of instructive examples illustrating strategies that have been successful over decades. There is a large quantity of literature on this subject, in books, journal articles, and online.

Article

Progress in Gamma Detection for Basic Nuclear Science and Applications  

J. Simpson and A. J. Boston

The atomic nucleus, consisting of protons and neutrons, is a unique strongly interacting quantum mechanical system that makes up 99.9% of all visible matter. From the inception of gamma-ray detectors to the early 21st century, advances in gamma detection have allowed researchers to broaden their understanding of the fundamental properties of all nuclei and their interactions. Key technical advances have enabled the development of state-of-the-art instruments that are expected to address a wide range of nuclear science at the extremes of the nuclear landscape, excitation energy, spin, stability, and mass. The realisation of efficient gamma detection systems has impact in many applications, such as medical imaging, environmental radiation monitoring, and security. Even though the technical advances made so far are remarkable, further improvements are continually being implemented or planned.

Article

Quantum Dots/Spin Qubits  

Shannon P. Harvey

Spin qubits in semiconductor quantum dots represent a prominent family of solid-state qubits in the effort to build a quantum computer. They are formed when electrons or holes are confined in a static potential well in a semiconductor, giving them a quantized energy spectrum. The simplest spin qubit is a single electron spin located in a quantum dot, but many additional varieties have been developed, some containing multiple spins in multiple quantum dots, each of which has different benefits and drawbacks. Although these spins act as simple quantum systems in many ways, they also experience complex effects due to their semiconductor environment. They can be controlled by both magnetic and electric fields depending on their configuration and are therefore dephased by magnetic and electric field noise, with different types of spin qubits having different control mechanisms and noise susceptibilities. Initial experiments were primarily performed in gallium arsenide–based materials, but silicon qubits have developed substantially, and research on qubits in silicon metal-oxide-semiconductor devices, silicon/silicon germanium heterostructures, and donors in silicon is also being pursued. An increasing number of spin qubit varieties have attained error rates that are low enough to be compatible with quantum error correction for single-qubit gates, and two-qubit gates have been performed in several varieties with success rates, or fidelities, of 90–95%.

Article

Quantum Error Correction  

Todd A. Brun

Quantum error correction is a set of methods to protect quantum information—that is, quantum states—from unwanted environmental interactions (decoherence) and other forms of noise. The information is stored in a quantum error-correcting code, which is a subspace in a larger Hilbert space. This code is designed so that the most common errors move the state into an error space orthogonal to the original code space while preserving the information in the state. It is possible to determine whether an error has occurred by a suitable measurement and to apply a unitary correction that returns the state to the code space without measuring (and hence disturbing) the protected state itself. In general, codewords of a quantum code are entangled states. No code that stores information can protect against all possible errors; instead, codes are designed to correct a specific error set, which should be chosen to match the most likely types of noise. An error set is represented by a set of operators that can multiply the codeword state. Most work on quantum error correction has focused on systems of quantum bits, or qubits, which are two-level quantum systems. These can be physically realized by the states of a spin-1/2 particle, the polarization of a single photon, two distinguished levels of a trapped atom or ion, the current states of a microscopic superconducting loop, or many other physical systems. The most widely used codes are the stabilizer codes, which are closely related to classical linear codes. The code space is the joint +1 eigenspace of a set of commuting Pauli operators on n qubits, called stabilizer generators; the error syndrome is determined by measuring these operators, which allows errors to be diagnosed and corrected. A stabilizer code is characterized by three parameters [[n, k, d]], where n is the number of physical qubits, k is the number of encoded logical qubits, and d is the minimum distance of the code (the smallest number of simultaneous qubit errors that can transform one valid codeword into another). Every useful code has n > k; this physical redundancy is necessary to detect and correct errors without disturbing the logical state. Quantum error correction is used to protect information in quantum communication (where quantum states pass through noisy channels) and quantum computation (where quantum states are transformed through a sequence of imperfect computational steps in the presence of environmental decoherence to solve a computational problem). In quantum computation, error correction is just one component of fault-tolerant design. Other approaches to error mitigation in quantum systems include decoherence-free subspaces, noiseless subsystems, and dynamical decoupling.
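A minimal sketch (not from the article) of how syndrome measurement works is the three-qubit bit-flip code, whose stabilizer generators are Z₁Z₂ and Z₂Z₃; the following Python toy keeps only the classical bookkeeping, recording which stabilizers a given pattern of X errors anticommutes with and looking up the lowest-weight correction.

```python
# Illustrative sketch, not from the article: syndrome decoding for the
# 3-qubit bit-flip code, whose stabilizer generators are Z1 Z2 and Z2 Z3.
import itertools

# Each single-qubit X error flips the measured eigenvalue of every stabilizer
# it anticommutes with; record those outcomes as a two-bit syndrome.
STABILIZERS = [(0, 1), (1, 2)]          # qubit pairs checked by Z Z

def syndrome(x_errors):
    """x_errors: set of qubit indices hit by a bit flip."""
    return tuple(len(x_errors & set(pair)) % 2 for pair in STABILIZERS)

# Build a lookup table from syndrome to the most likely (lowest-weight) error.
table = {}
for weight in range(4):
    for err in itertools.combinations(range(3), weight):
        table.setdefault(syndrome(set(err)), set(err))

for err in [set(), {0}, {1}, {2}]:
    s = syndrome(err)
    print(f"error on {err or 'no qubit'} -> syndrome {s} -> correct {table[s]}")
```

Each single X error produces a distinct syndrome, so it can be corrected without ever learning (or disturbing) the encoded logical state; a full quantum treatment would add the analogous phase-flip checks.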

Article

Quantum Quench and Universal Scaling  

Sumit R. Das

A quantum quench is a process in which a parameter of a many-body system or quantum field theory is changed in time, taking an initial stationary state into a complicated excited state. Traditionally, “quench” refers to a process where this time dependence is fast compared to all scales in the problem. However, in recent years the terminology has been generalized to include smooth changes that are slow compared to initial scales in the problem but become fast compared to the physical scales at some later time, leading to a breakdown of adiabatic evolution. Quantum quenches have recently been used as a theoretical tool to study many aspects of nonequilibrium physics, such as thermalization and universal aspects of critical dynamics. Relatively recent experiments in cold-atom systems have implemented such quench protocols, which explore dynamical passages through critical points, and have studied in detail the process of relaxation to a steady state. On the other hand, quenches that remain adiabatic have been explored as a useful technique in quantum computation.
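A standard example of such universal scaling is the Kibble–Zurek argument (stated here as general background rather than a result of the article): for a sweep through a critical point with quench time τ_Q, adiabaticity breaks down at a freeze-out time t̂, and the density of excitations produced is set by the equilibrium exponents ν and z in d spatial dimensions.

```latex
% Kibble--Zurek scaling: adiabatic evolution fails at the freeze-out time
% \hat{t}, and the resulting excitation/defect density scales with tau_Q.
\hat{t} \;\sim\; \tau_Q^{\,\nu z/(1+\nu z)},
\qquad
n_{\mathrm{exc}} \;\sim\; \tau_Q^{-\,d\nu/(1+\nu z)} .
```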

Article

Quantum Simulation With Trapped Ions  

D. Luo and N. M. Linke

Simulating quantum systems using classical computers encounters an inherent challenge: the exponential scaling of the Hilbert-space dimension with system size. To overcome this challenge, quantum simulation uses a well-controlled quantum system to simulate another, less controllable system. Over the last 20 years, many physical platforms have emerged as quantum simulators, such as ultracold atoms, Rydberg atom arrays, trapped ions, nuclear spins, superconducting circuits, and integrated photonics. Trapped ions, with induced spin interactions and universal quantum gates, have demonstrated remarkable versatility, capable of both analog and digital quantum simulation. Recent experimental results, covering a range of research areas including condensed matter physics, quantum thermodynamics, high-energy physics, and quantum chemistry, guide this introductory review to the growing field of quantum simulation.
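As a representative example (a standard trapped-ion construction given for orientation, not a detail drawn from this abstract), spin-dependent forces mediated by the ions' shared motional modes realize an effective long-range transverse-field Ising model, the workhorse of many analog simulations:

```latex
% Effective Hamiltonian of many analog trapped-ion simulators (schematic);
% the power-law exponent alpha is tunable roughly between 0 and 3.
H \;=\; \sum_{i<j} J_{ij}\,\sigma^{x}_{i}\sigma^{x}_{j} \;+\; B\sum_{i}\sigma^{z}_{i},
\qquad
J_{ij} \approx \frac{J_{0}}{|i-j|^{\alpha}} .
```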

Article

Role of Quarks in Nuclear Structure  

A. W. Thomas

The strong force that binds atomic nuclei is governed by the rules of Quantum Chromodynamics. Here we consider the suggestion that the internal quark structure of a nucleon will adjust self-consistently to the local mean scalar field in a nuclear medium and that this may play a profound role in nuclear structure. We show that one can derive an energy density functional based on this idea, which successfully describes the properties of atomic nuclei across the periodic table in terms of a small number of physically motivated parameters. Because this approach amounts to a new paradigm for nuclear theory, it is vital to find ways to test it experimentally, and we review a number of the most promising possibilities.

Article

Self-Polarization in Storage Rings  

Eliana Gianfelice-Wendt

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Physics. Please check back later for the full article. The conditions for the Sokolov-Ternov effect to occur are approximately satisfied by electrons (or positrons) circulating on the design orbit of a planar storage ring. Indeed, self-polarization was first observed in the electron/positron colliders Anneau de Collisions d’Orsay (ACO) and VEPP-2. Beam polarization offers an additional tool for understanding physics events. The possibility of having polarized electron/positron beams for free is therefore appealing. However, the Sokolov-Ternov polarization time constant, proportional to 1/γ⁵ and to the third power of the bending radius, restricts the region of interest for self-polarization. For the approximately 100-km Future Circular Collider (FCC) under study at CERN, the polarization time constant is about 10 days at 45 GeV beam energy. At high energy, the randomization of the particle trajectory due to photon emission in a storage ring with finite alignment precision of the magnets introduces spin diffusion and limits the attainable polarization. In addition, in a collider the force exerted by the counter-rotating particles impacts the beam polarization. This force increases with beam intensity, and experiments are reluctant to pass up luminosity for polarization. To this day, the electron(positron)/proton collider HERA has been the only high-energy collider where electron (and positron) self-polarization was an integral part of the physics program.
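For orientation (the standard Sokolov-Ternov result, stated as general background rather than content of the forthcoming article), the polarization builds up as P(t) = P_ST[1 − exp(−t/τ_p)], with an asymptotic value of about 92.4% and a time constant carrying the 1/γ⁵ and bending-radius dependence noted above:

```latex
% Sokolov--Ternov asymptotic polarization and build-up rate (Gaussian units);
% rho is the bending radius and gamma the Lorentz factor of the beam.
P_{\mathrm{ST}} = \frac{8}{5\sqrt{3}} \approx 92.4\%,
\qquad
\tau_p^{-1} = \frac{5\sqrt{3}}{8}\,\frac{e^{2}\hbar\,\gamma^{5}}{m_e^{2} c^{2}\,\rho^{3}} .
```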

Article

Solar Chromosphere  

Shahin Jafarzadeh

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Physics. Please check back later for the full article. The solar chromosphere (color sphere) is a strongly structured and highly dynamic region (layer) of the Sun’s atmosphere, located above the bright, visible photosphere. It is optically thin in the near-ultraviolet to near-infrared spectral range, but optically thick in the millimeter range and in strong spectral lines. Particularly important is the departure from local thermodynamic equilibrium as one moves from the photosphere to the chromosphere. In a plane-parallel model, the temperature gradually rises from the low chromosphere outwards (radially from the center of the Sun), while both gas density and pressure decrease rapidly with height throughout the entire solar atmosphere. In this classical picture, the chromosphere is sandwiched between the so-called temperature minimum (i.e., the minimum average temperature in the solar atmosphere; about 4000 K) and the hot transition region (a few tens of thousands of kelvin at its lower boundary), above which the temperature drastically increases outwards, reaching millions of degrees in the solar corona (i.e., the outermost layer of the Sun’s atmosphere). In reality, however, this standard (simple) model does not properly account for the many faces of the non-uniform and dynamic chromosphere. For instance, extremely cool gas also exists in this highly dynamical region. A variety of heating mechanisms have been suggested to contribute to the energetics of the solar chromosphere. These particularly include propagating waves (of various kinds) often generated in the low photosphere, as well as jets, flares, and explosive events resulting from, for example, magnetic reconnection. However, observations of energy deposition in the chromosphere (particularly from waves) have been rare. The solar chromosphere is dominated by magnetic fields (the gas density decreases by more than four orders of magnitude compared to the underlying photosphere; hence, magnetic pressure dominates the gas pressure), featuring a variety of phenomena including sunspots, plages, eruptions, and elongated structures of different physical properties and/or appearances. The latter have been given different names in the literature, such as fibrils, spicules, filaments, prominences, straws, mottles, surges, or rosettes, within which various sub-categories have also been introduced. Some of these thread-like structures share the same properties, some are speculated to represent the same or completely different phenomena at different atmospheric heights, and some manifest themselves differently in intensity images, depending on the properties of the sampling spectral lines. Their origins and relationships to each other are poorly understood. The elongated structures have been suggested to map the magnetic fields in the solar chromosphere; however, this involves the challenges of measuring or approximating the chromospheric magnetic fields (particularly in quiet regions), as well as of estimating the exact heights of formation of the fibrillar structures. The solar chromosphere may thus be described as a challenging, complex plasma-physics lab, in which many of the observed phenomena and physical processes are not yet fully understood.

Article

Solar Cycle  

Lidia van Driel-Gesztelyi and Mathew J. Owens

The Sun’s magnetic field drives the solar wind and produces space weather. It also acts as the prototype for an understanding of other stars and their planetary environments. Plasma motions in the solar interior provide the dynamo action that generates the solar magnetic field. At the solar surface, this is evident as an approximately 11-year cycle in the number and position of visible sunspots. This solar cycle is manifest in virtually all observable solar parameters, from the occurrence of the smallest detected magnetic features on the Sun to the size of the bubble in interstellar space that is carved out by the solar wind. Moderate to severe space-weather effects show a strong solar cycle variation. However, it is a matter of debate whether extreme space weather follows the 11-year cycle. Each 11-year solar cycle is actually only half of a solar magnetic “Hale” cycle, with the configuration of the Sun’s large-scale magnetic field taking approximately 22 years to repeat. At the start of a new solar cycle, sunspots emerge in mid-latitude regions with an orientation that opposes the dominant large-scale field, leading to an erosion of the polar fields. As the cycle progresses, sunspots emerge at lower latitudes. Around solar maximum, the polar field polarity reverses, but the sunspot orientation remains the same, leading to a build-up of polar field strength that peaks at the start of the next cycle. Similar magnetic cyclicity has recently been inferred at other stars.

Article

Solar Dynamo  

Robert Cameron

The solar dynamo is the action of flows inside the Sun to maintain its magnetic field against Ohmic decay. On small scales, the magnetic field is seen at the solar surface as a ubiquitous “salt-and-pepper” disorganized field that may be generated directly by the turbulent convection. On large scales, the magnetic field is remarkably organized, with an 11-year activity cycle. During each cycle the field emerging in each hemisphere has a specific East–West alignment (known as Hale’s law) that alternates from cycle to cycle, and a statistical tendency for a North-South alignment (Joy’s law). The polar fields reverse sign during the period of maximum activity of each cycle. The relevant flows for the large-scale dynamo are those of convection, the bulk rotation of the Sun, and motions driven by magnetic fields, as well as flows produced by the interaction of these. Particularly important are the Sun’s large-scale differential rotation (for example, the equator rotates faster than the poles), and small-scale helical motions resulting from the Coriolis force acting on convective motions or on the motions associated with buoyantly rising magnetic flux. These two types of motions result in a magnetic cycle. In one phase of the cycle, differential rotation winds up a poloidal magnetic field to produce a toroidal field. Subsequently, helical motions are thought to bend the toroidal field to create new poloidal magnetic flux that reverses and replaces the poloidal field that was present at the start of the cycle. It is now clear that both small- and large-scale dynamo action are in principle possible, and the challenge is to understand which combination of flows and driving mechanisms is responsible for the time-dependent magnetic fields seen on the Sun.
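The interplay of differential rotation and helical motions described above is often summarized by the mean-field induction equation (given here as standard background rather than content of the article), in which the shear of the mean flow ⟨v⟩ winds up toroidal field (the Ω effect) and the α term regenerates poloidal field:

```latex
% Mean-field induction equation (schematic): alpha parameterizes helical
% turbulence, eta_t the turbulent diffusivity, <v> the mean (differential) flow.
\frac{\partial \langle\mathbf{B}\rangle}{\partial t}
 \;=\; \nabla \times \Big( \langle\mathbf{v}\rangle \times \langle\mathbf{B}\rangle
 \;+\; \alpha\,\langle\mathbf{B}\rangle
 \;-\; \eta_t\, \nabla \times \langle\mathbf{B}\rangle \Big).
```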