Adiabatic quantum computing (AQC) is a model of computation that uses quantum mechanical processes operating under adiabatic conditions. As a form of universal quantum computation, AQC employs the principles of superposition, tunneling, and entanglement that manifest in quantum physical systems. The AQC model of quantum computing is distinguished by the use of dynamical evolution that is slow with respect to the time and energy scales of the underlying physical systems. This adiabatic condition ensures that the quantum computational state remains well-defined and controllable, thus enabling the development of new algorithmic approaches.
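As an illustration (the notation here is introduced for this summary, not taken from the article), the adiabatic condition is commonly phrased for a linearly interpolated Hamiltonian:

```latex
H(s) = (1-s)\,H_{0} + s\,H_{P}, \qquad s = t/T \in [0,1],
\qquad
T \;\gg\; \max_{s\in[0,1]} \frac{\bigl|\langle e(s)\,|\,\partial_s H(s)\,|\,g(s)\rangle\bigr|}{\Delta(s)^{2}},
```

where H_0 is an easily prepared driver Hamiltonian, H_P encodes the problem, |g(s)⟩ and |e(s)⟩ are the instantaneous ground and first excited states, and Δ(s) is the spectral gap between them. Keeping the total evolution time T large compared with this scale keeps the system close to its instantaneous ground state.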
Several notable algorithms developed within the AQC model include methods for solving unstructured search and combinatorial optimization problems. In an idealized setting, the asymptotic complexity analyses of these algorithms indicate that computational speed-ups may be possible relative to state-of-the-art conventional methods. However, the presence of non-ideal conditions, including non-adiabatic dynamics, residual thermal excitations, and physical noise, complicates the assessment of the potential computational performance. A relaxation of the adiabatic condition is captured in the complementary computational heuristic of quantum annealing, which accommodates physical systems operating at finite temperature and in open environments. While quantum annealing (QA) provides a more accurate model for the behavior of actual quantum physical systems, the possibility of non-adiabatic effects obscures a clear separation from conventional computational complexity.
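A minimal numerical sketch of this annealing idea, assuming a made-up two-qubit Ising problem and a linear schedule (purely illustrative, not tied to any particular device or algorithm):

```python
import numpy as np
from scipy.linalg import expm

# Toy sketch: anneal H(s) = (1 - s) * H_driver + s * H_problem and check that a
# slow schedule leaves the state mostly in the ground space of H_problem.
# The Hamiltonians and schedule below are made up for illustration.

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H_driver = -(np.kron(X, I2) + np.kron(I2, X))   # transverse-field driver
H_problem = np.kron(Z, Z)                       # ground space spanned by |01>, |10>

T, steps = 50.0, 2000                           # total anneal time, time steps
dt = T / steps
psi = np.linalg.eigh(H_driver)[1][:, 0]         # start in the driver ground state

for k in range(steps):
    s = (k + 0.5) / steps                       # linear annealing schedule
    H = (1 - s) * H_driver + s * H_problem
    psi = expm(-1j * H * dt) @ psi              # piecewise-constant evolution

ground_probability = abs(psi[1])**2 + abs(psi[2])**2
print(f"P(final state in ground space of H_problem) = {ground_probability:.3f}")
```

Shortening T in this sketch lowers the final ground-space probability, mimicking the non-adiabatic effects discussed above.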
A series of technological advances in the control of quantum physical systems has enabled experimental AQC and QA. Prominent examples include demonstrations using superconducting electronics, which encode quantum information in the magnetic flux induced by a weak current operating at cryogenic temperatures. A family of devices developed specifically for unconstrained optimization problems has been applied to solve problems in specific domains including logistics, finance, material science, machine learning, and numerical analysis. An accompanying infrastructure has also developed to support these experimental demonstrations and to enable access by a broader community of users. Although AQC is most commonly applied in superconducting technologies, alternative approaches include optically trapped neutral atoms and ion-trap systems.
The significant progress in the understanding of AQC has revealed several open topics that continue to motivate research into this model of quantum computation. Foremost is the development of methods for fault-tolerant operation that will ensure the scalability of AQC for solving large-scale problems. In addition, unequivocal experimental demonstrations that differentiate the computational power of AQC and its variants from conventional computing approaches are needed. This will also require advances in the fabrication and control of quantum physical systems under the adiabatic restrictions.
Article
Per Kraus
General relativity in three spacetime dimensions is a simplified model of gravity, possessing no local degrees of freedom, yet rich enough to admit black-hole solutions and other phenomena of interest. In the presence of a negative cosmological constant, the asymptotically anti–de Sitter (AdS) solutions admit a symmetry algebra consisting of two copies of the Virasoro algebra, with central charge inversely proportional to Newton’s constant. The study of this theory is greatly enriched by the AdS/CFT correspondence, which in this case implies a relationship to two-dimensional conformal field theory. General aspects of this theory can be understood by focusing on universal properties such as symmetries. The best understood examples of the AdS3/CFT2 correspondence arise from string theory constructions, in which case the gravity sector is accompanied by other propagating degrees of freedom. A question of recent interest is whether pure gravity can be made sense of as a quantum theory of gravity with a holographic dual. Attempting to answer this question requires making sense of the path integral over asymptotically AdS3 geometries.
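The central charge referred to above is the Brown–Henneaux result; in standard conventions,

```latex
c = \frac{3\ell}{2G_{N}},
```

where ℓ is the AdS3 radius and G_N is the three-dimensional Newton constant.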
Article
Yang-Hui He
Calabi-Yau spaces, or Kähler spaces admitting zero Ricci curvature, have played a pivotal role in theoretical physics and pure mathematics for the last half century. In physics, they constituted the first and natural solution to compactification of superstring theory to our 4-dimensional universe, primarily because one of their equivalent definitions is that they admit covariantly constant spinors.
Since the mid-1980s, physicists and mathematicians have joined forces in creating explicit examples of Calabi-Yau spaces, compiling databases of formidable size, including the complete intersection (CICY) data set, the weighted hypersurfaces data set, the elliptic-fibration data set, the Kreuzer-Skarke toric hypersurface data set, generalized CICYs, etc., totaling at least on the order of 10^10 manifolds. These all contribute to the vast string landscape, the multitude of possible vacuum solutions to string compactification.
More recently, this collaboration has been enriched by computer science and data science, the former in benchmarking the complexity of the algorithms in computing geometric quantities, and the latter in applying techniques such as machine learning in extracting unexpected information. These endeavours, inspired by the physics of the string landscape, have rendered the investigation of Calabi-Yau spaces one of the most exciting and interdisciplinary fields.
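As a schematic of the machine-learning workflow mentioned above, the following sketch uses synthetic stand-in data rather than any real Calabi-Yau data set (the matrix sizes, target, and model choices are arbitrary):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Schematic only: flatten each synthetic "configuration matrix" into a feature
# vector and regress a made-up numerical invariant standing in for, e.g., a
# Hodge number. Real studies use curated data sets such as the CICY list.
rng = np.random.default_rng(0)
n_samples, rows, cols = 2000, 12, 15
X = rng.integers(0, 6, size=(n_samples, rows * cols)).astype(float)
weights = rng.normal(size=rows * cols)
y = X @ weights + rng.normal(scale=0.5, size=n_samples)   # synthetic target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out synthetic data:", round(model.score(X_test, y_test), 3))
```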
Article
Massimo Florio and Chiara Pancotti
In economics, infrastructure is a long-term investment aimed at the delivery of essential services to a large number of users, such as those in the fields of transport, energy, or telecommunications. A research infrastructure (RI) is a single-sited, distributed, virtual, or mobile facility, designed to deliver scientific services to communities of scientists. In the physical sciences (including astronomy and astrophysics, particle and nuclear physics, analytical physics, medical physics), the RI paradigm has found several large-scale applications, such as radio telescopes, neutrino detectors, gravitational wave interferometers, particle colliders and heavy ion beams, high intensity lasers, synchrotron light sources, spallation neutron sources, and hadrontherapy facilities.
These RIs require substantial capital and operation expenditures and are ultimately funded by taxpayers. In social cost–benefit analysis (CBA), the impact of an investment project is measured by the intertemporal difference of benefits and costs accruing to different agents. Benefits and costs are quantified and valued through a common metric and using the marginal social opportunity costs of goods (or shadow price) that may differ from the market price, as markets are often incomplete or imperfect. The key strength of CBA is that it produces information about the project’s net contribution to society that is summarized in simple numerical indicators, such as the net present value of a project.
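As a minimal sketch of the net-present-value indicator mentioned above (the cash flows and discount rate here are invented for illustration):

```python
# Minimal sketch of the CBA indicator described above: the net present value of
# a research infrastructure as the discounted sum of benefits minus costs.
# All numbers below are purely illustrative.

def net_present_value(benefits, costs, discount_rate):
    """Discounted sum of (benefit - cost) over the project's time horizon."""
    return sum(
        (b - c) / (1.0 + discount_rate) ** t
        for t, (b, c) in enumerate(zip(benefits, costs))
    )

# 30-year horizon: heavy construction costs up front, benefits accruing later
benefits = [0, 0, 0] + [40] * 27          # millions per year (illustrative)
costs = [120, 120, 120] + [15] * 27
print(f"NPV = {net_present_value(benefits, costs, 0.03):.1f} million")
```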
For any RI, consolidated cost accounting should include intertemporal capital and operational expenditure both for the main managing body and for experimental collaborations or other external teams, including in-kind contributions. As far as social intertemporal benefits are concerned, it is convenient to divide them into two broad classes. The first class of benefits accrues to different categories of direct and indirect users of infrastructure services: scientists, students, firms benefiting from technological spillovers, consumers of innovative services and products, and citizens who are involved in outreach activities. The empirical estimation of the use value of an RI depends on the scientific specificities of each project, as different social groups are involved to different degrees. Second, there are benefits for the general public of non-users: these benefits are associated with social preferences for scientific research, even when the use of a discovery is unknown. In analogy with the valuation of environmental and cultural goods, the empirical approach to non-use value aims at eliciting the willingness to pay of citizens for the scientific knowledge that is created by an RI. This can be done by well-designed contingent valuation surveys.
While some socio-economic impact studies of RIs in physics have been available since the 1980s, the intangible nature of some benefits and the uncertainty associated with scientific discoveries have limited the diffusion of CBA in this field until recently. Nevertheless, recent studies have explored the application of CBA to RIs in physics. Moreover, the European Commission, the European Strategy Forum on Research Infrastructures, the European Investment Bank, and some national authorities suggest that the study of social benefits and costs of RIs should be part of the process leading to funding decisions.
Article
Maarten Boonekamp and Matthias Schott
With the huge success of quantum electrodynamics (QED) to describe electromagnetic interactions in nature, several attempts have been made to extend the concept of gauge theories to the other known fundamental interactions. It was realized in the late 1960s that electromagnetic and weak interactions can be described by a single unified gauge theory. In addition to the photon, the single mediator of the electromagnetic interaction, this theory predicted new, heavy particles responsible for the weak interaction, namely the W and the Z bosons. A scalar field, the Higgs field, was introduced to generate their mass.
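In standard notation (not spelled out in the summary above), the tree-level masses generated by the Higgs mechanism are

```latex
m_W = \tfrac{1}{2} g v, \qquad
m_Z = \tfrac{1}{2} v\sqrt{g^{2} + g'^{2}} = \frac{m_W}{\cos\theta_W}, \qquad
m_\gamma = 0,
```

where v ≈ 246 GeV is the Higgs-field vacuum expectation value, g and g′ are the SU(2) and U(1) gauge couplings, and θ_W is the weak mixing angle.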
The discovery of the mediators of the weak interaction in 1983, at the European Center for Nuclear Research (CERN), marked a breakthrough in fundamental physics and opened the door to more precise tests of the Standard Model. Subsequent measurements of the weak boson properties allowed the mass of the top quark and of the Higgs boson to be predicted before their discovery. Nowadays, these measurements are used to further probe the consistency of the Standard Model, and to place constraints on theories attempting to answer still open questions in physics, such as the presence of dark matter in the universe or unification of the electroweak and strong interactions with gravity.
Article
Helge Kragh
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Physics. Please check back later for the full article.
Whereas philosophers and astronomers have always been interested in the universe at large, cosmology as a physical science is of relatively recent origin. Two roots of so-called modern cosmology can be identified, one observational and the other theoretical. The observational root was the spectroscopic study of distant nebulae in the late 19th century. The other root, purely theoretical in nature, was Einstein’s cosmological model of 1917 based on general relativity. For a long time, the two traditions developed independently, but with the recognition of the expanding universe around 1930 they merged into one.
Physical cosmology in more or less the modern sense of the term relied on quantum mechanics and advances in nuclear physics, which provided the basis for the first theories of the very early universe established in the late 1940s. During the following decade, the new theory of the hot big bang met stiff competition from the rival steady state theory. However, in 1965, largely as a result of the discovery of the cosmic microwave background, the latter theory was abandoned or at least marginalized. Five years later, the standard hot big bang theory of the universe had obtained an almost paradigmatic status in the small but growing community of cosmologists. Modern cosmology had come into being. This was only a beginning, and over the next decades the paradigm (if so it was) continued to be refined and on occasion even questioned.
Article
Kei Koizumi
Large-scale U.S. government support of scientific research began in World War II with physics, and rapidly expanded in the postwar era to contribute strongly to the United States’ emergence as the world’s leading scientific and economic superpower in the latter half of the 20th century. Vannevar Bush, who directed President Franklin Roosevelt’s World War II science efforts, in the closing days of the War advocated forcefully for U.S. government funding of scientific research to continue even in peacetime to support three important government missions of national security, health, and the economy. He also argued forcefully for the importance of basic research supported by the federal government but steered and guided by the scientific community. This vision guided an expanding role for the U.S. government in supporting research not only at government laboratories but also in non-government institutions, especially universities.
Although internationally comparable data are difficult to obtain, the U.S. government appears to be the single largest national funder of physics research. U.S. government support of physics research comes from many different federal departments and agencies. Federal agencies also invest in experimental development based on research discoveries of physics. The Department of Energy’s (DOE) Office of Science is by far the dominant supporter of physics research in the United States, and DOE’s national laboratories are the dominant performers of U.S. government-supported physics research. Since the 1970s, U.S. government support of physics research has been stagnant, with the greatest growth in U.S. government research support having shifted since the 1990s to the life sciences and computer sciences.
Article
Zhirong Huang
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Physics. Please check back later for the full article.
Free electron lasers (FELs) are coherent radiation sources based on radiation from “free” relativistic electrons rather than electrons bound in atomic and molecular systems. FELs can, in principle, operate at any arbitrary wavelength, limited only by the energy and quality of the electron beam that is produced by accelerators. Therefore, FELs can be used to fill gaps in regions of the electromagnetic spectrum where no other coherent sources exist and can provide radiation of very high power and extreme brightness. More than 50 FELs have been built around the world, serving a diverse array of scientific fields and applications.
FELs are based on the resonant interaction of a high-quality electron beam with the radiation in a periodic magnetic device called an “undulator” and can have several operating modes. FEL oscillators use optical cavities to trap the radiation, so that the field is built up over many amplification passes through the undulator. FELs can also act as linear amplifiers that magnify external radiation whose central frequency is close to the undulator resonance condition. Without any external signal, self-amplified spontaneous emission (SASE) can be used to generate intense coherent radiation starting from electron shot noise and is the most common approach for X-ray FELs. SASE has limited temporal coherence and pulse stability due to its noisy start-up but is very flexible in generating ultrashort X-ray pulses down to attosecond durations. Various advanced schemes aiming at achieving fully coherent, stable X-ray pulses have been proposed and are actively being investigated and developed.
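For reference, the undulator resonance condition mentioned above takes the familiar on-axis form for a planar undulator (symbols defined here for illustration):

```latex
\lambda_r = \frac{\lambda_u}{2\gamma^{2}}\left(1 + \frac{K^{2}}{2}\right),
\qquad K = \frac{e B_{0}\lambda_u}{2\pi m_e c},
```

where λ_u is the undulator period, γ the electron Lorentz factor, B_0 the peak undulator field, and K the dimensionless undulator parameter.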
Article
Mauro Migliorati
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Physics. Please check back later for the full article.
Modern particle accelerators require ever higher currents to meet user demands, both for high energy physics experiments and for other applications, such as FLASH therapy, an innovation in radiation therapy in which short pulses of electrons at very high dose rates are required. These high currents interacting with the accelerators’ environment produce strong self-induced electromagnetic fields that perturb the external fields used to guide and accelerate the charged particles.
Under certain conditions, these perturbations can be so large as to limit the accelerators’ performance, giving rise to unwanted effects such as uncontrolled beam oscillations or even instabilities. The beam self-induced fields are described in terms of the so-called wakefields and coupling impedances, two quantities that are used to evaluate their impact on beam dynamics and on instability thresholds.
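Schematically, and up to the sign and normalization conventions that differ between references, the coupling impedance is the Fourier transform of the corresponding wake function:

```latex
Z_{\parallel}(\omega) \;=\; \int_{-\infty}^{\infty} W_{\parallel}(\tau)\, e^{-i\omega\tau}\, d\tau,
```

so that wakefields describe the beam-induced fields in the time domain while impedances carry the same information in the frequency domain.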
It is therefore very important, in particular for high currents, to determine both the wakefields and coupling impedances generated by the interaction of the beam with the different machine devices and the corresponding induced instabilities. This is carried out with analytical approaches using simplified models or, in a more rigorous and realistic way, through simulation codes. A first step in this type of study is generally a complete electromagnetic characterization of the different accelerator devices and a search for possible minimization of wakefields and coupling impedances. Once these quantities are known, their effect on beam dynamics must be evaluated, and a proper machine working point, far away from any impedance-induced beam instabilities, needs to be determined.
Nowadays, as machine performance is pushed ever higher, new effects produced by wakefields and coupling impedances are being found; in many cases they are related to interference between different mechanisms that can no longer be studied separately. Finally, mitigation solutions, such as feedback systems, the use of nonlinearities, and other techniques, must also be investigated to provide different tools able to counteract possible unwanted beam-induced instabilities.
Article
Thomas Wiegelmann
Magnetohydrodynamic equilibria are time-independent solutions of the full magnetohydrodynamic (MHD) equations. An important class are static equilibria without plasma flow. They are described by the magnetohydrostatic equations
j × B = ∇p + ρ∇Ψ,  ∇ × B = μ₀ j,  ∇ · B = 0.

B is the magnetic field, j the electric current density, p the plasma pressure, ρ the mass density, Ψ the gravitational potential, and μ₀ the permeability of free space. Under equilibrium conditions, the Lorentz force j × B is compensated by the plasma pressure gradient force and the gravity force.
Despite the apparent simplicity of these equations, it is extremely difficult to find exact solutions due to their intrinsic nonlinearity. The problem is greatly simplified for effectively two-dimensional configurations with a translational or axial symmetry. The magnetohydrostatic (MHS) equations can then be transformed into a single nonlinear partial differential equation, the Grad–Shafranov equation. This approach is popular as a first approximation to model, for example, planetary magnetospheres, solar and stellar coronae, and astrophysical and fusion plasmas.
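For reference, in the axisymmetric, gravity-free case the Grad–Shafranov equation for the poloidal flux function ψ(R, Z) takes the standard form (conventions vary slightly between references):

```latex
R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\frac{\partial \psi}{\partial R}\right) + \frac{\partial^{2}\psi}{\partial Z^{2}}
= -\,\mu_0 R^{2}\,\frac{dp}{d\psi} - F\,\frac{dF}{d\psi},
```

where p(ψ) is the pressure and F(ψ) = R B_φ describes the toroidal field.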
For systems without symmetry, one has to solve the full equations in three dimensions, which requires numerically expensive computer programs. Boundary conditions for these systems can often be deduced from measurements. In several astrophysical plasmas (e.g., the solar corona), the magnetic pressure is orders of magnitude higher than the plasma pressure, which allows the plasma pressure to be neglected to lowest order. If gravity is also negligible, the force-balance equation above then implies a force-free equilibrium in which the Lorentz force vanishes.
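In that force-free limit, the equilibrium condition reduces to

```latex
\mathbf{j}\times\mathbf{B} = 0 \;\;\Longrightarrow\;\; \nabla\times\mathbf{B} = \alpha(\mathbf{r})\,\mathbf{B}, \qquad \mathbf{B}\cdot\nabla\alpha = 0,
```

with α = 0 corresponding to a potential field and constant α to a linear force-free field.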
Generalizations of MHS equilibria are stationary equilibria including a stationary plasma flow (e.g., stellar winds in astrophysics). It is also possible to compute MHD equilibria in rotating systems (e.g., rotating magnetospheres, rotating stellar coronae) by incorporating the centrifugal force. MHD equilibrium theory is useful for studying physical systems that slowly evolve in time. In this case, while one has an equilibrium at each time step, the configuration changes, often in response to temporal changes of the measured boundary conditions (e.g., the magnetic field of the Sun for modeling the corona) or of external sources (e.g., mass loading in planetary magnetospheres). Finally, MHD equilibria can be used as initial conditions for time-dependent MHD simulations. This article reviews the various analytical solutions and numerical techniques to compute MHD equilibria, as well as applications to the Sun, planetary magnetospheres, space, and laboratory plasmas.
Article
D. I. Pontin
Magnetic reconnection is a fundamental process that is important for the dynamical evolution of highly conducting plasmas throughout the Universe. In such highly conducting plasmas the magnetic topology is preserved as the plasma evolves, an idea encapsulated by Alfvén’s frozen flux theorem. In this context, “magnetic topology” is defined by the connectivity and linkage of magnetic field lines (streamlines of the magnetic induction) within the domain of interest, together with the connectivity of field lines between points on the domain boundary. The conservation of magnetic topology therefore implies that magnetic field lines cannot break or merge, but evolve only according to smooth deformations. In any real plasma the conductivity is finite, so that the magnetic topology is not preserved everywhere: magnetic reconnection is the process by which the field lines break and recombine, permitting a reconfiguration of the magnetic field. Due to the high conductivity, reconnection may occur only in small dissipation regions where the electric current density reaches extreme values. In many applications of interest, the change of magnetic topology facilitates a rapid conversion of stored magnetic energy into plasma thermal energy, bulk-kinetic energy, and energy of non-thermally accelerated particles. This energy conversion is associated with dynamic phenomena in plasmas throughout the Universe. Examples include flares and other energetic phenomena in the atmosphere of stars including the Sun, substorms in planetary magnetospheres, and disruptions that limit the magnetic confinement time of plasma in nuclear fusion devices. One of the major challenges in understanding reconnection is the extreme separation between the global system scale and the scale of the dissipation region within which the reconnection process itself takes place. Current understanding of reconnection has developed through mathematical and computational modeling as well as dedicated experiments in both the laboratory and space. Magnetohydrodynamic (MHD) reconnection is studied in the framework of magnetohydrodynamics, which is used to study plasmas (and liquid metals) in the continuum approximation.
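In the resistive-MHD description referred to above, the competition between field transport and diffusion is governed by the induction equation (written here for uniform resistivity):

```latex
\frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B}) + \eta\,\nabla^{2}\mathbf{B}, \qquad \eta = \frac{1}{\mu_0\sigma},
```

where σ is the electrical conductivity. In the ideal limit η → 0 the frozen-flux theorem holds and the magnetic topology is preserved; reconnection requires the diffusive term to become comparable to the advective term inside thin current layers, which happens only where the current density is extreme.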
Article
E.R. Priest
Magnetohydrodynamics is sometimes called magneto-fluid dynamics or hydromagnetics and is referred to as MHD for short. It is the unification of two fields that were completely independent in the 19th and the first half of the 20th century, namely, electromagnetism and fluid mechanics. It describes the subtle and complex nonlinear interaction between magnetic fields and electrically conducting fluids, which include liquid metals as well as the ionized gases or plasmas that comprise most of the universe.
In places such as the Earth’s magnetosphere or the Sun’s outer atmosphere (the corona) where the magnetic field provides an important component of the free energy, MHD effects are responsible for much of the observed dynamic behavior, such as geomagnetic substorms, solar flares and huge eruptions from the Sun that dominate the Earth’s space weather. However, MHD is also of great importance in astrophysics, since many of the MHD processes that are observed in the laboratory or in the Sun and the magnetosphere also take place under different parameter regimes in more exotic cosmical objects such as active stars, accretion discs, and black holes.
The different aspects of MHD include determining the nature of: magnetic equilibria under a balance between magnetic forces, pressure gradients and gravity; MHD wave motions; magnetic instabilities; and the important process of magnetic reconnection for converting magnetic energy into other forms. In turn, these aspects play key roles in the fundamental astrophysical processes of magnetoconvection, magnetic flux emergence, star spots, plasma heating, stellar wind acceleration, stellar flares and eruptions, and the generation of magnetic fields by dynamo action.
Article
V.M. Nakariakov
Magnetohydrodynamic (MHD) waves represent one of the macroscopic processes responsible for the transfer of energy and information in plasmas. The existence of MHD waves is due to the elastic and compressible nature of the plasma and to the effect of the frozen-in magnetic field. Basic properties of MHD waves are examined in the ideal MHD approximation, including effects of plasma nonuniformity and nonlinearity. In a uniform medium, there are four types of MHD wave or mode: the incompressive Alfvén wave, compressive fast and slow magnetoacoustic waves, and non-propagating entropy waves. MHD waves are essentially anisotropic, with properties highly dependent on the direction of the wave vector with respect to the equilibrium magnetic field. All of these waves are dispersionless. A nonuniformity of the plasma may act as an MHD waveguide, which is exemplified by a field-aligned plasma cylinder that has a number of dispersive MHD modes with different properties. In addition, a smooth nonuniformity of the Alfvén speed across the field leads to mode coupling, the appearance of the Alfvén continuum, and Alfvén wave phase mixing. Interaction and self-interaction of weakly nonlinear MHD waves are discussed in terms of evolutionary equations. Applications of MHD wave theory are illustrated by kink and longitudinal waves in the corona of the Sun.
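For a uniform plasma, the characteristic speeds governing these modes are

```latex
v_A = \frac{B}{\sqrt{\mu_0 \rho}}, \qquad c_s = \sqrt{\frac{\gamma p}{\rho}}, \qquad
v_{\pm}^{2} = \tfrac{1}{2}\left[(c_s^{2}+v_A^{2}) \pm \sqrt{(c_s^{2}+v_A^{2})^{2} - 4\,c_s^{2} v_A^{2}\cos^{2}\theta}\,\right],
```

where θ is the angle between the wave vector and the equilibrium magnetic field; the Alfvén wave propagates with phase speed v_A cos θ, while v_+ and v_- are the fast and slow magnetoacoustic phase speeds. The strong θ dependence is the anisotropy mentioned above.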
Article
Tzu-Chieh Wei
Measurement-based quantum computation is a framework of quantum computation, where entanglement is used as a resource and local measurements on qubits are used to drive the computation. It originates from the one-way quantum computer of Raussendorf and Briegel, who introduced the so-called cluster state as the underlying entangled resource state and showed that any quantum circuit could be executed by performing only local measurements on individual qubits. The randomness in the measurement outcomes can be dealt with by adapting future measurement axes so that the computation is deterministic. Subsequent works have expanded the discussion of measurement-based quantum computation to various subjects, including the quantification of entanglement for such a measurement-based scheme, the search for other resource states beyond cluster states, and computational phases of matter. In addition, the measurement-based framework also provides useful connections to the emergence of time ordering, computational complexity and classical spin models, blind quantum computation, and so on, and has given an alternative, resource-efficient approach to implementing the original linear-optical quantum computation of Knill, Laflamme, and Milburn. Cluster states and a few other resource states have been created experimentally in various physical systems, and the measurement-based approach offers a potential alternative to the standard circuit approach to realize a practical quantum computer.
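A minimal numpy sketch of the elementary step behind this scheme (an illustration written for this summary, not any library's API): measuring the first qubit of a two-qubit cluster state at an angle θ implements the gate X^s H R_z(θ) on the input, where s is the random measurement outcome that must be corrected adaptively.

```python
import numpy as np

# Illustrative sketch: one teleportation step of measurement-based computation.
# Input state on qubit 1, ancilla |+> on qubit 2, entangle with CZ, then measure
# qubit 1 in the basis (|0> + (-1)^s e^{-i theta}|1>)/sqrt(2). The output on
# qubit 2 equals X^s H Rz(theta) applied to the input (up to normalization).

def mbqc_step(psi_in, theta, outcome):
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(psi_in, plus)                 # qubit 1 = input, qubit 2 = |+>
    CZ = np.diag([1, 1, 1, -1]).astype(complex)   # cluster-state bond
    state = CZ @ state
    m = np.array([1, (-1) ** outcome * np.exp(-1j * theta)]) / np.sqrt(2)
    out = m.conj() @ state.reshape(2, 2)          # project qubit 1, keep qubit 2
    return out / np.linalg.norm(out)

theta = 0.7
psi = np.array([0.6, 0.8], dtype=complex)         # arbitrary input state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Rz = np.diag([1, np.exp(1j * theta)])
for s in (0, 1):                                  # both possible outcomes
    out = mbqc_step(psi, theta, s)
    expected = np.linalg.matrix_power(X, s) @ H @ Rz @ psi
    expected = expected / np.linalg.norm(expected)
    assert np.isclose(abs(np.vdot(expected, out)), 1.0)   # equal up to phase
print("Measurement at angle theta implements X^s H Rz(theta), as expected.")
```

Chaining such steps, with later measurement angles adapted to earlier outcomes, is what makes the overall computation deterministic.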
Article
Julia M. Yeomans
Growth, motion, morphogenesis, and self-organization are features common to all biological systems. Harnessing chemical energy allows biological cells to function out of thermodynamic equilibrium and to alter their number, size, shape, and location. For example, the zygote that results when a mammalian egg and sperm cell fuse divides to form a ball of cells, the blastocyst. Collective cell migrations and remodeling then drive the tissue folding that determines how cells are positioned before they differentiate to grow into the stunning diversity of different living creatures. The development of organoids and tumors is controlled by the confining properties of the viscous extracellular matrix that surrounds tissues; wounds heal through the collective motion of cell layers; and escape from a surface layer into the third dimension determines the growth of biofilms.
The relevance of stresses, forces, and flows in these processes is clear and forms the basis of the interdisciplinary science of mechanobiology, which draws on knowledge from physics, engineering, and biochemistry to ask how cells organize their internal components, how they move, grow and divide, and how they interact mechanically with each other and with their surroundings. This approach to biological processes is particularly timely, both because of experimental advances exploiting soft lithography and enhanced imaging techniques, and because of progress in the theories of active matter, which is leading to new ways to describe the collective dynamical behavior of systems out of thermodynamic equilibrium. Identifying stresses, forces, and flows, and describing how they act, may be key to unifying research on the underlying molecular mechanisms and to interpreting a wealth of disparate data to understand biological self-organization from molecular to tissue scales.
Article
Elena Khomenko
Multi-fluid magnetohydrodynamics is an extension of classical magnetohydrodynamics that allows a simplified treatment of plasmas with complex chemical mixtures. The types of plasma susceptible to multi-fluid effects are those containing particles with properties significantly different from those of the rest of the plasma in either mass or electric charge, such as neutral particles, molecules, or dust grains. In astrophysics, multi-fluid magnetohydrodynamics is relevant for planetary ionospheres and magnetospheres, the interstellar medium, and the formation of stars and planets, as well as in the atmospheres of cool stars such as the Sun. Traditionally, magnetohydrodynamics has been a classical approximation in many astrophysical and physical applications. Magnetohydrodynamics works well in dense plasmas where the typical plasma scales (e.g., cyclotron frequencies, Larmor radius) are significantly smaller than the scales of the processes under study. Nevertheless, when plasma components are not well coupled by collisions, it is necessary to replace single-fluid magnetohydrodynamics by multi-fluid theory. The present article provides a description of environments in which a multi-fluid treatment is necessary and describes modifications to the magnetohydrodynamic equations that are necessary to treat non-ideal plasmas. It also summarizes the physical consequences of major multi-fluid non-ideal magnetohydrodynamic effects, including ambipolar diffusion, the Hall effect, the battery effect, and other intrinsically multi-fluid effects. Multi-fluid theory is an intermediate step between magnetohydrodynamics, dealing with the collective behaviour of an ensemble of particles, and a kinetic approach, where the statistics of particle distributions are studied. The main assumption of multi-fluid theory is that each individual ensemble of particles behaves like a fluid, interacting via collisions with other particle ensembles, such as those belonging to different chemical species or ionization states. Collisional interaction creates a relative macroscopic motion between different plasma components, which, on larger scales, results in the non-ideal behaviour of such plasmas. The non-ideal effects discussed here manifest themselves in plasmas at relatively low temperatures and low densities.
Article
Martin Freer
The ability to model the nature of the strong interaction at the nuclear scale using ab initio approaches and the development of high-performance computing is allowing a greater understanding of the details of the structure of light nuclei. The nature of the nucleon–nucleon interaction is such that it promotes the creation of clusters, mainly α-particles, inside the nuclear medium. The emergence of these clusters and understanding the resultant structures they create has been a long-standing area of study. At low excitation energies, close to the ground state, there is a strong connection between symmetries associated with mean-field, single-particle behavior and the geometric arrangement of the clusters, while at higher excitation energies, when the cluster decay threshold is reached, there is a transition to a more gas-like cluster behavior. State-of-the-art calculations now guide the thinking in these two regimes, but there are some key underpinning principles that they reflect. Building from the simple ideas to the state of the art, a thread is created by which the more complex calculations have a foundation, developing a description of the evolution of clustering from α-particle to 16O clusters.
Article
Wayne C. Myrvold
Thermodynamics gives rise to a number of conceptual issues that have been explored by both physicists and philosophers. One source of contention is the nature of thermodynamics itself. Is it what physicists these days would call a resource theory, that is, a theory about how agents with limited means of manipulating a physical system can exploit its physical properties to achieve desired ends, or is it a theory of the basic properties of matter, independent of considerations of manipulation and control? Another source of contention is the relation between thermodynamics and statistical mechanics. It has been recognized since the 1870s that the laws of thermodynamics, as originally conceived, cannot be strictly correct. Because of fluctuations at the molecular level, processes forbidden by the original version of the second law of thermodynamics are continually occurring. The original version of the second law is to be replaced with a probabilistic version, according to which large-scale violations of the original second law are not impossible but merely highly improbable, while small-scale violations are unpredictable and cannot be harnessed to systematically produce useful work. The introduction of probability talk raises the question of how we should conceive of probabilities in the context of deterministic physical laws.
Article
Quantum Mechanics is one of the most successful theories of nature. It accounts for all known properties of matter and light, and it does so with an unprecedented level of accuracy. On top of this, it generated many new technologies that now are part of daily life. In many ways, it can be said that we live in a quantum world. Yet, quantum theory is subject to an intense debate about its meaning as a theory of nature, which started from the very beginning and has never ended. The essence was captured by Schrödinger with the cat paradox: why do cats behave classically instead of being quantum like the one imagined by Schrödinger? Answering this question digs deep into the foundation of quantum mechanics.
A possible answer is Dynamical Collapse Theories. The fundamental assumption is that the Schrödinger equation, which is supposed to govern all quantum phenomena (at the non-relativistic level), is only approximately correct. It is an approximation of a nonlinear and stochastic dynamics, according to which the wave functions of microscopic objects can be in a superposition of different states because the nonlinear effects are negligible, while those of macroscopic objects are always very well localized in space because the nonlinear effects dominate for increasingly massive systems. Then, microscopic systems behave quantum mechanically, while macroscopic ones such as Schrödinger’s cat behave classically simply because the (newly postulated) laws of nature say so.
By changing the dynamics, collapse theories make predictions that differ from quantum-mechanical predictions, so it becomes interesting to test the various collapse models that have been proposed. Experimental efforts are increasing worldwide; so far, since no collapse signal has been detected, they have only placed limits on the theory’s parameters quantifying the collapse, but in the future they may find such a signal and open up a window beyond quantum theory.
Article
The development of physics over the past few centuries has increasingly enabled the development of numerous technologies that have revolutionized society. In the 17th century, Newton built on the results of Galileo and Descartes to start the quantitative science of mechanics. The fields of thermodynamics and electromagnetism were developed more gradually in the 18th and 19th centuries. Of the big physics breakthroughs in the 20th century, quantum mechanics has most clearly led to the widest range of new technologies. New scientific discovery and its conversion to technology, enabling new products, is typically a complex process. From an industry perspective, it is addressed through various R&D strategies, particularly those focused on optimization of return on investment (ROI) and the associated risk management. The evolution of such strategies has been driven by many diverse factors and related trends, including international markets, government policies, and scientific breakthroughs. As a result, many technology-creation initiatives have been based on various types of partnerships between industry, academia, and/or governments. Specific strategies guiding such partnerships are best understood in terms of how they have been developed and implemented within a particular industry. As a consequence, it is useful to consider case studies of strategic R&D partnerships involving the semiconductor industry, which provides a number of instructive examples illustrating strategies that have been successful over decades. There is a large quantity of literature on this subject, in books, journal articles, and online.