The Philosophical Significance of Decoherence

  • Elise Crull, The City College of New York & CUNY Graduate Center

Summary

Quantum decoherence is a physical process resulting from the entanglement of a system with environmental degrees of freedom. The entanglement allows the environment to behave like a measuring device on the initial system, resulting in the dynamical suppression of interference terms in mutually commuting bases. Because decoherence processes are extremely fast and often practically irreversible, measurements performed on the system after system–environment interactions typically yield outcomes empirically indistinguishable from physical collapse of the wave function. That is: environmental decoherence of a system’s phase relations produces effective eigenstates of a system in certain bases (depending on the details of the interaction) through prodigious damping—but not destruction—of the system’s off-diagonal terms in those bases.

Although decoherence by itself is neither an interpretation of quantum physics nor indeed even new physics, there is much debate concerning the implications of this process in both the philosophical and the scientific literature. This is especially true regarding fundamental questions arising from quantum theory about the roles of measurement, observation, the nature of entanglement, and the emergence of classicality. In particular, acknowledging the part decoherence plays in interpretations of quantum mechanics recasts that debate in a new light.

Subjects

  • Physics and Philosophy
  • Quantum Information

1 Introduction

Quantum decoherence is a physical process resulting from the entanglement of a system (not necessarily microscopic) with environmental degrees of freedom. This entanglement allows the environment to dynamically suppress interference terms in the basis or bases of the system that commute with the system–environment interaction. Because decoherence processes are extremely fast and often practically irreversible, measurements performed on an initial system after environmental interaction typically yield outcomes empirically indistinguishable from those of a wave function that has collapsed.

Decoherence is worth understanding from a physical perspective in part because it has proven to be a powerful tool for effectively isolating individual components of complex systems. For example, decoherence models have enabled chemists to study individual DNA base pairs by effectively isolating them from various specified environments (e.g., the remainder of the DNA double helix itself, or the double helix together with additional external influences), something that was previously impossible (Zilly et al., 2010). An example from the life sciences involves light-harvesting and energy-transfer mechanisms in photosynthetic organisms, which require sustained coherence despite noisy biological environments. Decoherence modeling has enabled biochemists to explain why coherence persists during these processes for longer than semiclassical models predicted (cf. Wilde et al., 2010). Unsurprisingly, decoherence models have proved exceptionally rich fodder for theoretical physics, although the literature is too vast and too quickly expanding to cite fruitfully here. Clearly, the fecundity of the models across scientific disciplines indicates the importance of understanding this process.

The prodigious growth of decoherence studies has drawn the attention of scholars working in philosophical foundations of physics. In particular, recent years have witnessed a concerted effort by proponents of various interpretations of quantum theory to incorporate decoherence processes into the explanatory package of that interpretation. In addition, philosophers have begun looking to decoherence to explain certain long-standing questions arising from quantum physics. In what follows, it is shown that decoherence plays an essential explanatory role in understanding the preferred basis problem, various classical limit theorems, the strange lack of macroscopic superpositions, and aspects of the measurement problem. Tackling these questions makes it clear that while decoherence processes must certainly be accounted for in any approach to quantum theory, various interpretations invoke this physics in different contexts, using different formal methods, and with different motivations. It is suggested (not for the first time) that when decoherence is taken into account properly and in full measure in these interpretations, their relative merits and demerits can be assessed in a new light.

Decoherence is both ubiquitous and powerful; understanding its theoretical and experimental import is a crucial component of any general discussion—philosophical or otherwise—about systems and their interactions with an environment. For instance, as decoherence is one of entanglement’s measurable effects, the successful application of increasingly sophisticated decoherence models promises substantial gains in understanding this feature of quantum theories. Unfortunately, much of the gain has been muddied by unsubstantiated generalizations regarding the process, along with widespread equivocation between quantum decoherence rightly understood as a strictly physical process and wrongly understood as a distinct interpretation of quantum mechanics. While the temptation to make generalizations is ever present in the former case, it must be fiercely resisted, as the dynamics of decoherence are highly sensitive to details about the system, the environment, and their interaction. Hence the aim in what follows is to be as clear as possible about what the physical process of decoherence may or may not imply more generally about the behavior of systems interacting with environments.

Section 2 briefly describes the basic physics of decoherence along with two formalisms often used for it: the reduced density matrix formalism and the restricted path-integrals formalism. Section 3 explores whether and how decoherence may resolve certain foundational questions, such as the emergence of classicality, the problem of the preferred basis, and the measurement problem. Section 4 focuses on the role of decoherence in realist interpretations of quantum mechanics.

2 The Physics of Decoherence

A minimal definition of decoherence involves the recognition that environmental interactions essentially suppress quantum interference phenomena by means other than mere mechanical perturbation or thermal dissipation. A more careful definition of decoherence requires the admission not only that no quantum system is perfectly closed (at least not for long), but also that these dynamics are the direct result of quantum interactions (i.e., system–environment entanglement).

The conditions needed for decoherence to occur are minimal. In the Hilbert space formalism, they amount to three rather pedestrian postulates and one empirical fact:

1. Pure and mixed states can be represented by density operators in a Hilbert space.

2. Observables can be represented by self-adjoint operators acting in that space.

3. The Schrödinger equation unitarily and deterministically describes the evolution of closed systems.

4. Quantum interactions are ubiquitous, and will generally result in entanglement.1

These conditions can be weakened further. Regarding the first, mixed states can be represented otherwise, such as by using restricted path integrals (described in Section 2.2). Regarding the second, note that this proposition isn’t strictly necessary. It is certainly natural to associate (usually classical, macroscopic) observables with the class of operators taking only positive values and projecting the system’s state onto distinguishable, orthonormal bases in the Hilbert space. Notions of usefulness and naturalness, while excellent as guides, are problematic as dogma. The operator formalism is designed to correlate measured eigenvalues with what are postulated (in line with classical assumptions) to be definite physical properties of a system. Add to these postulates the ubiquity of quantum interactions leading to widespread entanglement and one has the ingredients for decoherence.

An initial description of the physics is as follows. The well-known, idealized von Neumann measurement scheme gives for initial system S with basis vectors $|s_n\rangle$ and measuring apparatus A with macroscopically distinguishable basis vectors $|a_n\rangle$ (letting the initial state of the apparatus prior to measurement be denoted $|a_0\rangle$), the following evolution:

$$\Big(\sum_n c_n |s_n\rangle\Big)|a_0\rangle \;\longrightarrow\; \sum_n c_n |s_n\rangle |a_n\rangle. \quad (1)$$

The system and apparatus are now coupled, but since the Schmidt decomposition theorem allows the resultant composite state to be written in a potentially unique orthonormal basis, it could be argued that the dynamical soil is not yet rich enough to give rise to decoherence, since there are no cross terms in such a basis. However, expressing the composite state as a diagonalized matrix in some basis does not necessarily imply the nonexistence of interference terms in all bases. Removing the assumption that the system-plus-apparatus composite is perfectly isolated and incorporating environmental states (representing the initially uncoupled environmental state as $|e_0\rangle$) into the measurement scheme makes this obvious. This yields:

$$\Big(\sum_n c_n |s_n\rangle\Big)|a_0\rangle|e_0\rangle \;\longrightarrow\; \Big(\sum_n c_n |s_n\rangle|a_n\rangle\Big)|e_0\rangle \;\longrightarrow\; \sum_n c_n |s_n\rangle|a_n\rangle|e_n\rangle, \quad (2)$$

a composite system upon which, in general, no Schmidt decomposition can be carried out. Taking seriously the influence of environmental degrees of freedom (i.e., degrees of freedom external to the system-plus-apparatus composite, which may even include, to mention Borel’s famous example, those of a small mass out near Sirius), the recovery of information about the initial system will require different mathematical tools. In particular, numerous unobserved environmental degrees of freedom will need to be separated out as completely as possible, and this is commonly done using the reduced density matrix formalism.

2.1 Reduced Density Matrix Formalism

The most commonly employed formalism for decoherence in both theory and experiment is that of the reduced density matrix (RDM); this section (as well as the next) focuses on assumptions and other moments where interpretations of formal results enter without presenting the full details of the formalism itself.2

While density matrices express complete statistical information about a given state, they are incomplete representations of other crucial physical details. It is often true that precise knowledge about the system’s pre-interaction state is lacking. In such cases, one should refrain from making assumptions about the nature of the system based solely on the appearance of the density matrix, because formally identical matrices (e.g., one representing a mixed state) may describe distinct physical situations (e.g., an improper vs. proper mixture), and of course the given matrix can always be written down in a different basis. Thus, density matrices alone do not give sufficient information to substantiate claims about the actual state of the system. Such a claim would require the invocation of a particular interpretation of quantum mechanics, and it will be useful to see how far the formalism alone can go toward providing dynamical explanations of certain phenomena.

In the density matrix formalism, expectation values are recovered by taking the trace (“Tr”) of the operator formed by the product of the system’s density matrix $\hat{\rho}$ and a system observable $\hat{O}$: $\langle \hat{O} \rangle = \mathrm{Tr}(\hat{\rho}\hat{O})$. A partial trace operation is then defined that will be useful for dealing with composite systems. A partial trace is taken over just one subspace of the joint Hilbert space instead of over the entire Hilbert space (which is often impossible). This operation yields an RDM representing complete local measurement statistics for the other subsystem.

Consider again the system S with basis vectors $|s_n\rangle$ in Hilbert space $\mathcal{H}_S$, now interacting with an environment E whose basis vectors $|e_n\rangle$ live in Hilbert space $\mathcal{H}_E$. Calculating expectation values by carrying out a trace over the total density matrix $\hat{\rho}$ in the joint Hilbert space $\mathcal{H}_S \otimes \mathcal{H}_E$ is unmanageable, so instead a partial trace $\mathrm{Tr}_E$ is performed over environmental degrees of freedom in an orthonormal basis of $\mathcal{H}_E$ alone, producing an RDM of the initial system, $\hat{\rho}_S = \mathrm{Tr}_E\,\hat{\rho}$.
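A minimal numerical sketch of this reduction step may help fix ideas (the two-dimensional system and environment, the amplitudes, and the environment overlap below are arbitrary illustrative choices, not a model of any particular experiment): it builds an entangled system–environment state of the form in equation (2), forms the total density matrix, and traces out the environment, showing that the system’s off-diagonal terms are suppressed in proportion to how distinguishable the environment states have become.

```python
import numpy as np

# Minimal sketch of the partial-trace step: build |psi> = sum_n c_n |s_n>|e_n> for a
# two-state system, form rho = |psi><psi|, and trace out the environment to obtain the
# RDM rho_S. The amplitudes c_n and the overlap <e_0|e_1> are illustrative choices.

c = np.array([1.0, 1.0]) / np.sqrt(2)     # system amplitudes c_0, c_1
overlap = 0.1                             # <e_0|e_1>: small => nearly perfect "which-path" record

# Two normalized environment states with the chosen overlap.
e0 = np.array([1.0, 0.0])
e1 = np.array([overlap, np.sqrt(1.0 - overlap**2)])

s0, s1 = np.eye(2)                        # system basis vectors |s_0>, |s_1>

psi = c[0] * np.kron(s0, e0) + c[1] * np.kron(s1, e1)   # entangled composite state
rho = np.outer(psi, psi.conj())                          # total density matrix

# Partial trace over the environment (axes 1 and 3 index the environment).
rho_S = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print("reduced density matrix of the system:\n", np.round(rho_S, 3))
# The off-diagonal element equals c_0 * conj(c_1) * <e_1|e_0> (here 0.5 * 0.1):
# as the environment states become orthogonal, the interference terms vanish.
```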

After interaction, the composite system continues to evolve with dynamics governed by the unitary group $\hat{U}(t)$ generated from the total Hamiltonian. However, the dynamics of the RDM are no longer unitary, for this subsystem is not closed. Indeed, evaluating $\hat{\rho}_S(t) = \mathrm{Tr}_E\big[\hat{U}(t)\,\hat{\rho}(0)\,\hat{U}^\dagger(t)\big]$ under various simplifying conditions (e.g., the Markov approximation) generates the master equations used for modeling decoherence processes like the differing rates of decoherence of a system’s position and momentum under the same environmental conditions.
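To see how a master equation expresses decoherence dynamically, here is a minimal sketch of a single-qubit pure-dephasing Lindblad equation, $\dot{\hat{\rho}}_S = -i[\hat{H}_S, \hat{\rho}_S] + \gamma(\hat{\sigma}_z \hat{\rho}_S \hat{\sigma}_z - \hat{\rho}_S)$, integrated with crude Euler steps (the Hamiltonian, dephasing rate, and time step are illustrative assumptions, not a model of any specific system–environment pair): the populations are untouched while the off-diagonal terms decay exponentially.

```python
import numpy as np

# Minimal sketch of a pure-dephasing Lindblad master equation for one qubit:
#   d(rho)/dt = -i [H, rho] + gamma * (sz @ rho @ sz - rho),   H = (omega/2) * sz.
# Parameter values and the Euler integrator are illustrative simplifications.

sz = np.diag([1.0, -1.0])
omega, gamma, dt, n_steps = 1.0, 0.5, 0.001, 4000
H = 0.5 * omega * sz

rho = 0.5 * np.ones((2, 2), dtype=complex)    # start in |+><+|: maximal coherence

for _ in range(n_steps):
    drho = -1j * (H @ rho - rho @ H) + gamma * (sz @ rho @ sz - rho)
    rho = rho + dt * drho                      # crude Euler step

t = n_steps * dt
print("populations (unchanged):", np.round(np.real(np.diag(rho)), 3))
print("|rho_01| after t =", t, ":", round(abs(rho[0, 1]), 4))
print("analytic value 0.5*exp(-2*gamma*t):", round(0.5 * np.exp(-2 * gamma * t), 4))
```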

Several philosophical comments regarding this formalism are in order. The use of density matrices is sometimes referred to as coarse-graining, because it is a move from fully known Hamiltonians to statistical ensembles. But in the present context, coarse-graining is something of a misnomer. Moving to the density matrix formalism does not introduce a loss of information: maximal statistical knowledge about the entangled composite system is still available from mixed-state density matrices. More cannot be said about the nature of composite systems based solely on their total density matrix without interpreting the probabilities giving rise to these statistics. However, when generating a reduced statistical operator—which, when written in a particular basis, yields the RDM for one subsystem—an additional piece of information is gained: this RDM must be non-pure, because it was derived explicitly from a density matrix of superpositions corresponding to an entangled composite.

The choice to form the RDM by tracing out a certain set of degrees of freedom rather than others—in short, how one defines one’s subsystems—is arbitrary. This is important: although the RDM formalism does indeed require that the composite system be partitioned by taking a partial trace (and the von Neumann entropy may change depending on the nature of this projection), how the division into “system” and “environment” is made by no means confers special meaning upon the resultant subsystems.3 In as much as the partitioning is arbitrary, it carries no philosophical weight.

It might be a cause of concern that, while the RDM formalism is a nice workaround for assuming a locally interpretable density operator in cases where coherence is a property only of the global system, it does this by introducing yet another assumption: the Born rule. It is true that the trace operation is intentionally designed to recover Born probabilities. However, it is not necessarily true that the trace operation’s dependence on the Born rule forces the adoption of a particular interpretational stance. As long as it is not required that measurement outcomes be actually ontologically definite—which is a significant step beyond considering them only apparently definite—then the Born rule can be understood as merely a tool for generating expectations and not a metaphysical claim requiring further justification.4

2.2 Restricted Path Integral Formalism

The restricted or reduced path integral (RPI) formalism was developed by Mensky (1979; see also Mensky, 1993) in analogy with Feynman’s path integral method for calculating nonrelativistic transition amplitudes (propagators). Mensky wanted in particular to develop a method for studying continuous quantum measurements not of the extraordinarily rapid kind induced by a typical noisy, microscopic environment, but continuous measurements of the kind leading to “gradual decoherence”—i.e., slower processes induced by mesoscopic environments (relevant for work in quantum optics) or affecting qubit coherence (relevant for work in quantum computing). Importantly, the RPI formalism is derivable from general principles regarding subsystem interactions and does not rely on any specific model of decoherence.

Briefly (and following Mensky, 2002), the RPI formalism is as follows. Feynman’s approach describes the probability amplitude for a system’s propagation along a path $[p, q]$ (momentum and configuration variables) in its phase space during the time interval $[t_0, t_1]$ as:

$$U[p,q] \;=\; \exp\left\{ i \int_{t_0}^{t_1} dt\,\big( p\dot{q} - H(p,q,t) \big) \right\}, \quad (3)$$

where $H$ is the system Hamiltonian and $t_0 < t < t_1$. If the path actually taken by the system cannot be discovered, the total propagator between points $q_0$, $q_1$ in its configuration space is calculated by taking the integral over all possible paths $[p]$ along with all paths $[q]$ bounded by $q(t_0) = q_0$ and $q(t_1) = q_1$:

$$U(q_1, t_1;\, q_0, t_0) \;=\; \int d[p]\, d[q]\, \exp\left\{ i \int_{t_0}^{t_1} dt\,\big( p\dot{q} - H(p,q,t) \big) \right\}, \quad (4)$$

which satisfies Schrödinger’s equation. In the case of continuous measurement of the system between $t_0$ and $t_1$, information about the stable result of that measurement allows a probabilistic weighting over all paths to be introduced. This effectively reduces the total propagator (4) to that subset of paths that is consistent with the result, and this in turn yields the restricted path integral of the system. For measurement outcome $\alpha$, the system’s evolution in phase space is described by the corresponding partial propagators $U_\alpha$ and can be expressed in terms of a time-dependent density operator $\rho_\alpha(t_1) = U_\alpha\, \rho(t_0)\, U_\alpha^\dagger$. In cases where the final outcome is unknown, integration must take place over all possible results:

$$\rho(t_1) = \int d\alpha\; \rho_\alpha(t_1), \quad (5)$$

where the total density matrix is normalized just in case the system’s initial density matrix is normalized and $\int d\alpha\; U_\alpha^\dagger U_\alpha = 1$.

Mensky then applied this formalism to instances where the continuous measurement is the (environmental) monitoring of a (system) observable, noting that the small uncertainty in the value of the observable at a given time may be codified by representing the measurement result (the value of the observable) as a channel, or corridor, of proximate paths. The restricted path integral is therefore more appropriately interpreted as defining the relevant corridor (with fuzzy boundaries) for continuous monitoring of the observable. In this way, an RPI tracks with the interpretation of an RDM as providing statistical and not precise information about the state of the initial system under continuous monitoring.5
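The restriction step can be illustrated with a toy discrete path sum (a free particle on a small one-dimensional lattice with $\hbar = m = 1$ and a crude Gaussian corridor weight; these choices are illustrative simplifications, not Mensky’s full formalism): the unrestricted sum plays the role of the total propagator (4), while weighting each path by how well it stays inside a corridor around the monitored value plays the role of the restricted path integral.

```python
import itertools
import numpy as np

# Toy sketch of a restricted path sum: a free particle on a small 1-D lattice with
# hbar = m = 1. Paths run between fixed endpoints over a few time segments; the
# "corridor" weight is a crude Gaussian favoring paths near a monitored value.

x_grid = np.arange(-3, 4)      # allowed positions at each intermediate time slice
n_steps = 3                    # number of time segments between t0 and t1
dt = 1.0
x0, x1 = 0, 0                  # fixed endpoints q(t0) = q0 and q(t1) = q1

def amplitude(path):
    """exp(i * S) with the discretized free-particle action S = sum (dx/dt)^2 / 2 * dt."""
    xs = np.array([x0, *path, x1], dtype=float)
    action = np.sum(0.5 * ((xs[1:] - xs[:-1]) / dt) ** 2 * dt)
    return np.exp(1j * action)

def corridor_weight(path, center=0.0, width=1.5):
    """Gaussian weighting that favors paths staying near the monitored value."""
    xs = np.array(path, dtype=float)
    return np.exp(-np.sum((xs - center) ** 2) / (2 * width ** 2))

paths = list(itertools.product(x_grid, repeat=n_steps - 1))   # intermediate positions

U_total = sum(amplitude(p) for p in paths)                             # analogue of Eq. (4)
U_restricted = sum(corridor_weight(p) * amplitude(p) for p in paths)   # restricted propagator

print("unrestricted propagator:       ", np.round(U_total, 3))
print("corridor-restricted propagator:", np.round(U_restricted, 3))
```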

Given that decoherence comes about when the fiction of a closed system is corrected, both frameworks need to provide a means for later extracting information about the decohered state of one subsystem from the coherent composite system. Although the endgame of both formalisms is similar in this respect, there are a few differences of potential philosophical import worth mentioning. For one, while the RDM formalism works in the Schrödinger picture (states, not operators, are time-dependent), the RPI formalism switches over to the Heisenberg picture (operators, not states, evolve in time). For this reason, the path integral method may appeal less to those interested in metaphysical questions. For example, it is harder to intuit the meaning of an evolving operator as opposed to an evolving state, and the act of integrating over all possible paths in order to discover one actualized path suggests a modal framework, which may or may not be metaphysically desirable.

In addition, although this presentation of the RPI method dealt with the continuous measurement of a single system observable, Mensky noted that the formalism may just as well be applied to an observable with multiple commuting components. This means that a full generalization of the method must treat also (more realistic) cases involving the continuous monitoring of multiple potentially noncommuting observables. However, as Mensky perhaps too understatedly remarked, in such cases “the definition of path integrals may need further elaboration” (p. 158n of Mensky, 2002) and this undoubtedly complicates the metaphysical picture. On the other hand, the RPI formalism may be more natural for those concerned with epistemic questions, as this method is couched explicitly in terms of information gain and loss regarding paths and worries little about the ontological meaning of (say) the fuzziness of quantum-corridor boundaries.

Another potential interpretational difference involves the invocation of various probability types. When dealing with mixed states, both formalisms utilize classical as well as quantum probability measures at the “reducing” step (obtaining the RDM from the total density matrix or the RPI from the total path integral). But of course, how these probabilities appear in each formalism is different. Each method relies on classical probability measures in as much as each incorporates coarse-graining techniques of the sort familiar from classical statistical mechanics, but the appearance of quantum probability measures looks quite different. In the RDM formalism, the total density matrix for a mixed state is simply a classically weighted ensemble of pure-state density matrices, and application of the trace operator yields the quantum probability measure by reproducing the expectation values for system observables (Schlosshauer, 2007, pp. 37–38). In the RPI formalism, the weighted functional restricting the total propagator to sets of possible paths α (corresponding to some continuously monitored observable dependent on p,q,t) reproduces a classical distribution of outcomes. However, since the evolution of an individual path in a corridor represents the propagation of probability amplitudes and not probabilities, it must be interpreted using quantum mechanics (Mensky, 2002, p. 164).

Last, the RPI approach is often promoted with the claim that unlike the RDM method, a system–environment split need not be imposed; this is part of what Mensky meant by saying that his formalism can be derived from general principles and is therefore neutral with respect to decoherence models (Mensky, 2002). However, as mentioned in the preceding section, although the division of system from environment is necessary for the RDM formalism, how this slicing is done is arbitrary. In fact, the choice of system is generally defensible by appeal to the very effectiveness of decoherence: the extreme rapidity and practical irreversibility of the process mean that the initial system may be considered effectively isolated ab initio. These “preselection” interactions result in dynamical stabilities for some degrees of freedom over others, which in turn promote certain definitions for the system as more natural than others. In principle, however, one could always implement the split otherwise, and although it is true that the RPI formalism does not require any system/environment cleaving, the sum of all possible paths of the system must still be restricted in accordance with information about the results of continuous measurements on that system, and if these results aren’t known, it must still be assumed that continuous monitoring (resulting in decoherence) has occurred in order to obtain the RPI in the first place. In short: both formalisms—and indeed all attempts to analyze physical systems mathematically—require a choice about how to define the said system. It just so happens that decoherence itself typically explains the apparent naturalness of certain definitions over others.

3 What Decoherence Explains

Decoherence is often considered the explanation for the effective collapse of the wave function, and as such goes a long way toward answering questions about the perceived classicality of the macroscopic world. This section describes in what way decoherence is called upon to answer such questions. They take general form as queries about the emergence of classicality from a fundamentally quantum world (see Section 3.1) and take more specific form as queries about the lack of macroscopic superpositions (see Section 3.2), the privileging of certain bases like the pointer basis (see Section 3.3), and of course the measurement problem (see Section 3.4).

But first, a caveat regarding the crucial qualifier, effective. Decoherence is often mistakenly associated with the collapse of the wave function, or the elimination of interference terms. It is neither of these things. It only apparently collapses the wave function through the prodigious and rapid suppression of interference terms; it does not eliminate these terms altogether (and so result in a legitimate wave function collapse). To illustrate this, consider studies of coherence revival times. Somewhat analogous to Poincaré recurrences, coherence revivals happen when the original system regains phase coherence subsequent to environmentally induced decoherence. It has been possible to measure coherence revivals in cases where a single scattering event is sufficient to resolve spatial information about the system, as was done by Kokorowski et al. (2001). However, after a sufficient number of photons has become entangled with the system (as in cases of saturation), revivals of coherence become effectively unobservable due to the fact that system information leaked into the environment becomes increasingly difficult to retrieve. Indeed, obtaining recurrence times short enough to allow for observation requires extraordinarily highly ordered initial states; for this reason, the vast majority of naturally occurring systems have recurrence times on the order of several lifetimes of the universe (see Schlosshauer, 2007). Such experiments underscore the effectiveness of decoherence’s damping of interference terms, while at the same time confirming that decoherence has not destroyed them beyond recall.

3.1 The Emergence of Classicality

The total Hamiltonian for the general case of a macroscopic system interacting with an uncontrolled environment will exhibit strong position dependence owing to the prevalence of 1/r potentials in the interaction Hamiltonian. Since the set of mutually commuting system–environment observables is translational, the environment is able to monitor the system in translational bases. This causes interference terms among superposed position states of the system to decohere rapidly and effectively; measurements of the system’s position after even minimal environmental interaction will report back eigenstates, as only eigenstates effectively remain. Hence “classicality” (here read as: the apparent definiteness of macroscopic objects with respect to position) is said to emerge from decoherence.

Two notes are in order. First, while decoherence explains the appearance of “classical” behavior, this is not all that it explains. Indeed, if decoherence suppressed all entanglement beyond recall, how would the results of Bell-type tests and phenomena like Schrödinger kittens be explained? Instead, the effects of decoherence—ergo the explanations this process countenances—depend on the details of the system, the environment, and their interaction. Instead of the everyday case just described, let the system’s internal dynamics dominate the total Hamiltonian; the environment’s internal dynamics and the interaction strength are negligible. A case of this kind was first studied by Paz and Zurek (1999), who found that since quantum systems typically have strongly energy-dependent dynamics, it is in the energy basis that such a system will become most effectively decohered (giving rise to the measurement of apparent energy eigenstates) when interacting with an environment at equilibrium.

Alternatively, consider the case described by Zurek (1981, 1982), where the total Hamiltonian is not dominated by any one term; that is, both system and environment have nontrivial intrinsic dynamics and the interaction is strong. The resultant entanglement is in the position and momentum bases, allowing for environmental monitoring and decoherence of the system in phase space (although note well that decoherence occurs at different rates in each basis; position states lose coherence slightly faster than momentum states). Decoherence then allows the system’s evolution to be modeled as a minimum uncertainty Gaussian tracing out a quasi-Newtonian trajectory in phase space.6 Here is yet another sense in which decoherence gives rise to the appearance of classicality, and this relates to the second note: as discussed elsewhere in more detail (Crull, 2011, Chapter 4, as well as Crull 2013, 2017), what is understood by classicality and thus how the classical and quantum domains are delimited varies significantly. This is expanded upon below, after citation of an additional, rather famous, example.

Consider the case of a maximally entangled EPR pair in a singlet state. While such a state will maintain coherence with respect to certain bases long enough for entanglement to be measured (indeed, they are engineered with precisely this intention), decoherence may be occurring in another basis, and may even occur in the basis of maximal entanglement given the right environment. For example, let each particle of an EPR pair, initially in the singlet state, travel to its respective measuring device not in a controlled lab environment but instead in a heat bath modeled by an infinite set of independent harmonic oscillators—effectively a bosonic field. Bosonic fields are analogous to environments containing delocalized modes, and so are appropriate for extended-mode environments like collections of photons, phonons, and conduction electrons (Hines & Stamp, 2008, p. 543). That this environment is delocalized in the position basis is not due to prior interaction among modes; individual modes are assumed to be independent of one another and to interact only weakly with the system of interest. Then a Hamiltonian may be constructed where the environmental term is negligible for broad ranges of temperature and frequency. The self-dynamics of the EPR system can be modeled as a spin-1/2 system, that is, as a double-well potential with finite energy barrier. This will allow the use of the exactly solvable spin-boson decoherence model to study the evolution of the EPR state. No thermal dissipation is expected from the system–environment interaction, since the environment’s self-Hamiltonian commutes with neither the system’s Hamiltonian nor the interaction term (the latter two contain spin vectors). This is noteworthy, because it indicates that the dynamics observed cannot be due to thermal exchange. The master equation for this interaction is the Born-Markov equation, whose evolution reveals that an EPR pair in a bosonic field will decohere in the spin eigenbasis, asymptotically approaching the ground state. In other words: after a very short time in such an environment, even an initially maximally entangled spin state will become effectively decohered. Two observers, at their respective stations, will measure approximate spin eigenstates whose comparative values nevertheless defy classical statistical explanation.
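For a feel of how quickly such spin coherence is lost, the following sketch evaluates the decoherence factor of the exactly solvable pure-dephasing variant of the spin-boson model (a simplification of the scenario above: the ohmic spectral density, cutoff, temperature, and coupling strength are illustrative assumptions, and prefactor conventions vary across the literature), showing the off-diagonal spin element being driven rapidly toward zero.

```python
import numpy as np

# Sketch: decay of the spin off-diagonal element rho_01(t) in the exactly solvable
# pure-dephasing spin-boson model, rho_01(t) = rho_01(0) * exp(-Gamma(t)). A commonly
# quoted form (conventions differ by constant factors) is
#   Gamma(t) = 4 * sum_k |g_k|^2 / w_k^2 * coth(w_k / (2 T)) * (1 - cos(w_k * t)),
# here discretized with an ohmic spectral density J(w) = eta * w * exp(-w / w_c).
# All parameter values are illustrative, with hbar = k_B = 1.

eta, w_c, T = 0.1, 10.0, 5.0           # coupling strength, cutoff, temperature (assumed)
w = np.linspace(0.01, 8 * w_c, 4000)   # discretized bath mode frequencies
dw = w[1] - w[0]
g2 = eta * w * np.exp(-w / w_c) * dw   # effective |g_k|^2 from the discretized J(w)

def gamma(t):
    """Dephasing exponent Gamma(t) summed over the discretized bath modes."""
    return np.sum(4.0 * g2 / w**2 / np.tanh(w / (2.0 * T)) * (1.0 - np.cos(w * t)))

for t in [0.0, 0.1, 0.5, 1.0, 5.0]:
    print(f"t = {t:4.1f}   |rho_01(t)/rho_01(0)| ~ {np.exp(-gamma(t)):.3e}")
```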

Now, back to the concept of classicality. Here are but a few definitions frequently given in the physics literature. A system is classical (a) when its motion is Newtonian or quasi-Newtonian, as characterized by Ehrenfest’s theorem; (b) when its probability distribution shadows a classical probability distribution, as characterized by the Liouville regime; (c) as the quantum number n approaches infinity; (d) as Planck’s constant approaches zero; and (e) as mass approaches infinity. These definitions are individually insufficient to characterize the unwieldy quantum-to-classical border:

(i) That Ehrenfest’s theorem provides neither a necessary nor sufficient condition for defining the classical regime is the central thesis of Ballentine et al. (1994). It is unnecessary because not all definitions of classicality concern a system’s equations of motion. It is also insufficient because the theorem does not, strictly speaking, satisfy the correspondence principle: while in this limit the system is represented by a minimum uncertainty Gaussian in phase space, this Gaussian cannot ever become perfectly localized without introducing a physical collapse mechanism (and perhaps fails even then). And, although a Gaussian’s center of mass may trace out a near-Newtonian trajectory, this is only so if the (strictly speaking, incorrect) assumption is made that the system it represents is independent and linear. Aside from these minor issues, even in simple cases, Ehrenfest’s theorem fails to translate all quantum properties into classical ones (and vice versa). For example, the specific heat of a quantum harmonic oscillator never reaches classical values in this limit.

(ii) Although Liouville’s theorem enables the recovery of probability distributions for some quantum systems that are formally identical to classical statistical distributions, formal similarity does not necessarily signify ontological similarity. Since the quantum Liouville equation governs the evolution of a density matrix, the latter captures the complete statistics of a system’s state space but cannot reveal whether it represents an improper or proper mixture. If Liouville’s theorem cannot determine whether the system of interest is in a superposition of states or whether it indeed occupies a single eigenstate that is unknown, then it is surely insufficient for defining classicality (Schlosshauer, 2008). But even aside from the issue of interpreting probabilities, the correspondence between the Wigner function (quantum distributions) and the Liouville equation (classical distributions) breaks down for robustly chaotic classical systems, in which case the Wigner distribution broadens exponentially and quickly stops shadowing its classical counterpart (Fox & Elston, 1994; Lan & Fox, 1991).

Relatedly, the classification of a chaotic system’s predicted behavior as quantum mechanical usually depends on its characteristic Lyapunov exponent, which quantifies the divergence of initially proximate classical trajectories (cf. Wang et al., 2010, and references therein). Thus the criterion for being classical depends crucially on the presumed existence of truly classical trajectories, and this is problematic in light of decoherence studies (Braun, 2001). More to the point, this particular characterization of classicality is clearly not generalizable, because it applies only to chaotic systems.

(iii) The limit $n \to \infty$ is likewise insufficient for clearly circumscribing the classical domain. This was proven by Messiah (1965) and Liboff (1984), both of whom drew upon the fact that the uncertainty relations set an insurmountable limit on how continuous quantum energy spectra can become.

(iv) Neither will decreasing Planck’s constant suffice, because this classical limit is in many cases a singularity. Batterman (1995, 2002), Berry (1994, 2001), and Bokulich (2008) all described in great detail various problems with using the Planck limit to draw the quantum–classical border.

(v) The naivety of relying on mass for the definition of classicality is evident both through examples of massive quantum systems like Weber bars, and through macroscopic quantum effects like circulation quantization in superfluid helium, the Josephson effect in superconductors, and the stimulated emission of photons in an everyday laser pointer.

This little taxonomy of classicality is not merely a philosophical exercise; it illustrates how one of the central claims made about decoherence—that it explains the emergence of classicality—is at once too broad and too narrow. It is too broad because the effects of decoherence on specific systems in specific environments must be examined in order to assess whether and how quickly superposed states of various system degrees of freedom will become delocalized. The above definitions of classicality do not all assess the same degrees of freedom, so while it is true that all explanations of the emergence of classicality from quantum theory will rely crucially on decoherence, not all such explanations will align. The devil, as ever, is in the details. It is especially important when considering the philosophical implications of this physics to appreciate that there is no unilateral domain of classicality—no one behavior, or limiting case, or definition suffices generally for this notion. What this reveals is the remarkable fragility (dare one say unnaturalness?) of classicality as an independent category of being or a genuine property of objects and behaviors, despite everyday observations to the contrary.

As for the emergence of classicality from decoherence being too narrow a claim: this is just the point made earlier in this subsection that decoherence does not always result in classicality (however construed), and that it explains much more besides. To restrict the explanatory wealth of this physics to only questions of classicality is to miss much of the point (evidence for which: this subsection is but one of several describing decoherence’s explanatory power). For instance, decoherence explains quantum phenomena like superconductivity and the difficulty of maintaining qubit fidelity in quantum computing, while at the same time explaining mesoscopic phenomena like relatively stable Schrödinger kittens in optical cavities and interference effects arising from molecular self-interference.7

In short, because none of the usual definitions is sufficient to explain classicality in a univocal way, discussions of the quantum-to-classical transition must never stray far from detailed information about specific systems and their interactions with specific environments. Since decoherence models are designed to examine precisely these dynamics in an increasingly wide range of scenarios, they constitute a highly promising method for probing, and ultimately explaining, the appearance of classicality.

3.2 The Lack of Macroscopic Superpositions

Based solely on formalism, superposed states ought to be the norm for physical systems at all scales, since all scales are describable by quantum mechanics. Yet this obviously conflicts with the experience of everyday objects apparently occupying well-defined positions and momenta. In this regime, systems are never, let alone typically, found occupying superpositions of macroscopically discernible states. Such a disagreement calls out for explanation.

It should already be clear that decoherence provides such an explanation. Indeed, as any experimentalist working in the quantum regime will readily attest, the ability to shield an apparatus from decoherence (e.g., engineering decoherence-free subspaces for qubits in order to build more powerful quantum computers) is one of the chief difficulties. That decoherence is the process most responsible for affecting the measurability of superpositions in mesoscopic regimes is evident by the sheer complexity of experiments attempting to maintain the stability of superpositions (e.g., the research cited in endnote 7). The difficulty of maintaining measurable superpositions is unsurprisingly exacerbated in macroscopic regimes. Zeh described this nicely in his 1970 paper (which most consider the starting point of decoherence studies):

If two systems are described in terms of basic states $\phi^{(1)}_{k_1}$ and $\phi^{(2)}_{k_2}$, the wave function of the total system can be written as $\phi = \sum_{k_1 k_2} c_{k_1 k_2}\, \phi^{(1)}_{k_1} \phi^{(2)}_{k_2}$. The case where the subsystems are in definite states ($\phi = \phi^{(1)} \phi^{(2)}$) is therefore an exception. Any sufficiently effective interaction will induce correlations. The effectiveness may be measured by the ratios of the interaction matrix elements and the separation of the corresponding unperturbed energy levels. Macroscopic systems possess extremely dense energy spectra. . . . It must be concluded that macroscopic systems are always strongly correlated in their microscopic states.

(Zeh, 1970, pp. 72–73)

If they are “strongly correlated in their microscopic states”—that is, entangled—then decoherence will in general follow. As decoherence causes the suppression of interference terms, it makes the likelihood of measuring the macroscopic system in a superposed state infinitesimally small.

To give an indication of the effectiveness of decoherence processes in the realm of everyday objects, consider a rather pedestrian interaction: a single photon (the environment) scattering off of a large dust mote (the system of interest) at room temperature. Following the estimates of Schlosshauer (2007, pp. 132–134), for a dust particle of diameter $10^{-3}$ cm initially in a superposition of positions with interference spanning distances of $10^{-12}$ cm and assuming no recoil due to the radiation scattering (a fair assumption given the orders of magnitude difference in size between the particles), spatial coherence will be damped by a factor of e one second after interaction with the photon. Supposing an initial state with spatial coherence across a distance comparable to its own diameter ($10^{-3}$ cm), a single photon will decohere the dust mote’s position within one billionth of a nanosecond. Calculations estimating decoherence rates for a variety of systems in an array of environments make plain the difficulty or impossibility of observing quantum superpositions at any scale in typical bases of measurement without a highly controlled environment.
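The scaling behind those numbers can be reproduced with the standard exponential suppression of spatial coherences, $\rho(x, x', t) \approx \rho(x, x', 0)\, e^{-\Lambda (x - x')^2 t}$; in the sketch below the scattering constant $\Lambda$ is back-solved from the one-second figure quoted above rather than taken from Schlosshauer’s tables, so it illustrates only the quadratic dependence on separation, not an independent calculation.

```python
import numpy as np

# Sketch of the scaling used above: spatial coherences are damped as
# exp(-Lambda * dx**2 * t), with Lambda a scattering constant set by the environment.
# Here Lambda is inferred from the statement that coherence over dx = 1e-12 cm decays
# by a factor of e in one second (an illustrative assumption, not an independent value).

dx_small = 1e-12   # cm, coherent separation in the first scenario
dx_large = 1e-3    # cm, coherent separation comparable to the mote's diameter

Lambda = 1.0 / (dx_small**2 * 1.0)   # cm^-2 s^-1, from exp(-Lambda * dx^2 * 1 s) = 1/e

def decoherence_time(dx):
    """Time for the coherence at separation dx to fall by a factor of e."""
    return 1.0 / (Lambda * dx**2)

print(f"tau(dx = 1e-12 cm) = {decoherence_time(dx_small):.1e} s")   # ~ 1 s
print(f"tau(dx = 1e-3  cm) = {decoherence_time(dx_large):.1e} s")   # ~ 1e-18 s
```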

3.3 The Preferred Basis Problem

The previous subsection explains why the pointer on a measuring device is not observed in a superposition of positions (whatever that would look like!) but instead is observed to follow a smooth trajectory and land at a definite value, apparently occupying a position eigenstate correlated to the state of the system being measured. This is of course just another way to couch the problem of the preferred basis: Why is it that the pointer always appears to occupy an eigenstate of position? Why do the equally mathematically acceptable bases comprised of superpositions of position not serve as the basis in which the apparatus is found? And, is it some secret, capricious selection law of the universe that dictates that energy will be the preferred basis for microscopic systems, and position for larger ones?

As noted by Zurek (1981) while discussing this very question, the apparatus is not (he in fact says cannot be) observed in a superposition of pointer-basis states because such states are being monitored continuously by the environment, and the same correlation allowing for monitoring also allows environmental decoherence of superposed position states of the pointer. Thus, the position basis is understood not as “preferred” by mysterious selection rules governing macroscopic bodies, but rather as merely the apparatus basis that remains most stable under environmental monitoring. The total state of the apparatus–environment composite quickly becomes a mixture that is approximately diagonal in the pointer basis. Zurek later named this general process “einselection,” a term that is a portmanteau of “environmentally induced superselection” (Zurek, 2003).
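A toy calculation makes Zurek’s stability criterion concrete (the pure-dephasing coupling $H_{\mathrm{int}} = g\,\sigma_z \otimes \sigma_z$ between a “pointer” qubit and a single environmental qubit, and all parameter values, are illustrative choices rather than a model from Zurek’s papers): eigenstates of $\sigma_z$ remain pure under the coupling, while superpositions of them become entangled with the environment and lose purity, which is just the sense in which the $\sigma_z$ basis is einselected.

```python
import numpy as np

# Toy einselection sketch: a "pointer" qubit coupled to one environment qubit via
# H_int = g * sigma_z (x) sigma_z (a pure-dephasing coupling chosen for illustration).
# Sigma_z eigenstates stay pure under the coupling; superpositions of them lose purity.

g, t = 1.0, np.pi / 4
# exp(-i g t sigma_z (x) sigma_z) is diagonal, with phases on the eigenvalues (+1,-1,-1,+1).
U = np.diag(np.exp(-1j * g * t * np.array([1.0, -1.0, -1.0, 1.0])))

env0 = np.array([1.0, 1.0]) / np.sqrt(2)          # initial environment state |+>

def purity_after_coupling(system_state):
    psi = np.kron(system_state, env0)             # uncorrelated initial product state
    psi = U @ psi                                 # joint unitary evolution
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_s = np.trace(rho, axis1=1, axis2=3)       # partial trace over the environment
    return np.real(np.trace(rho_s @ rho_s))       # purity Tr(rho_S^2)

up = np.array([1.0, 0.0])                         # a sigma_z eigenstate
plus = np.array([1.0, 1.0]) / np.sqrt(2)          # a superposition of sigma_z eigenstates

print("purity, sigma_z eigenstate :", round(purity_after_coupling(up), 3))    # stays 1.0
print("purity, superposition state:", round(purity_after_coupling(plus), 3))  # drops to 0.5
```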

3.4 The Measurement Problem

While decoherence provides answers to the three preceding queries often associated with the measurement problem, it does not resolve one crucial aspect: it cannot explain the occurrence of any particular (apparently definite) outcome over another equiprobable eigenstate in the einselected basis. So while decoherence does explain the lack of superpositions among, say, electron spin states in Stern–Gerlach measurements or the theoretically unexpected dearth of dead-and-alive cats, it does not explain why a given electron became approximately spin-up (and so was detected in the “definitely up” stream), or why the cat peeped in upon was found (approximately) alive instead of (approximately) dead. This particular problem left unanswered by decoherence—call it the problem of specific outcomes—is the precise point at which interpretations of quantum mechanics enter fully into the discussion.

Note that since decoherence is a consequence of the physics alone, all viable interpretations of quantum mechanics must incorporate it. This across-the-board reliance on decoherence will prove an advantage in terms of analysis: precisely because decoherence is a core aspect of quantum theory, the particular way it is deployed to answer the measurement problem creates a new vantage for assessment. Specifically: the more the explanatory work of a given interpretation is revealed to be due to decoherence, the less one may wish to buy into that interpretation, as it in fact contributes little beyond what is freely available to all. Furthermore, entering this debate on the understanding (made available via decoherence) that only the problem of specific outcomes remains unsolved, it may be made more plain whether the remaining unique benefits of a given interpretation are still worth the buy-in.

4 The Role of Decoherence in Interpretations of Quantum Mechanics

The focus here is on the four heavyweight contenders for realist interpretations: consistent histories, Everett/many worlds, Ghirardi–Rimini–Weber theory (GRW)/spontaneous collapse, and de Broglie-Bohm/hidden-variables. The interpretations themselves are not described in any detail because that has been done countless places elsewhere; instead, the focus is on the role of decoherence in each, followed by a brief analysis of how incorporation of this process alters its respective explanatory yield. Omitted from the discussion are relative-state formulations, pragmatic or instrumentalist interpretations, and information-theoretic approaches like quantum Bayesianism, as these interpretations are by their own admission mainly epistemic. That is, they aim only to recover the phenomena and do not seek a deeper, ontological explanation of the processes giving rise to them.

4.1 Consistent Histories

The consistent histories approach of Griffiths (1984), Gell-Mann and Hartle (1993), and Halliwell (1994) was developed for the express purpose of folding decoherence processes into a viable interpretation of quantum mechanics. This approach, though devised independently from Mensky’s RPI formalism, nevertheless closely parallels it: the system is considered to trace out a quantum trajectory, or history, in its phase space. Tracing over the set of all possible histories leaves vanishingly small (but nonzero) interference effects between disjoint paths in the set (measured by the decoherence functional on two such histories), such that only one path is realized. Weaker conditions for the trace can be implemented, depending on whether one or both of the real and imaginary parts of a pair of integrals are traced out. Imposing a “medium decoherence” condition where both real and imaginary parts of the decoherence functional vanish (or are vanishingly small, to be more precise) generates an exhaustive set of disjoint histories. Such a set—christened a “decoherent history space” by Gell-Mann and Hartle (1990)—is considered a quasi-classical domain if it cannot be further fine-grained without compromising the effective isolation of individual histories due to decoherence. According to these authors, by using the consistent histories formalism and starting from the unitary evolution of a global state, complete expectation values for a unique quasi-classical domain can be reproduced without direct appeal to measurements, observers, or even definite outcomes. Thus, the consistent histories approach is intended not as a solution to the measurement problem but as a way of avoiding it altogether.

However, if the measurement problem is understood as suggested in Section 3, problems do arise—and not in connection with the use of troubling terms like “measurement” or “observer.” Taking cues from Saunders (1995), there is concern that this approach is not truly independent of all assumptions regarding probabilities, specifically when it comes to the Born rule. In as much as the realized history is well defined and entirely disjoint, consistent histories protagonists insist on the weightier ontological interpretation of the Born rule, rather than stopping at the statistics. This may of course be easily avoided in this interpretation—it comes down to a matter of careful word choice. What is not so easily avoided is the following problem (also touched upon by Saunders). If the “actualization” process for a single history is interpreted as giving truly definite positive-valued projections at each time increment—the crucial difference between the decoherence functional being declared exactly zero between pairs of consistent histories versus effectively zero—then this approach will require an explanatory piece beyond what decoherence provides. Namely, it will require a process whereby all consistent histories are physically reduced to one consistent and real history. This seems suspiciously like wave-function collapse. Ceding this point, Hartle (1993) bit the bullet and included state reduction as part of the fundamental physical story. However, any true reduction process will result in the loss of unitarity of the global quantum state. Since maintaining unitarity is a key motivation for (and benefit of) adopting the consistent histories approach in the first place, it is unlikely many will follow Hartle down this road.

Instead, most advocates of the consistent histories approach opt for black-boxing the process of actualization in order to avoid invoking physical collapse. The issue still remains of understanding why one particular history turned out to be real instead of another—and now the discussion has come full circle, for this is just the problem of specific outcomes. If the consistent histories approach leaves this question—from the vantage point of decoherence, arguably the only question—unanswered, in what way does it prove advantageous over other interpretations?

4.2 Everett/Many Worlds

Everettianism—a class of approaches including many-worlds and many-minds interpretations—relies on decoherence at the global scale to produce structure (branches, worlds, minds . . . ) and at the local scale (in one branch, world, or mind) to describe the emergence of classicality. As one of the most vocal proponents of this view, Wallace (2012) was keenly aware that the status conferred upon the Born rule must be carefully examined.8 He was also careful to state that although both the RDM and RPI formalisms require a reducing step to recover discrete structure from the global quantum state, nevertheless “there is no intrinsic discreteness in the branching process” (p. 91). Decoherence effectively reproduces the requisite structure, but for each branching event, the environment records information about the global superposition wherein that branch is a (decohered) subsystem—including the probabilistic weighting of individual branches (p. 88)—thereby maintaining global coherence. In this way, the Everettian avoids positing complete wave-function collapse.

As for the problem of specific outcomes, the Everettian need not explain why this particular world was realized instead of that one, for on this approach all worlds are realized (modulo all branches, all minds). The issue is settled by allowing the agent who is asking the question to self-locate within a particular world, branch, or mind. But whether such a response is a genuine improvement over others, or whether it requires the same buy-in only using different currency, is a question worth considering. The Everettian answer still involves stochasticity (although arguably somewhat mitigated by a probability distribution over its global structure). Everettians also typically appeal to agency in their answer here—and not just one instance, but innumerable times, creating a universe full of self-locating agents. This might strike some as merely trading in the minimal ontology of consistent histories (“Only one trajectory is realized, and there is no reason why it is this history rather than that one”) for a maximal ontology (“All branches are realized, and there is no reason why it is this branch one finds oneself in rather than that one”). Thus, at least regarding the question of specific outcomes, it could be argued that these two interpretations are rather at an explanatory stalemate. It also means that the slogan of many Everettians—“the Everett interpretation just is quantum mechanics”—might equally well apply to consistent histories approaches.

A small additional point, perhaps of more interest to historians of this interpretation: according to Saunders (2022), in his dissertation Everett used diffusion phenomena to motivate position and momentum as preferred bases for macroscopic bodies. Now that a satisfactory explanation for the selection of bases at diverse scales can be found in decoherence, this aspect of Everett’s interpretation no longer counts to its particular advantage.

4.3 GRW/Spontaneous Collapse

In order to keep spontaneous collapse interpretations like GRW theory consistent with empirical findings from decoherence, certain details regarding the collapse process become constrained. For one, decoherence timescales must universally be faster than the collapse process if such theories wish to claim that state reduction genuinely explains what it is meant to explain. The entire purpose of a spontaneous collapse theory is presumably to answer the question of specific outcomes, with decoherence providing the prior explanation for why certain bases are more stable than others, ergo why one finds systems occupying approximate eigenstates of particular bases. If the collapse mechanism just is decoherence, then collapse theories have not answered the question of specific outcomes.

GRW, the most established collapse theory, answers the problem of specific outcomes using stochastic hit processes. Leaving aside the question of whether a stochastic process can truly explain the advent of one outcome over another, the rate of hits, the dynamical efficacy of the hit process, and the particular dimensions of the system Gaussian resulting from hit processes will surely need to agree closely with findings from decoherence studies. The hit function does more besides. Despite the ability of decoherence to explain einselection, GRW theorists typically assume that there exists before interaction an eigenbasis where the stochastic hit function commutes with the system of interest. They further assume, despite the explanation available from decoherence, that different bases are more common at different scales due to varying hit rates. For example, macroscopic systems have higher hit rates because they involve the correlation of larger numbers of particles.9 But surely this correlation in macroscopic bodies affects bases aside from position, including highly nonclassical ones. In which case, GRW has offered an explanation for the preferred basis problem only for macroscopic systems, and in terms of the hit function rather than (or possibly superfluous to) einselection.

The hit function does not perfectly localize a given state in a superposition; after collapse there remain nonzero amplitudes around the Gaussian peak (and the peak itself still maintains some width). This is the infamous tails problem, and although several so-called “flavors” of the GRW interpretation have been developed to address it, the tails problem is only a problem if the Born rule is taken to describe reality and not just appearances.10 Only if the Born rule is taken to associate probabilities to ontologically definite values instead of apparently definite values is a further story required about how those values become manifest, let alone why this specific outcome occurs.

If collapse interpretations take decoherence seriously, what is it that they add to the explanatory package? Decoherence already gets one einselection and effective eigenvalues, but collapse theorists will need to ensure that the tails of the post-hit Gaussian do not conflict with the degree and speed of interference suppression caused by decoherence, and that the probabilistic nature of the hit function nevertheless results in the same preferred basis for systems at various scales and in various environments. In addition to doing such work to remain consistent, a dilemma arises for the GRW theorist: on the one hand, if the Born rule is considered merely a guide to expectations, then decoherence is sufficient to explain the statistics of measurement outcomes through effective collapse—and this means there is little positive work left for GRW to do. Instead, the incorporation of decoherence seems to require some significant back-pedaling. On the other hand, if the Born rule is read ontologically, then the GRW theorist is forced to invoke one of the three supplementary interpretations described in endnote 10. This means that GRW can remain viable only if significant work is done to make the parameters of the extant theory agree with decoherence findings, and the original theory is significantly augmented by further semantic or philosophical assumptions (as in the case of the fuzzy link version of GRW), or with further physics (as in the cases of the flash ontology and mass density versions). This is rather a high cost to pay for an answer to the problem of specific outcomes, on top of GRW’s already adding a nonunitary term to the Schrödinger equation, requiring an as-yet-undiscovered collapse mechanism, and introducing two new fundamental constants.

4.4 de Broglie–Bohm/Hidden-Variables Theories

The de Broglie–Bohm hidden-variables theory primarily calls upon decoherence to explain the preference for the position basis and the emergence of classicality. These are important features of the interpretation, which was from the outset motivated by the desire to recover an ontology of classical particles following classical trajectories. As has been argued, classicality is a slippery notion. For Bohm at least, it was encoded in his mechanics by the condition that the quantum potential be zero or approximately zero (Bohm, 1952). But Rosaler (2015) has argued that it is decoherence, and not any special feature of Bohmian mechanics, that explains the emergence of the sort of classicality the latter requires, and that the notions of classicality made possible by decoherence undergird Bohm’s condition of a null quantum potential. Likewise, Rosaler showed that the Bohmian configuration evolves classically only because of decoherence: it successfully tracks the trajectory of the global quantum state because decoherence has already dynamically selected and effectively isolated that trajectory (Rosaler, 2015, p. 1176). Classicality in the de Broglie–Bohm theory is also often linked to Ehrenfest’s theorem (the same condition for classicality espoused by many Everettians, by the way): a classical system is one whose state is represented by minimum-uncertainty Gaussians tracing a Newtonian path over coarse-grained phase-space cells. Again, Ehrenfest’s theorem is neither necessary nor sufficient for delimiting the classical domain.
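
For readers who want these conditions in symbols, both can be stated compactly (this is the standard textbook form, not a derivation specific to Rosaler’s argument). Writing the wave function in polar form \(\psi = R\,e^{iS/\hbar}\), the Bohmian velocity is \(\nabla S/m\) and the quantum potential is

\[
Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}R}{R},
\]

so Bohm’s classicality condition is simply that \(Q\) (and its gradient) be negligible compared with the classical potential, leaving the trajectory governed by \(-\nabla V\) alone. Ehrenfest’s theorem, by contrast, states that

\[
\frac{d\langle \hat{p}\rangle}{dt} = -\,\big\langle \nabla V(\hat{q})\big\rangle,
\]

which yields Newtonian motion for the expectation values only when \(\langle \nabla V(\hat{q})\rangle \approx \nabla V(\langle \hat{q}\rangle)\), that is, for sufficiently narrow, slowly spreading wave packets; hence it is neither necessary nor sufficient as a general demarcation of the classical domain (cf. Ballentine et al., 1994).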

Since one of the major selling points of this interpretation (if not the major one) is precisely its recovery of familiar classical notions, the discovery that the emergence of classicality is actually due to decoherence constitutes a significant blow. But what of the problem of specific outcomes? Bohmians do indeed provide a concrete answer to why one obtains definite outcomes from measurements, by appeal to the (quasi)classical trajectories of Bohmian particles: a particle, as particles classically do, passes through just one slit in the double-slit apparatus, while the guiding wave, as waves classically do, passes through both slits and produces interference. The story from decoherence is admittedly not as clean (there is no actual particle to follow a classical trajectory, for one thing), but it does explain why the particle appears to follow a path through just one slit. Decoherence stops there, of course; it does not explain why the particle went through the left slit (say) as opposed to the right. Can the Bohmian answer this question, though? Certainly not without appeal to the initial positions of all relevant Bohmian particles, whose collective trajectories in configuration space are meant to be truly deterministic. However, such information about initial positions is inaccessible even in the case of a double-slit experiment, and subsystems still evolve nonunitarily; from this perspective Bohmians are no closer to an answer to the problem of specific outcomes than anyone else. Coupling this realization with the fact that decoherence steals one of the Bohmians’ aces, the recovery of classical behaviors and objects, makes the net benefits of this approach harder to appreciate.
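
The decoherence story just sketched can be made explicit with the usual toy model of environmental which-path monitoring (the environment states below are schematic placeholders rather than part of any particular experiment discussed above). Suppose the particle emerges from the slits in the superposition \((|\psi_{L}\rangle + |\psi_{R}\rangle)/\sqrt{2}\) and the environment correlates with the path, \(|\psi_{L}\rangle|E_{0}\rangle \to |\psi_{L}\rangle|E_{L}\rangle\) and \(|\psi_{R}\rangle|E_{0}\rangle \to |\psi_{R}\rangle|E_{R}\rangle\). Tracing out the environment gives the particle’s reduced density matrix

\[
\rho_{S} = \tfrac{1}{2}\big( |\psi_{L}\rangle\langle\psi_{L}| + |\psi_{R}\rangle\langle\psi_{R}| \big)
+ \tfrac{1}{2}\big( \langle E_{R}|E_{L}\rangle\, |\psi_{L}\rangle\langle\psi_{R}| + \langle E_{L}|E_{R}\rangle\, |\psi_{R}\rangle\langle\psi_{L}| \big).
\]

As \(\langle E_{R}|E_{L}\rangle \to 0\), the interference terms are suppressed and the statistics become those of a particle that went through one slit or the other, while the formalism remains silent about which slit; this is precisely the point at which the problem of specific outcomes re-enters.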

5 Conclusion

When decoherence processes are taken into account in a nuanced, careful way, they reveal that many of the old problems associated with quantum theory are either compellingly explained or at least afforded a fresh angle. There is promise of rich philosophical engagement ahead. And while some may remain skeptical regarding the purported power of decoherence to address certain issues raised here, it is clear that the burden of proof is shifting onto the shoulders of those who reject the suite of explanations commonly attributed to decoherence.

References

  • Ballentine, L., Yang, Y., & Zibin, J. (1994). Inadequacy of Ehrenfest’s theorem to characterize the classical regime. Physical Review A, 50, 2854–2859.
  • Batterman, R. (1995). Theories between theories: Asymptotic limiting intertheoretic relations. Synthese, 103, 171–201.
  • Batterman, R. (2002). The devil in the details: Asymptotic reasoning in explanation, reduction and emergence. Oxford University Press.
  • Berry, M. V. (1994). Asymptotics, singularities and the reduction of theories. In D. Prawitz, B. Skyrms, & D. Westerstahl (Eds.), Logic, methodology and philosophy of science IX. Elsevier.
  • Berry, M. V. (2001). Chaos and the semiclassical limit of quantum mechanics (Is the moon there when somebody looks?). In R. Russell, P. Clayton, K. Wegter-McNelly, & J. Polkinghorne (Eds.), Quantum mechanics: Scientific perspectives on divine action. CTNS Publications.
  • Bohm, D. (1952). A suggested interpretation of the quantum theory in terms of “hidden” variables. Part I. Physical Review, 85, 166–179.
  • Bokulich, A. (2008). Reexamining the quantum-classical relation: Beyond reductionism and pluralism. Cambridge University Press.
  • Braun, D. (2001). Dissipative quantum chaos and decoherence. Springer.
  • Chiorescu, I., Nakamura, Y., Harmans, C., & Mooij, J. (2003). Coherent quantum dynamics of a superconducting flux qubit. Science, 299, 1869–1871.
  • Crull, E. (2011). Quantum decoherence and interlevel relations [Doctoral dissertation]. University of Notre Dame.
  • Crull, E. (2013). Exploring philosophical implications of quantum decoherence. Philosophy Compass, 8(9), 875–885.
  • Crull, E. (2017). Yes, more decoherence: A reply to critics. Foundations of Physics, 47, 1428–1463.
  • Davidovich, L., Brune, M., Raimond, J.-M., & Haroche, S. (1996). Mesoscopic quantum coherences in cavity QED: Preparation and decoherence monitoring schemes. Physical Review A, 53, 1295.
  • Fox, R., & Elston, T. (1994). Chaos and the quantum-classical correspondence in the kicked pendulum. Physical Review E, 49(5), 3683–3696.
  • Gell-Mann, M., & Hartle, J. B. (1990). Quantum mechanics in the light of quantum cosmology. In W. H. Zurek (Ed.), Complexity, entropy, and the physics of information (pp. 425–459). Addison Wesley.
  • Gell-Mann, M., & Hartle, J. B. (1993). Classical equations for quantum systems. Physical Review D, 47(8), 3345–3382.
  • Griffiths, R. B. (1984). Consistent histories and the interpretation of quantum mechanics. Journal of Statistical Physics, 36, 219–272.
  • Hackermüller, L., Uttenthaler, S., Hornberger, K., Reiger, E., Brezger, B., Zeilinger, A., & Arndt, M. (2003). Wave nature of biomolecules and fluorofullerenes. Physical Review Letters, 91, 090408.
  • Halliwell, J. J. (1994). Aspects of the decoherent histories approach to quantum mechanics. In L. Diósi (Ed.), Stochastic evolution of quantum states in open systems and in measurement processes (pp. 54–68). World Scientific.
  • Hartle, J. B. (1993). Reduction of the state vector and limitations on measurement in the quantum mechanics of closed systems. UCSBTH-92-16.
  • Hines, A. P., & Stamp, P. (2008). Decoherence in quantum walks and quantum computers. Canadian Journal of Physics, 86, 541–548.
  • Hornberger, K., Uttenthaler, S., Brezger, B., Hackermüller, L., Arndt, M., & Zeilinger, A. (2003). Collisional decoherence observed in matter wave interferometry. Physical Review Letters, 90(16), 160401(4).
  • Kokorowski, D. A., Cronin, A. D., Roberts, T. D., & Pritchard, D. E. (2001). From single- to multiple-photon decoherence in an atom interferometer. Physical Review Letters, 86(11), 2191–2195.
  • Kupsch, J. (2003). Open quantum systems. In E. Joos, C. Kiefer, D. Giulini, & I.-O. Stamatescu (Eds.), Decoherence and the appearance of a classical world in quantum theory (2nd ed., pp. 317–356). Springer.
  • Lan, B., & Fox, R. (1991). Quantum–classical correspondence and quantum chaos in the periodically kicked pendulum. Physical Review A, 43(2), 646–655.
  • Liboff, R. (1984). The correspondence principle revisited. Physics Today, 37, 50–55.
  • Mensky, M. B. (1979). Quantum restrictions for continuous observation of an oscillator. Physical Review D, 20, 384–387.
  • Mensky, M. B. (1993). Continuous quantum measurements and path integrals. IOP Publishing.
  • Mensky, M. B. (2000). Quantum measurements and decoherence: Models and phenomenology. Kluwer Academic Publishers.
  • Mensky, M. B. (2002). Decoherence as a fundamental phenomenon in quantum dynamics. In Multiple facets of quantization and supersymmetry: Michael Marinov memorial volume (pp. 151–174). World Scientific.
  • Messiah, A. (1965). Quantum mechanics. North-Holland Publishing.
  • Paz, J., & Zurek, W. (1999). Quantum limit of decoherence: Environment induced super-selection of energy eigenstates. Physical Review Letters, 82, 5181–5185.
  • Raimond, J.-M., & Haroche, S. (2005). Monitoring the decoherence of mesoscopic quantum superpositions in a cavity. Séminaire Poincaré, 2, 25–64.
  • Rosaler, J. (2015). Is de Broglie-Bohm theory specially equipped to recover classical behavior? Philosophy of Science, 82(5), 1175–1187.
  • Saunders, S. (1995). Time, quantum mechanics, and decoherence. Synthese, 102, 235–266.
  • Saunders, S. (2022). The Everett interpretation: Structure. In E. Knox & A. Wilson (Eds.), Routledge companion to philosophy of physics (pp. 213–229). Routledge.
  • Schlosshauer, M. (2007). Decoherence and the quantum-to-classical transition (2nd ed.). Springer.
  • Schlosshauer, M. (2008). Classicality, the ensemble interpretation, and decoherence: Resolving the Hyperion dispute. Foundations of Physics, 38, 796–803.
  • Shor, P. (1995). Scheme for reducing decoherence in quantum memory. Physical Review A, 52, 2493–2496.
  • Unruh, W. G. (1995). Maintaining coherence in quantum computers. Physical Review A, 51, 992–997.
  • Wallace, D. (2008). Quantum mechanics. In D. Rickles (Ed.), The Ashgate companion to contemporary philosophy of physics. Ashgate.
  • Wallace, D. (2012). The emergent multiverse: Quantum theory according to the Everett interpretation. Oxford University Press.
  • Wang, W., Wang, L., & Yi, X. (2010). Lyapunov control on quantum open systems in decoherence-free subspaces. Physical Review A, 82(3), 034308(4).
  • Wilde, M. M., McCracken, J. M., & Mizel, A. (2010). Could light harvesting complexes exhibit non-classical effects at room temperature? Proceedings of the Royal Society of London A, 466(2117), 1347–1363.
  • Zeh, H. (1970). On the interpretation of measurement in quantum theory. Foundations of Physics, 1, 69–76.
  • Zilly, M., Ujsághy, O., & Wolf, D. E. (2010). Conductance of DNA molecules: Effects of decoherence and bonding. Physical Review B, 82, 125125(7).
  • Zurek, W. (1981). Pointer basis of quantum apparatus: Into what mixture does the wave packet collapse? Physical Review D, 24, 1516–1525.
  • Zurek, W. (1982). Environment-induced superselection rules. Physical Review D, 26, 1862–1880.
  • Zurek, W. (2003). Decoherence, einselection, and the quantum origins of the classical. Reviews of Modern Physics, 75, 715–775.
  • Zurek, W. (2007). Decoherence and the transition from quantum to classical—Revisited. In B. Duplantier, J.-M. Raimond, & V. Rivasseau (Eds.), Quantum decoherence (pp. 1–32). Birkhäuser Verlag.

Notes

  • 1. That is, nonclassical (including nonlocal) interactions distinct from perturbative, thermal, and mechanical effects.

  • 2. There are many good resources for those who would like a thorough introduction to this formalism and to the physics of decoherence. For her part, the author learned it from Schlosshauer’s textbook on decoherence (Schlosshauer, 2007) and Zurek’s papers (in particular Zurek, 2003, 2007).

  • 3. Cf. Section 3 of Kupsch (2003) for a formally rigorous discussion of this point.

  • 4. In addition, as Wallace (2012, p. 22n) and others have pointed out, assuming the Born rule describes measurement outcomes on pre-existing definite values is clearly problematic for position and momentum, values of which rapidly disperse.

  • 5. The equivalence of the RDM and RPI formalisms for nonselective measurements is proven in Chapter 5 of Mensky (2000).

  • 6. The trajectory is exactly Newtonian only if the system is linear; since an entangled subsystem is considered in this scenario, precise Newtonian motion is not achievable.

  • 7. For decoherence and superconductivity, see Chiorescu et al. (2003). On shielding decoherence in quantum computing, see Shor (1995) and Unruh (1995). For Schrödinger kitten experiments, see Davidovich et al. (1996) and Raimond and Haroche (2005). For interference effects of molecules, see Hornberger et al. (2003) and Hackermüller et al. (2003).

  • 8. In fact, see Wallace (2008) for a thorough discussion of the pitfalls of interpreting the Born rule more strongly than it is right and meet to do.

  • 9. The precise meaning of this term in this context is unclear to the author, but it seems to be the term of choice for practitioners of GRW on this point.

  • 10. These include the fuzzy link interpretation, flash ontology, and the mass density approach. Briefly, the fuzzy link interpretation solves the tails problem by relaxing the requirement that a system’s location (or any system degree of freedom, presumably) require a single definite value, and instead allows definite values to be assigned to a small region where most of the wave amplitude is located. The flash ontology considers “real” only the center point of each collapse event, and the mass density approach replaces the wave function altogether with a continuous, global mass density spectrum.