Decoherence is Dephasement, Not Disjointness

According to the many-worlds interpretation of quantum mechanics, when a measurement is performed a superposition of two microscopic states evolves into a macroscopic superposition: one universe where one outcome was observed and another where the other outcome was observed. This suggests the popular picture of a tree-like structure for the evolution of the wavefunction: the wavefunction is concentrated on many “branches” in different parts of phase space, which constantly split during quantum indeterminacies but never rejoin once they are macroscopically distinct. This picture is incorrect: although the size of phase space is exponentially large (in the number of particles), the number of branches also grows exponentially and would quickly overtake it, densely occupying the physically plausible portion of phase space. It is statistically inevitable that macroscopically distinct branches routinely reconverge. The reason we don’t observe quantum interference between macroscopically distinct states is not that such states never converge to the same outcome state. Instead, when an outcome state can be reached by many macroscopically distinct pathways, the relative phases of these pathways are effectively random, so statistically the final amplitude corresponds to a probability that is approximately the sum of the probabilities of the distinct pathways, as though there were no interference.
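To make the last point concrete, here is a minimal numeric sketch (all numbers made up): K pathways of equal amplitude magnitude reach the same outcome state with effectively random relative phases, and averaged over many outcome states the coherent probability matches the incoherent sum of pathway probabilities.

```python
import cmath
import random

random.seed(0)

# Hypothetical setup: K macroscopically distinct pathways reach the same
# outcome state, each with amplitude magnitude a and a random relative phase.
K = 1000
a = 0.01

def outcome_probability():
    """Coherent probability |sum of amplitudes|^2 with random phases."""
    total = sum(a * cmath.exp(2j * cmath.pi * random.random()) for _ in range(K))
    return abs(total) ** 2

# Averaged over many outcome states, the coherent probability approaches
# the incoherent sum of the pathway probabilities, K * a**2.
trials = 2000
avg_coherent = sum(outcome_probability() for _ in range(trials)) / trials
incoherent = K * a ** 2

print(avg_coherent, incoherent)  # close to each other
```

Any single outcome state still fluctuates strongly around the incoherent value; it is the statistical average over outcomes that washes the interference out.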

If you’re wondering why you haven’t heard of this form of macroscopic interference: well, I haven’t either. I’m not basing this on any article or on the folklore of professional physicists, but on my own attempt to reason from the underlying physical laws. Tell me if someone has published this before; I haven’t checked.

Model 1: Isolated equilibrium system

The simplest model system is an equilibrium gas perfectly isolated from its environment. Its semiclassical description is a collection of particles with positions and velocities, occasionally interacting. For a quantum mechanical state, let N be the number of branches it has, measured as the number of semiclassical states on which the wavefunction has significant weight (if you’re really nitpicky, make that a volume of phase space divided by the appropriate power of \hbar). Every time two particles scatter, the outgoing scattering state is spread out over multiple trajectories, so one branch turns into multiple branches. The rate at which scattering events occur is proportional to the volume, and this splitting occurs in every pre-existing branch, so we have approximately

\frac {d N} {d t} \propto V N
\frac {d \log N} {d t} \propto V

On the other hand, the size of the relevant portion of phase space is given by the entropy S, which is proportional to volume

S \propto V

It follows that there is a fixed time t_0 after which the branch count N must overtake the phase space volume \exp (S), and this time t_0 is independent of the volume. After this time there will be interference between macroscopically distinct trajectories. This t_0 is approximately the mean free time between collisions.
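The volume cancellation can be sketched in a few lines (C_1 and C_2 are made-up proportionality constants standing in for the branching rate per volume and the entropy per volume):

```python
# Sketch of the volume cancellation: branches grow as log N(t) = C1 * V * t,
# entropy is S = C2 * V, and branches saturate phase space when
# log N(t0) = S. C1 and C2 are hypothetical constants.
C1, C2 = 3.0, 7.0

def saturation_time(V):
    # Solve C1 * V * t0 = C2 * V for t0; the volume V cancels out.
    return (C2 * V) / (C1 * V)

for V in (1.0, 100.0, 1e6):
    print(V, saturation_time(V))  # t0 = C2 / C1 regardless of V
```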

Model 2: Equilibrium system interacting through surface

The previous model is unrealistic because it assumes the system is perfectly isolated. In reality, every system is entangled with the rest of the universe, and the phase space of the entire universe is much larger than what the branching of a single small thermal system within it could fill. Can this entanglement give the branches room to remain apart? No. For one thing, while this one system is branching, so is the rest of the universe. In the previous model we can take our system to be the whole universe, and the time to phase space saturation is independent of volume, so it is still small. Secondly, even with pessimistic assumptions about the amount of branching in the rest of the universe, as long as the system only interacts with the rest of the universe through its boundary, surface-area-to-volume considerations still favor the saturation of branches in phase space. If the system only interacts with the rest of the universe at its boundary (including radiation crossing the boundary that may have originated deep inside the system), then at least we can say that if two starting states are exactly the same on the boundary at all times, then the state of the rest of the universe will be the same too. Therefore a conservative model of the rest of the universe is that it creates and stores a complete copy of the state of the system at its boundary, in some basis. Then the entropy of the phase space is not constant, but increases as the size of this ledger increases. Since the ledger only records the information on the surface, this increase is proportional to the surface area:

S \approx S_0 + C_0 A t

On the other hand, as discussed before the logarithm of the number of branches increases with volume:

\log N (t) \approx C_1 V t

If \frac {V} {A} > \frac {C_0} {C_1}, which holds for any big enough system, then eventually branch production will overtake the available phase space, even including the additional phase space from the system’s interaction with its environment.
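For a cube of side L this condition and the overtake time can be sketched directly (S_0, C_0, C_1 are hypothetical constants):

```python
def overtake_time(L, S0=10.0, C0=1.0, C1=1.0):
    """Time at which branch growth C1*V*t overtakes the ledger's phase
    space S0 + C0*A*t, for a cube of side L.

    S0, C0, C1 are made-up constants; returns None when V/A <= C0/C1,
    i.e. the ledger's phase space grows at least as fast as the branches.
    """
    V, A = L ** 3, 6 * L ** 2
    net_rate = C1 * V - C0 * A  # branch growth minus ledger growth
    if net_rate <= 0:
        return None
    return S0 / net_rate

print(overtake_time(1.0))   # small cube: V/A = 1/6 < C0/C1, never saturates
print(overtake_time(10.0))  # big cube: finite overtake time
```

Note the overtake time shrinks as the system grows, since volume outpaces surface area.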

Disjointness through macroscopic records

There’s another way both of the above models are oversimplified: I imagined the system in thermal equilibrium and ignored the structure of the state space dynamics. By merely counting branchings I can show that some branches converge with other branches, but it’s still possible that two particular branches remain permanently separate. This is exactly what happens when a quantum event leaves a permanent physical record. It’s unlikely that the two states in which the record differs will lead to the same final state; that’s exactly what a physical record means. So actual measurements in physicists’ experiments are likely to lead to permanent branch splits (even if the records are erased, they are likely to leave some physical imprint). On the other hand, for equilibrium systems any two starting states can reach the same ending state after roughly the ergodic mixing time, so permanently separating branches is not feasible there.

That is, statistically there will often be macroscopic differences that merge together at a later time, but many of the really significant macroscopic differences do split the wavefunction into disjoint branches that never rejoin. It’s hard to draw the exact line between these two cases, and it’s impossible to test experimentally.
