Lubos does so by defining "local" phenomena as those acting on particles that lie in the same light-cone from their common origin (i.e., that a decision made at a point space-like separated from a measurement can't influence the measurement), while stubbornly refusing to acknowledge that this is a far broader definition than the one most people who talk about non-locality in quantum mechanics have in mind. For example, he succinctly sums up his definition of "local" when he states:
When I say that the entanglement always and exclusively exists when the two subsystems have a common origin, it's important to realize "where this claim comes from" and "why we know it's right". It's right because:
1. it follows from the locality of the quantum field theory etc. at the beginning: a sudden creation of correlated bits at spacelike-separated points could be used to send information instantaneously, and that would violate locality (which can be mathematically proven to be impossible)
2. this fact is also compatible with all the experiments that have ever been done.
While this is surely a true statement, the mere fact of a common origin is normally considered proof of locality, in the usual sense of the word, only if the ultimate outcome of the measurements, and not merely their correlation, is determined at the time that the particles have a common origin (a concept that I call "fate").
People discussing non-locality normally mean that measuring one formerly entangled particle allows you to know with certainty something about the properties of the other particle whenever it happens to be measured, even if the particles are some distance apart at the time of measurement, and even though the outcome of whichever measurement took place first (for example, in proper time relative to the point of entanglement) was not predetermined at the time of entanglement.
But it isn't obvious that "fate" is what is going on in the quantum mechanics of entangled particles. If the outcome of the measurement is truly still indeterminate at the point in space and time of common origin (which is what we usually assume in quantum mechanics, for good reason), then the correlation between entangled particles does not fit any conventional definition of locality.
The definition of causality that Lubos uses is more conventional: "The relativistic locality ends up being equivalent to the relativistic causality: the cause must precede its effects, t is less than t′, in all inertial systems." This would preclude what I call the Feynman explanation, in which information goes backward in time from the point of measurement and then forward in time from the point of entanglement (even though this isn't necessarily inconsistent with Lorentz invariance: photons and other massless particles travel at the speed of light and don't experience the passage of time "subjectively", so everything on the path of a massless particle in a light-cone happens simultaneously in its reference frame, and causality isn't a meaningful concept from its perspective).
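As an aside, the frame-dependence that this definition turns on is easy to verify numerically. Below is a minimal Python sketch (using units with c = 1; the event separations and velocities are arbitrary illustrative values of my own) showing that the time order of time-like separated events is the same in every boosted frame, while the order of space-like separated events can be reversed by a fast enough observer, which is why "the cause must precede its effects in all inertial systems" only constrains time-like and light-like connections.

```python
# Order of two events under a Lorentz boost (units with c = 1):
# t' = gamma * (t - v * x). For spacelike separation (|dx| > |dt|)
# there is a boost with |v| < 1 that reverses the time order, so
# "cause precedes effect in all frames" only pins down timelike links.
import math

def boosted_dt(dt, dx, v):
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (dt - v * dx)

# Timelike pair: dt = 2, dx = 1 -> the sign of dt' never changes.
# Spacelike pair: dt = 1, dx = 2 -> the order flips once v > dt/dx = 0.5.
for label, dt, dx in [("timelike", 2.0, 1.0), ("spacelike", 1.0, 2.0)]:
    for v in (0.0, 0.6, 0.9):
        print(label, "v =", v, "dt' =", round(boosted_dt(dt, dx, v), 3))
```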
The outcome of any measurement made in isolation, however, is perfectly random; only the correlation between the paired measurements is fixed.
One possibility is that the values that the entangled particles will take when they are ultimately measured are "fated" at the time of entanglement, even though they won't be learned until much later. This seems inconsistent with the general notion in quantum mechanics (established in many other contexts) that quantum mechanical quantities are indeterminate until measured, and that an unmeasured quantum system, described by a set of probabilities, differs from a system that has been measured. (Sometimes this is called the question of "reality".) The notion of "hidden variables" is similar, although not necessarily identical.
The notion that Bertlmann's socks are anti-correlated (i.e., different in color) whenever they are measured, because he always picks two opposite-colored socks at the start of the day, fits with the idea that the outcome you will get is predetermined at the time of entanglement: even though you have no way of knowing in advance which sock you will measure (making the probability of picking one or the other perfectly random), the answer was predetermined when that sock started on its path to being measured, as was the color of the other sock when it started on its path to being measured. The outcome of the measurement existed at the time that the entangled particles separated, even though it was not known until later.
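As a toy illustration of this "fate" picture, here is a minimal simulation in Python (the color names and sample size are arbitrary illustrative choices): each day's outcome is fixed when the socks are put on, each single observation nevertheless looks perfectly random, and the pair is always anti-correlated.

```python
# Bertlmann's socks: each single observation is 50/50 random, yet the
# pair is perfectly anti-correlated, because the anti-correlation was
# fixed ("fated") when the socks were put on.
import random

def one_day():
    left = random.choice(["pink", "green"])        # fixed at the start of the day
    right = "green" if left == "pink" else "pink"  # opposite color, by habit
    return left, right

days = [one_day() for _ in range(10_000)]
p_left_pink = sum(l == "pink" for l, _ in days) / len(days)
anti = sum(l != r for l, r in days) / len(days)
print(p_left_pink)  # ~0.5: each sock alone looks perfectly random
print(anti)         # 1.0: the pair is always anti-correlated
```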
Another possibility is that the values that the entangled particles will take when they are ultimately measured are determined at the time of the first measurement and "communicated" instantaneously to the other particle when it is measured (implying a superluminal exchange of information). This is what is commonly called "non-locality" by people discussing the subject.
A third possibility is that the values that the entangled particles will take when they are ultimately measured are determined at the time of the first measurement and "communicated" by a message that goes backward in time to the point of entanglement and then forward in time to the other particle when it is ultimately measured. This is what is commonly called a "causality" violation by people discussing the subject (because causality implies that information travels only forward in time).
It is called a "paradox" because, if hidden variables, non-locality, and acausality are all ruled out, then how this happens doesn't fit in our classical-physics-trained brains.
In a key passage he states:
All this confusion began with the flawed 1935 paper by Einstein, Podolsky, and Rosen. Einstein and the two postdocs were thinking in the classical way and they found it unbelievable that the correlations could exist for all the components $\vec{j} \cdot \vec{n}$ simultaneously.
They thought that if the two electrons are guaranteed to have anticorrelated values of $j_z$, they objectively have to exist either in the state $|\uparrow\downarrow\rangle$ or the state $|\downarrow\uparrow\rangle$ before the measurement. But because both $|\uparrow\rangle$ and $|\downarrow\rangle$ predict 50% probability for $j_x = +1/2$ and 50% probability for $j_x = -1/2$, and this "split" applies to each electron, EPR and their followers found it "necessary" for the probabilities of $(j_{1x}, j_{2x})$ being "positive, positive", "positive, negative", "negative, positive", or "negative, negative" to be 25%, 25%, 25%, 25%, respectively.
However, that's simply not what quantum mechanics predicts. Everyone who understands quantum mechanics agrees that the perfect anticorrelation will exist if we measure $j_{1x}$ and $j_{2x}$, too. The wrong assumption in the EPR derivation is classical physics. They assume that the two spins already have some independent well-defined states before they are measured. But they don't. Before they are measured, the two spins are entangled – which is nothing else than the most accurate and most general quantum elaboration on the adjective correlated.
The correlation between the results of measurements is a correlation. The previous sentence is a tautology. There are still some people who try to pretend that the correlation is something else than a correlation even though they use the word "correlation" themselves. We say that the measurements of the two electrons are correlated because the probability distribution $p(j_{1x}, j_{2x})$ for all four possible arrangements of the values of $j_{1x}$ and $j_{2x}$ does not factorize:
$$\nexists\, p_1(j_{1x}),\ p_2(j_{2x})\ :\ p(j_{1x}, j_{2x}) = p_1(j_{1x})\, p_2(j_{2x})$$
The full probability distribution for the two objects (electrons' spins) simply cannot be written as a simple product of two distributions for one object (for the objects separately).
Lubos is basically arguing that the equations of quantum mechanics tell us how reality behaves, that we shouldn't need to impose any further interpretation upon them, and that asking for the mechanism by which quantum entanglement produces anti-correlations is essentially a category error. But he also argues strenuously that non-locality and acausality are not involved, which makes it feel like he is arguing for the "fate" version of the options, even though that somewhat overstates what he is really saying.
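To make the quoted factorization claim concrete, here is a minimal numerical sketch in Python (the variable names and basis conventions are my own illustrative choices, not Lubos's notation). It builds the singlet state, computes the joint probabilities for measuring both spins along x, and shows that the marginals would predict EPR's 25/25/25/25 split, while the actual joint distribution is 0/50/50/0 and does not factorize.

```python
# Joint outcome probabilities for the singlet state, measured along
# the same (x) axis on both sides, plus a factorization check.
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# x-basis eigenstates
plus = (up + down) / np.sqrt(2)
minus = (up - down) / np.sqrt(2)

joint = {}
for s1, a in (("+", plus), ("-", minus)):
    for s2, b in (("+", plus), ("-", minus)):
        amp = np.kron(a, b) @ singlet          # <s1 s2 | singlet>
        joint[(s1, s2)] = abs(amp) ** 2

print(joint)  # approximately {(+,+): 0, (+,-): 0.5, (-,+): 0.5, (-,-): 0}

# Each marginal is 50/50, so a factorized distribution would give 25%
# for each pair: EPR's "classical" expectation, not the quantum answer.
p1 = {s: joint[(s, "+")] + joint[(s, "-")] for s in "+-"}
p2 = {s: joint[("+", s)] + joint[("-", s)] for s in "+-"}
factorized = {(a, b): p1[a] * p2[b] for a in "+-" for b in "+-"}
print(factorized)  # 25% each: p(j1x, j2x) != p1(j1x) * p2(j2x)
```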
Assuming the initial singlet state, quantum mechanics predicts these correlations for all the spin measurements you can think of. There is nothing "paradoxical" about it. There isn't any classical theory (or a "classical model") that makes the predictions. This fact isn't a problem with quantum mechanics or a mystery about quantum mechanics; instead, this fact is a proof that all classical theories are ruled out as theories of Nature. They are wrong. People who keep on defending them may be easily proven to be complete idiots. That's obviously the only right interpretation of the result.
What is the reason of these correlations? According to the right theory – quantum mechanics (e.g. quantum field theory where this EPR experiment may be easily embedded), the reason of the correlation(s) is not an action at a distance. At the beginning, I reminded you of the proofs that there is no action at a distance in quantum field theory!
Instead, the reason of all these correlations – the reason of the entanglement – is the two subsystems' being in the contact in the past.
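Lubos's claim that no classical theory reproduces these predictions can be substantiated with the standard CHSH computation. Here is a minimal sketch, assuming only the textbook singlet correlation E(a, b) = −cos(a − b); the angles and variable names are arbitrary illustrative choices. Any "fate"-style (local hidden variable) model must satisfy |S| ≤ 2, and the quantum prediction exceeds that bound.

```python
# CHSH check: the textbook singlet correlation E(a, b) = -cos(a - b)
# gives |S| = 2*sqrt(2) > 2 for the standard choice of angles, above
# the bound obeyed by every local hidden variable model.
import math

def E(a, b):
    return -math.cos(a - b)  # quantum prediction for the singlet state

a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2), above the classical CHSH bound of 2
```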
Indeed, he hedges in this paragraph:
The same comment applies to the anticorrelation of the spins in the singlet state. They're anticorrelated because they were prepared together. In the ER-EPR correspondence, this anticorrelation (or any entanglement) may be interpreted as a non-traversable wormhole. But such wormholes have to be created locally i.e. have a common origin, too. You create the two "throats" of the Einstein-Rosen bridge and then you may increase the distance between them. But there's no way to "suddenly" create a bridge between two spacelike-separated points!
In other words, he has basically acknowledged that both a "fate-like" interpretation, with the correlations arising in the past at the time of entanglement, and a "non-local" non-traversable wormhole interpretation, so long as the wormhole is created locally, are consistent with observation.
Nobody is actually seriously arguing that entanglement can arise without a common point of contact; the straw man that he argues against is indeed contrary to scientific fact.
The argument that one is committing a category error could arise in one of a couple of ways. First, if there is no way, even in principle, to distinguish different interpretations of what is going on in entanglement, then it would seem that one is asking a nonsensical question.
Second, if we have a true paradox, in which any proposed interpretation leads to a logical contradiction, then we again seem to have asked a nonsensical question. In that case, the question of which of our assumptions is wrong comes to the fore, and the problem is that, canonically, two of our three assumptions may be wrong and we may still reach the right answer; again, it isn't obvious that there is any way to do experiments that consistently tell us that one assumption, and not another, is incorrect.
The Bohmian formulation of quantum mechanics tries to use a "fate-like" interpretation.
Richard Feynman, when he was explaining quantum mechanics to the public, tended to favor the "acausal" approach, in which information can travel both forward and backward in time. I tend to favor this description because:
1. It emphasizes that quantum mechanics does not itself have an arrow of time and treats space-like and time-like separations essentially identically. You can rotate the space-time coordinates of a Feynman diagram and still have quantum mechanically equivalent statements.
The lack of a fundamental arrow of time in quantum mechanics, and Feynman's notion of antiparticles as particles traveling backward in time, also has potentially great utility in explaining the matter-antimatter asymmetry in the universe. My conjecture on this point is that anti-matter goes backward in time from the Big Bang (or, much less commonly, from points after the Big Bang when it is created or annihilated, depending upon your point of view), while ordinary matter goes forward in time. Photons, which are timeless, go in both directions at once. Thus, there is an absence of anti-matter in our universe because most of the anti-matter created around the time of the Big Bang is in a parallel universe before the Big Bang in which time runs in the opposite direction. Feynman's characterization of anti-matter solves this cosmological mystery elegantly, with matter-antimatter pairs on opposite sides of the event horizon of the Big Bang, and without the need for violations of baryon number or lepton number large enough to explain the matter-antimatter asymmetry we observe, which is otherwise far too great to explain with Standard Model sphalerons.
2. It emphasizes the pre-measurement indeterminacy of quantum mechanical objects (illustrated, for example, by double slit experiments; see the sketch following this list), which is of such importance for a variety of purposes in quantum mechanics that it may rightly be considered a general rule.
Now, if one of the particles is measured long before the other, you have some of the same issues as with predetermination at the time of entanglement. But perhaps the timelessness of the photon, or of massless communication of information in the information carrier's frame of reference, eliminates these concerns.
3. It maintains the principle of locality in the stronger sense that I have defined, rather than merely in the weaker sense (existence in the same light cone from a point of common origin) that Lubos prefers to use.
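To illustrate the pre-measurement indeterminacy mentioned in point 2, here is a minimal two-slit toy model in Python (the slit geometry and wavelength are arbitrary illustrative values of my own): adding amplitudes, as quantum mechanics requires when there is no fact of the matter about the path, produces interference fringes; adding probabilities, as we would if each particle had a definite but unknown path, does not.

```python
# Two-slit toy model: adding amplitudes (quantum, no which-path fact)
# produces interference; adding probabilities (a definite but unknown
# path) gives a flat distribution with no fringes.
import numpy as np

d, lam, L = 1e-5, 5e-7, 1.0          # slit spacing, wavelength, screen distance (m)
x = np.linspace(-0.05, 0.05, 7)      # sample positions on the screen (m)
k = 2 * np.pi / lam

r1 = np.sqrt(L**2 + (x - d / 2) ** 2)  # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)  # path length from slit 2
a1, a2 = np.exp(1j * k * r1), np.exp(1j * k * r2)

quantum = np.abs(a1 + a2) ** 2                 # fringes, between 0 and 4
classical = np.abs(a1) ** 2 + np.abs(a2) ** 2  # flat, ~2 everywhere
print(np.round(quantum, 2))
print(np.round(classical, 2))
```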
Also, the mere fact that something doesn't contradict the laws of science doesn't mean that a scientific reality wildly contrary to intuition isn't remarkable and worth a great deal of discussion about its meaning, since that meaning goes to the nature of the universe itself and our assumptions about it.