In the last post, I showed you how a projectile in a superposition of moving to the left or of moving to the right can only be measured to be doing one or the other. But what happens to the wave function of the system when the measurement is made? Does it… does it… COLLAPSE!?
Sounds scary. But it is only scary when it is badly explained.
Today I’ll show you what wave function collapse would mean, what it would require, and what a couple of the alternatives are. Among other things, I’ll show you that:
- The standard way of explaining wave function collapse, which argues collapse is required to avoid a logical problem, is not legitimate;
- If the Schrödinger wave equation is correct, then wave function collapse can never happen (and anything resembling “collapse” is viewed not as a physical effect but as a user’s choice);
- Therefore, if wave function collapse really does occur, then the Schrödinger equation is wrong;
- But if the Schrödinger wave equation is correct, an understanding of why quantum theory predicts only probabilities for multiple possibilities, rather than definite outcomes, is still lacking.
Today’s post uses several previous posts and their figures as a foundation, so I’ll start with a review of the most recent one, with links to others of relevance.
Quick Review
[If this review does not make sense to you, you should definitely first read the posts that are linked below, where all these points are carefully explained.]

I’ll refer extensively to my last post, in which (see Fig. 1) we have
- a projectile initially located near x=0 and able to move only along the x-axis;
- a measuring device in the form of a simple microscopic ball to the projectile's left at x=-1,
- which may in turn be connected to other, more stable measuring devices, such as a Geiger-Müller counter,
- which may, in their turn, be measured by human ears and recorded in human brains;
- an initial wave function which has the measuring device essentially stationary and the projectile in a superposition of going to the right OR to the left.
The wave function of this system of two objects exists in the two-dimensional space of possibilities, with x1 the position of the projectile and x2 the position of the ball. Its behavior over time is as shown in Fig. 2. It has two peaks which initially travel in opposite directions, back-to-back, in the space of possibilities. However, the two peaks behave differently. One peak evolves through a collision of the projectile with the measuring ball, causing the ball to move to the left, while the other peak does not. As a consequence, the peaks do not remain back-to-back; one continues in steady motion to the right, while the other changes direction and ends up lower down (at more negative x2).
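To make the geometry of Fig. 2 concrete, here is a minimal numerical sketch (my own toy model with made-up peak positions and widths, not the post's actual simulation): a two-peak wave function on the (x1, x2) plane, where each peak carries half the total probability.

```python
import numpy as np

# Toy two-peak wave function in the space of possibilities (x1, x2):
# x1 is the projectile's position, x2 the measuring ball's position.
# Positions and widths are invented for illustration.
x1 = np.linspace(-3, 3, 301)
x2 = np.linspace(-3, 3, 301)
X1, X2 = np.meshgrid(x1, x2, indexing="ij")

def gaussian_peak(cx1, cx2, width=0.3):
    return np.exp(-((X1 - cx1)**2 + (X2 - cx2)**2) / (2 * width**2))

# Equal superposition: projectile right of center OR left of center,
# with the ball sitting at x2 = -1 in both possibilities.
psi = gaussian_peak(+1.0, -1.0) + gaussian_peak(-1.0, -1.0)

prob = np.abs(psi)**2
prob /= prob.sum()        # normalize total probability to 1

# Probability that the projectile is right of center (x1 > 0):
p_right = prob[X1 > 0].sum()
print(round(p_right, 2))  # ≈ 0.5: each peak carries half the probability
```

The two peaks are well separated, so interference between them is negligible, and summing the probability on either side of x1 = 0 recovers the 50-50 split.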

[Again, if any of this does not make sense to you, you should definitely first read the previous post, and perhaps more of the posts linked above, where all these points are carefully explained.]
How do we understand what this is showing us? As usual, we can refer to pre-quantum physics to guide us. In pre-quantum physics we would have said there are two possibilities, shown in Figs. 3a and 3b, with equal probability of occurring. Fig. 3a shows what happens if the projectile moves leftward;
- the left panel shows the projectile and ball in physical space;
- the right panel shows the evolution of the system in the space of possibilities.
Notice that the system’s motion changes direction when it crosses the orange diagonal line, where the projectile and ball are in the same location and collisions can occur. Meanwhile, Fig. 3b shows us that if instead the projectile moves rightward, then its motion — and that of the system, also — is steady, and the ball remains largely stationary.

OR

Compare the right panels of Figs. 3a and 3b with the wave function in Fig. 2; the motion of the two peaks of the latter combines the motion of the stars in the former.
The Classic, Wrong View of Collapse
If you’ve ever read about wave-function collapse, you may well have seen it explained as follows. This approach is badly misleading, but I need to review it because I know many readers will have seen it. Later we’ll see both what’s right and what’s wrong with it.
Here we focus on the projectile alone, and ignore (unwisely) the ball that measures it. Before the measurement takes place, the wave function for the projectile looks like this, with its two peaks, telling us the projectile may be to the left of center OR it may be to the right of center.

But what does it look like after the measurement carried out by the measuring ball?
Suppose our measurement did not detect the projectile to the left of the center — the measuring ball remained stationary. Then, given the two-peaked structure of the initial wave function in Fig. 4, the projectile must now be located to the right of center, and now has essentially zero probability to be found to the left of center.
Next, suppose we decide, immediately after the first measurement, to make a second measurement of where the projectile is. Based on the result of the first measurement and the highlighted conclusion in the previous paragraph, the second measurement should also find the projectile’s position to the right of center.
But this tells us that the projectile’s wave function, after the first measurement but before the second, can no longer look like Fig. 4. If the wave function still looked like Fig. 4, it would predict that our second measurement, like our first one, would have a 50% chance of finding the projectile left of center and a 50% chance of finding it right of center. That’s in complete contradiction to the highlighted conclusion that we drew from our first measurement, which instead implies that our second measurement, if done immediately thereafter, has a 100% chance of finding the projectile to the right of center.
In short, keeping the wave function of Fig. 4 would lead to a logical inconsistency!
Instead, the wave function after the first measurement must presumably now look more like Fig. 5, where there is a nearly 100% chance of finding the projectile to the right of center. Again, this form of the wave function seems logically required during the period after the first measurement is performed but before the second measurement.

How, exactly, did simply measuring the projectile cause its wave function to suddenly change — or “collapse”, as people say — from the form in Fig. 4 to the form in Fig. 5, with the left peak disappearing and all its probability transferred to the right peak? And if we want to say that it didn’t collapse in this way, how is a logical contradiction avoided?
This becomes even more worrisome if we imagine that we had waited a long time before the first measurement, so that the two peaks of the wave function were a mile apart, or a million miles apart, when it was performed. How does measuring (which, in this case, involves failing to detect) the projectile at the location of the ball cause a drastic change in the projectile’s wave function a million miles away?
Stop, stop, stop! This entire line of argument, and the logical inconsistency that it points to, is completely unconvincing and totally misleading. That’s because particles don’t have wave functions — systems do. And our projectile is not an isolated system which can be treated as having its own wave function; if it were, we could never have performed the first measurement in the first place!
Instead, if we want to have a hope of understanding this properly, we need to include the projectile and the device that made that first measurement in a single system, with a combined wave function. Fortunately, we’ve already done this — in fact we did it in the last post, which I reviewed in Figs. 1-3 above — so we just have to look at the answer and think about it. We’ll see that there are still conceptual issues, but no logical contradictions.
Toward The Correct View
We have already seen that once we include the measurement device in the wave function, and even start to take seriously the fact that we must amplify the initial measurement (as in a Geiger-Müller counter) to assure that it can be recorded permanently, we need to be using a wave function for a much larger space of possibilities than the one used in Figs. 4 and 5; at a minimum we need the two-dimensional space used in Figs. 1-3. I’m going to ignore the amplification step here; it’s too complicated, and we won’t really need to look at it in detail. But remember that it is there, because it will have some conceptual implications.
Specifically, Fig. 2 shows us that if Schrödinger’s equation is correct, the wave function does not collapse when we include a measurement process.
- The wave function initially has two peaks representing the projectile moving in one direction (to positive x1) or the other (to negative x1), all while the ball sits still at left, ready to measure, at x2=-1.
- The peak that moves toward positive x1 represents the fact that if the projectile moves right, it misses the ball and the ball does not respond — nor does the Geiger-Müller counter, our ears, or our brains.
- The peak that moves to negative x1 represents the fact that if the projectile moves left, something striking happens when x1 and x2 are both at -1; this is roughly where the collision of the projectile and ball occurs.
- The collision causes the second peak to move toward more negative x2, representing the motion of the ball to the left in physical space.
- Then, although not shown, the ball’s motion gets drastically amplified as sketched here — perhaps in a Geiger-Müller counter, in which a billion or so electrons are liberated from their atoms and create a macroscopic electric current.
- The response (or lack of response) of the Geiger-Müller counter can leave a permanent record in the world, as a mark upon a piece of paper, or in an electronic recording device, and/or in a person’s brain.
To understand this, we need both to understand Fig. 2 and to think beyond it. During the amplification process following the collision of the projectile with the ball, the second peak of the wave function isn’t merely moving across the measly two-dimensional space of possibilities depicted in Fig. 2. Instead it is moving across a truly gigantic space of possibilities with billions (at least!!) of dimensions! This high-dimensional space has many, many axes, representing the positions of all the liberated electrons in the counter and all of the objects used to record the information.
But still, within that gigantic space, the peak remains a peak — because even though there’s only a 50% chance that the projectile went to the left, there’s a near-100% chance that if it goes left, then the ball will respond AND amplification will occur AND the Geiger-Müller counter will click AND my eardrums will respond AND my brain will record the event in memory.
[This way of saying things is too glib. Because of the complexity of the interactions involved, the original peak, carrying 50% probability, actually breaks up into a set of many mini-peaks, thanks to the liberated electrons taking many different paths in physical space. But all of these mini-peaks will cause the counter to click, and their total probability taken together is 50% — and so the measurement outcome, at a macroscopic level, is just as I originally stated it.]
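That chain of ANDs is ordinary probability arithmetic: the 50% for "projectile goes left" gets multiplied by a string of conditional probabilities, each extremely close to 1. A quick sketch, with invented reliability numbers standing in for the real physics:

```python
# Invented numbers, purely to illustrate the chain of near-certainties:
# P(left) * P(ball responds | left) * P(amplification | ball responds) * ...
p_left = 0.5
chain = [0.999, 0.999, 0.999, 0.999]   # ball, counter, eardrum, memory

p_all = p_left
for p in chain:
    p_all *= p

print(round(p_all, 3))   # ≈ 0.498: still essentially the original 50%
```

Because each link in the chain is nearly certain, the joint probability of the whole sequence of events stays almost exactly at the original 50%, which is why the peak remains a single coherent peak even in the gigantic space of possibilities.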
The other peak tells us that there’s a 50% chance that the projectile went to the right. If so, then neither the ball nor the Geiger-Müller counter will respond, and my brain will record the counter’s silence.
In this simple situation with a perfect measuring ball, we start with two peaks in the wave function, and after the initial measurement there are still two peaks, as you can see in Fig. 2. The wave function has certainly not collapsed; although the measurement can redirect peaks (or break them into multiple peaks), the Schrödinger equation does not eliminate peaks in the wave function and does not move the probability they carry to other, distant peaks.
However, this doesn’t necessarily address the following issue. It’s all fine and good to have an evolving wave function which gives me probabilities for the result of the first measurement, and then goes on to tell me the probabilities for the second measurement once the result of the first measurement is determined. But why can’t quantum physics just get serious for once, and actually predict the result of the first measurement? Why can it only give us probabilities? And where do those probabilities really come from?
If we want to have answers to these questions, we might need wave function collapse, or something like it.
In the following three sections, I’m going to discuss three of the various approaches to dealing with this problematic issue and interpreting what the equations might be telling us. I’m not going to advocate for any one of them; they’re all consistent with current data, at least in principle, and the first and last are consistent with the Schrödinger equation. One useful way to keep them straight is to look at how each deals with
- potentialities — those paths through possibility space for which the wave function is not vanishingly tiny, and which therefore have a reasonable probability of actually occurring — and
- actualities — the one or more potentialities that actually do occur in the real world.
| Interpretation | Standard Probabilistic | Wave Function Collapses | Many Worlds |
| --- | --- | --- | --- |
| Potentialities | Many | One | Many |
| Actualities | One | One | Many |
| Conceptual Issues (incomplete list) | Maybe an incomplete description of nature? Is there an underlying non-probabilistic reality that explains where the probabilities come from? | When and how does wave function collapse occur, and how do the expected probabilities emerge? What equations should replace the Schrödinger equation? | What does having many “universes” mean, and can it be verified? When do two “universes” actually separate? Where do the expected probabilities of predictions come from? |
Consistent but Incomplete?
Let’s start by considering a minimalist view. You may well find it unsatisfying. [But to quote a certain teenager I know, “maybe that’s a you problem.” 😉 ]
We’ll just accept that the wave function is a device for computing probabilities. That’s all it does and all it is supposed to do. Results of individual measurements on a physical system are random, but repeated attempts to perform the same experiment will give results that are distributed according to the probabilities assigned by wave function calculations. And we don’t worry about the details in between one measurement and the next; we just know what the system is doing when we make measurements, and in between we don’t and presumably can’t know.
Importantly, in contrast to the earlier argument for wave function collapse (see Figs. 4 and 5), no logical inconsistency arises here. The wave function’s two peaks in Fig. 2 imply that after the first measurement but before the second, the following logical statements are true:
- if the ball did not respond, then the projectile is on the right; and
- if the ball did respond, then the projectile is on the left.
The first of these if…then… statements is recorded in the peak on the right (see Fig. 6), while the second corresponds to the peak at bottom left. As we see in the wave function’s shape, these logical statements about the initial measurement are correct, no matter what its outcome actually was.

And this means that if we quickly do a second measurement, then
- we will only find the projectile on the right (x1 > 0) if the first measurement observed the ball where it originally was (x2 very near -1).
- we will only find the projectile on the left of center (x1 < 0) if the first measurement observed the ball recoiling from a collision (x2 < -1).
Thus we will never find a logical contradiction.
We would indeed have a contradiction if the ball remained where it was (at x2 near -1) AND the projectile were found on the left (x1 < 0). But that situation, whose location in the space of possibilities is circled in red in Fig. 7, lies well outside either peak — and thus its probability is exceedingly small, effectively zero.

Compare that with the argument around Figs. 4 and 5, which claimed there was a logical inconsistency. That inconsistency arose from using a wave function that did not include the measuring device. Without the measuring ball, the space of possibilities had only one dimension (x1), not two dimensions (x1, x2); but it’s the second dimension that, by keeping track of the ball’s position x2, cleanly separates the circled area in Fig. 7 from the peak that lies below it. If you can’t distinguish these two regions on the plane because you’ve compressed the plane to a line, as in Figs. 4 and 5, then you will naturally become confused. The confusion is only avoided if one includes the measuring device in the wave function.
What Is the Apparent Collapse of the Wave Function?
Staying in this two-dimensional plane gives us a way to think about how we might fix the invalid wave function collapse argument of Figs. 4 and 5.
Suppose that we assume, or that we know, that the measuring ball did not respond to the projectile, and thus remains at x2 near -1. In this case it is perfectly fair for us to ignore everything in the space of possibilities that has x2 much larger or much smaller than -1, because those possibilities are hypotheticals that could have happened but are known to us (or assumed by us) not to have happened. Ignoring possibilities that are known (or assumed) not to have occurred is perfectly okay for the purpose of calculating the probabilities of future events that we might observe. [In basic logic, the question is: given known (or assumed) facts F, what is the probability of a specific possibility P?]
This logic then restricts our attention to the region between the two blue lines in Fig. 8. Only one of the two peaks in the wave function lies in this region. So if we write only the part of the wave function which remains relevant to us, given our knowledge or assumption about the measuring ball, we can ignore the other peak. The resulting wave function needed to calculate the system’s future probabilities will look similar to the one in Fig. 5, not the one in Fig. 4, although in principle it remains two-dimensional and contains the additional information that the measuring ball did not move from x2=-1.
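The restriction step can be sketched numerically (a toy model in the spirit of Fig. 8, with made-up peak positions and widths, not the post's actual calculation): zero out the part of the probability distribution that conflicts with the known fact that the ball stayed near x2 = -1, renormalize what remains, and check that essentially all of the surviving probability has the projectile at positive x1.

```python
import numpy as np

# Post-measurement toy wave function on the (x1, x2) plane: one peak has
# the ball undisturbed at x2 = -1 with the projectile at positive x1; the
# other has the ball recoiling (x2 well below -1) with the projectile at
# negative x1. All numbers are invented for illustration.
x1 = np.linspace(-4, 4, 401)
x2 = np.linspace(-4, 4, 401)
X1, X2 = np.meshgrid(x1, x2, indexing="ij")

def peak(cx1, cx2, width=0.3):
    return np.exp(-((X1 - cx1)**2 + (X2 - cx2)**2) / (2 * width**2))

psi = peak(+1.5, -1.0) + peak(-1.5, -2.5)   # "missed the ball" OR "collided"
prob = np.abs(psi)**2
prob /= prob.sum()

# Before restriction: 50-50 between projectile-right and projectile-left.
print(round(prob[X1 > 0].sum(), 2))          # ≈ 0.5

# Condition on the known fact that the ball did not move (x2 near -1):
kept = np.abs(X2 + 1.0) < 0.5
conditional = np.where(kept, prob, 0.0)
conditional /= conditional.sum()             # renormalize over what remains

print(round(conditional[X1 > 0].sum(), 2))   # ≈ 1.0: only the right peak survives
```

Nothing physical happened to the distribution in this computation; we simply discarded possibilities inconsistent with what we know and renormalized, which is exactly the logical move described above.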

It might seem as though the wave function really has collapsed from Fig. 4 initially to Fig. 5, since there is now only a peak localized at positive x1, with no remnant at negative x1. But the wave function did not in fact collapse as a matter of physics. We simply chose, for our own justifiable reasons, to ignore most of the wave function in Fig. 8.
In this view, wave function “collapse” is logic, not physical evolution. It isn’t a natural process that happens at a particular moment in time, or that literally transfers information from the far left to the far right of the x axis. We ourselves “collapsed” — or rather, “restricted” — the wave function by declaring a lack of interest in some of the possibilities that the original wave function describes, thereby cutting away most of the space of possibilities. We ignore the peak at the lower left of Fig. 8 because it applies to a set of possibilities that we have already rejected.
It’s perfectly legitimate for us to make choices of this kind. Indeed, if you look back over my recent dozen posts, I have been making such choices over and over again, picking which wave function to study for one purpose or another. We shouldn’t get upset that the Schrödinger equation didn’t predict our choices; after all, it didn’t predict what experiments we were going to do, either.
And so, from this perspective, there’s a logically consistent Schrödinger equation that tells us how wave functions evolve. We ourselves can always choose to declare assumptions or learn facts, and use those to restrict the space of possibilities to a smaller one, in which the wave function may have a more restricted shape. When this happens, the wave function naively might seem to have collapsed, but in fact it is we who have changed our questions.
But even if you accept this viewpoint, it’s not the end of the story. There are two more puzzles well worth thinking about.
If Many Things Can Happen, Why Do We Observe Only One?
Actually the first one really isn’t a puzzle; it only sounds like one at first.
Let’s say we’re fine with the wave function telling us the probabilities for one thing or another to happen, and agree that it does so without logical contradiction. Nevertheless, you and I only experience one thing happening. Doesn’t the Schrödinger equation fail to explain that? It gives two peaks in Fig. 2, and seems to have no explanation for why we only experience one or the other.
But in fact, it does explain it. Superposition is an OR, not an AND. The fact that you and I only experience one thing happening is exactly what the Schrödinger equation and the wave function say: that when you measure something that has multiple possibilities under a wave function — that is, a superposition of different possibilities — only one result will be observed. We can already see this with the measuring ball, which only “experiences” being struck or not being struck; it doesn’t experience both, even though the wave function’s two peaks describe the possibility of both. If you include the “macroball” of my 2nd post on measurement in the system, and/or maybe an entire Geiger-Müller counter and even our ears and brains, this would remain true:
- either the system is in one peak which gives a high probability for the projectile to go left and for us to hear the counter click,
- or the system is in the other peak which gives a high probability for the projectile to go right and for us not to hear the counter click.
So in fact, rather than this issue being a bug, it’s a feature; the Schrödinger equation and the simple probabilistic interpretation do indeed tell us that out of all the possible realities that the wave function tells us are probable, we will only experience one.
Why Can’t Quantum Physics Do Better Than Probabilities?
A much more serious concern, dating back to 1911 and Einstein, who (yet again!) was the first to notice the potential problem, is that nothing in the equations can tell us whether the projectile goes left or right, and whether the ball consequently does or does not respond. The direction of the projectile really is purely random, according to the Schrödinger equation.
That, in turn, would seem to suggest that quantum physics is a theory that is missing information. This is typical in situations where we can only predict probabilities.
If I flip a coin, I usually view it as having a 50-50 chance of landing tails or heads. But that’s not really true. The actual process of tossing the coin in a particular way and in a particular location, and the details of the floor in which it lands, could be measured carefully, and a sufficiently motivated physicist who measured all the forces exerted on the coin by the hand that tossed it and by the floor on which it landed, as well as that of any air resistance, could figure out which tosses of the coin would land on heads and which on tails. The flip of the coin seems random, but only because we’re ignorant and/or careless; it’s not really random at all.
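For contrast with quantum randomness, here is a deliberately crude deterministic "coin" (my own invented model, ignoring air resistance and bouncing): given the same toss parameters, the outcome is always the same, and the apparent randomness comes entirely from our ignorance of those parameters.

```python
import math

# Toy deterministic coin toss: launch speed v (m/s) and spin rate
# omega (rad/s) fully determine which face shows at landing.
# The model is invented for illustration; a real toss is messier.
def coin_face(v, omega, g=9.81):
    t_flight = 2 * v / g                        # time in the air
    half_turns = int(omega * t_flight / math.pi)  # completed half-rotations
    return "heads" if half_turns % 2 == 0 else "tails"

print(coin_face(3.0, 40.0))
print(coin_face(3.0, 41.2))   # a slightly different spin can flip the result
print(coin_face(3.0, 40.0) == coin_face(3.0, 40.0))  # same inputs, same face
```

The outcome is sensitive to the inputs but never random: a physicist who measured v and omega precisely could predict every toss, which is exactly the kind of hidden determinism that, Einstein suspected, might lie beneath quantum probabilities.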
Probability is useful, normally, when there is a completely knowable chain of events, but we happen not to know everything about it. In quantum physics we seem to have a logically consistent theory of probabilities. Is there a completely knowable chain of events — a deterministic process — that lies behind its probabilistic exterior?
If so, then the current quantum theory is incomplete. Schrödinger’s equation might be exactly (or largely) correct, but perhaps there is something missing from it (“hidden variables”, in the jargon) that needs to be added — missing information that lies behind the probabilities and that, if known, would make precise predictions possible, rather than probabilistic ones. This idea, that quantum physics is correct but incomplete, would have gotten Einstein’s vote, I believe.
Collapse: Schrödinger is Wrong?
However, everything I have written in the last couple of sections assumes that the Schrödinger equation is correct. Maybe it’s not.
I’ve shown you that wave function collapse is not required for a consistent theory of quantum physics; there’s no logical problem that makes it necessary. Nevertheless, wave function collapse might still occur, and might conceivably explain the probabilistic nature of the current quantum theory.
Maybe the evolution shown in Fig. 2 is not correct. Even if the wave function at the start of Fig. 2 remains the same, with its two peaks indicating two possibilities with 50% probability, perhaps the true equation of quantum physics causes the wave function to lose one of its two peaks, as shown in Fig. 9. With only one peak — a single possibility with 100% probability — the wave function would then make a far less ambiguous prediction. In other words, maybe the correct equations do cause the wave function to collapse naturally, sometimes to the peak on the right of Fig. 9, and sometimes to the other peak at bottom left.

Perhaps hidden variables of some type help the equation choose which peak to select in each run of the experiment, so that the results come out 50-50 overall but each run is predictable. Or maybe the outcome is genuinely random, but there’s only one peak in the final wave function, not two, which then affects future probabilities.
Such suggestions are not logically inconsistent, nor proven to be inconsistent with data. They require that the Schrödinger equation be only an approximation to the truth; but lots of famous equations have turned out to be imperfect when examined with sufficient care. Newton’s second and third laws of motion and his law of gravity are all in that category.
Unfortunately, the notion of collapse raises many difficult questions:
- If this collapse happens, when does it happen?
- when the measurement begins?
- when the measurement ends?
- halfway through the measurement?
- What if the measurement isn’t completed immediately? Suppose I’m on vacation when my Geiger-Müller counter records a passing electron, and I don’t find out about it for weeks? Does the collapse happen before I get home, or only when I look at the readout?
- What equations govern the collapse?
- Does the collapse happen suddenly, or does it take a certain amount of time?
- If the latter, how long?
- How far must a measurement proceed before collapse begins?
- Can we measure the partial collapse when it is halfway done?
- Does collapse sometimes fail to complete, and can we see signs of this?
- If the parts of the wave function are millions of miles apart, does the collapse happen faster than the cosmic speed limit (a.k.a. the speed of light) would allow, or not?
- Since different observers who move relative to one another have different notions of time, and can even disagree as to which of two events happens first, can they disagree on the causes and effects of collapse?
- Does nature collapse the wave function only when an actual measurement is being performed? (and what is an “actual measurement”?)
- If an ordinary non-measurement interaction occurs, such as a collision between two atoms in a gas, how does nature know not to collapse the wave function in this case?
- Does the collapse still happen when a measurement fails partway, perhaps due to a breakdown in the device, such as an interruption of the amplification?
- What if we ourselves decide, halfway through a measurement, to abandon it? For instance, what if we refuse to read out the Geiger-Müller counter, so that its result is ultimately not recorded?
Some of these questions overlap. But this is not the complete list, either.
None of these questions can be answered without a detailed, consistent theory of how wave functions might collapse — i.e. a set of precise equations and concepts which make predictions for experiments. There have been attempts, but I have not seen one that I find particularly appealing. Also, so far there have been no experimental indications that there is anything incorrect about the Schrödinger equation (other than the very fact that it predicts only probabilities and not specific outcomes).
Oh, and by the way, if we interpret wave function “collapse” as I described above in the context of Fig. 8 — as a restriction that we ourselves place based on our own assumptions or knowledge — then none of the questions listed above even need to be asked, much less answered. There are no new equations; we restrict whenever we want; doing so doesn’t affect the measured objects; there’s no worry about something happening faster than the cosmic speed limit; etc.
Because of this — and because there’s no great theory of wave function collapse that has attractive math, is consistent with Einstein’s special relativity and with quantum field theory, and answers the majority of the long list of questions that I’ve listed above — I’m personally pretty skeptical that the idea of a physical collapse of a wave function can provide a workable alternative to existing quantum theory. [But that might just be a “me” problem.]
Everything, Everywhere, All At “Once”?
Then there’s the many-worlds interpretation, which originates with Hugh Everett. In this interpretation of Fig. 2, the Schrödinger equation is correct, and both peaks that appear in the wave function represent things that do happen: the projectile goes to the right, leaving the measuring ball alone AND it goes to the left, hitting the measuring ball. It does this in two different “universes” — two strands of history that split at the time of the interaction between the projectile and the ball, or perhaps even earlier, when the projectile is set in a superposition and starts moving in one direction or the other. Each of these strands carries off one of the possibilities and runs that possibility’s future.
It’s almost as though nature is a chess computer that takes the current board position and runs all the possible moves and all the possible responses and all the counter-responses — runs, in short, all possible chess matches — and views them all as equally real, rather than viewing them as a set of possible matches of which only one will actually be played. It takes all the reasonable potentialities and claims they are truly actualities, each one in its own version of reality.
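The chess-computer analogy can be caricatured in a few lines (a toy bookkeeping exercise, not a claim about how strands are really counted, which is far subtler): after n binary measurements there is one strand per sequence of outcomes, and the total probability across all strands still sums to 1.

```python
from itertools import product

# Toy many-worlds bookkeeping: after n independent 50-50 measurements,
# keep one strand of history for every possible outcome sequence,
# rather than selecting a single actual one.
n = 3
strands = list(product(["left", "right"], repeat=n))
weights = {s: 0.5**n for s in strands}   # each strand carries 1/2**n

print(len(strands))              # 2**3 = 8 strands
print(sum(weights.values()))     # total probability is still 1
```

The exponential growth in the number of strands, and the question of what their weights mean if every strand is equally "actual", are exactly the conceptual issues raised below.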
In this view, superposition now becomes both AND and OR. In any one universe, it’s an OR; in our universe, our strand of history, the projectile goes left OR it goes right. But viewed across all the “simultaneous” or “parallel” universes, it’s also an AND; if in our universe it goes right, there’s another comparable universe in which it goes left. (Admittedly these universes are neither simultaneous nor parallel nor universes, so we could use better terminology here.)

In any one such world, a measurement device, including a human brain, will still never experience two contradictory measurement outcomes. The set of all strands of history will include all the possible outcomes, but no person’s brain will ever remember anything but a logically consistent past, the story of one particular strand. That’s because we ourselves live the OR in one strand, even though the wider universe/quantum-multiverse/meta-universe — the combination of all those strands — lives out the AND.
Is this picture logically consistent? Possibly, though actually counting strands is a nasty business; what does it mean for a strand to split, when does it split, etc? And how many strands are actually found in Fig. 10? To say there are only two, one for each peak, is presumably far too simple; after all, the peaks in a mountain range cannot be counted unambiguously. (Do we count every protruding rock? every ant hill?) What about the regions between the peaks, where the wave function is very small, but not zero? What was the situation at the start of the universe — was there just one strand to begin with, or were there always many?
Some of the conceptual questions that are naturally raised seem to me rather similar to those that arise for wave function collapse, though they don’t necessarily require the invention of new equations. [That said, one might wonder if grounding the idea might require a quantum theory with gravity, for which we don’t yet have a complete set of equations and concepts.]
But beyond these issues, a serious limitation on many-worlds is that it risks being metaphysics rather than physics. First of all, it says all physically possible worlds exist, although many are highly improbable — that potentiality almost guarantees actuality. Second, it’s impossible to verify the idea experimentally, because by the very construction of this interpretation, we can’t observe any strands of the universe except the one we’re living within. So I’m not sure whether this interpretation really helps us out compared to the one we started with; it might be a matter of personal preference rather than of any physical difference.
——-
In this brief tour of just a few of the issues with superposition in quantum physics, I hope I have helped clarify what some of the questions are, even though I have posed far more questions than I have answered. I look forward to your further questions, proposed answers, and criticisms in the comments.
23 Responses
You don’t mention Bell, of Bell’s inequalities fame. But they are a proof that there are no hidden variables. Dr. Sue Feingold
Not so. Bell gives proof (at best) that hidden variables, if they exist, are not *local* in physical space. This does not seem like a very serious restriction, especially after the gauge/string correspondence in which entire dimensions of space can be emergent. But even this restriction needs a revisit.
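For concreteness, here is a standard statement of the point (the CHSH form of Bell’s inequality, quoted for the reader’s benefit rather than taken from the comment). For measurement settings a, a′ and b, b′ on two separated particles, with E the correlation of outcomes, any *local* hidden-variable theory obeys

```latex
S \equiv E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,
```

while quantum mechanics permits values of |S| up to 2√2. Experiments observe |S| > 2, which is what rules out local hidden variables while leaving nonlocal ones untouched.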
Yeap, Schrödinger was wrong! Like Leonard Susskind says, “we need to start over”.
Is it correct to interpret this post as saying that the “collapse” of the wave function is due to the measuring particles changing the boundaries of “De Sitter space” and hence affecting the initial wave function, i.e. that the end result is a totally different wave function?
Maybe we are measuring in the wrong position? We need to place the sensors “outside” the system we are observing, with minimal interactions, or at least interactions that we can then factor out, leaving us with the characterization of the phenomena.
Yes, Susskind was talking about string theory, but that theory is probably heading in the right direction, as opposed to “probability space”.
Thanks Matt. You wrote: ‘But the wave function did not in fact collapse as a matter of physics. We simply chose, for our own justifiable reasons, to ignore most of the wave function.’
Could it be something like QBism, where observers update their beliefs after making a measurement (our own justifiable reasons)? Could it be that reality is, in this context, a web of relationships between us and ‘things’, where the wave function is a description of a single observer’s knowledge? Thank you
Maybe. I’m not a QBism expert. In general, I’m not an interpretation expert. I’m an expert in the equations and how they work, and I don’t find trying to interpret them very satisfying — perhaps because I tend to think that merely reinterpreting the equations is, by itself, not going to lead us to deeper understanding.
Hi Matt, I may be misunderstanding all of this, but regarding the first of your three approaches, and in particular the situation when we are between the first and second measurements, and:
“We simply chose, for our own justifiable reasons, to ignore most of the wave function in Fig. 8.”
Doesn’t that mean that the wavefunction is now failing to do its job (allowing us to calculate probabilities), since you now need to add in additional information (the outcome of the first measurement)?
Doesn’t that also violate a basic idea that everything that is knowable about an object is contained in the wavefunction (whereas the outcome of the first measurement is not contained in the wavefunction in fig 8)?
Further, if we propagate forward in time through a few billion further interactions (and thousands of “measurements”), wouldn’t we start needing to add in so much additional information (not contained in the wavefunction) that the basic quantum mechanics would become useless?
If so, doesn’t this suggest that, in order to be able to calculate anything, we do need, in practice, something equivalent to a “collapsed” wavefunction? That is, we need some way of collapsing down to the particular branches of the wavefunction that are, de facto, relevant to our observed universe?
Indeed, isn’t that pretty much what you’re doing when you start with a fairly simple wavefunction (as in fig 2), rather than starting with a vastly unwieldy wavefunction that contains unfathomable numbers of particles and their whole history of interactions since the distant past? In short, doesn’t Fig 2 start with a “collapsed” wavefunction?
Okay, you’ve asked five questions, each of which requires a long discussion. I won’t be able to answer this today. I’ll try to get to it. But I’ll address the first: no, the wave function calculates precisely those probabilities that you’ve asked of it: if you don’t include the result of the 1st experiment, then the result of the 2nd experiment has probability 50-50; if you do include the result of the 1st experiment, which is a conditional probability, that probability is 100-0, exactly as it should be. You use the full wave function if you want the probability independent of the 1st experiment; you use the restricted wave function if you want the probability conditional on the result of the 1st experiment.
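The arithmetic in this answer can be sketched numerically. The following is an illustrative toy model, not the post’s actual wave function: the two branches, their labels, and their equal amplitudes are assumptions made for the sake of the example.

```python
# Toy Born-rule calculation for two perfectly correlated measurements of a
# projectile in a 50-50 superposition of going left ("L") or right ("R").
# Each branch is labeled (direction, record of first measurement); the first
# measurement is assumed to correlate perfectly with the direction.

amp = {("L", "L"): 2 ** -0.5,   # amplitude of the "went left" branch
       ("R", "R"): 2 ** -0.5}   # amplitude of the "went right" branch

def prob(pred):
    """Born rule: sum |amplitude|^2 over branches satisfying the predicate."""
    return sum(abs(a) ** 2 for branch, a in amp.items() if pred(branch))

# Probability the second measurement finds "L", ignoring the first result:
p_second_L = prob(lambda b: b[0] == "L")                      # ~ 0.5

# Probability of "L" *conditional* on the first measurement having read "L":
p_first_L = prob(lambda b: b[1] == "L")
p_second_L_given_first_L = (
    prob(lambda b: b[0] == "L" and b[1] == "L") / p_first_L)  # ~ 1.0

print(p_second_L, p_second_L_given_first_L)
```

Using the full dictionary of branches gives the unconditional 50-50 answer; restricting to the branches consistent with the first result (the “restricted wave function” of the reply) gives 100-0.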
A nitpick. When you say this and other things below it:
> Then there’s the many-worlds interpretation, which originates with Hugh Everett. In this interpretation of Fig. 2, the Schrödinger equation is correct, and both peaks that appear in the wave function represent things that do happen: the projectile goes to the right, leaving the measuring ball alone AND it goes to the left…
You are projecting too much onto the wave function in a way that is a disservice to the Everettian view, which is simply that the wave function is ontic (i.e. a physical thing that exists), full stop. That is, the view is NOT that a “projectile” goes both to the left and right; it is NOT that “all possibilities are calculated”; it is NOT that the universe “splits”; it is NOT that “universes exist in parallel”. It is only that there is unitary evolution of a wave function.
I’m sure you know all this (for example I recognize you have a parenthetical regarding the “parallel universe” terminology), so this is just for the benefit of the reader: the “many worlds” view is just that the wave function is a real physical wave (in Hilbert space to be clear), which I think is important to demystify confusion around the concept. All the loaded words like “projectile” or “splitting” or “parallel universe” are epiphenomenal concepts that depend on how we choose to “coarse-grain” a wave function into decoherent macroscopic pointer states, an idea not altogether foreign to how we would have to approach even a classical wave theory describing macroscopic objects, in order to approximate them as distinct rather than as part of a continuum.
So for example when you say “it says all physically possible worlds exist”, this is basically true but perhaps misleading to the reader. A better description that is less misleading is that “it says that the wave function evolves unitarily, which involves tensor products that make the wave function grow more and more complicated and higher dimensional with time.” This is less misleading in the sense that the statement is similarly true for other interpretations of quantum mechanics that seek to describe larger and larger systems; the fat can only be cut by a collapse postulate.
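Schematically (notation mine, not the commenter’s), the point is that an ideal measurement is nothing but unitary entangling evolution of the joint wave function of the projectile and the measuring device:

```latex
\big( \alpha\,|L\rangle + \beta\,|R\rangle \big) \otimes |M_0\rangle
\;\xrightarrow{\;U\;}\;
\alpha\,|L\rangle \otimes |M_L\rangle \;+\; \beta\,|R\rangle \otimes |M_R\rangle
```

No collapse appears anywhere; the “worlds” are just the decoherent terms in the final sum, picked out by a choice of coarse-graining.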
Your points are well taken. However, I do not think that the idea that “the wave function is a real wave in Hilbert space” is a very useful place for a person first encountering the many-worlds interpretation to start. After all, consider Fig. 2: how are we to translate from the wave in that space to something that has intuitive meaning? You have a lot of work to do if you start with a wave in a gigantic abstract space and then want to explain where the experience of local physical space is going to come from. I think it is better to work upward… for instance, I still haven’t addressed issues of change-of-basis, and we need that to see the full implications of taking the wave function to be reality.
And I’m still not sure I like the implications of taking something that in the usual formulation “describes” or “predicts” reality to now “be” reality. That’s not the way I would have said it.
On a related point: a confusion of my own is that linking many-worlds specifically to the wave function implies that the universe-wide Hamilton-Jacobi wave is the true representative of reality in Newtonian physics. I’m no expert, but I haven’t seen much discussion of that or of its implications. Wouldn’t this be the right thing to explain to pre-quantum students? Because I think what one is claiming, if you state many-worlds the way you have done, is that only Hamilton-Jacobi represents Newtonian reality; all other formulations, including Hamilton’s equations and the Lagrangian method, are reformulations that are less real than Hamilton-Jacobi.
Maybe you could clarify what the statement of many-worlds is in the Heisenberg and Feynman pictures of quantum physics.
As I understand it, the Schrödinger wave equation is difficult to use even to give complete analytical solutions for an atom of iron from one second to the next. It feels as though, in popular science, QM has sucked all the oxygen out of the room, leaving little to nothing for QFT. The MWI is based on a “universal wave function”, as best I understand it. We can’t do a moderate-sized atom exactly, but we posit a universal wave function. Frustrating.
Well, I understand your concerns, but in my opinion this is not really the issue.
First, the Schrödinger equation for QFT (or any other method for QFT) is even harder — much harder — to use than the Schrödinger equation for quantum mechanics. Trying to apply QFT to an atom of iron would be practically impossible unless one simplified the problem down to QM as a first approximation.
Second, QFT has a wave function too, even though people don’t usually write it down because it’s so awfully messy, and the many-worlds interpretation can be (and is) applied to that wave function. Certainly advocates such as Sean Carroll do exactly that. (For my recent attempt to show how that wave function works for a one-“particle” state, see https://profmattstrassler.com/2025/02/24/the-particle-and-the-particle-part-1/ and https://profmattstrassler.com/2025/02/25/the-particle-and-the-particle-part-2/ .) The question of whether quantum gravity in the real world has a wave function is still open, but it certainly seems to in string theory.
So I don’t think you’re worried about the right issues.
Dr. Strassler:
How would injecting an additional component change the probability? For instance, I put my hand in, and shove the projectile to the left, guaranteeing a collision. Would the probability of the projectile going right immediately vanish? Even before the collision physically happens?
Again, you have to put your hand into the wave function to see what happens. Let’s call its position x0; the hand starts at positive x0 and moves toward negative x0, guaranteeing a collision with the projectile no matter which direction the latter is moving.
By your assumption:
1) if the projectile is initially heading to the left, you will help it along by collision with your hand, sending it faster to the left
2) if the projectile is initially heading to the right, you will reverse its course by collision with your hand and send it back to the left
Therefore there are still two peaks:
1) in this case, the collision with the ball occurs quickly
2) in this case, the collision with the ball occurs a little bit later than in case 1)
So now the wave function has two peaks, both of which are at x1&lt;0 and x2&lt;-1 (i.e., a collision occurred), but they differ in the timing and the precise locations of the ball and projectile.
Dr. Strassler:
I forgot to include my second question:
You made a statement in your article about Newtons laws.
If one keeps the definition of momentum as P=MV, then F=MA is no longer valid at high speed. However, isn’t 4-momentum the truly conserved quantity?
Newton’s third law of equal and opposite forces is tantamount to conservation of momentum. However, there are cases in electromagnetism where you cannot identify an “equal and opposite force” (two charged particles crossing at right angles to each other). Even in that case, though, if you include the momentum of the field, momentum is conserved, at least according to Feynman.
So, although momentum is redefined from its low-speed value of MV to 4-momentum, and field momentum is included, isn’t conservation of momentum, suitably redefined, still upheld?
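For reference, the standard relativistic formulas behind this question (textbook material, not part of the exchange itself):

```latex
\vec p = \gamma m \vec v, \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
p^\mu = \left( E/c,\; \vec p \right), \qquad
E^2 = (pc)^2 + (mc^2)^2
```

At low speeds γ → 1, recovering p = mv; and it is the total four-momentum, including the electromagnetic field’s contribution, that is conserved.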
Newton, for what it is worth, did not know about momentum and its conservation. If you want to claim that his third law is tantamount to conservation of momentum, that’s arguably revisionist history. So I think your final paragraph confirms my point; the law wasn’t stupid and there’s a remnant of it in today’s physics, but it’s not as it originally was. The same can be said for his second law and his law of gravitation. The overarching issue is that forces are not fundamental in modern physics, but they were for Newton.
In the standard probabilistic interpretation, is there a problem with how to understand interference phenomena? It seems odd that one possibility can interfere with another possibility, in the usual sense that it’s either one possibility OR the other possibility.
In the many-worlds interpretation, will you automatically get some probabilistic outcome simply because you are consciously aware of only one strand of history? (I mean that there will be many copies of oneself, but each consciously aware of just one strand.) I’m sure I’ve read Sean Carroll say something along the lines of: what could that probability actually be, other than the one given by the Born rule?
There’s no problem in either interpretation with quantum interference of paths through possibility space — unless you insist that you understand already how the world works and therefore it isn’t possible. We’ll discuss this in a week or two.
It’s worth remembering that even in pre-quantum physics one can view Newton’s laws as the minimization (or, more generally, the extremization) of a certain function [the “action”] over all possible paths from a specific past to a specific future. It seems quite odd that nature would consider all possible paths and then choose the best one; how does it do that? The usual answer is that it doesn’t — that minimizing of the action is a mathematical technique for getting the right answer, and not something that nature actually does. One should keep that in mind when trying to decide whether and what to interpret in the quantum context.
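In symbols, the classical principle being referred to here (standard notation, for readers who haven’t seen it):

```latex
S[q] = \int_{t_1}^{t_2} L\big(q, \dot q, t\big)\, dt, \qquad
\delta S = 0 \;\Longrightarrow\;
\frac{d}{dt} \frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0
```

Stationarity of the action S over all paths reproduces Newton’s equations of motion, even though no path other than the actual one is ever “taken”.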
As for the probabilities themselves: it’s far from obvious to me (and to many others) how the actual probabilities arise in many-worlds. You can’t derive them as a consequence of anything else — which is presumably why Carroll, rather than proving that the probabilities come out right, resorts to an argument by wise intimidation. “What else could it be?” isn’t a very convincing statement. “It can’t be anything else” is a stronger statement; let’s see a proof.
Hi Matt,
I’m both delighted that you are now taking on such issues, and (in a superposition? 🙂) terrified that we now have time to take on such issues (because the LHC is offering little excitement).
Anyways, Sean does have at least one paper claiming to derive it.
https://arxiv.org/abs/1405.7907
I admit I didn’t follow it in enough detail to determine its validity. But maybe that does determine its validity. 😅
“Measurement” and “collapse” are anthropic mind-constructs, in the family of ideas like the “luminiferous ether” or the Four Elements of Greek cosmology: Fire, Water, Earth, and Air.
I don’t think that’s true of “measurement”. I think it is a well-defined concept (though rarely defined well) involving the creation of a stable correlation between two previously uncorrelated systems.
I’ve never personally been the biggest fan of Many Worlds, just because it feels so unsatisfying (in my opinion) to shuffle the issue off to many (perhaps nearly infinitely many) unobservable parallel realities. Granted, little in modern physics is intuitively “satisfying”, but I’m personally much more comfortable with the probabilistic view, for now, than with Many Worlds.
I’d like to understand what happens when, say, a photon interacts with a surface and is totally absorbed while a photoelectron is ejected.
That case (or something much like it) is in fact on my to-do list. But this list is long. It will probably happen in 2025, but some months from now… there are many other things to cover first.