The “range” of a force is a measure of the distance across which it can easily be effective. Some forces, including electric and magnetic forces and gravity, are long-range, able to cause dramatic effects that can reach across rooms, planets, and even galaxies. Short-range forces tail off sharply, and are able to make a significant impact only at distances shorter than their “range”. The weak nuclear force, for instance, dies off at distances ten million times smaller than an atom! That makes its effects on atoms rather slow and rare, which is why it is called “weak”.
The difference between long-range and short-range is depicted schematically in Fig. 1. The green object at center is potentially able to create a force on a second object, not shown. The darkness of the shading at a particular location represents the strength of the force that the second object would be subjected to if it were placed at that location. A long-range force would still be rather strong even at the edges of the left panel, while a short-range force would be exceedingly weak at the edges of the right panel.
Why is the weak nuclear force so weak? Well, there’s a physics fib (or “phib”), widely promulgated by scientists and on websites, that tries to explain this. It shows up even on good-quality websites, such as this one.
Typically the phib goes something like this:
- The weak nuclear force is weak because it is “short-range” — i.e. has little effect at long distances;
- true!
- it is short range because the particles that “mediate” the force, the W and Z bosons, have mass;
- meh… not entirely false, but rather misleading about what causes what …
- and that the reason particles with mass cause short-range forces has to do with the uncertainty principle of quantum physics and how it affects “virtual particles”.
- uhh… wait just a second…
As phibs go, it’s not the worst, as it’s not entirely wrong from beginning to end. But still, it scrambles some basic concepts in physics, and it should be replaced.
[Aside: sometimes you’ll see the incorrect statement — Google’s AI, for instance, and also here — that the virtual particles with mass actually “decay”, and that’s what makes the force short-range. That’s just plain wrong, and not a phib.]
It’s certainly true that fields that create long-range forces, such as electric and magnetic forces, are associated with particles that have zero mass, such as photons (particles of light.) [By “mass”, I mean “rest mass” throughout this article. For the subtleties of different meanings of “mass”, see chapters 5-8 of my book.] And it’s equally true that fields that create short-range forces, such as the weak nuclear force, are often associated with particles that have non-zero mass. But that doesn’t mean that the mass of the particles causes the short-range of the force. And the weirdness of quantum physics has no role to play, either.
Even good physics websites (for instance, this one) can be found mumbling about virtual particles. They claim that thanks to the quantum uncertainty principle, virtual particles that have mass can’t travel as far as virtual particles that don’t, and that’s why the former can’t “mediate” a long-range force, while the latter can.
By appealing to quantum uncertainty, this phib has crossed the line, going from being mostly harmless to badly misguided. Here’s the problem: the short range of the force has absolutely nothing to do with quantum uncertainty, and can be understood without any familiarity with quantum physics.
Later on I’ll explain this in more detail, but here’s a quick sketch of the correct logic (see Fig. 2), expressed for the weak nuclear force.
- There is a certain property of the W and Z fields — it has no technical name (to my knowledge) — which I’ll call, imprecisely, “stiffness” [a term I also used in my book];
- This stiffness attenuates the field and the associated weak nuclear force at long distance, as in Fig. 1’s right panel;
- Quantum physics meanwhile assures that waves in the W and Z fields are made from elementary “particles”, called W and Z bosons
- Quantum physics further assures that because the W and Z fields have stiffness, the W and Z bosons have mass.
So why do particles with mass give short-range forces? They don’t. As indicated in Fig. 2, the actual logic is that it’s stiffness that’s responsible for both effects: it gives certain fields short-range forces, and, separately, causes their waves to be made from “particles” with non-zero mass.
The True Story of Mass and Range, Without Math
Let’s start with a non-technical discussion; I’ll give the math, for those interested, in the next section.
All elementary forces arise from elementary fields (though not all fields lead to elementary forces; see Chapters 13-15 of my book for more info.) Leaving aside the deeper question of what a field is, here we’ll simply take a field to be something that has a value at each point in space and time. For example: at any moment and in any location in the room, there’s air pressure, which you can measure with a barometer; we can refer to the result of that measurement as the “value of the air pressure field at that place and time.” Similarly, there’s wind, a field whose value at a particular location and moment, as measured using an anemometer and a wind vane, tells us how fast the wind is blowing there, and in what direction.
For a field, what I mean by “stiffness” is crudely this: if a field is stiff, then making its value non-zero requires more energy than if the field is not stiff.
This narrow definition suffices for this article, but it has subtleties, and has to be broadened for certain fields, including the Higgs field. Click here for more details.
First, the narrow definition I’ve used assumes that a field’s average value across the universe is zero. That’s true for the electromagnetic, W and Z fields, but not for the Higgs field, whose average value is non-zero (i.e., it is “switched on”, in a sense.) Why the Higgs field has a non-zero value is a complicated story which I won’t try to explain here. But to include the Higgs field, we should really define stiffness as a measure of the amount of energy necessary to change a field’s value away from its average value.
Second, even that’s not quite enough. When it comes to electromagnetism, my narrow definition of stiffness applies not to the electric or magnetic fields themselves but rather to the more fundamental field known as the electromagnetic potential (from which the electric and magnetic fields can be obtained.) This detail requires my definition to come with fine print.
For a definition of stiffness that is unambiguous and avoids all these issues, one should look at the relation between the frequency and wavelength of waves in the fields, known as the “dispersion relation”. This relation appears later in this article, in the math-based section.
Stiffness and the range of fields
If a field is stiff, then any force it creates must have a finite range, as in the right panel of Fig. 1. Only if it lacks stiffness (let’s refer to it as “floppy”, for lack of a better term) can it be truly long range. The stiffer the field, the shorter its force’s range. In fact, the range is inversely proportional to the stiffness; for example, if the stiffness is doubled, then the range of the force is cut in half.
In this section I’ll give you some qualitative intuition for why this makes sense. To do so, I’ll use an analogy that isn’t precise, but does capture the basic physical phenomenon, at least at a qualitative level. The more precise explanation, with some undergraduate math, will come later in this post.
Instead of a field, we’ll study the behavior of a long string. Let’s start with a string that is taut but is only attached at its two ends. It has no stiffness — it’s “floppy” — because if the string’s ends were untied, it would just flop around, like a loose shoelace.
Now let’s grab its central point and pull it downward. As seen in the top panel of Fig. 3, the whole string bends downward; the response of the string is long-range, extending all the way to its two ends.
To make the string stiff, we’ll attach it to a rubber sheet. We’ll glue one side of the sheet to the horizontal line, and attach the opposite side of the sheet to the string along its length. Now, when the string is pulled down from the horizontal line, as in the middle panel of Fig. 3, the rubber sheet (blue) stretches between the string and the line, pulling the string back upward. This does indeed make the string stiffer, since it increases the energy needed to move the string away from the line.
If we pull on the string, as in the middle panel of Fig. 3, the force we exert overwhelms the pull of the rubber sheet, and so the center of the string distends downward. But the rubber sheet pulls the string back toward the horizontal line. As a result, the string only moves in a limited way. In particular, toward the two ends of the string, the string remains on the horizontal line, thanks to the rubber sheet’s upward pull. And so the string’s response to our pull is now limited to a shorter, finite range compared to the previous case.
Finally, the bottom panel of Fig. 3 shows what would happen if we used a tougher rubber sheet that makes the string even stiffer. The response to our pull would then have an even shorter range.
This is the basic intuition: a stiff string resists moving away from the horizontal line, causing its response to a disturbance to reach out only over a small range.
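For readers who like to tinker, here is a minimal numerical sketch of this string-and-rubber-sheet picture. To be clear, the toy equation, the parameter values, and the 10%-threshold measure of “reach” are all my own illustrative choices, not anything taken from Fig. 3 itself.

```python
import numpy as np

# Equilibrium of a taut string of tension T glued to a rubber sheet of stiffness kappa,
# pulled downward by a point force at its center:
#     -T y''(x) + kappa * y(x) = f(x),   with y = 0 at both ends.
# A floppy string (kappa = 0) responds all the way out to its ends; a stiff string's
# response dies off over a distance of order sqrt(T/kappa).

N, L, T = 801, 10.0, 1.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
f = np.zeros(N)
f[N // 2] = -1.0 / dx                      # downward point pull at the center

for kappa in (0.0, 1.0, 10.0):             # floppy, stiff, stiffer
    # finite-difference operator  -T d^2/dx^2 + kappa  on the interior points
    main = np.full(N - 2, 2 * T / dx**2 + kappa)
    off = np.full(N - 3, -T / dx**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    y = np.zeros(N)
    y[1:-1] = np.linalg.solve(A, f[1:-1])
    # how far from the center the displacement stays above 10% of its peak size
    reach = np.abs(x[np.abs(y) > 0.1 * np.abs(y).max()]).max()
    print(f"kappa = {kappa:5.1f}   response reaches out to ~ {reach:.2f}")
```

Running this shows the floppy string’s response extending essentially to its ends, while the stiffer strings respond only near the point where they are pulled. (In this toy model the reach shrinks like the square root of the sheet’s stiffness; the exact inverse proportionality between range and stiffness quoted above refers to the relativistic field treated in the math section.)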
The situation with fields is analogous. A stiff field resists having a non-zero value. Suppose an object disturbs a field, causing the field to be non-zero in its neighborhood. If the field is floppy, the effects on the field can be significant even very far away (as in Fig. 1’s left panel). But if the field is stiff, the energy cost of disturbing the field is much higher, and so the effect on the field is only significant in the immediate neighborhood of the object (as in the right panel of Fig. 1).
Note that I have not written the word “quantum” in this section. Just as for the string in Fig. 3, the short range of a force has to do with effects of stiffness that would arise even in a world without quantum physics.
Stiffness and the nature of waves
On our way to understanding why stiffness also leads to particle masses, let’s talk first about waves. We need to do this because what we call “elementary particles”, such as photons or electrons, are wave-like, as we’ll see.
Another effect of stiffness: it makes it more difficult for a field to ripple. That’s not surprising: rippling involves the field’s value being non-zero in a wave-like pattern, as in Fig. 4, and being non-zero costs more energy for a field with stiffness than for one without.
There’s a big difference between the waves of fields with and without stiffness:
- A field without stiffness has only traveling waves, as in Figs. 4 and 5, which all travel at the same speed.
- A field with stiffness is more complicated;
- its waves of high frequency and small wavelength (Fig. 6) are like those of fields without stiffness (Fig. 4),
- but at a special low frequency it has “standing waves” (of an unfamiliar sort); these vibrate in place (Fig. 7) and do not travel anywhere.
Although I made these statements about floppy and stiff fields, the same statements are true of waves in floppy and stiff strings, which are what Figs. 4-7 actually show. We’ll use the intuition from these figures as we now proceed from waves to “particles”.
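If you would like to see these two behaviors emerge from an equation rather than from pictures, here is a small numerical sketch. It is entirely my own illustration: it evolves the stiff-field equation that appears in the math section below, and the parameter values are arbitrary.

```python
import numpy as np

# Evolve the stiff-field equation   d^2Phi/dt^2 = c^2 d^2Phi/dx^2 - S^2 Phi
# on a periodic line with a simple leapfrog scheme, for two initial shapes:
#   (a) a short-wavelength packet  -> it travels at roughly the speed c (cf. Fig. 6);
#   (b) a uniform value            -> it vibrates in place at frequency S/(2*pi) (cf. Fig. 7).

c, S = 1.0, 2.0
L, N = 40.0, 2000
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
dt = 0.4 * dx                                  # safely below the stability limit dx/c

def step(phi, phi_prev, n_steps):
    for _ in range(n_steps):
        lap = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2
        phi, phi_prev = 2 * phi - phi_prev + dt**2 * (c**2 * lap - S**2 * phi), phi
    return phi

# (a) a short-wavelength packet, launched toward the right
k0, x0, sigma = 20.0, 10.0, 1.0
packet = lambda xx: np.exp(-(xx - x0)**2 / (2 * sigma**2)) * np.cos(k0 * (xx - x0))
phi_t0, phi_tm = packet(x), packet(x + c * dt)       # field at t = 0 and at t = -dt
n = int(15.0 / dt)
phi_T = step(phi_t0, phi_tm, n)
centroid = lambda p: np.sum(x * p**2) / np.sum(p**2)
speed = (centroid(phi_T) - centroid(phi_t0)) / (n * dt)
print(f"packet speed ~ {speed:.2f}   (compare c = {c})")

# (b) a uniform value: the exact solution is Phi(t) = cos(S*t), a vibration going nowhere
phi_t0 = np.ones(N)
phi_tm = np.cos(S * dt) * np.ones(N)                 # field at t = -dt
quarter = int((np.pi / (2 * S)) / dt)
for label, n in [("t = 0", 0), ("quarter period", quarter), ("half period", 2 * quarter)]:
    print(f"{label:>15}:  Phi = {step(phi_t0, phi_tm, n)[0]:+.2f}   (same at every x)")
```

The packet’s measured speed comes out just below c, while the uniform disturbance simply oscillates between +1 and −1 without going anywhere — the “standing wave of an unfamiliar sort” mentioned above.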
Stiffness and the mass of particles
Now, finally, we get to quantum physics. In a quantum world such as ours, the field’s waves are made from indivisible tiny waves, which for historical reasons we call “particles.” Despite their name, these objects aren’t little dots; see Fig. 8. Their shape and extent are similar to the shapes of the waves shown in Figs. 4-7 above, except that they are very small in height (or “amplitude”), and thus very small in the amount of energy they carry.
[I sometimes prefer to call these objects “wavicles“, because I think the word particle makes us all imagine electrons and photons as though they were little dots, which they are not… as Figure 8 emphasizes. Still, since particle is the standard term, I’ll continue to use it here, but with quotation marks. (And don’t ask me how indivisible waves can exist; I don’t know. Experiments teach us that they do. But no one can easily visualize how and why this happens.)]
A floppy field’s waves never stop traveling from place to place, and similarly, its “particles” never stop either. The electromagnetic field (which combines both electric and magnetic fields) gives us an example of a floppy field: its waves are light waves, and its “particles” are known as photons. In empty space, light waves and their photons are always traveling at a fixed speed, as illustrated in Figs. 4 and 5. That speed is called “c“, and referred to as “the cosmic speed limit” or “the speed of light.”
A stiff field’s high-frequency “particles” travel just like those of a floppy field; compare Fig. 6 to Fig. 4. The stiffening doesn’t affect high-frequency waves very much. But at a lower frequency, the situation is different. There is a frequency for which the “particle” becomes a standing wave; in shape it looks like the standing wave in Fig. 7, but it has an extremely tiny height (or “amplitude”.)
This “particle” (i.e. indivisible mini-version of a standing wave)
- has energy (it’s vibrating, after all) and
- is stationary (it’s standing, after all, and not traveling anywhere.)
Well, any stationary object with energy satisfies Einstein’s equation E = mc². Therefore, dividing Einstein’s formula by c² and switching the two sides, we get a relation between the “particle’s” mass and the energy of its vibration:

$$m \;=\; \frac{E}{c^2}$$
In short, this “particle” has mass simply because it is a stationary and vibrating object. (A reminder: I specifically mean “rest mass”.)
Only stiff fields can have standing waves in empty space, which in turn are made from “particles” that are stationary and vibrating. And so, the very existence of a “particle” with non-zero mass is a consequence of the field’s stiffness.
Furthermore, thanks to quantum physics, the amount of energy stored in the vibration of a “particle” is always proportional to its vibrational frequency. For a standing wave, that frequency is proportional to the field’s stiffness. And thus the mass of the “particle” is proportional to the stiffness of the field; if you could double the stiffness, you would double the “particle’s” mass too.
(If you want to understand why all these things are true without appealing to math, then I recommend my article in Quanta magazine — or, if that’s too advanced, my book, as its purpose was to explain these points carefully and non-technically. If you want to see the math behind these statements, stay tuned; that’s coming in just a moment.)
Summing up
This concludes the story at a non-technical level: as emphasized in Fig. 2, the stiffness of a field
- makes the associated force short range (no quantum physics required) with a range that is inversely proportional to the stiffness, analogous to what happens in Fig. 3;
- assures its associated “particles” (i.e. its wavicles) have rest mass proportional to the stiffness (using quantum physics both for the existence of “particles” and for the relation between mass and stiffness, sketched in Figs. 4-7; the formulas appear just below.)
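For those who want the punchline formulas in advance — these are simply previews of the math section below, in which S denotes the stiffness, ħ is Planck’s reduced constant, and c is the cosmic speed limit:

$$\text{range} \;=\; \frac{c}{S}\ , \qquad m \;=\; \frac{\hbar\,S}{c^2}\ , \qquad \text{and therefore} \qquad \text{range} \;=\; \frac{\hbar}{m\,c}\ .$$

Both the range and the mass are set by the stiffness S; eliminating S relates them to each other, but neither one causes the other.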
So as you see, particle rest mass does not cause forces to be short range; field stiffness does. And while quantum physics is responsible for the existence of “particles” and their mass, virtual particles and quantum uncertainty play no role at all.
Unless you’re interested in seeing the math behind this, you can skip the next section and jump to the last section of this article, which asks: where did the phib actually come from?
The True Story of Mass and Range, With Math
Now, here’s the math that all these words and pictures are based on. (Some of this is also touched on in my series of articles “Fields and Particles: With Math.”)
Let’s take a field Φ(x,t) that has a value Φ at every point x in space and every moment t in time. (Often I won’t write the x and t, in order to keep equations shorter and easier to read.) In a universe whose space and time are governed by Einstein’s relativity, one of the equations for any elementary field must take the form

$$\frac{\partial^2\Phi}{\partial t^2} \;-\; c^2\left(\frac{\partial^2\Phi}{\partial x^2}+\frac{\partial^2\Phi}{\partial y^2}+\frac{\partial^2\Phi}{\partial z^2}\right) \;+\; S^2\,\Phi \;=\; 0$$

where ∂²Φ/∂t² is the second derivative of Φ with respect to time t, and similarly for the derivatives with respect to the space coordinates x, y and z, and where S is the stiffness of the field Φ. I’ll refer to this as “Equation (*).”
In what sense is S a stiffness? Recall the definition of stiffness at the very beginning of this article. The S²Φ term in this equation implies that making Φ non-zero costs energy proportional to S²Φ². The larger S is, the larger the energy cost and the stiffer the field. If S = 0, then there’s no such cost and the field is floppy.
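For completeness — this is my own addition, written with a conventional normalization of the field — the energy per unit volume stored in such a field is

$$u \;=\; \tfrac{1}{2}\left(\frac{\partial\Phi}{\partial t}\right)^2 \;+\; \tfrac{1}{2}\,c^2\,\bigl|\vec\nabla\Phi\bigr|^2 \;+\; \tfrac{1}{2}\,S^2\,\Phi^2\ ,$$

so a non-zero value of Φ, even a constant one, carries an energy cost of ½ S²Φ² per unit volume; that cost disappears only when S = 0.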
The range of the field’s force
The methods needed to show that the field and its force are short range were already known in the 19th century, and require only first- or second-year undergraduate physics and math.
Let’s place an object which interacts with Φ at the location x = 0, and wait a while. Soon the field will take on a time-independent form Φ(r), where r is the distance from the object:

$$\Phi(r) \;=\; \frac{A}{r}\,e^{-S r/c}$$
The field’s form can be obtained from Equation (*). The derivatives with respect to time are zero, since the field is time-independent. Also, we can use the math fact that, for a function that depends only on r,

$$\frac{\partial^2\Phi}{\partial x^2}+\frac{\partial^2\Phi}{\partial y^2}+\frac{\partial^2\Phi}{\partial z^2} \;=\; \frac{1}{r}\,\frac{d^2}{dr^2}\Bigl(r\,\Phi(r)\Bigr)$$
[Here’s a proof. At first glance it appears very long. But because we only have a function of r here, all the terms involving angular derivatives are zero, which makes the proof dramatically shorter.]
Thus, except at r = 0, the field satisfies

$$-\,\frac{c^2}{r}\,\frac{d^2}{dr^2}\Bigl(r\,\Phi(r)\Bigr) \;+\; S^2\,\Phi(r) \;=\; 0$$

[Equation (**)]. The solution to this equation is

$$\Phi(r) \;=\; \frac{A}{r}\,e^{-S r/c}$$

where A is a constant.
For a proof, click here
Take the first derivative of r Φ(r):

$$\frac{d}{dr}\Bigl(r\,\Phi(r)\Bigr) \;=\; \frac{d}{dr}\Bigl(A\,e^{-S r/c}\Bigr) \;=\; -\,\frac{S}{c}\,A\,e^{-S r/c}$$

Now take the second derivative:

$$\frac{d^2}{dr^2}\Bigl(r\,\Phi(r)\Bigr) \;=\; \frac{S^2}{c^2}\,A\,e^{-S r/c} \;=\; \frac{S^2}{c^2}\,r\,\Phi(r)$$

This last, multiplied by −c²/r, is simply −S²Φ(r), and so it cancels against the final term in Equation (**).
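If you prefer to let a computer do the algebra, here is a quick symbolic check of the same fact — a sketch using sympy, with the same symbols as above:

```python
import sympy as sp

# Check that Phi(r) = A*exp(-S*r/c)/r satisfies Equation (**) away from r = 0:
#     -c^2 * (1/r) * d^2[ r*Phi ]/dr^2  +  S^2 * Phi  =  0
r, A, S, c = sp.symbols('r A S c', positive=True)
Phi = A * sp.exp(-S * r / c) / r
residual = -c**2 * sp.diff(r * Phi, r, 2) / r + S**2 * Phi
print(sp.simplify(residual))    # prints 0
```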
To get intuition for what this means, let’s see how Φ(r) varies with distance:

- When r ≪ c/S, the exponential is close to 1, so Φ(r) ≈ A/r, just as for a field with no stiffness;
- When r = c/S, the exponential has fallen to 1/e, so Φ(r) ≈ 0.37 A/r;
- When r = 2 c/S, Φ(r) ≈ 0.14 A/r;
- When r = 5 c/S, Φ(r) ≈ 0.0067 A/r;
- When r = 10 c/S, Φ(r) ≈ 0.000045 A/r
- (i.e., by this point the field is utterly negligible.)

Said another way, at small r the field decreases in inverse proportion to the distance, but once r is larger than c/S, the field craters exponentially.
As for the force created by Φ, the strength of the force is given by the same kind of formula we see for electromagnetism: we take a radial derivative of the field Φ, giving

$$F \;\propto\; \frac{A}{r^2}\left(1 + \frac{S\,r}{c}\right) e^{-S r/c}$$
For a proof, click here.
The force between two objects is the spatial derivative of the potential energy between them, and in the case of this field Φ, the potential energy is proportional to the field times the strength of the interaction between the particle and the field. (Compare this with electromagnetism: the potential energy between two charged particles is proportional to the electric potential created by one particle times the electric charge of the other particle.)
Therefore we should take the field Φ(r), multiply it by a constant corresponding to the interaction strength (which won’t affect the derivative we’re going to take), and then take the derivative with respect to r. This gives, for the force,

$$F \;\propto\; \frac{d}{dr}\!\left(\frac{A}{r}\,e^{-S r/c}\right) \;\propto\; \frac{A}{r^2}\left(1 + \frac{S\,r}{c}\right) e^{-S r/c}$$
The interaction-strength constant just changes the overall strength of the force, but not its dependence on r, which agrees with the formula above.
If S = 0, the exponential equals 1, so we get, for a floppy field with no stiffness,

$$F \;\propto\; \frac{A}{r^2}\ ,$$
the famous inverse square law that we see for electric forces.
But if S is nonzero — if the field is stiff — then although we still have an inverse square law at short distances, we have an exponential fall-off at long distances.
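As a cross-check on the force formula — again a small sympy sketch of my own:

```python
import sympy as sp

# The force goes like the radial derivative of Phi(r) = A*exp(-S*r/c)/r.
r, A, S, c = sp.symbols('r A S c', positive=True)
Phi = A * sp.exp(-S * r / c) / r
force = sp.simplify(-sp.diff(Phi, r))
print(force)              # equals A*(1/r**2 + S/(c*r))*exp(-S*r/c), however sympy groups it
print(force.subs(S, 0))   # A/r**2 : the inverse-square law of a floppy field
```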
And so both the field and any force associated with it have a range of c/S. Within that range, they’re as important as any field and force. But once we go to distances greater than that, their effects become absurdly small.
NOTICE THERE IS NO QUANTUM PHYSICS IN THIS DISCUSSION! The short range of the field is a “classical” effect; i.e., it can be understood without any knowledge of the underlying role of quantum physics in our universe. It arises straightforwardly from ordinary field concepts and an ordinary differential equation. Nothing uncertain about it.
Waves and “Particles”/Wavicles
Even without quantum physics, the equation for Φ has well-known wave solutions, such as

$$\Phi(x,t) \;=\; A\,\cos\!\left[\,2\pi\left(\nu\,t - \frac{x}{\lambda}\right)\right]$$

which is a wave of frequency ν and wavelength λ moving in the x direction. Equation (*) for Φ says

$$\nu^2 \;=\; \left(\frac{c}{\lambda}\right)^2 + \left(\frac{S}{2\pi}\right)^2$$
This relation between frequency and wavelength is known as the “dispersion relation” for these waves.
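Here, too, one can let a computer verify the claim — a small sympy sketch, with the same symbols as above:

```python
import sympy as sp

# Plug the traveling wave Phi = A*cos(2*pi*(nu*t - x/lam)) into Equation (*):
# the result vanishes exactly when  nu^2 = (c/lam)^2 + (S/(2*pi))^2.
t, x, A, nu, lam, S, c = sp.symbols('t x A nu lam S c', positive=True)
Phi = A * sp.cos(2 * sp.pi * (nu * t - x / lam))
lhs = sp.diff(Phi, t, 2) - c**2 * sp.diff(Phi, x, 2) + S**2 * Phi
print(sp.simplify(lhs.subs(nu**2, (c / lam)**2 + (S / (2 * sp.pi))**2)))   # prints 0
```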
In quantum physics, these waves are made from “particles”, which are waves whose amplitude is such that the “particle’s” energy and momentum are related to its frequency and wavelength by Planck’s famous constant “h”:

$$E \;=\; h\,\nu \qquad\text{and}\qquad p \;=\; \frac{h}{\lambda}$$
This turns the dispersion relation into the formula

$$E^2 \;=\; (p\,c)^2 + \left(\frac{h\,S}{2\pi}\right)^2 \;=\; (p\,c)^2 + (\hbar\,S)^2$$
But that is a famous equation! In Einstein’s relativity,

$$E^2 \;=\; (p\,c)^2 + \left(m\,c^2\right)^2$$

is the relationship between energy and momentum for an object with mass m.
Specifically, for a stationary “particle”, whose momentum p is zero, the above formula becomes

$$E \;=\; \frac{h\,S}{2\pi} \;=\; \hbar\,S$$
But since E = mc² for a stationary object, this means

$$m \;=\; \frac{E}{c^2} \;=\; \frac{\hbar\,S}{c^2}$$
and thus this wavicle has rest mass m that is proportional to the stiffness S of the field Φ.
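To attach some numbers to this relation (a back-of-the-envelope estimate of my own, using the measured Z boson rest energy of roughly 91 GeV):

```python
from scipy.constants import hbar, c, e

# From m = hbar*S/c^2: the stiffness S of the Z field, and the range c/S of the weak force.
m_Z_c2 = 91.2e9 * e      # Z boson rest energy, ~91.2 GeV, converted to joules
S = m_Z_c2 / hbar        # stiffness, in 1/seconds   (S = m c^2 / hbar)
print(f"stiffness S ~ {S:.2e} per second")
print(f"range  c/S  ~ {c / S:.2e} meters   (an atom is ~1e-10 meters across)")
```

The range comes out to roughly 2 × 10⁻¹⁸ meters — tens of millions of times smaller than an atom, as stated at the top of this article.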
This is a quantum effect! Planck’s constant h, the mascot of quantum physics, appears here.
But that does not mean the short range of the force is a quantum effect. Nor does the mass of the particle cause the short range of the force (or vice versa.) Most important of all, there has been no mention of quantum uncertainty or virtual particles. None is needed.
Why the Phib about Quantum Uncertainty and Virtual Particles?
So why is there such a popular phib that talks all about virtual particles? Because of the cult of something called “Feynman diagrams.”
I refer to it as a cult, somewhat tongue in cheek, not because Feynman diagrams aren’t accurate and useful, but because they are less important and revealing than they are often made out to be. They are not a good guide for how to think about the physics of quantum field theory, the modern language of particle physics. Instead, they are merely a tool for doing certain types of particle-related calculations. (We see this today in the fact that quantum field theory has never been on a stronger footing, and yet Feynman diagrams are used less and less every decade, even by people who do these very same calculations.) Extracting more physics out of Feynman diagrams than they actually contain resembles a cult activity, and while this was common fifty years ago, before quantum field theory was well-understood, few theoretical physicists of the current generation subscribe to it.
The Feynman diagrams shown in Fig. 9 represent the electromagnetic force between two electrons (left panel), and the weak nuclear force between two neutrinos (right panel). The wiggly lines do indeed represent “virtual particles”. The lingo is that these virtual particles are said to be “exchanged” between the two outer particles, as though they are objects that are being thrown back and forth.
But in this context, these virtual particles are actually nothing but a representation of a field’s shape — exactly the same thing that I drew in Fig. 1 at the beginning of this post! Nothing is moving back and forth; nothing is actually being “exchanged”. No quantum physics is actually involved.
More precisely, for those who have read the mathy section above, we have calculated these shapes already:
- in the left diagram, the “virtual photon” line simply represents a long-range field: the shape of the electromagnetic potential, A/r;
- in the right diagram, the “virtual Z boson” line represents a short-range field: the shape of the field Φ(r) = (A/r) e^(−Sr/c)
- (where S, the field’s stiffness, is proportional to the Z boson’s mass.)
[For physics students: these statements are obvious when the electrons and neutrinos are at rest and the “propagators” — i.e., the corresponding Green functions — of the virtual photon and Z boson are written in position space; just take the Fourier transform of the more familiar momentum-space expressions 1/k² and 1/(k² + m²).]
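For completeness, the position-space results of those Fourier transforms are the standard ones (written here, as is conventional for an aside like this, in units where ħ = c = 1):

$$\int \frac{d^3k}{(2\pi)^3}\,\frac{e^{i\vec k\cdot\vec x}}{\vec k^{\,2}} \;=\; \frac{1}{4\pi r}\ , \qquad\qquad \int \frac{d^3k}{(2\pi)^3}\,\frac{e^{i\vec k\cdot\vec x}}{\vec k^{\,2}+m^2} \;=\; \frac{e^{-m r}}{4\pi r}\ ,$$

precisely the 1/r and e^(−Sr/c)/r shapes computed earlier (with m = S in these units).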
A field’s shape around an object, as in Fig. 1, is something clear and comprehensible. Undergraduate physics students encounter floppy fields in their first physics class, when they study electric fields and potentials around a charged object. The math of a stiff field involves the same ideas, with a small twist; it can certainly be explained to first- or second-year students. Why do we take these simple concepts and restate them in terms of “virtual particles” and Feynman diagrams? It’s an unnecessary mystification, making forces seem quantum, uncertain and physically bizarre. (I say “bizarre” because the “virtual particles” in the diagrams, carrying zero energy and positive momentum, are tachyons, with negative mass-squared!… yet another reason not to try to view “virtual particles” as if they really were “particles”.)
It’s a bad idea to take something accessible with first-year calculus and transform it into something that only physics graduate students can potentially understand. Why do we physicists so often make things harder to explain, instead of the opposite? I have no answer, but I hope it will change.
4 Responses
gracias. i will think about stiffness and maybe its relation to observation
I just finished reading the book on one of Feynman’s popular lectures from 50 years ago – QED,
where he explains QED by thinking just in terms of particles and quantum amplitudes.
I guess history hasn’t really favoured that approach to teaching fundamental physics.
Needless to say, Feynman was one of the great geniuses of the 20th century, and the discovery of the path integral method and Feynman diagram method for calculations completely revolutionized what was then understood about quantum field theory. [Along with Schwinger’s methods, which were more powerful for certain calculations, but only Schwinger could understand how to use them.]
However, it is often the case that when physicists discover something new and try to interpret it, they don’t get it right the first time. In the 1905-1910 years, Einstein didn’t believe the notion of Minkowski space-time was useful; then he changed his mind. He also changed his mind about the interpretation of E=mc^2 between his early years and his later years. Einstein never did have black holes or gravitational waves straight throughout his lifetime; it took physicists several decades.
So it is no criticism of Feynman to say that it turns out that the language that he introduced — focused on his diagrams, and full of the language of virtual particles — proves really to be more math language than physics language. He could not have known that right away (though he was probably more aware of it than most people by the 1960s — unfortunately I never met him and wouldn’t have known to ask until ten years after he died.) Quantum field theory does many, many things that Feynman diagrams do not explain or reveal, and even in theories like QED and QCD, there are conceptual issues that are very hard to approach using Feynman diagrams and that really require other approaches.
In that sense, you are right: history has not favoured this approach. Even the calculations that Feynman diagrams make possible can often be done much more efficiently using other methods. But Feynman’s method is general and almost foolproof, and so novices in quantum field theory could use it and understand it even from the beginning (which was certainly not true of Schwinger’s methods.) In this it was completely revolutionary. But it was revolutionary in that it made calculations much easier, which led to deeper conceptual understanding — from which we eventually learned that we actually often don’t need Feynman’s methods after all, and there are better ways to understand quantum field theory.
This is why we admire the great physicists of the past, but we do not worship every word and bow down to ancestral authority. Nobody, no matter how smart, gets it all right, and even the greatest scientists make statements that they — or we — come to recognize as naive, even if they seemed sensible at the time and are understandable in retrospect. (Just look at all the many wrong statements that were made over the decades about infinities and renormalization in QFT.)
I seem to be dealing with some incomprehensible bugs on this webpage. If you notice that equations are not displaying correctly, leave a comment here if you can. If you try to leave a comment here, and cannot, please let me know at https://profmattstrassler.com/contact-me/ .