Why The Weak Nuclear Force Is Short Range

The “range” of a force is a measure of the distance across which it can easily be effective. Some forces, including electric and magnetic forces and gravity, are long-range, able to cause dramatic effects that can reach across rooms, planets, and even galaxies. Short-range forces tail off sharply, and are able to make a significant impact only at distances shorter than their “range”. The weak nuclear force, for instance, dies off at distances ten million times smaller than an atom! That makes its effects on atoms rather slow and rare, which is why it is called “weak”.

The difference between long-range and short-range is depicted schematically in Fig. 1. The green object at center is potentially able to create a force on a second object, not shown. The darkness of the shading at a particular location represents the strength of the force that the second object would be subjected to if it were placed at that location. A long-range force would still be rather strong even at the edges of the left panel, while a short-range force would be exceedingly weak at the edges of the right panel.

Figure 1: The central green object can potentially exert a force on another object nearby, with a strength that decreases with distance, as indicated schematically by the blue shading. (Left) A long-range force can have effects that decrease gradually with distance. (Right) A short-range force tails off rapidly beyond a certain distance (its “range”) from the central object.

Why is the weak nuclear force so weak? Well, there’s a physics fib (or “phib”), widely promulgated by scientists and on websites, that tries to explain this. It shows up even on good quality websites, such as this one.

Typically the phib goes something like this:

  • The weak nuclear force is weak because it is “short-range” — i.e. has little effect at long distances;
    • true!
  • it is short range because the particles that “mediate” the force, the W and Z bosons, have mass;
    • meh… not entirely false, but rather misleading about what causes what
  • and that the reason particles with mass cause short-range forces has to do with the uncertainty principle of quantum physics and how it affects “virtual particles”.
    • uhh… wait just a second…

As phibs go, it’s not the worst, as it’s not entirely wrong from beginning to end. But still, it scrambles some basic concepts in physics, and it should be replaced.

[Aside: sometimes you’ll see the incorrect statement — Google’s AI, for instance, and also here — that the virtual particles with mass actually “decay”, and that’s what makes the force short-range. That’s just plain wrong, and not a phib.]

It’s certainly true that fields that create long-range forces, such as electric and magnetic forces, are associated with particles that have zero mass, such as photons (particles of light.) [By “mass”, I mean “rest mass” throughout this article. For the subtleties of different meanings of “mass”, see chapters 5-8 of my book.] And it’s equally true that fields that create short-range forces, such as the weak nuclear force, are often associated with particles that have non-zero mass. But that doesn’t mean that the mass of the particles causes the short-range of the force. And the weirdness of quantum physics has no role to play, either.

Even good physics websites (for instance, this one) can be found mumbling about virtual particles. They claim that thanks to the quantum uncertainty principle, virtual particles that have mass can’t travel as far as virtual particles that don’t, and that’s why the former can’t “mediate” a long-range force, while the latter can.

By appealing to quantum uncertainty, this phib has crossed the line, going from being mostly harmless to badly misguided.  Here’s the problem: the short range of the force has absolutely nothing to do with quantum uncertainty, and can be understood without any familiarity with quantum physics.

Later on I’ll explain this in more detail, but here’s a quick sketch of the correct logic (see Fig. 2), expressed for the weak nuclear force.

  • There is a certain property of the W and Z fields — it has no technical name (to my knowledge) — which I’ll call, imprecisely, “stiffness” [a term I also used in my book] ;
  • This stiffness attenuates the field and the associated weak nuclear force at long distance, as in Fig. 1’s right panel;
  • Quantum physics meanwhile assures that waves in the W and Z fields are made from elementary “particles”, called W and Z bosons;
  • Quantum physics further assures that because the W and Z fields have stiffness, the W and Z bosons have mass.

So why do particles with mass give short-range forces?  They don’t.  As indicated in Fig. 2, the actual logic is that it’s stiffness that’s responsible for both effects: it gives certain fields short-range forces, and, separately, causes their waves to be made from “particles” with non-zero mass.

Figure 2: (Top) The incorrect logic implied by the phib. (Bottom) The correct logic; stiffness of a field implies the short range of its force, as can be shown without methods of modern physics.

The True Story of Mass and Range, Without Math

Let’s start with a non-technical discussion; I’ll give the math, for those interested, in the next section.

All elementary forces arise from elementary fields (though not all fields lead to elementary forces; see Chapters 13-15 of my book for more info.)  Leaving aside the deeper question of what a field is, here we’ll simply take a field to be something that has a value at each point in space and time. For example: at any moment and in any location in the room, there’s air pressure, which you can measure with a barometer; we can refer to the result of that measurement as the “value of the air pressure field at that place and time.” Similarly, there’s wind, a field whose value at a particular location and moment, as measured using an anemometer and a wind vane, tells us how fast the wind is blowing there, and in what direction.

For a field, what I mean by “stiffness” is crudely this: if a field is stiff, then making its value non-zero requires more energy than if the field is not stiff.

Stiffness and the range of fields

If a field is stiff, then any force it creates must have a finite range, as in the right panel of Fig. 1. Only if it lacks stiffness (let’s refer to it as “floppy”, for lack of a better term) can it be truly long range. The stiffer the field, the shorter its force’s range. In fact, the range is inversely proportional to the stiffness; for example, if the stiffness is doubled, then the range of the force is cut in half.

In this section I’ll give you some qualitative intuition for why this makes sense. To do so, I’ll use an analogy that isn’t precise, but does capture the basic physical phenomenon, at least at a qualitative level. The more precise explanation, with some undergraduate math, will come later in this post.

Instead of a field, we’ll study the behavior of a long string. Let’s start with a string that is taut but is only attached at its two ends. It has no stiffness — it’s “floppy” — because if the string’s ends were untied, it would just flop around, like a loose shoelace.

Now let’s grab its central point and pull it downward. As seen at the top panel of Fig. 3, the whole string bends downward; the response of the string is long-range, extending all the way to its two ends.

To make the string stiff, we’ll attach it to a rubber sheet. We’ll glue one side of the sheet to the horizontal line, and attach the opposite side of the sheet to the string along its length. Now, when the string is pulled down from the horizontal line, as in the middle panel of Fig. 3, the rubber sheet (blue) stretches between the string and the line, pulling the string back upward. This does indeed make the string stiffer, since it increases the energy needed to move the string away from the line.

If we pull on the string, as in the middle panel of Fig. 3, the force we exert overwhelms the pull of the rubber sheet, and so the center of the string distends downward. But the rubber sheet pulls the string back toward the horizontal line. As a result, the string only moves in a limited way. In particular, toward the two ends of the string, the string remains on the horizontal line, thanks to the rubber sheet’s upward pull. And so the string’s response to our pull is now limited to a shorter, finite range compared to the previous case.

Figure 3: (Top) Pulling on a floppy string (thick grey) causes it to respond across its entire length. (Middle) Pulling on a string that has been stiffened by a rubber sheet (blue) causes the string to respond only across a finite range. (Bottom) A stronger rubber sheet (dark blue) makes the string even stiffer, and the range of the response even shorter.

Finally, the bottom panel of Fig. 3 shows what would happen if we used a tougher rubber sheet that makes the string even stiffer. The response to our pull would then have an even shorter range.

This is the basic intuition: a stiff string resists moving away from the horizontal line, causing its response to a disturbance to reach out only over a small range.
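If you’d like to see this in numbers rather than pictures, here is a minimal sketch in Python (my own illustration; the tension, the pull, the grid size and the sheet strengths k are all arbitrary choices made for the sake of the example). It finds the static shape of a string pinned at both ends and pulled down at its midpoint, once with no rubber sheet and twice with sheets of increasing strength; the stronger the sheet, the shallower and more localized the dip, just as in Fig. 3.

import numpy as np

def string_profile(k, n=401, tension=1.0, pull=1.0):
    """Static shape of a string pinned at both ends and pulled down at its midpoint.

    Solves  -tension * u'' + k * u = f  on a grid; k is the restoring strength of
    the rubber sheet (k = 0 is the floppy string of Fig. 3's top panel)."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    main = np.full(n - 2, 2.0 * tension / h**2 + k)   # diagonal of the finite-difference matrix
    off = np.full(n - 3, -tension / h**2)             # off-diagonals
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    f = np.zeros(n - 2)
    f[(n - 2) // 2] = -pull / h                       # downward point force at the center
    u = np.zeros(n)
    u[1:-1] = np.linalg.solve(A, f)                   # the two ends stay pinned at zero
    return x, u

for k in [0.0, 400.0, 4000.0]:                        # floppy, stiff, stiffer
    x, u = string_profile(k)
    mid, quarter = len(u) // 2, len(u) // 4
    print(f"k = {k:6.0f}:  dip at center = {u[mid]:+.4f},  dip a quarter of the way along = {u[quarter]:+.4f}")

In this little model the disturbance dies away over a distance of order \sqrt{tension/k}, so a stronger sheet pins the string’s response to a smaller and smaller region around the point where we pull.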

The situation with fields is analogous. A stiff field resists having a non-zero value. Suppose an object disturbs a floppy field, causing the field to be non-zero in its neighborhood. If the field is floppy, the effects on the field can be significant even very far away (as in Fig. 1’s left panel). But if the field is stiff, the energy cost of disturbing the field is much higher, and so effect on the field is only significant in the immediate neighborhood of the object (as in the right panel of Fig. 1).

Note that I have not written the word “quantum” in this section. Just as for the string in Fig. 3, the short range of a force has to do with effects of stiffness that would arise even in a world without quantum physics.

Stiffness and the nature of waves

On our way to understanding why stiffness also leads to particle masses, let’s talk first about waves. We need to do this because what we call “elementary particles”, such as photons or electrons, are wave-like, as we’ll see.

Another effect of stiffness: it becomes more difficult for a field to ripple. That’s not surprising: rippling involves the field’s value being non-zero in a wave-like pattern, as in Fig. 4, and being non-zero costs more energy for a field with stiffness than for one without.

There’s a big difference between the waves of fields with and without stiffness:

  • A field without stiffness has only traveling waves, as in Figs. 4 and 5, which all travel at the same speed.
  • A field with stiffness is more complicated;
    • its waves of high frequency and small wavelength (Fig. 6) are like those of fields without stiffness (Fig. 4),
    • but at a special low frequency it has “standing waves” (of an unfamiliar sort); these vibrate in place (Fig. 7) and do not travel anywhere.

Although I made these statements about floppy and stiff fields, the same statements are true of waves in floppy and stiff strings, which are what Figs. 4-7 actually show. We’ll use the intuition from these figures as we now proceed from waves to “particles”.

Figure 4: A high-frequency wave travels across a floppy string.
Figure 5: As in Fig. 4, but for a lower-frequency wave with a longer wavelength (the space between wave crests); note it moves at the same speed as the wave in Fig. 4.
Figure 6: A high-frequency wave moves down a stiff string; although the pull of the rubber sheet is visible below the crests and above the troughs, its impact is limited, and the wave is similar to that of Fig. 4.
Figure 7: In contrast to Fig. 5, a sufficiently low-frequency wave in a stiff string may form a stationary (“standing”) wave. The stiffness provided by the rubber sheet allows the string to vibrate in place.
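To attach some numbers to these statements, here is a small illustration of my own in Python, using the same dispersion relation that will appear in the math section below, with the wave speed and the minimum frequency both set to 1 as arbitrary illustrative choices. Well above the special minimum frequency f_min, a stiff string’s waves travel essentially like a floppy string’s (Fig. 6 versus Fig. 4); as the frequency drops toward f_min, the wavelength stretches out and the speed at which the wave’s energy travels falls toward zero — the wave just stands and vibrates in place, as in Fig. 7.

import numpy as np

# Dispersion relation for waves in a stiff field or string (see the math section below):
#     f^2 = c^2 / wavelength^2 + f_min^2
# where f_min is the special lowest frequency at which the wave stops traveling.
c = 1.0        # the speed the waves would have without any stiffness (arbitrary units)
f_min = 1.0    # the minimum frequency set by the stiffness (arbitrary units)

for f in [10.0, 3.0, 1.5, 1.1, 1.01]:
    wavelength = c / np.sqrt(f**2 - f_min**2)
    v_energy = c**2 / (f * wavelength)   # group velocity: how fast the wave's energy actually moves
    print(f"f = {f:5.2f} f_min:  wavelength = {wavelength:7.3f},  energy travels at {v_energy:.3f} c")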

Stiffness and the mass of particles

Now, finally, we get to quantum physics. In a quantum world such as ours, the field’s waves are made from indivisible tiny waves, which for historical reasons we call “particles.” Despite their name, these objects aren’t little dots; see Fig. 8. Their shape and extent is similar to the shapes of the waves shown in Figs. 4-7 above, except that they are very small in height (or “amplitude”), and thus very small in the amount of energy they carry.

Figure 8: There’s no perfect intuition for quantum physics. But it’s not helpful to imagine photons and electrons as particles (top right), meaning a “tiny speck”. Nor is it helpful to imagine them as both wave (top left) and particle (top right). Instead (as quantum field theory makes clear), better intuition comes from understanding a photon or electron as a “particle”, meaning a “minimal wave” (bottom left). Because a minimal wave’s height (or “amplitude”), and therefore its energy, are as small as nature allows, it is indivisible.

A floppy field’s waves never stop traveling from place to place, and similarly, its “particles” never stop either.  The electromagnetic field (which combines both electric and magnetic fields) gives us an example of a floppy field: its waves are light waves, and its “particles” are known as photons. In empty space, light waves and their photons are always traveling at a fixed speed, as illustrated in Figs. 4 and 5. That speed is called “c“, and referred to as “the cosmic speed limit” or “the speed of light.”

A stiff field’s high-frequency “particles” travel just like those of a floppy field; compare Fig. 6 to Fig. 4. The stiffening doesn’t affect high-frequency waves very much. But at a lower frequency, the situation is different. There is a frequency for which the “particle” becomes a standing wave; in shape it looks like the standing wave in Fig. 7, but it has an extremely tiny height (or “amplitude”.)

This “particle” (i.e. indivisible mini-version of a standing wave)

  • has energy (it’s vibrating, after all) and
  • is stationary (it’s standing, after all, and not traveling anywhere.) 

Well, any stationary object with energy satisfies Einstein’s equation E=mc^2. Therefore, dividing Einstein’s formula by c^2 and switching the two sides, we get a relation between the “particle’s” mass m and the energy E of its vibration:

  • m = E / c^2

In short, this “particle” has mass simply because it is a stationary and vibrating object. (A reminder: I specifically mean “rest mass”.)
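To put a number on that (an illustration of my own, using the Z boson’s measured rest energy of about 91 GeV): a stationary, vibrating object carrying that much energy has a rest mass of roughly 1.6 \times 10^{-25} kilograms.

# m = E / c^2 for a stationary, vibrating object with E = 91.19 GeV (the Z boson's rest energy)
E_joules = 91.19e9 * 1.602e-19   # 91.19 GeV converted to joules
c = 2.998e8                      # speed of light in meters per second
m = E_joules / c**2
print(f"m = {m:.2e} kg")         # about 1.6e-25 kg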

Only stiff fields can have standing waves in empty space, which in turn are made from “particles” that are stationary and vibrating. And so, the very existence of a “particle” with non-zero mass is a consequence of the field’s stiffness.  

Furthermore, thanks to quantum physics, the amount of energy stored in the vibration of a “particle” is always proportional to its vibrational frequency. For a standing wave, that frequency is proportional to the field’s stiffness. And thus the mass of the “particle” is proportional to the stiffness of the field; if you could double the stiffness, you would double the “particle’s” mass too.

(If you want to understand why all these things are true without appealing to math, then I recommend my article in Quanta magazine — or, if that’s too advanced, my book, as its purpose was to explain these points carefully and non-technically. If you want to see the math behind these statements, stay tuned; that’s coming in just a moment.)

Summing up

This concludes the story at a non-technical level: as emphasized in Fig. 2, the stiffness of a field 

  • makes the associated force short range (no quantum physics required) with a range that is inversely proportional to the stiffness, analogous to what happens in Fig. 3;
  • assures its associated “particles” (i.e. its wavicles) have rest mass proportional to the stiffness (using quantum physics both in the existence of “particles” and the relation between mass and stiffness, sketched in Figs. 4-7.)

So as you see, particle rest mass does not cause forces to be short range; field stiffness does. And while quantum physics is responsible for the existence of “particles” and their mass, virtual particles and quantum uncertainty play no role at all.

Unless you’re interested in seeing the math behind this, you can skip the next section and jump to the last section of this article, which asks: where did the phib actually come from?

The True Story of Mass and Range, With Math

Now, here’s the math that all these words and pictures are based on. (Some of this is also touched on in my series of articles “Fields and Particles: With Math.”)

Let’s take a field \phi(x,y,z,t) that has a value everywhere in space x,y,z and time t.  (Often I won’t write the (x,y,z,t) in order to keep equations shorter and easier to read.) In a universe whose space and time are governed by Einstein’s relativity, one of the equations for any elementary field must take the form

\frac{d^2\phi}{dt^2} - c^2\left[\frac{d^2\phi}{dx^2} + \frac{d^2\phi}{dy^2} + \frac{d^2\phi}{dz^2} - S^2 \phi\right] = 0 \ \ (*)

where {d^2}/{dt^2} is the second derivative with respect to time t, and similarly for the derivatives with respect to space x,y,z, and where S is the stiffness of the field \phi.  I’ll refer to this as “Equation (*).”

In what sense is S a stiffness? Recall the definition of stiffness at the very beginning of this article. The S^2 \phi term in this equation implies that to make \phi non-zero costs energy proportional to S^2 \phi^2. The larger is S, the larger the energy cost and the stiffer the field. If S=0, then there’s no such cost and the field is floppy.

The range of the field’s force

The methods needed to show that the field and its force are short range were already known in the 19th century, and require only first- or second-year undergraduate physics and math.

Let’s place an object which interacts with \phi at the location  x=y=z=0 and wait a while. Soon the field will take on a time-independent form \phi(r), where r is the distance from the object:

r = \sqrt{x^2 + y^2 + z^2 }  

The field’s form can be obtained from Equation (*). The derivatives with respect to time are zero, since the field is time-independent. Also, we can use the math fact that, for any \phi that depends only on r,

\frac{d^2\phi}{dx^2} + \frac{d^2\phi}{dy^2} + \frac{d^2\phi}{dz^2} = \frac{1}{r^2}\frac{d}{dr}\left(r^2 \frac{d\phi}{dr}\right)

[Here’s a proof. At first glance it appears very long. But because we only have a function of r here, all the terms involving angular derivatives are zero, which makes the proof dramatically shorter.]
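In outline, the short version goes like this. For a function \phi that depends only on r = \sqrt{x^2+y^2+z^2}, the chain rule gives d\phi/dx = (x/r)\, d\phi/dr, and differentiating once more gives d^2\phi/dx^2 = (x^2/r^2)\, d^2\phi/dr^2 + (1/r - x^2/r^3)\, d\phi/dr. Adding the corresponding expressions for y and z, and using x^2+y^2+z^2=r^2, leaves d^2\phi/dr^2 + (2/r)\, d\phi/dr, which is exactly \frac{1}{r^2}\frac{d}{dr}\left(r^2 \frac{d\phi}{dr}\right).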

Thus, except at r=0, the field satisfies

-c^2\left[\frac{1}{r^2} \frac{d}{dr} r^2 \frac{d}{dr}- S^2\right] \phi(r) = 0 \ \ (**)

The solution to this equation is

\phi(r) ={e^{ -S r}}/{r}

To get intuition for what this means, let’s see how  \phi(r) varies with distance:

  • When r = 0.01 (1/S),  \phi = 99.0\ S
  • When r = 0.1 (1/S),  \phi = 9.05\ S
  • When r = (1/S), \phi = .368\ S
  • When r = 10 (1/S),  \phi = .0000045\ S
  • When r = 100 (1/S),  \phi = 4\times 10^{-46} S
    • (i.e.,  0.0000000000000000000000000000000000000000000004\ S)

Said another way: at small r \ll 1/S, the field decreases in inverse proportion to the distance, but at r \gg 1/S, the field craters, exponentially.
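If you want to reproduce these numbers yourself, a few lines of Python will do it (my own check, with S set to 1, so that r is measured in units of the range 1/S and \phi comes out in units of S):

import numpy as np

# phi(r) = exp(-S r) / r, evaluated at the distances listed above, with S = 1
S = 1.0
for r in [0.01, 0.1, 1.0, 10.0, 100.0]:
    print(f"r = {r:6.2f} / S   ->   phi = {np.exp(-S*r) / r:.3g} S")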

As for the force created by \phi, the strength of the force is given by the same kind of formula we see for electromagnetism: we take (minus) the radial derivative of the field \phi, just as an electric force comes from the derivative of an electric potential.

F(r) = -\frac{d}{dr} \phi(r) = e^{-Sr}\left(\frac{1}{r^2} + \frac{S}{r}\right)

If S=0, the exponential is 1, so we get, for a floppy field with no stiffness,

F(r)=\frac{1}{r^2} ,

the famous inverse square law that we see for electric forces. 

But if S is nonzero — if the field is stiff — then although we still have an inverse square law at short distances, we have an exponential fall-off at long distances.

  • F(r)={1}/{r^2} \ \ {\rm for} \ \ r \ll 1/S
  • F(r)={Se^{-Sr}}/{r} \ \ {\rm for} \ \ r \gg 1/S

And so both the field and any force associated with it have a range of 1/S. Within that range, they’re as important as any field and force. But once we go to greater distances than that, their effects become absurdly small.
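For anyone who wants to double-check the last few formulas symbolically, here is a short sketch of my own using sympy; it verifies that e^{-Sr}/r solves the static equation (**), reproduces the force formula, and confirms the two limiting behaviors just listed.

import sympy as sp

r, S = sp.symbols('r S', positive=True)
phi = sp.exp(-S*r) / r

# phi solves the static equation (**):  (1/r^2) d/dr ( r^2 dphi/dr ) - S^2 phi = 0
radial_laplacian = sp.diff(r**2 * sp.diff(phi, r), r) / r**2
print(sp.simplify(radial_laplacian - S**2 * phi))        # -> 0

# The force F = -dphi/dr matches exp(-S r) * (1/r^2 + S/r)
F = -sp.diff(phi, r)
print(sp.simplify(F - sp.exp(-S*r) * (1/r**2 + S/r)))    # -> 0

# Limiting behavior: inverse-square at r << 1/S, exponential fall-off at r >> 1/S
print(sp.limit(F * r**2, r, 0))                          # -> 1, so F ~ 1/r^2 at short distance
print(sp.simplify(F / (S * sp.exp(-S*r) / r)))           # -> 1 + 1/(S*r), which -> 1 when r >> 1/S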

NOTICE THERE IS NO QUANTUM PHYSICS IN THIS DISCUSSION!  The short range of the field is a “classical” effect; i.e., it can be understood without any knowledge of the underlying role of quantum physics in our universe. It arises straightforwardly from ordinary field concepts and an ordinary differential equation. Nothing uncertain about it.

Waves and “Particles”/Wavicles

Even without quantum physics, the equation for \phi has well-known wave solutions, such as

  • \phi(x,y,z,t) = A \cos [2 \pi (ft-x/\lambda)]

which is a wave of frequency f and wavelength \lambda moving in the x direction.  Equation (*) for \phi says

  • f^2 - c^2/\lambda^2 - (S/2\pi)^2 = 0

This relation between frequency and wavelength is known as the “dispersion relation” for these waves.

In quantum physics, these waves are made from “particles”, which are waves whose amplitude A is such that the “particle’s” energy E and momentum p are related to its frequency and wavelength by Planck’s famous constant “h“:

  • E = h f
  • p = h/\lambda

This turns the dispersion relation into the formula

  • E^2 - p^2 c^2 = (h S/2 \pi)^2

But that is a famous equation! In Einstein’s relativity, this is the relationship between energy and momentum for an object with mass.

Specifically, for a stationary “particle”, whose momentum p is zero, the above formula becomes

  • E^2 = (h S/2 \pi)^2

But since E=mc^2 for a stationary object, this means

  • (mc^2)^2 = (h S/2 \pi)^2 \ \Rightarrow \ \ m = h S /(2 \pi c^2)

and thus this wavicle has rest mass m that is proportional to the stiffness S of the field \phi.
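If you’d like to check that little chain of algebra symbolically, here is a short sympy sketch of my own; it simply re-does the substitutions described above, starting from the dispersion relation.

import sympy as sp

E, p, c, h, S, f, lam, m = sp.symbols('E p c h S f lambda m', positive=True)

# Dispersion relation f^2 - c^2/lambda^2 - (S/2pi)^2 = 0, rewritten using E = h f and p = h/lambda:
dispersion = f**2 - c**2/lam**2 - (S/(2*sp.pi))**2
relativistic = sp.expand(h**2 * dispersion.subs({f: E/h, lam: h/p}))
print(relativistic)                      # E**2 - c**2*p**2 - S**2*h**2/(4*pi**2)

# Set p = 0 (a stationary "particle") and E = m c^2, then solve for the mass m:
stationary = relativistic.subs({p: 0, E: m*c**2})
print(sp.solve(stationary, m))           # [S*h/(2*pi*c**2)]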

This is a quantum effect! Planck’s constant h, the mascot of quantum physics, appears here. 

But that does not mean the short range of the force is a quantum effect. Nor does the mass of the particle cause the short range of the force (or vice versa.) Most important of all, there has been no mention of quantum uncertainty or virtual particles. None is needed.
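To close the loop with the claim at the top of this article, here is a back-of-the-envelope estimate of my own (not part of the derivation above), using the standard fact that a force carried by a particle of rest mass m reaches out to distances of order \hbar/(mc). For the Z boson, whose measured rest energy is about 91.19 GeV, that distance comes out to roughly 2 \times 10^{-18} meters — a few tens of millions of times smaller than an atom.

# Back-of-the-envelope range of the weak nuclear force, from the Z boson's measured rest energy
hbar_c = 197.327          # hbar * c in MeV * femtometers (a standard conversion constant)
m_Z = 91187.6             # Z boson rest energy in MeV
atom = 1e-10              # rough size of an atom, in meters

range_m = (hbar_c / m_Z) * 1e-15                      # femtometers converted to meters
print(f"weak-force range ~ {range_m:.1e} m")          # about 2e-18 m
print(f"atom size / range ~ {atom / range_m:.0e}")    # tens of millions, as claimed at the top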

Why the Phib about Quantum Uncertainty and Virtual Particles?

So why is there such a popular phib that talks all about virtual particles? Because of the cult of something called “Feynman diagrams.” 

I refer to it as a cult, somewhat tongue in cheek, not because Feynman diagrams aren’t accurate and useful, but because they are less important and revealing than they are often made out to be. They are not a good guide for how to think about the physics of quantum field theory, the modern language of particle physics. Instead, they are merely a tool for doing certain types of particle-related calculations.  (We see this today in the fact that quantum field theory has never been on a stronger footing, and yet Feynman diagrams are used less and less every decade, even by people who do these very same calculations.)  Extracting more physics out of Feynman diagrams than they actually contain resembles a cult activity, and while this was common fifty years ago, before quantum field theory was well-understood, few theoretical physicists of the current generation subscribe to it.

The Feynman diagrams shown in Fig. 9 represent the electromagnetic force between two electrons (left panel), and the weak nuclear force between two neutrinos (right panel). The wiggly lines do indeed represent “virtual particles”. The lingo is that these virtual particles are said to be “exchanged” between the two outer particles, as though they are objects that are being thrown back and forth.

Figure 9: (Left) Feynman diagram for computing the electric force between two electrons existing over time and separated by a distance in space; a “virtual photon” appears in the calculation. (Right) The same for the calculation of the weak nuclear force between two neutrinos, involving a virtual Z boson.

But in this context, these virtual particles are actually nothing but a representation of a field’s shape — exactly the same thing that I drew in Fig. 1 at the beginning of this post! Nothing is moving back and forth; nothing is actually being “exchanged”. No quantum physics is actually involved.

More precisely, for those who have read the mathy section above, we have calculated these shapes already:

  • in the left diagram, the “virtual photon” line simply represents a long-range field: the shape of the electromagnetic potential  (1/r)
  • in the right diagram, the “virtual Z boson” line represents a short-range field: the shape of the Z field (\exp[-Sr]/r).
    • (where S, the Z field’s stiffness, is proportional to the Z boson’s mass.)  

[For physics students: these statements are obvious when the electrons and neutrinos are at rest and the “propagators” — i.e., the corresponding Green functions — of the virtual photon and Z boson are written in position space; just take the Fourier transform of the more familiar momentum-space expressions 1/|\vec p|^2 and 1/(|\vec p|^2 + m_Z^2).]
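If you want to check that Fourier-transform claim numerically, here is a sketch of my own. After the standard angular integration, the 3D transform of 1/(|\vec p|^2 + m^2) reduces, up to constant factors, to (1/r) times a one-dimensional sine integral, and that integral should equal (\pi/2)\, e^{-mr} — i.e., the exp[-Sr]/r shape quoted above. Scipy’s quad routine, with its Fourier weight option, handles the oscillatory integral directly.

import numpy as np
from scipy.integrate import quad

# Check that the Fourier transform of 1/(p^2 + m^2) falls off as exp(-m r)/r.
# After the angular integrals, the 3D transform reduces (up to constants) to
#   (1/r) * Integral_0^inf  p sin(p r) / (p^2 + m^2)  dp,
# and the remaining 1D integral should equal (pi/2) * exp(-m r).
m = 1.0
for r_val in [0.5, 1.0, 2.0, 5.0]:
    val, _ = quad(lambda p: p / (p**2 + m**2), 0, np.inf, weight='sin', wvar=r_val)
    print(f"r = {r_val}:  integral = {val:.6f},  (pi/2) exp(-m r) = {np.pi/2 * np.exp(-m*r_val):.6f}")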

A field’s shape around an object, as in Fig. 1, is something clear and comprehensible. Undergraduate physics students encounter floppy fields in their first physics class, when they study electric fields and potentials around a charged object. The math of a stiff field involves the same ideas, with a small twist; it can certainly be explained to first- or second-year students. Why do we take these simple concepts and restate them in terms of “virtual particles” and Feynman diagrams? It’s an unnecessary mystification, making forces seem quantum, uncertain and physically bizarre. (I say “bizarre” because the “virtual particles” in the diagrams, carrying zero energy and positive momentum, are tachyons, with negative mass-squared!… yet another reason not to try to view “virtual particles” as if they really were “particles”.)

It’s a bad idea to take something accessible with first-year calculus and transform it into something that only physics graduate students can potentially understand.  Why do we physicists so often make things harder to explain, instead of the opposite? I have no answer, but I hope it will change.

5 Responses

  1. Thanks – I have always assumed that when the inverse square law no longer applies the force can no longer be naively described by the tool of field lines and is by necessity short range. (Maybe so, but now I know why.)

    “And don’t ask me how indivisible waves can exist; I don’t know. Experiments teach us that they do. But no one can easily visualize how and why this happens.”

    If they didn’t, I assume the ultraviolet energy catastrophe of black body thermal radiation would occur. Speaking of necessity, it seems to me that often when nature meets a constraint that we haven’t evolved to experience such as classical relativity or quantum physics, it has figuratively thrown up its hands and say “if I have to, just so there’s no problem – but excuse me if it doesn’t seem to make sense”. Time dilation, length contraction, quantized “wavicles”, …

  2. I just finished reading the book on one of Feynman’s popular lectures from 50 years ago – QED,
    where he explains QED by thinking just in terms of particles and quantum amplitudes.

    I guess history hasn’t really favoured that approach to teaching fundamental physics.

    1. Needless to say, Feynman was one of the great geniuses of the 20th century, and the discovery of the path integral method and Feynman diagram method for calculations completely revolutionized what was then understood about quantum field theory. [Along with Schwinger’s methods, which were more powerful for certain calculations, but only Schwinger could understand how to use them.]

      However, it is often the case that when physicists discover something new and try to interpret it, they don’t get it right the first time. In the 1905-1910 years, Einstein didn’t believe the notion of Minkowski space-time was useful; then he changed his mind. He also changed his mind about the interpretation of E=mc^2 between his early years and his later years. Einstein never did have black holes or gravitational waves straight throughout his lifetime; it took physicists several decades.

      So it is no criticism of Feynman to say that it turns out that the language that he introduced — focused on his diagrams, and full of the language of virtual particles — proves really to be more math language than physics language. He could not have known that right away (though he was probably more aware of it than most people by the 1960s — unfortunately I never met him and wouldn’t have known to ask until ten years after he died.) Quantum field theory does many, many things that Feynman diagrams do not explain or reveal, and even in theories like QED and QCD, there are conceptual issues that are very hard to approach using Feynman diagrams and that really require other approaches.

      In that sense, you are right: history has not favoured this approach. Even the calculations that Feynman diagrams make possible can often be done much more efficiently using other methods. But Feynman’s method is general and almost foolproof, and so novices in quantum field theory could use it and understand it even from the beginning (which was certainly not true of Schwinger’s methods.) In this it was completely revolutionary. But it was revolutionary in that it made calculations much easier, which led to deeper conceptual understanding — from which we eventually learned that we actually often don’t need Feynman’s methods after all, and there are better ways to understand quantum field theory.

      This is why we admire the great physicists of the past, but we do not worship every word and bow down to ancestral authority. Nobody, no matter how smart, gets it all right, and even the greatest scientists make statements that they — or we — come to recognize as naive, even if they seemed sensible at the time and are understandable in retrospect. (Just look at all the many wrong statements that were made over the decades about infinities and renormalization in QFT.)
