*[This is the third post in a series; here’s #1 and #2.]*

The quantum field theory that we use to describe the known particles and forces is called the “Standard Model”, whose structure is shown schematically in Figure 1. It involves an interweaving of three quantum field theories — one for the electromagnetic force, one for the weak nuclear force, and one for the strong nuclear force — into a single more complex quantum field theory.

We particle physicists are extremely fortunate that this particular quantum field theory is just one step more complicated than the very simplest quantum field theories. If this hadn’t been the case, we might still be trying to figure out how it works, and we wouldn’t be able to make detailed and precise predictions for how the known elementary particles will behave in our experiments, such as those at the Large Hadron Collider [LHC].

In order to make predictions for processes that we can measure at the LHC, using the equations of the Standard Model, we employ a method of *successive approximation* (with the jargon name “method of perturbations”, or “perturbation theory”). It’s a very common method in math and science, in which

- we make an initial rough estimate,
- and then correct the estimate,
- and then correct the correction,
- etc.,

until we have a prediction that is precise enough for our needs.
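To make the idea concrete, here is a minimal sketch in Python of successive approximation applied to a toy equation (my own illustrative example, not a Standard Model calculation): we solve x = 1 + g·x² for small g by starting with a rough estimate and repeatedly correcting it.

```python
# Toy illustration of successive approximation (not a real physics
# calculation): solve x = 1 + g * x**2 for small g by repeated correction.

def successive_approximation(g, steps):
    """Start from the rough g = 0 estimate x = 1, then feed each
    improved estimate back into the equation to correct it."""
    x = 1.0                     # initial rough estimate (ignore the g term)
    history = [x]
    for _ in range(steps):
        x = 1.0 + g * x**2      # correction built from the previous estimate
        history.append(x)
    return history

estimates = successive_approximation(g=0.1, steps=5)
# The estimates settle down quickly: 1.0, 1.1, 1.121, 1.1257, ...
# approaching the exact answer (1 - sqrt(1 - 4g)) / (2g) ≈ 1.1270.
```

Each correction is smaller than the one before it, which is exactly the behavior that makes the method useful.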

What are those needs? Well, the precision of any measurement, in any context, is always limited by having

- a finite amount of data (so small statistical flukes are common),
- and imperfect equipment (so small mistakes are inevitable).

What we need, for each measurement, is a prediction a little more precise than the measurement will be, but not much more so. In the difficult environment of the LHC, where measurements are really hard, we often need only the first correction to the original estimate; sometimes we need the second (see Figure 2).

Until recently the calculations were done by starting with Feynman’s famous diagrams, but the diagrams are not as efficient as one would like, and new techniques have made them mostly obsolete for really hard calculations.

The method of successive approximation works as long *as all the forces involved are rather “weak”*, in a technical sense. Now this notion of “weak” is complicated enough (and important enough) that I wrote a whole article on it, so those who really want to understand this should read that article. The brief summary suitable for today is this: suppose you took two particles that are attracted to each other by a force, and allowed them to glom together, like an electron and a proton, to form an atom-like object. Then *if the relative velocity of the two particles is small compared to the speed of light, the force is weak*. The stronger the force, the faster the particles will move around inside their “atom”. (For more details, see this article.)

For a weak force, the method of successive approximation is very useful, because the correction to the initial estimate is small, and the correction to the correction is smaller, and the correction to the correction to the correction is even smaller. So for a weak force, the first or second correction is usually enough; one doesn’t have to calculate forever in order to get a sufficiently precise prediction. The “stronger” the force, in this technical sense, the harder you have to work to get a precise prediction, because the corrections to your estimate are larger.

If a force is truly strong, though, everything breaks down. In that case, the correction to the estimate is as big as the estimate, and the next correction is again just as big, so no method of successive approximation will get you close to the answer. In short, *for truly strong forces, you need a completely different approach* if you are to make predictions.
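The breakdown can be seen numerically in a toy equation (again my own illustrative example, not a real force calculation): when solving x = 1 + g·x² by repeated correction, a small g gives shrinking corrections, but a g of order one gives “corrections” as large as what they correct, and the method never settles down.

```python
# Toy illustration of when successive approximation fails (not a real
# physics calculation): track the size of each correction when solving
# x = 1 + g * x**2 by repeated substitution.

def correction_sizes(g, steps):
    """Return |change| at each step of the successive-approximation loop."""
    x = 1.0                       # initial rough estimate
    sizes = []
    for _ in range(steps):
        x_new = 1.0 + g * x**2
        sizes.append(abs(x_new - x))
        x = x_new
    return sizes

print(correction_sizes(g=0.1, steps=4))  # shrinking: the method converges
print(correction_sizes(g=1.0, steps=4))  # growing: the method fails outright
```

(For g = 1, the equation x = 1 + x² has no real solution at all, a crude analogue of the naive estimate being qualitatively wrong.)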

In the Standard Model, the electromagnetic force and the weak nuclear force are “weak” in all contexts. However, the strong nuclear force is (technically) “strong” for any processes that involve distances comparable to or larger than a proton‘s size (about 100,000 times smaller than an atom) or energies comparable to or smaller than a proton’s mass-energy (about 1 GeV). For such processes, successive approximation does not work at all; it can’t be used to calculate a proton’s size or mass or internal structure. In fact *the first step in that method would estimate that quarks and anti-quarks and gluons are free to roam independently and the proton should not exist at all*… which is so obviously completely wrong that no method of correcting it will ever give the right answer. I’ll get back to how we show protons are predicted by these equations, using big computers, in a later post.

But there’s a remarkable fact about the strong nuclear force. As I said, at distances the size of a proton or larger, the strong nuclear force is so strong that successive approximation doesn’t work. Yet, at distances shorter than this, **the force actually becomes “weak”**, *in the technical sense*, and successive approximation *does* work there.

Let me make sure this is absolutely clear, because what we think of colloquially as “weak” is different from “weak” in the technical sense I’m using here. Suppose you put two quarks very close together, at a distance r closer together than the radius R of a proton. In Figure 3 I’ve plotted how big the strong nuclear force (purple) and the electromagnetic force (blue) would be between two quarks, as a function of the distance between them. Notice both forces are very strong (colloquially) at short distances (r << R), but (I assert) both forces are weak (technically) there. The electromagnetic force is much the weaker of the two, which is why its curve is lower in the plot.

Now if you move the two quarks apart a bit (increasing r, but still with r << R), both forces become smaller; in fact both decrease almost like 1/r², which would be your first, naive estimate, same as in your high school science class. If this naive estimate were correct, both forces would maintain the same strength (technically) at all distances r.

But this isn’t quite right. Since the 1950s, it has been well known that the correction to this estimate (computed using successive approximation methods) makes the electromagnetic force decrease *just a little faster* than 1/r²; it becomes a tiny bit weaker (technically) at longer distances. In the 1960s, that’s what most people thought any force described by quantum field theory would do. But they were wrong. In 1973, David Politzer, and David Gross and Frank Wilczek, showed that **for the quantum field theory of quarks and gluons, the correction to the naive estimate goes the other direction**; it makes the force decrease just a little more **slowly** than 1/r². *[Gerard 't Hooft had also calculated this, but apparently without fully recognizing its importance...?]*

It is the small, accumulating excess above the naive estimate — the gradual deviation of the purple curve from its naive 1/r² form — that leads us to say that this force becomes technically “stronger” and “stronger” at larger distances. Once the distance r becomes comparable to a proton’s size R, the force becomes so “strong” that successive approximation methods fail. As shown in the figure, we have some evidence that the force becomes constant for r >> R, independent of distance. It is this effect that, as we’ll see next time, is responsible for the existence of protons and neutrons, and therefore of all ordinary matter.
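Numerically, this 1973 result is usually packaged as a “running coupling”. The sketch below uses the standard one-loop formula for the strong coupling strength; the values of Λ (roughly 0.2 GeV) and the number of quark flavors n_f are rough textbook inputs I am assuming for illustration, not precision values.

```python
import math

# One-loop "running" of the strong coupling alpha_s (standard leading-order
# formula; Lambda and n_f below are rough illustrative inputs, not precise).
def alpha_s(Q_GeV, n_f=5, Lambda_GeV=0.2):
    """alpha_s(Q) = 12*pi / ((33 - 2*n_f) * ln(Q^2 / Lambda^2))."""
    return 12 * math.pi / ((33 - 2 * n_f) * math.log(Q_GeV**2 / Lambda_GeV**2))

# High energy (short distance): the coupling is small, so successive
# approximation works; near the proton's ~1 GeV scale it grows much larger.
print(alpha_s(1000.0))   # roughly 0.1
print(alpha_s(2.0))      # a few times larger
```

The qualitative point is the direction of the trend: the coupling shrinks as the energy grows (i.e., as the distance shrinks), which is the opposite of the electromagnetic case.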

So: *at very short distances and high energies, the strong nuclear force is a somewhat “weak” force*, still stronger than the electromagnetic and weak nuclear forces, but similar to them. And therefore, successive approximation **can** tell you what happens when a quark inside one speeding proton hits a quark in a proton speeding the other direction, as long as the quarks collide with energy far more than 1 GeV. *If this weren’t true, we could make scarcely any predictions at the LHC, and at similar proton-proton and proton-antiproton colliders!* (We do also need to know about the proton’s structure, but we don’t calculate that: we simply measure it in other experiments.) In particular, we would never have been able to calculate how often we should be making top quarks, as in Figure 2. And we would not have been able to calculate what the Standard Model, or any other quantum field theory, predicts for the rate at which Higgs particles are produced, so we’d never have been sure that the LHC would either find or exclude the Standard Model Higgs particle. Fortunately, it *is* true, and that is why precise predictions can be made, for so many processes, at the LHC. And the success of those and other predictions underlies our confidence that the Standard Model correctly describes most of what we know about particle physics.

But still, the equations of the strong nuclear force have only quarks and anti-quarks and gluons in them — no protons, neutrons, or other hadrons. Our understanding of the real world would certainly be incomplete if we didn’t know why there are protons. Well, it turns out that *if we want to know whether protons and neutrons and other hadrons are actually predicted by the strong nuclear force’s equations, we have to test this notion using big computers*. And that’s tricky, even trickier than you might guess.

*Continued here*
