Of Particular Significance

The Standard Model More Deeply: The Magic Angle Nailed Down

POSTED BY Matt Strassler

ON 12/19/2024

In a previous post, I showed you that the Standard Model, armed with its special angle θw of approximately 30 degrees, does a pretty good job of predicting a whole host of processes. I focused attention on the decays of the Z boson, but there were many more processes mentioned in the bonus section of that post.

But the predictions aren’t perfect. They’re not enough to convince a scientist that the Standard Model might be the whole story. So today let’s bring these predictions into better focus.

There are two major issues that we have to correct in order to make more precise predictions using the Standard Model:

  • In contrast to what I assumed in the last post, θw isn’t exactly 30 degrees (i.e. sin θw isn’t exactly 1/2)
  • Although I have ignored it so far, the strong nuclear force has small but important effects

But before we deal with these, we have to fix something with the experimental measurements themselves.

Knowledge and Uncertainty: At the Center of Science

No one complained — but everyone should have — that when I presented the experimental results in my previous post, I expressed them without the corresponding uncertainties. I did that to keep things simple. But it wasn’t professional. As every well-trained scientist knows, when you are comparing an experimental result to a theoretical prediction, the uncertainties, both experimental and theoretical, are absolutely essential in deciding whether your prediction works or not. So we have to discuss this glaring omission.

Here’s how to read typical experimental uncertainties (see Figure 1). Suppose a particle physicist says that a quantity is measured to be x ± y — for instance, that the top quark mass is measured to be 172.57 ± 0.29 GeV/c². Usually (unless explicitly noted) that means that the true value has a 68% chance of lying between x−y and x+y — “within one standard deviation” — and a 95% chance of lying between x−2y and x+2y — “within two standard deviations.” (See Figure 1, where x and y are called μ and σ.) The chance of the true value being more than two standard deviations away from x is about 5% — about 1/20. That’s not rare! It will happen several times if you make a hundred different measurements.

Figure 1: Experimental uncertainties corresponding to μ ± σ, where μ is the “central value” and σ is a “standard deviation.”

But the chance of being more than three standard deviations away from x is a small fraction of a percent — as long as the cause is purely a statistical fluke — and that is indeed rare. (That said, one has to remember that big differences between prediction and measurement can also be due to an unforeseen measurement problem or feature. That won’t be an issue today.)
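As a concrete check of these percentages, here is a short Python sketch (my own illustration, not part of the original discussion) that computes the chance that a Gaussian-distributed measurement lands within 1, 2, or 3 standard deviations of the central value, using the error function:

```python
import math

def coverage(n_sigma):
    """Chance that a Gaussian-distributed measurement lies within
    n_sigma standard deviations of the central value."""
    return math.erf(n_sigma / math.sqrt(2))

for n in (1, 2, 3):
    print(f"within {n} standard deviation(s): {coverage(n):.2%}")
# within 1: ~68%; within 2: ~95%; within 3: ~99.7%
```

The complement of the two-sigma number is the roughly 1-in-20 chance quoted above, while the three-sigma complement is only a few parts in a thousand.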

W Boson Decays, More Precisely

Let’s first look at W decays, where we don’t have the complication of θw, and see what happens when we account for the effect of the strong nuclear force and the impact of experimental uncertainties.

The strong nuclear force slightly increases the rate for the W boson to decay to any quark/anti-quark pair, by about 3%. This is due to the same effect discussed in the “Understanding the Remaining Discrepancy” and “Strength of a Force” sections of this post… though the effect here is a little smaller (as the strong force weakens at shorter distances and higher energies). This slightly increases the percentages for quarks and, to compensate, slightly reduces the percentages for the electron, muon and tau (the “leptons”).
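To see roughly where these corrected percentages come from, here is a sketch of the counting (my own illustration, not from the post). It assumes a leading-order correction factor of 1 + αs/π with αs ≈ 0.12 at the W mass scale; the numbers used in professional calculations differ slightly:

```python
import math

ALPHA_S = 0.12                  # rough strong coupling at the W mass scale
qcd = 1 + ALPHA_S / math.pi     # roughly 4% enhancement for quark modes

# Naive counting: three lepton channels (e nu, mu nu, tau nu) and two
# quark channels (u dbar, c sbar), each quark channel tripled by color,
# giving nine equally likely "slots" before the strong-force correction.
lepton_weight = 3 * 1.0
quark_weight = 2 * 3 * qcd
total = lepton_weight + quark_weight

br_each_lepton = 1.0 / total
br_hadrons = quark_weight / total
print(f"each lepton: {br_each_lepton:.2%}, all quarks: {br_hadrons:.2%}")
```

Each lepton’s share drops from the naive 1/9 ≈ 11.1% to about 10.8%, while the quark share rises from 2/3 ≈ 66.7% to about 67.5%, which is the shift between the left and center columns of Figure 2.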

Figure 2 shows the Standard Model’s predictions for the probabilities of the W- boson’s various decays:

  • At left are the predictions made in the previous post.
  • At center are better predictions that account for the strong nuclear force.

(To do this properly, uncertainties on these predictions should also be provided. But I don’t think that doing so would add anything to this post, other than complications.) These predictions are then compared with the experimental measurements of several quantities, shown at right; certain combinations of these decays that are a little easier to measure are also shown. (The measurements and uncertainties are published by the Particle Data Group here.)

Figure 2: The decay probabilities for W bosons, showing the percentage of W bosons that decay to certain particles. Predictions are given both before (left) and after (center) accounting for effects of the strong nuclear force. Experimental results are given at right, showing all measurements that can be directly performed.

The predictions and measurements do not perfectly agree. But that’s fine; because of the uncertainties in the measurements, they shouldn’t perfectly agree! All of the differences are less than two standard deviations, except for the probability for decay of a W to a tau and its anti-neutrino. That deviation is less than three standard deviations — and as I noted, if you have enough measurements, you’ll occasionally get one that differs by more than two standard deviations. We still might wonder if something funny is up with the tau, but we don’t have enough evidence of that yet. Let’s see what the Z boson teaches us later.

In any case, to a physicist’s eye, there is no sign here of any notable disagreement between theory and experiment in these results. Within current uncertainties, the Standard Model correctly predicts the data.

Z Boson Decays, More Precisely

Now let’s do the same for the Z boson, but here we have three steps:

  • first, the predictions when we take sin θw = 1/2, as we did in the previous post;
  • second, the predictions when we take sin θw = 0.48;
  • third, the better predictions when we also include the effect of the strong nuclear force.
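For readers who want to see the arithmetic behind these steps, here is a rough sketch (my own, not from the post). It assumes the textbook tree-level couplings g_L = T3 − Q sin²θw and g_R = −Q sin²θw for each fermion, with a color factor of 3 and the same approximate 1 + αs/π strong-force correction for quarks; a professional calculation would include further refinements:

```python
import math

def z_partial_widths(sin2_w, alpha_s=0.12):
    """Relative Z decay weights from g_L^2 + g_R^2 per fermion, with
    g_L = T3 - Q*sin^2(theta_w) and g_R = -Q*sin^2(theta_w)."""
    qcd = 1 + alpha_s / math.pi  # approximate strong-force enhancement
    # (label, weak isospin T3, electric charge Q, color factor, flavors)
    fermions = [
        ("neutrinos (3 types)", +0.5,  0.0, 1, 3),
        ("e/mu/tau",            -0.5, -1.0, 1, 3),
        ("up-type (u, c)",      +0.5, +2/3, 3, 2),
        ("down-type (d, s, b)", -0.5, -1/3, 3, 3),
    ]
    weights = {}
    for label, t3, q, color, flavors in fermions:
        gl = t3 - q * sin2_w
        gr = -q * sin2_w
        width = color * (gl**2 + gr**2)
        if color == 3:           # only quarks feel the strong nuclear force
            width *= qcd
        weights[label] = flavors * width
    return weights

w = z_partial_widths(0.48**2)
total = sum(w.values())
for label, val in w.items():
    print(f"{label}: {val / total:.2%}")
```

With sin θw = 0.48, this gives roughly 3.35% per charged lepton and about 20% for the three neutrino species combined, close to the measured values quoted in Figure 3; rerunning it with sin2_w = 0.25 and alpha_s = 0 reproduces the cruder predictions of the previous post.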

And again Figure 3 compares predictions with the data.

Figure 3: The decay probabilities for Z bosons, showing the percentage of Z bosons that decay to certain particles. Predictions are given (left to right) for sin θw = 0.5, for sin θw = 0.48, and again for sin θw = 0.48 with the effect of the strong nuclear force accounted for. Experimental results are given at right, showing all measurements that can be directly performed.

You notice that some of the experimental measurements have extremely small uncertainties! This is especially true of the decays to electrons, to muons, to taus, and (collectively) to the three types of neutrinos. Let’s look at them closely.

If you look at the predictions with sin θw = 1/2 for the electrons, muons and taus, they are in disagreement with the measurements by a lot. For example, in Z decay to muons, the initial prediction differs from the data by 19 standard deviations!! Not even close. For sin θw = 0.48 but without accounting for the strong nuclear force, the disagreement drops to 11 standard deviations; still terrible. But once we account also for the strong nuclear force, the predictions agree with data to within 1 to 2 standard deviations for all three types of particles.

As for the decays to neutrinos, the three predictions differ by 16 standard deviations, 9 standard deviations, and… below 2 standard deviations.

My reaction, when this data came in in the 1990s, was “Wow.” I hope yours is similar. Such close matching of the Standard Model’s predictions with highly precise measurements is a truly stunning success.

Notice that the successful prediction requires three of the Standard Model’s forces: the mixture of the electromagnetic and weak nuclear forces given by the magic angle, with a small effect from the strong nuclear force. Said another way, all of the Standard Model’s particles except the Higgs boson and top quark play a role in Figs. 2 and 3. (The Higgs field, meanwhile, is secretly in the background, giving the W and Z bosons their masses and affecting the Z boson’s interactions with the other particles; and the top quark is hiding in the background too, since it can’t be removed without changing how the Z boson interacts with bottom quarks.) You can’t take any part of the Standard Model out without messing up these predictions completely.

Oh, and by the way, remember how the probability for W decay to a tau and a neutrino in Fig. 2 was off the prediction by more than two standard deviations? Well there’s nothing weird about the tau or the neutrinos in Fig. 3 — predictions and measurements agree just fine — and indeed, no numbers in Z decay differ from predictions by more than two standard deviations. As I said earlier, the expectation is that about one in every twenty measurements should differ from its true value by more than two standard deviations. Since we have over a dozen measurements in Figs. 2 and 3, it’s no surprise that one of them might be two standard deviations off… and so we can’t use that single disagreement as evidence that the Standard Model doesn’t work.

Asymmetries, Precisely

Let’s do one more case: one of the asymmetries that I mentioned in the bonus section of the previous post. Consider the forward-backward asymmetry shown in Fig. 4. Take all collisions in which an electron strikes a positron (the anti-particle of an electron) and turns into a muon and an anti-muon. Now compare the probability that the muon goes “forward” (roughly the direction that the electron is heading) to the probability that it goes “backward” (roughly the direction that the positron is heading). If the two probabilities were equal, then the asymmetry would be zero; if the muon always went forward, the asymmetry would be 100%; if always backward, it would be -100%.

Figure 4: In electron-positron collisions that make a muon/anti-muon pair, the forward-backward asymmetry compares the rate for “forward” production (where the muon travels roughly in the same direction as the electron) to “backward” production.

Asymmetries are special because the effect of the strong nuclear force cancels out of them completely, and so they only depend on sin θw. And this particular “leptonic forward-backward” asymmetry is an example with a special feature: if sin θw were exactly 1/2, this asymmetry for lepton production would be predicted to be exactly zero.

But the measured value of this asymmetry, while quite small (less than 2%), is definitely not zero, and so this is another confirmation that sin θw is not exactly 1/2. So let’s instead compare the prediction for this asymmetry using sin θw = 0.48, the choice that worked so well for the Z boson’s decays in Fig. 3, with the data.
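For the curious, the standard tree-level formula behind this prediction is A_FB = (3/4) A_e A_μ, where A_ℓ = 2 g_V g_A / (g_V² + g_A²) for a charged lepton, with g_V = −1/2 + 2 sin²θw and g_A = −1/2. A quick sketch (my own illustration, not from the post):

```python
def afb_leptons(sin2_w):
    """Tree-level forward-backward asymmetry for e+ e- -> mu+ mu- at the
    Z pole: A_FB = (3/4) * A_e * A_mu, with identical lepton couplings."""
    gv = -0.5 + 2.0 * sin2_w   # vector coupling of a charged lepton
    ga = -0.5                  # axial coupling of a charged lepton
    a_lepton = 2.0 * gv * ga / (gv**2 + ga**2)
    return 0.75 * a_lepton**2

print(f"A_FB for sin theta_w = 0.5:  {afb_leptons(0.25):.2%}")     # exactly zero
print(f"A_FB for sin theta_w = 0.48: {afb_leptons(0.48**2):.2%}")  # about 1.8%
```

Because the asymmetry is proportional to g_V², it vanishes exactly when sin²θw = 1/4, which is the special feature noted above; at sin θw = 0.48 it comes out near 1.8%.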

In Figure 5, the horizontal axis shows the lepton forward-backward asymmetry. The prediction of 1.8% that one obtains for sin θw = 0.48, widened slightly to cover 1.65% to 2.0% (which is what one obtains for sin θw between 0.479 and 0.481), is shown in pink. The four open circles represent four measurements of the asymmetry by the four experiments that were located at the LEP collider; the dashes through the circles show the standard deviations on their measurements. The dark circle shows what one gets when one combines the four experiments’ data together, obtaining an even better statistical estimate: 1.71 ± 0.10%, the uncertainty being indicated both as the dash going through the solid circle and as the yellow band. Since the yellow band extends to just above 1.8%, we see that the data differs from the sin θw = 0.480 prediction (the center of the pink band) by less than one standard deviation… giving precise agreement of the Standard Model with this very small but well-measured asymmetry.
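The combination of the four experiments’ results is a standard inverse-variance weighted average: each measurement is weighted by 1/σ², and the combined uncertainty shrinks accordingly. Here is a generic sketch; the input numbers below are illustrative stand-ins, not the actual LEP measurements (which are shown in Figure 5):

```python
import math

def combine(measurements):
    """Inverse-variance weighted average of independent measurements,
    each given as a (value, uncertainty) pair."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    mean = sum(w * x for (x, _), w in zip(measurements, weights)) / sum(weights)
    uncertainty = 1.0 / math.sqrt(sum(weights))
    return mean, uncertainty

# Illustrative stand-in numbers (asymmetries in percent), NOT the real LEP data:
four_experiments = [(1.45, 0.25), (1.71, 0.18), (1.89, 0.22), (1.79, 0.20)]
mean, sigma = combine(four_experiments)
print(f"combined: {mean:.2f} +/- {sigma:.2f} %")
```

Note that the combined uncertainty (about ±0.10% for these inputs) is smaller than that of any single experiment, which is why the solid circle in Figure 5 carries tighter error bars than the open ones.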

Figure 5: The data from four experiments at the LEP collider (open circles, with uncertainties shown as dashes), and the combination of their results (closed circle) giving an asymmetry of 1.70% with an uncertainty of ±0.10% (yellow bar.) The prediction of the Standard Model for sin θw between 0.479 and 0.481 is shown in pink; its central value of 1.8% is within one standard deviation of the data.

Predictions of other asymmetries show similar success, as do numerous other measurements.

The Big Picture

Successful predictions like these, especially ones in which both theory and experiment are highly precise, explain why particle physicists have such confidence in the Standard Model, despite its clear limitations.

What limitations of the Standard Model am I referring to? They are many, but one of them is simply that the Standard Model does not predict θw. No one can say why θw takes the value that it has, or whether the fact that it is close to 30 degrees is a clue to its origin or a mere coincidence. Instead, of the many measurements, we use a single one (such as one of the asymmetries) to extract its value, and then can predict many other quantities.

One thing I’ve neglected to do is to convey the complexity of the calculations that are needed to compare the Standard Model predictions to data. To carry out these computations much more carefully than I did in Figs. 2, 3 and 5, in order to make them as precise as the measurements, demands specialized knowledge and experience. (As an example of how tricky these computations can be: even defining what one means by sin θw can be ambiguous in precise enough calculations, and so one needs considerable expertise [which I do not have] to define it correctly and use that definition consistently.) So there are actually still more layers of precision that I could go into…!

But I think perhaps I’ve done enough to convince you that the Standard Model is a fortress. Sure, it’s not a finished construction. Yet neither will it be easily overthrown.


23 Responses

  1. Assuming gluons DON’T exist, could this explain the cosmological constant problem? Maybe there is less energy in the universe?

    If, instead of force fields like gluons and the other bosons, could supersymmetry (with no extra mass required) be the mechanism that holds the proton, neutrons and electron together?

    Supersymmetry could also keep Black Holes together, too, which can also explain Einstein’s General Relativity.

    Postulate: As the spacetime curvature tends to zero the supersymmetry increase exponentially, and the “matter fields” null out. It is this nulling out that keeps the particles together since the energy will tend to the lowest state.

  2. I didn’t give the uncertainties any mind in the last post, as the data was upfront admitted to be bad. Unless there is controversy or defense planned, I’ve always left uncertainties for data that’s actually important or being used. In the 1920s Lemaitre gave us our first estimate of the Hubble parameter, around I believe 500 or higher. There’s uncertainty there and in Hubble’s results a few years later. But it doesn’t matter, as both early values are very, very wrong, something Lemaitre readily admitted. In regards to the current Hubble tension, uncertainties have been of prime importance in the discussion, but for the century-old work they’re just distractions.

    All that aside, it’s interesting to see here how the various predictions relate and correlate, especially in regards to your note on how nearly every particle or force of the model is involved.

    1. I do not know why you say the “data was upfront admitted to be bad.” That’s not the right way to think about it. All data has uncertainties; the presence of uncertainties is not an indication that the data is bad, but rather that it has been collected by trained individuals.

      In fact, the uncertainties are very small — meaning that the data is, in fact, excellent. (Some of the data shown in the current post is precise to better than one percent.) Much of the data in the previous post was accurate to better than a few percent, though I did oversimplify it in a few places by averaging multiple quantities.

      What was “bad” in the previous post was theory, not data. We had a sin theta_W that was 4% off (and sin^2 theta_W is what usually appears, so that’s 8% off) and we neglected the strong nuclear force, a 3% effect that went in the same direction — and so the theory in the last post had 10%-ish errors [strictly speaking an *error* NOT an *uncertainty*! we were very certain and 10% wrong, which is a problem of accuracy, not precision].

      But the point of the last post was to sketch the theory, so 10% errors were okay for that purpose. That’s why the main point of today’s post was to fix the theory, not the data.

      1. Perhaps my wording was poorly chosen. The data is outdated, it has been surpassed. Nobody is arguing for it to be a correct, best model of reality. A Model T was built by skilled people, it is a complex system that worked well. But it is a very outdated car. I do not care for information on its gas mileage, nor do I expect to be provided any even if I want to purchase one, because that isn’t relevant, there is no world where that factor will have any impact on my opinion of it.

        Likewise nobody is arguing that the data in the last article is competitive with the data we have now, nor that it needs analysis to determine whether or not it accurately reflects reality. That analysis has been done, thoroughly, some time ago. It might be interesting as an exercise, in the way one might go over errors in measurements of the speed of light a century ago or the shape of the Earth. If someone demands an analysis because they have a theory of protons being tiny black holes or something then again analysis may be prudent.

        But as it is, with the data being acknowledged as outdated, with no challenges to this, there’s nothing really to gain from mentioning the errors in that context. Questioning everything can be taken too far I have found and diverts you from more productive things.

        1. Okay. But the data was not outdated; I simply did not reproduce the data in the first article with the care it deserved, because the point I was making did not require me to do so.

          As in all things, one should be as precise and accurate as necessary for the purpose… a lesson that most physics students seem to have trouble learning today, as even graduate students now give me their simulation results to nine digits.

              1. It was amusing: “efficiency” seems to be the more efficient description of what Einstein strived for.

                “Important note: Occam’s razor does not mean that given two different explanations, the simpler one is likely to be true. History has shown that this is often false, not only in particle physics but also in other areas of science.”

                My experience with parsimony as method isn’t the best, and when I learned that in biology parsimony often gives the worst non-random model it was simply an efficient means of devaluing it.

                It’s not always the worst bet either, that would have been too parsimonious. The new really deep phylogenies that can’t be rooted in an outgroup has been shown to be best rooted by parsimonious midpoint estimates instead of more complex methods.

        2. Hey kudnuz, good afternoon! I have a few questions for you:
          I see that you have been a regular follower since 2012. The answers and information are very high quality and clear. Of course, to give this information you need to have a very good knowledge base. Sorry for my ignorance, but do you work in physics or something similar? Or something else?

          1. Hello there. I actually have a BS-Tech in physical chemistry. So I know a decent amount about physics, if only on the level of atoms, and subjects such as covered by this blog can be familiar (or deeply surprising.) Most certainly this blog was a wonderful find.

    2. I see that you’ve been a regular follower since 2012 (as far as I can see, maybe 2013) and your answers are of high quality and clear. At the same time, you have to have a lot of knowledge to be able to give these answers. Sorry for my ignorance. Do you have a specialization in physics or something similar?

  3. Seems like you’re close to having the groundwork for a discussion of C/P/T symmetries, expectations, measurements, and violations. Is this on the roadmap for future articles?

      1. Looking forward to your explanation of spin. Not sure how many descriptions of spin I have read over the years, with it still being not fully grasped. Don’t want to add stress, but I am anticipating many cobwebs being cleared from my brain!

        1. While it worked to display figures, I found the now-retired CERN blog Quantum Diaries’ posts by Flip Tanedo helpful. Like Matt, Flip minimized but also helpfully pointed out his fibs when they were necessary for ease of exposition. [ https://www.quantumdiaries.org/2011/06/19/helicity-chirality-mass-and-the-higgs/ is an example I found by search – the follow-up https://www.quantumdiaries.org/2011/08/23/the-spin-of-gauge-bosons-vector-particles/ describes why starting with four-vector descriptions is necessary but they collapse to three- or sometimes two-vector descriptions, and why spin should not become larger than 2. FWIW the new blog “Particle People [is] highlighting a new blogger involved in particle physics research each month” but also seems to describe experiments, but that isn’t very helpful here.]

    1. Any one complete grand unified theory predicts the angle, yes. But you can always mess with that prediction by complicating the theory — e.g. by adding some unnecessary particles that shift the angle somewhat without messing up unification. So the prediction isn’t sharp.

  4. The Magic Angle comes from the structure that yields the Higgs field, time and more. For history see: [Editor’s note: this URL was deleted, as it links to your personal theory of the universe. My website’s purpose is to explain mainstream physics, not to advertise individuals’ personal theories. Please use your own website.]

  5. Dr. Strassler:
    That was a great write up. Can you provide a link again to the previous article, explaining the significance of sin θw? For review.
