
A Big Think Made of Straw: Bad Arguments Against Future Colliders

Here’s a tip.  If you read an argument either for or against a successor to the Large Hadron Collider (LHC) in which the words “string theory” or “string theorists” form a central part of the argument, then you can conclude that the author (a) doesn’t understand the science of particle physics, and (b) has an absurd caricature in mind concerning the community of high energy physicists.  String theory and string theorists have nothing to do with whether such a collider should or should not be built.

Such an article has appeared on Big Think. It’s written by a certain Thomas Hartsfield.  My impression, from his writing and from what I can find online, is that most of what he knows about particle physics comes from reading people like Ethan Siegel and Sabine Hossenfelder. I think Dr. Hartsfield would have done better to leave the argument to them. 

An Army Made of Straw

Dr. Hartsfield’s article sets up one straw person after another. 

  • The “100 billion” cost is just the first.  (No one is going to propose, much less build, a machine that costs 100 billion in today’s dollars.)  
  • It refers to “string theorists” as though they form the core of high-energy theoretical physics; you’d think that everyone who does theoretical particle physics is a slavish, mindless believer in the string theory god and its demigod assistant, supersymmetry.  (Many theoretical particle physicists don’t work on either one, and very few ever work on string theory itself. Among those who do some supersymmetry research, it’s often just one of a wide variety of topics that they study. Supersymmetry zealots do exist, but they aren’t as central to the field as some would like you to believe.)
  • It makes loud but tired claims, such as “A giant particle collider cannot truly test supersymmetry, which can evolve to fit nearly anything.”  (Is this supposed to be shocking? It’s obvious to any expert. The same is true of dark matter, the origin of neutrino masses, and a whole host of other topics. It’s not unusual for an idea to come with a parameter that can be made extremely small. Such an idea can be discovered, or made obsolete by other discoveries, but excluding it may take centuries. In fact this is pretty typical, so deal with it!)
  • “$100 billion could fund (quite literally) 100,000 smaller physics experiments.”  (Aside from the fact that this is sleight-of-hand, mixing future dollars with present dollars, the argument is crude. When the Superconducting Supercollider was cancelled, did the money that was saved flow into thousands of physics experiments, or other scientific experiments?  No.  Congress sent it all over the place.)
  • And then it concludes with my favorite, a true laugher: “The only good argument for the [machine] might be employment for smart people. And for string theorists.”  (Honestly, employment for string theorists!?!  What bu… rubbish. It might have been a good idea to do some research into how funding actually works in the field, before saying something so patently silly.)

Meanwhile, the article never once mentions the particle physics experimentalists and accelerator physicists.  Remember them?  The ones who actually build and run these machines, and actually discover things?  The ones without whom the whole enterprise is all just math?

Although the article mostly ignores them, there are strong arguments both for and against building such a machine; see below.  Keep in mind, though, that any decision is still years off, and we may have quite a different perspective by the time we get to that point, depending on whether discoveries are made at the LHC or at other experimental facilities.  No one actually needs to be making this decision at the moment, so I’m not sure why Dr. Hartsfield feels it’s so crucial to take an indefensible position now.

Continue reading

5th Webpage on the Triplet Model is Up

Advanced particle physics today:

Another page has been completed in the explanation of the “triplet model” (a classic and simple variation on the Standard Model of particle physics, in which the W boson mass can be raised slightly relative to Standard Model predictions without affecting other current experiments). The math required is still pre-university level, though complex numbers are now becoming important.

The first, second and third webpages in this series provided a self-contained introduction that concluded with a full cartoon of the triplet model. On our way to the full SU(2)xU(1) Standard Model, the fourth webpage gave a preliminary explanation of what SU(2) and U(1) are.

Today, the fifth page explains how a U(1)xU(1) Standard Model-like theory would work… and why the photon comes out massless in such a theory. Comments welcome!
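For readers who want a preview of the punchline, here is the standard textbook logic in compressed form (my own shorthand, with overall numerical factors dropped; it is not a substitute for the page itself): if a single Higgs-like field carries charge under both U(1)s, only one linear combination of the two gauge fields picks up a mass from it, and the orthogonal combination remains exactly massless.

```latex
% Toy setup: two gauge fields A^{(1)}, A^{(2)} with couplings g_1, g_2, and one
% Higgs-like field \phi of charges (q_1, q_2) whose vacuum value is v.
% Its kinetic term contains (schematically, constants dropped)
|D_\mu \phi|^2 \;\supset\; v^2 \left( q_1 g_1 A^{(1)}_\mu + q_2 g_2 A^{(2)}_\mu \right)^2
% so the combination q_1 g_1 A^{(1)} + q_2 g_2 A^{(2)} becomes massive, while the
% orthogonal combination q_2 g_2 A^{(1)} - q_1 g_1 A^{(2)} gets no mass term at all:
% that is the massless "photon" of the U(1)xU(1) theory.
```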

Long Live LLPs!

Particle physics news today...

I’ve been spending my mornings this week at the 11th Long-Lived Particle Workshop, a Zoom-based gathering of experts on the subject.  A “long-lived particle” (LLP), in this context, is either

  • a detectable particle that might exist forever, or
  • a particle that, after traveling a macroscopic, measurable distance — something between 0.1 millimeters and 100 meters — decays to detectable particles.

Many Standard Model particles are in these classes (e.g. electrons and protons in the first category, charged pions and bottom quarks in the second).

Typical distances traveled by some of the elementary particles and some of the hadrons in the Standard Model; any above 10⁻⁴ on the vertical axis count as long-lived particles. Credit: Prof. Brian Shuve
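To make “macroscopic, measurable distance” concrete, here is a rough back-of-the-envelope illustration (my own numbers, not from the workshop: approximate proper lifetimes, and an arbitrary boost factor chosen purely for the example).

```python
# Rough lab-frame decay length: d ≈ beta*gamma*c*tau.
c = 3.0e8                # speed of light, m/s
beta_gamma = 10.0        # illustrative boost factor, chosen only for this example

lifetimes_s = {          # approximate proper lifetimes, in seconds
    "charged pion": 2.6e-8,
    "K_S meson":    0.9e-10,
    "B meson":      1.5e-12,
}

for name, tau in lifetimes_s.items():
    d = beta_gamma * c * tau
    print(f"{name:>13}: d ≈ {d:.2g} m")

# charged pion: ~78 m, K_S meson: ~0.27 m, B meson: ~4.5 mm --
# all inside the 0.1 millimeter to 100 meter window described above.
```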

But the focus of the workshop, naturally, is on looking for new ones… especially ones that can be created at current and future particle accelerators like the Large Hadron Collider (LHC).

Back in the late 1990s, when many theorists were thinking about these issues carefully, the designs of the LHC’s detectors — specifically ATLAS, CMS and LHCb — were already mostly set. These detectors can certainly observe LLPs, but many design choices in both hardware and software initially made searching for signs of LLPs very challenging. In particular, the trigger systems and the techniques used to interpret and store the data were significant obstructions, and those of us interested in the subject had to constantly deal with awkward work-arounds. (Here’s an example of one of the challenges... an older article, so it leaves out many recent developments, but the ideas are still relevant.)

Additionally, this type of physics was widely seen as exotic and unmotivated at the beginning of the LHC run, so only a small handful of specialists focused on these phenomena in the first few years (2010-2014ish).  As a result, searches for LLPs were woefully limited at first, and the possibility of missing a new phenomenon remained high.

More recently, though, this has changed. Perhaps this is because of an increased appreciation that LLPs are a common prediction in theories of dark matter (as well as other contexts).  The number of new searches, new techniques, and entirely new proposed experiments has ballooned, as has the number of people participating. Many of the LLP-related problems with the LHC detectors have been solved or mitigated. This makes this year’s workshop, in my opinion, the most exciting one so far.  All sorts of possibilities that aficionados could only dream of fifteen years ago are becoming a reality. I’ll try to find time to explore just a few of them in future posts.

But before we get to that, there’s an interesting excess in one of the latest measurements… more on that next time.

Just a few of the unusual signatures that can arise from long-lived particles. (Credit: Prof. Heather Russell)

A Few Remarks on the W Boson Mass Measurement

Based on some questions I received about yesterday’s post, I thought I’d add some additional comments this morning.

A natural and persistent question has been: “How likely do you think it is that this W boson mass result is wrong?” Obviously I can’t put a number on it, but I’d say the chance that it’s wrong is substantial. Why? This measurement, which took many years of work, is probably among the most difficult ever performed in particle physics. Only first-rate physicists with complete dedication to the task could attempt it, carry it out, convince their many colleagues on the CDF experiment that they’d done it right, and get it through external peer review into Science magazine. But even first-rate physicists can get a measurement like this one wrong. The tiniest of subtle mistakes can undo it.

And that mistake, if there is one, might not even be their own, in a sense. Any measurement like this has to rely on other measurements, on simulation software, and on calculations involving other processes, and even though they’ve all been checked, perhaps they need to be rechecked.

Another concern about the new measurement is that it seems inconsistent not only with the Standard Model but also with previous, less precise measurements by other experiments, which were closer to the Standard Model’s prediction. (It is even inconsistent with CDF’s own previous measurement.) That’s true, and you can see some evidence of it in the plot in yesterday’s post. But

  • it could be that one or more of the previous measurements has an error;
  • there is a known risk of unconscious experimental bias that tends to push results toward the Standard Model (i.e. if the result doesn’t match your expectation, you check everything again and tweak it, and then stop once it better matches your expectation; performing a double-blinded analysis, as was done here, helps mitigate this risk, but it doesn’t entirely eliminate it);
  • CDF has revised their old measurement slightly upward to account for things they learned while performing this new one, so their internal inconsistency is less than it appears, and
  • even if the truth lies between this new measurement and the old ones, that would still leave a big discrepancy with the Standard Model, and the implication for science would be much the same.

I’ve heard some cynicism: “Is this just an old experiment trying to make a name for itself and get headlines?” Don’t be absurd. No one seeking publicity would go through the hell of working on one project for several years, running down every loose end multiple times and checking it twice and cross-checking it three times, spending every waking hour asking oneself “what did I forget to check?”, all while knowing that in the end one’s reputation will be at stake when the final result hits the international press. There would be far easier ways to grab headlines if that were the goal.

Someone wisely asked about the Z boson mass; can one study it as well? This is a great question, because it goes to the heart of how the Standard Model is checked for consistency. The answer is “no.” Really, when we say that “the W mass is too large,” what we mean (roughly) is that “the ratio of the W mass to the Z mass is too large.” One way to view it (not exactly right) is that certain extremely precise measurements have to be taken as inputs to the Standard Model, and once that is done, the Standard Model can be used to make predictions of other precise measurements. Because of the precision with which the Z boson mass can be measured (to 2 MeV, two parts in 100,000), it is effectively taken as an input to the Standard Model, and so we can’t then compare it against a prediction. (The Z boson mass measurement is much easier, because a Z boson can decay (for example) to an electron and a positron, which can both be observed directly. Meanwhile a W boson can only decay (for example) to an electron and a neutrino, but a neutrino can only be inferred indirectly, making determination of its energy and momentum much less precise.)
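To spell out that input-versus-prediction logic with one standard formula (the textbook on-shell relation, not anything specific to this measurement): at tree level the Standard Model ties the two masses together through the weak mixing angle, and quantum corrections then shift the relation slightly in a way that depends on the top quark and Higgs boson masses.

```latex
% Tree level: m_W = m_Z \cos\theta_W.  Including loop corrections \Delta r
% (which grow like m_t^2 and like \log m_H):
m_W^2 \left( 1 - \frac{m_W^2}{m_Z^2} \right)
    \;=\; \frac{\pi \alpha}{\sqrt{2}\, G_F} \left( 1 + \Delta r \right)
```

With α, G_F and the Z boson mass taken as precisely measured inputs, this relation turns into a prediction for the W boson mass, and that prediction is what the new CDF measurement is being compared against.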

In fact, one of the ways that the experimenters at CDF who carried out this measurement checked their methods is that they remeasured the Z boson mass too, and it came out to agree with other, even more precise measurements. They’d never have convinced themselves, or any of us, that they could get the W boson mass right if the Z boson mass measurement was off. So we can even interpret the CDF result as a measurement of the ratio of the W boson mass to the Z boson mass.

One last thing for today: once you have measured the Z boson mass and a few other things precisely, it is the consistency of the top quark mass, the Higgs boson mass and the W boson mass that provides one of the key tests of the Standard Model. Because of this, my headline from yesterday (“The W Boson isn’t Behaving”) is somewhat misleading. The cause of the discrepancy may not involve the W boson at all. The issue might turn out to be a new effect on the Z boson, for instance, or perhaps even the top quark. Working that out is the purview of theoretical physicists, who have to understand the complex interplay between the various precise measurements of masses and interactions of the Standard Model’s particles, and the many direct (and so far futile) searches for unknown types of particles that could potentially shift those masses and interactions. This isn’t easy, and there are lots of possibilities to consider, so there’s a lot of work yet to be done.

The W Boson Isn’t Behaving

The mass of the W boson, one of the fundamental particles within the Standard Model of particle physics, is apparently not what the Higgs boson, top quark, and the rest of the Standard Model say it should be.  Such is the claim from the CDF experiment, at the long-ago-closed but not forgotten Tevatron.  Analysis of their old data, carried out with extreme care, and including both more data and improved techniques, calibrations, and modeling, has led to the conclusion that the W boson mass is off by about 1/10 of one percent (by roughly 80 MeV/c² out of about 80,400 MeV/c²).  That may not sound like much, but it’s roughly seven times larger than the combined uncertainty of the measurement and the theoretical prediction.

  • New CDF Result: 80,433.5 ± 9.4 MeV/c²
  • SM Calculation: 80,357 ± 4 [inputs] ± 4 [theory] MeV/c²
The new measurement of the W mass and its uncertainty (bottom point) versus previous ones, and the current Standard Model prediction (grey band).
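For those who want to see where the “seven sigma” below comes from, here is the crude arithmetic implied by the two numbers above (my own back-of-the-envelope combination of the quoted uncertainties in quadrature, ignoring correlations and non-Gaussian effects):

```python
import math

# Numbers quoted above, all in MeV/c^2.
m_cdf, sigma_cdf = 80433.5, 9.4
m_sm = 80357.0
sigma_sm = math.hypot(4.0, 4.0)       # inputs and theory uncertainties combined

delta = m_cdf - m_sm                  # ~76 MeV/c^2, roughly 0.1% of the W mass
sigma_total = math.hypot(sigma_cdf, sigma_sm)
print(f"discrepancy = {delta:.1f} MeV/c^2  ->  {delta / sigma_total:.1f} sigma")
# prints roughly: discrepancy = 76.5 MeV/c^2  ->  7.0 sigma
```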

What could cause this discrepancy of 7 standard deviations (7 “sigma”), far above the usual 5-sigma criterion for a discovery?  Unfortunately we must always consider the possibility of an error.  But let’s set that aside for today.  (And we should expect the experiments at the Large Hadron Collider to weigh in over time with their own improved measurements, probably not quite as precise as this one but still good enough to test its plausibility.)

A shift in the W boson mass could occur through a wide variety of possible effects.  If you add new fields (and their particles) to the Standard Model, the interactions between the Standard Model particles and the new fields will induce small indirect effects, including tiny shifts in the various masses.  That, in turn, will cause the relation between the W boson mass, top quark mass, and Higgs boson mass to come into conflict with what the Standard Model predicts. So there are lots of possibilities. Many of these possible new particles would have been seen already at the Large Hadron Collider, or affected other experiments, and so are ruled out. But this is clearly not true in all cases, especially if one is conservative in interpreting the new result. Theorists will be busy even now trying to figure out which possibilities are still allowed.

It will be quite some time before the experimental and theoretical dust settles.  The implications are not yet obvious and they depend on the degree to which we trust the details.  Even if this discrepancy is real, it still might be quite a bit smaller than CDF’s result implies, due to statistical flukes or small errors.  [After all, if someone tells you they find a 7 sigma deviation from expectation, that would be statistically compatible with the truth being only a 4 or 5 sigma deviation.] I expect many papers over the coming days and weeks trying to make sense of not only this deviation but one or more of the other ones that are hanging about (such as this one).
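To put a rough number on that bracketed statement (a minimal Gaussian sketch of my own, nothing more):

```python
import math

# If the *true* deviation were 5 sigma, a *measured* 7 sigma only requires the
# measurement to fluctuate 2 sigma high -- which happens a few percent of the time.
p_two_sigma_high = 0.5 * math.erfc(2.0 / math.sqrt(2.0))
print(f"chance of a 2-sigma-or-more upward fluctuation ≈ {p_two_sigma_high:.3f}")
# prints roughly 0.023
```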

Clearly this will require follow-up posts.

Note added: To give you a sense of just how difficult this measurement is, please see this discussion by someone who knows much more about the nitty-gritty than a theorist like me ever could.

A Prediction from String Theory

(An advanced particle physics topic today…)

There have been various intellectual wars over string theory since before I was a graduate student. (Many people in my generation got caught in the crossfire.) But I’ve always taken the point of view that string theory is first and foremost a tool for understanding the universe, and it should be applied just like any other tool: as best as one can, to the widest variety of situations in which it is applicable. 

And it is a powerful tool, one that most certainly makes experimental predictions… even ones for the Large Hadron Collider (LHC).

These predictions have nothing to do with whether string theory will someday turn out to be the “theory of everything.” (That’s a grandiose term that means something far less grand, namely a “complete set of equations that captures the behavior of spacetime and all its types of particles and fields,” or something like that; it’s certainly not a theory of biology or economics, or even of semiconductors or proteins.)  Such a theory would, presumably, resolve the conceptual divide between quantum physics and general relativity, Einstein’s theory of gravity, and explain a number of other features of the world. But to focus only on this possible application of string theory is to take an unjustifiably narrow view of its value and role.

The issue for today involves the behavior of particles in an unfamiliar context, one which might someday show up (or may already have shown up and been missed) at the LHC or elsewhere. It’s a context that, until 1998 or so, no one had ever thought to ask about, and even if someone had, they’d have been stymied because traditional methods are useless. But then string theory drew our attention to this regime, and showed us that it has unusual features. There are entirely unexpected phenomena that occur there, ones that we can look for in experiments.

Continue reading

LHCb experiment finds another case of CP violation in nature

The LHCb experiment at the Large Hadron Collider is dedicated mainly to the study of mesons [objects made from a quark of one type, an anti-quark of another type, plus many other particles] that contain bottom quarks (hence the “b” in the name).  But it also can be used to study many other things, including mesons containing charm quarks.

By examining large numbers of mesons that contain a charm quark and an up anti-quark (or a charm anti-quark and an up quark) and studying carefully how they decay, the LHCb experimenters have discovered a new example of violations of the transformations known as CP (C: exchange of particle with anti-particle; P: reflection of the world in a mirror), of the sort that have been previously seen in mesons containing strange quarks and mesons containing bottom quarks.  Here’s the press release.
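For concreteness, the basic observable in an analysis like this is a decay-rate asymmetry between the meson and its anti-meson (the schematic form is below; LHCb’s actual measurement compares the asymmetries in two different final states so that instrumental and production effects largely cancel). A value different from zero is what “CP violation” means operationally.

```latex
A_{CP}(f) \;=\;
  \frac{\Gamma(D^0 \to f) \;-\; \Gamma(\bar{D}^0 \to f)}
       {\Gamma(D^0 \to f) \;+\; \Gamma(\bar{D}^0 \to f)}
```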

Congratulations to LHCb!  This important addition to our basic knowledge is consistent with expectations; CP violation of roughly this size is predicted by the formulas that make up the Standard Model of Particle Physics.  However, our predictions are very rough in this context; it is sometimes difficult to make accurate calculations when the strong nuclear force, which holds mesons (as well as protons and neutrons) together, is involved.  So this is a real coup for LHCb, but not a game-changer for particle physics.  Perhaps, sometime in the future, theorists will learn how to make predictions as precise as LHCb’s measurement!

The Importance and Challenges of “Open Data” at the Large Hadron Collider

A little while back I wrote a short post about some research that some colleagues and I did using “open data” from the Large Hadron Collider [LHC]. We used data made public by the CMS experimental collaboration — about 1% of their current data — to search for a new particle, using a couple of twists (as proposed over 10 years ago) on a standard technique.  (CMS is one of the two general-purpose particle detectors at the LHC; the other is called ATLAS.)  We had two motivations: (1) Even if we didn’t find a new particle, we wanted to prove that our search method was effective; and (2) we wanted to stress-test the CMS Open Data framework, to assure it really does provide all the information needed for a search for something unknown.

Recently I discussed (1), and today I want to address (2): to convey why open data from the LHC is useful but controversial, and why we felt it was important, as theoretical physicists (i.e. people who perform particle physics calculations, but do not build and run the actual experiments), to do something with it that is usually the purview of experimenters.

The Importance of Archiving Data

In many subfields of physics and astronomy, data from experiments is made public as a matter of routine. Usually this occurs after a substantial delay, to allow the experimenters who collected the data to analyze it first for major discoveries. That’s as it should be: the experimenters spent years of their lives proposing, building and testing the experiment, and they deserve an uninterrupted opportunity to investigate its data. To force them to release data immediately would create a terrible disincentive for anyone to do all the hard work!

Data from particle physics colliders, however, has not historically been made public. More worrying, it has rarely been archived in a form that is easy for others to use at a later date. I’m not the right person to tell you the history of this situation, but I can give you a sense for why this still happens today.

Continue reading