Tag Archives: cms

Long Live LLPs!

Particle physics news today...

I’ve been spending my mornings this week at the 11th Long-Lived Particle Workshop, a Zoom-based gathering of experts on the subject.  A “long-lived particle” (LLP), in this context, is either

  • a detectable particle that might exist forever, or
  • a particle that, after traveling a macroscopic, measurable distance — something between 0.1 millimeters and 100 meters — decays to detectable particles

Many Standard Model particles are in these classes (e.g. electrons and protons in the first category, charged pions and bottom quarks in the second).

Typical distances traveled by some of the elementary particles and some of the hadrons in the Standard Model; any above 10⁻⁴ on the vertical axis count as long-lived particles. Credit: Prof. Brian Shuve
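[To get a feel for these numbers, recall that a particle’s average flight distance is L = γβcτ, where τ is its lifetime and γβ = p/m. Here’s a minimal sketch in Python; the lifetimes are rough Particle Data Group values, and the 10 GeV momentum is just an illustrative choice, not something taken from the figure.]

    # Mean lab-frame flight distance: L = gamma*beta * c * tau, with gamma*beta = p/m.
    C = 299792458.0  # speed of light, m/s

    def flight_distance(mass_gev, momentum_gev, tau_seconds):
        """Approximate mean flight distance in meters."""
        gamma_beta = momentum_gev / mass_gev  # natural units: p = gamma*beta*m
        return gamma_beta * C * tau_seconds

    # (particle, mass in GeV/c^2, illustrative momentum in GeV/c, rough lifetime in s)
    for name, m, p, tau in [
        ("charged pion", 0.1396, 10.0, 2.6e-8),
        ("B+ meson",     5.279,  10.0, 1.6e-12),
        ("tau lepton",   1.777,  10.0, 2.9e-13),
    ]:
        print(f"{name:12s}: ~{flight_distance(m, p, tau):.1e} m")
    # charged pion: ~5.6e+02 m, B+ meson: ~9.1e-04 m, tau lepton: ~4.9e-04 m --
    # all above the 10^-4 m threshold, so all three count as long-lived particles.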

But the focus of the workshop, naturally, is on looking for new ones… especially ones that can be created at current and future particle accelerators like the Large Hadron Collider (LHC).

Back in the late 1990s, when many theorists were thinking about these issues carefully, the designs of the LHC’s detectors — specifically ATLAS, CMS and LHCb — were already mostly set. These detectors can certainly observe LLPs, but many design choices in both hardware and software initially made searching for signs of LLPs very challenging. In particular, the trigger systems and the techniques used to interpret and store the data were significant obstructions, and those of us interested in the subject had to constantly deal with awkward work-arounds. (Here’s an example of one of the challenges... an older article, so it leaves out many recent developments, but the ideas are still relevant.)

Additionally, this type of physics was widely seen as exotic and unmotivated at the beginning of the LHC run, so only a small handful of specialists focused on these phenomena in the first few years (2010-2014ish).  As a result, searches for LLPs were woefully limited at first, and the possibility of missing a new phenomenon remained high.

More recently, though, this has changed. Perhaps this is because of an increased appreciation that LLPs are a common prediction in theories of dark matter (as well as other contexts).  The number of new searches, new techniques, and entirely new proposed experiments has ballooned, as has the number of people participating. Many of the LLP-related problems with the LHC detectors have been solved or mitigated. This makes this year’s workshop, in my opinion, the most exciting one so far.  All sorts of possibilities that aficionados could only dream of fifteen years ago are becoming a reality. I’ll try to find time to explore just a few of them in future posts.

  But before we get to that, there’s an interesting excess in one of the latest measurements… more on that next time.

Just a few of the unusual signatures that can arise from long-lived particles. (Credit: Prof. Heather Russell)

A Prediction from String Theory

(An advanced particle physics topic today…)

There have been various intellectual wars over string theory since before I was a graduate student. (Many people in my generation got caught in the crossfire.) But I’ve always taken the point of view that string theory is first and foremost a tool for understanding the universe, and it should be applied just like any other tool: as best as one can, to the widest variety of situations in which it is applicable. 

And it is a powerful tool, one that most certainly makes experimental predictions… even ones for the Large Hadron Collider (LHC).

These predictions have nothing to do with whether string theory will someday turn out to be the “theory of everything.” (That’s a grandiose term that means something far less grand, namely a “complete set of equations that captures the behavior of spacetime and all its types of particles and fields,” or something like that; it’s certainly not a theory of biology or economics, or even of semiconductors or proteins.)  Such a theory would, presumably, resolve the conceptual divide between quantum physics and general relativity, Einstein’s theory of gravity, and explain a number of other features of the world. But to focus only on this possible application of string theory is to take an unjustifiably narrow view of its value and role.

The issue for today involves the behavior of particles in an unfamiliar context, one which might someday show up (or may already have shown up and been missed) at the LHC or elsewhere. It’s a context that, until 1998 or so, no one had ever thought to ask about, and even if someone had, they’d have been stymied because traditional methods are useless. But then string theory drew our attention to this regime, and showed us that it has unusual features. There are entirely unexpected phenomena that occur there, ones that we can look for in experiments.


The Importance and Challenges of “Open Data” at the Large Hadron Collider

A little while back I wrote a short post about some research that some colleagues and I did using “open data” from the Large Hadron Collider [LHC]. We used data made public by the CMS experimental collaboration — about 1% of their current data — to search for a new particle, using a couple of twists (as proposed over 10 years ago) on a standard technique.  (CMS is one of the two general-purpose particle detectors at the LHC; the other is called ATLAS.)  We had two motivations: (1) even if we didn’t find a new particle, we wanted to prove that our search method was effective; and (2) we wanted to stress-test the CMS Open Data framework, to ensure that it really does provide all the information needed for a search for something unknown.

Recently I discussed (1), and today I want to address (2): to convey why open data from the LHC is useful but controversial, and why we felt it was important, as theoretical physicists (i.e. people who perform particle physics calculations, but do not build and run the actual experiments), to do something with it that is usually the purview of experimenters.

The Importance of Archiving Data

In many subfields of physics and astronomy, data from experiments is made public as a matter of routine. Usually this occurs after a substantial delay, to allow the experimenters who collected the data to analyze it first for major discoveries. That’s as it should be: the experimenters spent years of their lives proposing, building and testing the experiment, and they deserve an uninterrupted opportunity to investigate its data. To force them to release data immediately would create a terrible disincentive for anyone to do all the hard work!

Data from particle physics colliders, however, has not historically been made public. More worryingly, it has rarely been archived in a form that is easy for others to use at a later date. I’m not the right person to tell you the history of this situation, but I can give you a sense for why this still happens today.

A Broad Search for Fast Hidden Particles

A few days ago I wrote a quick summary of a project that we just completed (and you may find it helpful to read that post first). In this project, we looked for new particles at the Large Hadron Collider (LHC) in a novel way, in two senses. Today I’m going to explain what we did, why we did it, and what was unconventional about our search strategy.

The first half of this post will be appropriate for any reader who has been following particle physics as a spectator sport, or in some similar vein. In the second half, I’ll add some comments for my expert colleagues that may be useful in understanding and appreciating some of our results.  [If you just want to read the comments for experts, jump here.]

Why did we do this?

Motivation first. Why, as theorists, would we attempt to take on the role of our experimental colleagues — to try on our own to analyze the extremely complex and challenging data from the LHC? We’re by no means experts in data analysis, and we were very slow at it. And on top of that, we only had access to 1% of the data that CMS has collected. Isn’t it obvious that there is no chance whatsoever of finding something new with just 1% of the data, since the experimenters have had years to look through much larger data sets?

Breaking a Little New Ground at the Large Hadron Collider

Today, a small but intrepid band of theoretical particle physicists (professor Jesse Thaler of MIT, postdocs Yotam Soreq and Wei Xue of CERN, Harvard Ph.D. student Cari Cesarotti, and myself) put out a paper that is unconventional in two senses. First, we looked for new particles at the Large Hadron Collider in a way that hasn’t been done before, at least in public. And second, we looked for new particles at the Large Hadron Collider in a way that hasn’t been done before, at least in public.

And no, there’s no error in the previous paragraph.

1) We used a small amount of actual data from the CMS experiment, even though we’re not ourselves members of the CMS experiment, to do a search for a new particle. Both ATLAS and CMS, the two large multipurpose experimental detectors at the Large Hadron Collider [LHC], have made a small fraction of their proton-proton collision data public, through a website called the CERN Open Data Portal. Some experts, including my co-authors Thaler, Xue and their colleagues, have used this data (and the simulations that accompany it) to do a variety of important studies involving known particles and their properties. [Here’s a blog post by Thaler concerning Open Data and its importance from his perspective.] But our new study is the first to look for signs of a new particle in this public data. While our chances of finding anything were low, we had a larger goal: to see whether Open Data could be used for such searches. We hope our paper provides some evidence that Open Data offers a reasonable path for preserving priceless LHC data, allowing it to be used as an archive by physicists of the post-LHC era.

2) Since only a tiny fraction of CMS’s data was available to us, about 1% by some counts, how could we have done anything useful compared to what the LHC experts have already done? Well, that’s why we examined the data in a slightly unconventional way (one of several methods that I’ve advocated for many years, but that has not been used in any public study). This allowed us to explore some ground that no one had yet swept clean, and even gave us a tiny chance of an actual discovery! But the larger scientific goal, absent a discovery, was to prove the value of this unconventional strategy, in hopes that the experts at CMS and ATLAS will use it (and others like it) in the future. Their chance of discovering something new, using their full data set, is vastly greater than ours ever was.

Now don’t all go rushing off to download and analyze terabytes of CMS Open Data; you’d better know what you’re getting into first. It’s worthwhile, but it’s not easy going. LHC data is extremely complicated, and until this project I’d always been skeptical that it could be released in a form that anyone outside the experimental collaborations could use. Downloading the data and turning it into a manageable form is itself a major task. Then, while studying it, there are an enormous number of mistakes that you can make (and we made quite a few of them), and you’d better know how to make lots of cross-checks to find your mistakes (fortunately, we did know how, and we hope we found all of them!). The CMS personnel in charge of the Open Data project were enormously helpful to us, and we’re very grateful to them; but since the project is new, there were inevitable wrinkles that had to be worked around. And you’d better have some friends among the experimentalists who can give you advice when you get stuck, or point out aspects of your results that don’t look quite right. [Our thanks to them!]

All in all, this project took us two years! Well, honestly, it should have taken half that time — but it couldn’t have taken much less than that, with all we had to learn. So trying to use Open Data from an LHC experiment is not something you do in your idle free time.

Nevertheless, I feel it was worth it. At a personal level, I learned a great deal more about how experimental analyses are carried out at CMS, and by extension, at the LHC more generally. And more importantly, we were able to show what we’d hoped to show: that there are still tremendous opportunities for discovery at the LHC, through the use of (even slightly) unconventional model-independent analyses. It’s a big world to explore, and we took only a small step in the easiest direction, but perhaps our efforts will encourage others to take bigger and more challenging ones.

For those readers with greater interest in our work, I’ll put out more details in two blog posts over the next few days: one about what we looked for and how, and one about our views regarding the value of open data from the LHC, not only for our project but for the field of particle physics as a whole.

An Interesting Result from CMS, and its Implications

UPDATE 10/26: In the original version of this post, I stupidly forgot to include an effect, causing an error of a factor of about 5 in one of my estimates below. I had originally suggested that a recent result using ALEPH data was probably more powerful than a recent CMS result.  But once the error is corrected, the two experiments appear to have comparable sensitivity. However, I was very conservative in my analysis of ALEPH, and my guess concerning CMS has a big uncertainty band — so it might go either way.  It’s up to ALEPH experts and CMS experts to show us who really wins the day.  Added reasoning and discussion marked in green below.

In Friday’s post, I highlighted the importance of looking for low-mass particles whose interactions with known particles are very weak. I referred to a recent preprint in which an experimental physicist, Dr. Arno Heister, reanalyzed ALEPH data in such a search.

A few hours later, Harvard Professor Matt Reece pointed me to a paper that appeared just two weeks ago: a very interesting CMS analysis of 2011-2012 data that did a search of this type — although it appears that CMS [one of the two general purpose detectors at the Large Hadron Collider (LHC)] didn’t think of it that way.

The title of the paper is obscure:  “Search for a light pseudo-scalar Higgs boson produced in association with bottom quarks in pp collisions at 8 TeV”.  Such spin-zero “pseudo-scalar” particles, which often arise in speculative models with more than one Higgs particle, usually decay to bottom quark/anti-quark pairs or tau/anti-tau pairs.  But they can have a very rare decay to muon/anti-muon, which is much easier to measure. The title of the paper gives no indication that the muon/anti-muon channel is the target of the search; you have to read the abstract. Shouldn’t the words “in the dimuon channel” or “dimuon resonance” appear in the title?  That would help researchers who are interested in dimuons, but not in pseudo-scalars, find the paper.

Here’s the main result of the paper:

At left is shown a plot of the number of events as a function of the invariant mass of the muon/anti-muon pairs.  CMS data is in black dots; estimated background is shown in the upper curve (with top quark backgrounds in the lower curve); and the peak at bottom shows what a simulated particle decaying to muon/anti-muon with a mass of 30 GeV/c² would look like. (Imagine sticking the peak on top of the upper curve to see how a signal would affect the data points).  At right are the resulting limits on the rate for such a resonance to be produced and then decay to muon/anti-muon, if it is radiated off of a bottom quark. [A limit of 100 femtobarns means that at most two thousand collisions of this type could have occurred during the year 2012.  But note that only about 1 in 100 of these collisions would have been observed, due to the difficulty of triggering on these collisions and some other challenges.]
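[If you’d like to see where “two thousand collisions” comes from: the expected number of events is just the cross-section times the integrated luminosity. Here’s the back-of-the-envelope arithmetic in Python, assuming the roughly 20 inverse femtobarns that CMS recorded in 2012, which is an approximate figure.]

    # Expected events = cross-section x integrated luminosity.
    sigma_limit_fb = 100.0    # upper limit on the production rate, in femtobarns
    lumi_2012_fb_inv = 20.0   # approximate CMS integrated luminosity in 2012, in fb^-1

    n_max = sigma_limit_fb * lumi_2012_fb_inv
    print(f"max collisions of this type in 2012: ~{n_max:.0f}")       # ~2000

    efficiency = 0.01         # only ~1 in 100 observed (triggering and other losses)
    print(f"of which actually observed: ~{n_max * efficiency:.0f}")   # ~20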

[Note also the restriction of the mass of the dimuon pair to the range 25 GeV to 60 GeV. This may have been done purely for technical reasons, but if it was due to theoretical assumptions, that restriction should be lifted.]

While this plot places moderate limits on spin-zero particles produced with a bottom quark, it’s equally interesting, at least to me, in other contexts. Specifically, it puts limits on any light spin-one particle (call it V) that mixes (either via kinetic or mass mixing) with the photon and Z and often comes along with at least one bottom quark… because for such particles the decay to muons is not rare.  This is very interesting for hidden valley models specifically; as I mentioned on Friday, new spin-one and spin-zero particles often are produced together, giving a muon/anti-muon pair along with one or more bottom quark/anti-quark pairs.

But CMS interpreted its measurement only in terms of radiation of a new particle off a bottom quark.  Now, what if a V particle decaying sometimes to muon/anti-muon were produced in a Z particle decay (a possibility alluded to already in 2006)?  For a different production process, the angles and energies of the particles would be different, and since many events would be lost (due to triggering, transverse momentum cuts, and b-tagging inefficiencies at low transverse momentum), the limits would have to be fully recalculated by the experimenters.  It would be great if CMS could add such an analysis before they publish this paper.

Still, we can make a rough back-of-the-envelope estimate, with big caveats. The LHC produced about 600 million Z particles at CMS in 2012. The plot at right tells us that if the V were radiated off a bottom quark, the maximum number of produced V’s decaying to muons would be about 2000 to 8000, depending on the V mass.  Now if we could take those numbers directly, we’d conclude that the fraction of Z’s that could decay to muon/anti-muon plus bottom quarks in this way would be 3 to 12 per million. But the sensitivity of this search to a Z decay to V is probably much lower than for a V radiated off bottom quarks [because (depending on the V mass) either the bottom quarks in the Z decay would be less energetic and more difficult to tag, or the muons would be less energetic on average, or both]. So I’m guessing that the limits on Z decays to V are always worse than one per hundred thousand, for any V mass.  (Thanks to Wei Xue for catching an error as I was finalizing my estimate.)
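[Here’s that back-of-the-envelope arithmetic spelled out in Python; the 600 million figure and the 2000-to-8000 range are the rough numbers quoted just above.]

    n_z_produced = 600e6                  # rough number of Z particles at CMS in 2012
    n_v_max_lo, n_v_max_hi = 2000, 8000   # max produced V's allowed, depending on V mass

    lo = n_v_max_lo / n_z_produced * 1e6
    hi = n_v_max_hi / n_z_produced * 1e6
    print(f"naive limit: {lo:.0f} to {hi:.0f} per million Z decays")  # ~3 to 13
    # Reduced sensitivity to Z -> V production (triggering, softer muons, b-tagging)
    # plausibly degrades this to worse than ~1 per 100,000, as argued above.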

If that guess/estimate is correct, then the CMS search does not rule out the possibility of a hundred or so Z decays to V particles at each of the various LEP experiments.  That said, old LEP searches might rule this possibility out; if anyone knows of such a search, please comment or contact me.

As for whether Heister’s analysis of the ALEPH experiment’s data shows signs of such a signal, I think it unlikely (though some people seemed to read my post as saying the opposite.)  As I pointed out in Friday’s post, not only is the excess too small for excitement on its own, it also is somewhat too wide and its angular correlations look like the background (which comes, of course, from bottom quarks that decay to charm quarks plus a muon and neutrino.)  The point of Friday’s post, and of today’s, is that we should be looking.

In fact, because of Heister’s work (which, by the way, is his own, not endorsed by the ALEPH collaboration), we can draw interesting if rough conclusions.  Ignore for now the bump at 30 GeV/c²; that’s more controversial.  What about the absence of a bump between 35 and 50 GeV/c²? Unless there are subtleties with his analysis that I don’t understand, we learn that at ALEPH there were fewer than ten Z decays to a V particle (plus a source of bottom quarks) for V in this mass range.  That limits such Z decays to about 2 to 3 per million.  OOPS: Dumb mistake!! At this step, I forgot to include the fact that requiring bottom quarks in the ALEPH events only works about 20% of the time (thanks to Imperial College Professor Oliver Buchmuller for questioning my reasoning!). The real number is therefore about 5 times larger, more like 10 to 15 per million. If that rough estimate is correct, it would provide a constraint roughly comparable to the current CMS analysis.

[[BUT: In my original argument I was very conservative.  When I said “fewer than 10”, I was trying to be brief; really, looking at the invariant mass plot, the allowed numbers of excess events for a V with mass above 36 GeV is typically fewer than 7 or even 5.  And that doesn’t include any angular information, which for many signals would reduce the numbers to 3.   Including these effects properly brings the ALEPH bound back down to something close to my initial estimate.  Anyway, it’s clear that CMS is nipping at ALEPH’s heels, but I’m still betting they haven’t passed ALEPH — yet.]]
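[To make the corrected arithmetic concrete, here it is in Python, using the rough numbers above: nearly 4 million Z particles at ALEPH, and a roughly 20% chance that the bottom-quark requirement is satisfied.]

    n_z_aleph = 4e6    # Z particles produced at ALEPH (nearly 4 million)
    btag_eff = 0.20    # requiring bottom quarks works only ~20% of the time

    # Allowed excess events, from conservative (10) down to the tighter counts
    # (7, 5, or, with angular information, 3) discussed above:
    for n_max in (10, 7, 5, 3):
        per_million = n_max / (btag_eff * n_z_aleph) * 1e6
        print(f"<= {n_max:2d} events  ->  ~{per_million:4.1f} per million Z decays")
    # 10 events gives ~12.5 per million (the "10 to 15" above); 3 events brings the
    # bound back down to ~3.8 per million, close to the original 2-to-3 estimate.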

So my advice would be to set Heister’s bump aside and instead focus on the constraints that one can obtain, and the potential discoveries that one could make, with this type of analysis, either at LEP or at LHC. That’s where I think the real lesson lies.

A Hidden Gem At An Old Experiment?

This summer there was a blog post claiming that “The LHC ‘nightmare scenario’ has come true” — implying that the Large Hadron Collider [LHC] has found nothing but a Standard Model Higgs particle (the simplest possible type), and will find nothing more of great importance. With all due respect for the considerable intelligence and technical ability of the author of that post, I could not disagree more; not only are we not in a nightmare, it isn’t even night-time yet, and hardly time for sleep or even daydreaming. There’s a tremendous amount of work to do, and there may be many hidden discoveries yet to be made, lurking in existing LHC data.  Or elsewhere.

I can defend this claim (and have done so as recently as this month; here are my slides). But there’s evidence from another quarter that it is far too early for such pessimism.  It has appeared in a new paper (a preprint, so not yet peer-reviewed) by an experimentalist named Arno Heister, who is evaluating 20-year-old data from the experiment known as ALEPH.

In the early 1990s the Large Electron-Positron (LEP) collider at CERN, in the same tunnel that now houses the LHC, produced nearly 4 million Z particles at the center of ALEPH; the Z’s decayed immediately into other particles, and ALEPH was used to observe those decays.  Of course the data was studied in great detail, and you might think there couldn’t possibly be anything still left to find in that data, after over 20 years. But a hidden gem wouldn’t surprise those of us who have worked in this subject for a long time — especially those of us who have worked on hidden valleys. (Hidden Valleys are theories with a set of new forces and low-mass particles, which, because they aren’t affected by the known forces excepting gravity, interact very weakly with the known particles.  They are also often called “dark sectors” if they have something to do with dark matter.)

For some reason most experimenters in particle physics don’t tend to look for things just because they can; they stick to signals that theorists have already predicted. Since hidden valleys only hit the market in a 2006 paper I wrote with then-student Kathryn Zurek, long after the experimenters at ALEPH had moved on to other experiments, nobody went back to look in ALEPH or other LEP data for hidden valley phenomena (with one exception.) I didn’t expect anyone to ever do so; it’s a lot of work to dig up and recommission old computer files.

This wouldn’t have been a problem if the big LHC experiments (ATLAS, CMS and LHCb) had looked extensively for the sorts of particles expected in hidden valleys. ATLAS and CMS especially have many advantages; for instance, the LHC has made over a hundred times more Z particles than LEP ever did. But despite specific proposals for what to look for (and a decade of pleading), only a few limited searches have been carried out, mostly for very long-lived particles, for particles with mass of a few GeV/c² or less, and for particles produced in unexpected Higgs decays. And that means that, yes, hidden physics could certainly still be found in old ALEPH data, and in other old experiments. Kudos to Dr. Heister for taking a look.

A Flash in the Pan Flickers Out

Back in the California Gold Rush, many people panning for gold saw a yellow glint at the bottom of their pans, and thought themselves lucky.  But more often than not, it was pyrite — iron sulfide — fool’s gold…

Back in December 2015, a bunch of particle physicists saw a bump on a plot.  The plot showed the numbers of events with two photons (particles of light) as a function of the “invariant mass” of the photon pair.  (To be precise, they saw a big bump on one ATLAS plot, and a bunch of small bumps in similar plots by CMS and ATLAS [the two general purpose experiments at the Large Hadron Collider].)  What was that bump?  Was it a sign of a new particle?
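[For the curious: the “invariant mass” of a photon pair is computed from the two photons’ measured energies and the angle between them; a resonance shows up as a bump at the parent particle’s mass. Here’s a minimal sketch in Python, with made-up numbers chosen to land near 750 GeV/c², where the December 2015 excess sat.]

    import math

    def diphoton_mass(e1_gev, e2_gev, opening_angle_rad):
        """Invariant mass of two (massless) photons:
        m^2 = (E1+E2)^2 - |p1+p2|^2  =>  m = sqrt(2*E1*E2*(1 - cos theta))."""
        return math.sqrt(2.0 * e1_gev * e2_gev * (1.0 - math.cos(opening_angle_rad)))

    # Made-up example: two 400 GeV photons, 140 degrees apart -> ~752 GeV.
    print(f"{diphoton_mass(400.0, 400.0, math.radians(140.0)):.0f} GeV")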

A similar bump was the first sign of the Higgs boson, though that was far from clear at the time.  What about this bump?

As I wrote in December,

  “Well, to be honest, probably it’s just that: a bump on a plot. But just in case it’s not…”

and I went on to describe what it might be if the bump were more than just a statistical fluke.  A lot of us — theoretical particle physicists like me — had a lot of fun, and learned a lot of physics, by considering what that bump might mean if it were a sign of something real.  (In fact I’ll be giving a talk here at CERN next week entitled “Lessons from a Flash in the Pan,” describing what I learned, or remembered, along the way.)

But updated results from CMS, based on a large amount of new data taken in 2016, have now appeared.   (Perhaps these have leaked out early; they were supposed to be presented tomorrow along with those from ATLAS.)  They apparently show that where the bump was before, they now see nothing.  In fact there’s a small dip in the data there.

So — it seems that what we saw in those December plots was a fluke.  It happens.  I’m certainly disappointed, but hardly surprised.  Funny things happen with small amounts of data.

At the ICHEP 2016 conference, which started today, official presentation of the updated ATLAS and CMS two-photon results will come on Friday, but I think we all know the score.  So instead our focus will be on the many other results (dozens and dozens, I hear) that the experiments will be showing us for the first time.  Already we had a small blizzard of them today.  I’m excited to see what they have to show us … the Standard Model, and naturalness, remain on trial.