Tag Archives: LHC

LHCb experiment finds another case of CP violation in nature

The LHCb experiment at the Large Hadron Collider is dedicated mainly to the study of mesons [objects made from a quark of one type, an anti-quark of another type, plus many other particles] that contain bottom quarks (hence the “b” in the name).  But it can also be used to study many other things, including mesons containing charm quarks.

By examining large numbers of mesons that contain a charm quark and an up anti-quark (or a charm anti-quark and an up quark) and studying carefully how they decay, the LHCb experimenters have discovered a new example of violation of the combined transformation known as CP (C: exchange of particle with anti-particle; P: reflection of the world in a mirror), of the sort previously seen in mesons containing strange quarks and in mesons containing bottom quarks.  Here’s the press release.
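
For those who want to see what “finding CP violation” means in practice: at its heart, one counts decays of the meson and of its anti-meson to the same final state and forms an asymmetry. Here’s a minimal sketch in Python, with made-up numbers; this is not LHCb’s actual analysis, which must also correct for asymmetries in production and detection.

```python
import math

def cp_asymmetry(n_meson, n_antimeson):
    """Raw asymmetry between decays of a meson and its anti-meson to the
    same final state; nonzero (beyond statistics) signals CP violation."""
    total = n_meson + n_antimeson
    a = (n_meson - n_antimeson) / total
    err = math.sqrt((1 - a**2) / total)  # statistical uncertainty
    return a, err

# Hypothetical counts: an asymmetry of a part in a thousand only stands
# out once tens of millions of decays have been collected.
a, err = cp_asymmetry(22_511_000, 22_489_000)
print(f"A = {a:.5f} +- {err:.5f}")  # A = 0.00049 +- 0.00015
```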

Congratulations to LHCb!  This important addition to our basic knowledge is consistent with expectations; CP violation of roughly this size is predicted by the formulas that make up the Standard Model of Particle Physics.  However, our predictions are very rough in this context; it is sometimes difficult to make accurate calculations when the strong nuclear force, which holds mesons (as well as protons and neutrons) together, is involved.  So this is a real coup for LHCb, but not a game-changer for particle physics.  Perhaps, sometime in the future, theorists will learn how to make predictions as precise as LHCb’s measurement!

The Importance and Challenges of “Open Data” at the Large Hadron Collider

A little while back I wrote a short post about some research that some colleagues and I did using “open data” from the Large Hadron Collider [LHC]. We used data made public by the CMS experimental collaboration — about 1% of their current data — to search for a new particle, using a couple of twists (as proposed over 10 years ago) on a standard technique.  (CMS is one of the two general-purpose particle detectors at the LHC; the other is called ATLAS.)  We had two motivations: (1) Even if we didn’t find a new particle, we wanted to prove that our search method was effective; and (2) we wanted to stress-test the CMS Open Data framework, to assure it really does provide all the information needed for a search for something unknown.

Recently I discussed (1), and today I want to address (2): to convey why open data from the LHC is useful but controversial, and why we felt it was important, as theoretical physicists (i.e. people who perform particle physics calculations, but do not build and run the actual experiments), to do something with it that is usually the purview of experimenters.

The Importance of Archiving Data

In many subfields of physics and astronomy, data from experiments is made public as a matter of routine. Usually this occurs after a substantial delay, to allow the experimenters who collected the data to analyze it first for major discoveries. That’s as it should be: the experimenters spent years of their lives proposing, building and testing the experiment, and they deserve an uninterrupted opportunity to investigate its data. To force them to release data immediately would create a terrible disincentive for anyone to do all the hard work!

Data from particle physics colliders, however, has not historically been made public. More worryingly, it has rarely been archived in a form that is easy for others to use at a later date. I’m not the right person to tell you the history of this situation, but I can give you a sense for why this still happens today.

A Broad Search for Fast Hidden Particles

A few days ago I wrote a quick summary of a project that we just completed (and you may find it helpful to read that post first). In this project, we looked for new particles at the Large Hadron Collider (LHC) in a novel way, in two senses. Today I’m going to explain what we did, why we did it, and what was unconventional about our search strategy.

The first half of this post will be appropriate for any reader who has been following particle physics as a spectator sport, or in some similar vein. In the second half, I’ll add some comments for my expert colleagues that may be useful in understanding and appreciating some of our results.  [If you just want to read the comments for experts, jump here.]

Why did we do this?

Motivation first. Why, as theorists, would we attempt to take on the role of our experimental colleagues — to try on our own to analyze the extremely complex and challenging data from the LHC? We’re by no means experts in data analysis, and we were very slow at it. And on top of that, we only had access to 1% of the data that CMS has collected. Isn’t it obvious that there is no chance whatsoever of finding something new with just 1% of the data, since the experimenters have had years to look through much larger data sets?

Breaking a Little New Ground at the Large Hadron Collider

Today, a small but intrepid band of theoretical particle physicists (Professor Jesse Thaler of MIT, postdocs Yotam Soreq and Wei Xue of CERN, Harvard Ph.D. student Cari Cesarotti, and myself) put out a paper that is unconventional in two senses. First, we looked for new particles at the Large Hadron Collider in a way that hasn’t been done before, at least in public. And second, we looked for new particles at the Large Hadron Collider in a way that hasn’t been done before, at least in public.

And no, there’s no error in the previous paragraph.

1) We used a small amount of actual data from the CMS experiment, even though we’re not ourselves members of the CMS experiment, to do a search for a new particle. Both ATLAS and CMS, the two large multipurpose experimental detectors at the Large Hadron Collider [LHC], have made a small fraction of their proton-proton collision data public, through a website called the CERN Open Data Portal. Some experts, including my co-authors Thaler, Xue and their colleagues, have used this data (and the simulations that accompany it) to do a variety of important studies involving known particles and their properties. [Here’s a blog post by Thaler concerning Open Data and its importance from his perspective.] But our new study is the first to look for signs of a new particle in this public data. While our chances of finding anything were low, we had a larger goal: to see whether Open Data could be used for such searches. We hope our paper provides some evidence that Open Data offers a reasonable path for preserving priceless LHC data, allowing it to be used as an archive by physicists of the post-LHC era.

2) Since only a tiny fraction of CMS’s data was available to us, about 1% by some counts, how could we have done anything useful compared to what the LHC experts have already done? Well, that’s why we examined the data in a slightly unconventional way (one of several methods that I’ve advocated for many years, but that has not been used in any public study). This allowed us to explore some ground that no one had yet swept clean, and even gave us a tiny chance of an actual discovery! But the larger scientific goal, absent a discovery, was to prove the value of this unconventional strategy, in hopes that the experts at CMS and ATLAS will use it (and others like it) in future. Their chance of discovering something new, using their full data set, is vastly greater than ours ever was.

Now don’t all go rushing off to download and analyze terabytes of CMS Open Data; you’d better know what you’re getting into first. It’s worthwhile, but it’s not easy going. LHC data is extremely complicated, and until this project I’ve always been skeptical that it could be released in a form that anyone outside the experimental collaborations could use. Downloading the data and turning it into a manageable form is itself a major task. Then, while studying it, there are an enormous number of mistakes that you can make (and we made quite a few of them) and you’d better know how to make lots of cross-checks to find your mistakes (which, fortunately, we did know; we hope we found all of them!) The CMS personnel in charge of the Open Data project were enormously helpful to us, and we’re very grateful to them; but since the project is new, there were inevitable wrinkles which had to be worked around. And you’d better have some friends among the experimentalists who can give you advice when you get stuck, or point out aspects of your results that don’t look quite right. [Our thanks to them!]

All in all, this project took us two years! Well, honestly, it should have taken half that time — but it couldn’t have taken much less than that, with all we had to learn. So trying to use Open Data from an LHC experiment is not something you do in your idle free time.

Nevertheless, I feel it was worth it. At a personal level, I learned a great deal more about how experimental analyses are carried out at CMS, and by extension, at the LHC more generally. And more importantly, we were able to show what we’d hoped to show: that there are still tremendous opportunities for discovery at the LHC, through the use of (even slightly) unconventional model-independent analyses. It’s a big world to explore, and we took only a small step in the easiest direction, but perhaps our efforts will encourage others to take bigger and more challenging ones.

For those readers with greater interest in our work, I’ll put out more details in two blog posts over the next few days: one about what we looked for and how, and one about our views regarding the value of open data from the LHC, not only for our project but for the field of particle physics as a whole.

An Interesting Result from CMS, and its Implications

UPDATE 10/26: In the original version of this post, I stupidly forgot to include an effect, causing an error of a factor of about 5 in one of my estimates below. I had originally suggested that a recent result using ALEPH data was probably more powerful than a recent CMS result.  But once the error is corrected, the two experiments appear to have comparable sensitivity. However, I was very conservative in my analysis of ALEPH, and my guess concerning CMS has a big uncertainty band — so it might go either way.  It’s up to ALEPH experts and CMS experts to show us who really wins the day.  Added reasoning and discussion marked in green below.

In Friday’s post, I highlighted the importance of looking for low-mass particles whose interactions with known particles are very weak. I referred to a recent preprint in which an experimental physicist, Dr. Arno Heister, reanalyzed ALEPH data in such a search.

A few hours later, Harvard Professor Matt Reece pointed me to a paper that appeared just two weeks ago: a very interesting CMS analysis of 2011-2012 data that did a search of this type — although it appears that CMS [one of the two general purpose detectors at the Large Hadron Collider (LHC)] didn’t think of it that way.

The title of the paper is obscure:  “Search for a light pseudoscalar Higgs boson produced in association with bottom quarks in pp collisions at 8 TeV”.  Such spin-zero “pseudoscalar” particles, which often arise in speculative models with more than one Higgs particle, usually decay to bottom quark/anti-quark pairs or tau/anti-tau pairs.  But they can have a very rare decay to muon/anti-muon, which is much easier to measure. The title of the paper gives no indication that the muon/anti-muon channel is the target of the search; you have to read the abstract. Shouldn’t the words “in the dimuon channel” or “dimuon resonance” appear in the title?  That would help researchers who are interested in dimuons, but not in pseudoscalars, find the paper.

Here’s the main result of the paper:

At left is shown a plot of the number of events as a function of the invariant mass of the muon/anti-muon pairs.  CMS data is in black dots; estimated background is shown in the upper curve (with top quark backgrounds in the lower curve); and the peak at bottom shows what a simulated particle decaying to muon/anti-muon with a mass of 30 GeV/c² would look like. (Imagine sticking the peak on top of the upper curve to see how a signal would affect the data points).  At right are the resulting limits on the rate for such a resonance to be produced and then decay to muon/anti-muon, if it is radiated off of a bottom quark. [A limit of 100 femtobarns means that at most two thousand collisions of this type could have occurred during the year 2012.  But note that only about 1 in 100 of these collisions would have been observed, due to the difficulty of triggering on these collisions and some other challenges.]
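
For readers who want the arithmetic behind the bracketed statement above, here it is in a few lines of Python. The integrated luminosity of roughly 20 inverse femtobarns for the 2012 CMS data set is my assumption, chosen to be consistent with the numbers quoted; the 1-in-100 efficiency is the rough figure from the text.

```python
# Back-of-the-envelope: events = cross-section limit x integrated luminosity
sigma_limit_fb = 100.0   # upper limit on the rate (femtobarns)
lumi_fb_inv = 20.0       # 2012 integrated luminosity (fb^-1), my assumption
efficiency = 0.01        # rough fraction of such collisions actually observed

n_produced = sigma_limit_fb * lumi_fb_inv   # at most ~2000 collisions
n_observed = n_produced * efficiency        # of which only ~20 show up
print(n_produced, n_observed)               # 2000.0 20.0
```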

[Note also the restriction of the mass of the dimuon pair to the range 25 GeV to 60 GeV. This may have been done purely for technical reasons, but if it was due to theoretical assumptions, that restriction should be lifted.]

While this plot places moderate limits on spin-zero particles produced with a bottom quark, it’s equally interesting, at least to me, in other contexts. Specifically, it puts limits on any light spin-one particle (call it V) that mixes (either via kinetic or mass mixing) with the photon and Z and often comes along with at least one bottom quark… because for such particles the decay to muons is not rare.  This is very interesting for hidden valley models specifically; as I mentioned on Friday, new spin-one and spin-zero particles are often produced together, giving a muon/anti-muon pair along with one or more bottom quark/anti-quark pairs.
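
(A parenthetical note for those who want the equation: by “kinetic mixing” I mean, schematically, an extra term in the Lagrangian of the form shown below; this is generic notation, not anything taken from the CMS paper.)

```latex
\mathcal{L} \;\supset\; -\frac{\epsilon}{2}\, F_{\mu\nu} F'^{\mu\nu}
```

(Here F is the photon’s field strength, F′ is the V’s, and ε is a small number. Through such a term the V inherits a small coupling to every electrically charged particle, which is why its decay to muon/anti-muon need not be rare.)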

But CMS interpreted its measurement only in terms of radiation of a new particle off a bottom quark.  Now, what if a V particle decaying sometimes to muon/anti-muon were produced in a Z particle decay (a possibility alluded to already in 2006)?  For a different production process, the angles and energies of the particles would be different, and since many events would be lost (due to triggering, transverse momentum cuts, and b-tagging inefficiencies at low transverse momentum), the limits would have to be fully recalculated by the experimenters.  It would be great if CMS could add such an analysis before they publish this paper.

Still, we can make a rough back-of-the-envelope estimate, with big caveats. The LHC produced about 600 million Z particles at CMS in 2012. The plot at right tells us that if the V were radiated off a bottom quark, the maximum number of produced V’s decaying to muons would be about 2000 to 8000, depending on the V mass.  Now if we could take those numbers directly, we’d conclude that the fraction of Z’s that could decay to muon/anti-muon plus bottom quarks in this way would be 3 to 12 per million. But the sensitivity of this search to a Z decay to V is probably much less than for a V radiated off bottom quarks [because (depending on the V mass) either the bottom quarks in the Z decay would be less energetic and more difficult to tag, or the muons would be less energetic on average, or both.] So I’m guessing that the limits on Z decays to V are always worse than one per hundred thousand, for any V mass.  (Thanks to Wei Xue for catching an error as I was finalizing my estimate.)
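
If you’d like that little estimate spelled out, here it is as a few lines of Python; all the inputs are the rough numbers quoted above.

```python
n_z_produced = 600e6          # Z particles produced at CMS in 2012 (rough)
for n_v_max in (2000, 8000):  # maximum produced V's, depending on V mass
    fraction = n_v_max / n_z_produced
    print(f"{fraction * 1e6:.1f} per million")
# -> 3.3 and 13.3 per million: the rough "3 to 12 per million" above
```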

If that guess/estimate is correct, then the CMS search does not rule out the possibility of a hundred or so Z decays to V particles at each of the various LEP experiments.  That said, old LEP searches might rule this possibility out; if anyone knows of such a search, please comment or contact me.

As for whether Heister’s analysis of the ALEPH experiment’s data shows signs of such a signal, I think it unlikely (though some people seemed to read my post as saying the opposite.)  As I pointed out in Friday’s post, not only is the excess too small for excitement on its own, it also is somewhat too wide and its angular correlations look like the background (which comes, of course, from bottom quarks that decay to charm quarks plus a muon and neutrino.)  The point of Friday’s post, and of today’s, is that we should be looking.

In fact, because of Heister’s work (which, by the way, is his own, not endorsed by the ALEPH collaboration), we can draw interesting if rough conclusions.  Ignore for now the bump at 30 GeV/c²; that’s more controversial.  What about the absence of a bump between 35 and 50 GeV/c²? Unless there are subtleties with his analysis that I don’t understand, we learn that at ALEPH there were fewer than ten Z decays to a V particle (plus a source of bottom quarks) for V in this mass range.  That limits such Z decays to about 2 to 3 per million.  OOPS: Dumb mistake!! At this step, I forgot to include the fact that requiring bottom quarks in the ALEPH events only works about 20% of the time (thanks to Imperial College Professor Oliver Buchmuller for questioning my reasoning!) The real number is therefore about 5 times larger, more like 10 to 15 per million. If that rough estimate is correct, it would provide a constraint roughly comparable to the current CMS analysis.
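
Spelled out, the corrected estimate goes like this (a sketch; the inputs are the rough numbers given above).

```python
n_z_aleph = 4e6       # Z particles produced at ALEPH (from the text)
n_signal_max = 10     # fewer than ~10 candidate Z -> V decays, 35-50 GeV/c^2
btag_eff = 0.20       # requiring bottom quarks works ~20% of the time

raw_limit = n_signal_max / n_z_aleph     # ~2.5 per million (my first estimate)
corrected = raw_limit / btag_eff         # ~12.5 per million (after the fix)
print(raw_limit * 1e6, corrected * 1e6)  # 2.5 12.5
```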

[[BUT: In my original argument I was very conservative.  When I said “fewer than 10”, I was trying to be brief; really, looking at the invariant mass plot, the allowed number of excess events for a V with mass above 36 GeV is typically fewer than 7 or even 5.  And that doesn’t include any angular information, which for many signals would reduce the number to 3.   Including these effects properly brings the ALEPH bound back down to something close to my initial estimate.  Anyway, it’s clear that CMS is nipping at ALEPH’s heels, but I’m still betting they haven’t passed ALEPH — yet.]]

So my advice would be to set Heister’s bump aside and instead focus on the constraints that one can obtain, and the potential discoveries that one could make, with this type of analysis, either at LEP or at LHC. That’s where I think the real lesson lies.

A Hidden Gem At An Old Experiment?

This summer there was a blog post claiming that “The LHC ‘nightmare scenario’ has come true” — implying that the Large Hadron Collider [LHC] has found nothing but a Standard Model Higgs particle (the simplest possible type), and will find nothing more of great importance. With all due respect for the considerable intelligence and technical ability of the author of that post, I could not disagree more; not only are we not in a nightmare, it isn’t even night-time yet, and hardly time for sleep or even daydreaming. There’s a tremendous amount of work to do, and there may be many hidden discoveries yet to be made, lurking in existing LHC data.  Or elsewhere.

I can defend this claim (and have done so as recently as this month; here are my slides). But there’s evidence from another quarter that it is far too early for such pessimism.  It has appeared in a new paper (a preprint, so not yet peer-reviewed) by an experimentalist named Arno Heister, who is evaluating 20-year-old data from the experiment known as ALEPH.

In the early 1990s the Large Electron-Positron (LEP) collider at CERN, in the same tunnel that now houses the LHC, produced nearly 4 million Z particles at the center of ALEPH; the Z’s decayed immediately into other particles, and ALEPH was used to observe those decays.  Of course the data was studied in great detail, and you might think there couldn’t possibly be anything still left to find in that data, after over 20 years. But a hidden gem wouldn’t surprise those of us who have worked in this subject for a long time — especially those of us who have worked on hidden valleys. (Hidden Valleys are theories with a set of new forces and low-mass particles, which, because they aren’t affected by the known forces excepting gravity, interact very weakly with the known particles.  They are also often called “dark sectors” if they have something to do with dark matter.)

For some reason most experimenters in particle physics don’t tend to look for things just because they can; they stick to signals that theorists have already predicted. Since hidden valleys only hit the market in a 2006 paper I wrote with then-student Kathryn Zurek, long after the experimenters at ALEPH had moved on to other experiments, nobody went back to look in ALEPH or other LEP data for hidden valley phenomena (with one exception.) I didn’t expect anyone to ever do so; it’s a lot of work to dig up and recommission old computer files.

This wouldn’t have been a problem if the big LHC experiments (ATLAS, CMS and LHCb) had looked extensively for the sorts of particles expected in hidden valleys. ATLAS and CMS especially have many advantages; for instance, the LHC has made over a hundred times more Z particles than LEP ever did. But despite specific proposals for what to look for (and a decade of pleading), only a few limited searches have been carried out, mostly for very long-lived particles, for particles with mass of a few GeV/c² or less, and for particles produced in unexpected Higgs decays. And that means that, yes, hidden physics could certainly still be found in old ALEPH data, and in other old experiments. Kudos to Dr. Heister for taking a look.

A Flash in the Pan Flickers Out

Back in the California Gold Rush, many people panning for gold saw a yellow glint at the bottom of their pans, and thought themselves lucky.  But more often than not, it was pyrite — iron sulfide — fool’s gold…

Back in December 2015, a bunch of particle physicists saw a bump on a plot.  The plot showed the numbers of events with two photons (particles of light) as a function of the “invariant mass” of the photon pair.  (To be precise, they saw a big bump on one ATLAS plot, and a bunch of small bumps in similar plots by CMS and ATLAS [the two general purpose experiments at the Large Hadron Collider].)  What was that bump?  Was it a sign of a new particle?
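
(For the curious: the “invariant mass” of a photon pair is computed from the two photons’ energies and the angle between them. A minimal sketch, assuming massless photons:)

```python
import math

def diphoton_mass(e1, e2, opening_angle):
    """Invariant mass of two photons (energies in GeV, angle in radians)."""
    return math.sqrt(2 * e1 * e2 * (1 - math.cos(opening_angle)))

# Two 375 GeV photons emitted back-to-back reconstruct to 750 GeV,
# roughly where the December bump sat.
print(diphoton_mass(375, 375, math.pi))  # 750.0
```

A real particle decaying to two photons makes events pile up near its mass; a statistical fluke can mimic such a pile-up, which is the whole problem.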

A similar bump was the first sign of the Higgs boson, though that was far from clear at the time.  What about this bump?

As I wrote in December,

  “Well, to be honest, probably it’s just that: a bump on a plot. But just in case it’s not…”

and I went on to describe what it might be if the bump were more than just a statistical fluke.  A lot of us — theoretical particle physicists like me — had a lot of fun, and learned a lot of physics, by considering what that bump might mean if it were a sign of something real.  (In fact I’ll be giving a talk here at CERN next week entitled “Lessons from a Flash in the Pan,” describing what I learned, or remembered, along the way.)

But updated results from CMS, based on a large amount of new data taken in 2016, have now appeared.   (Perhaps these leaked out early; they were supposed to be presented tomorrow along with those from ATLAS.)  They apparently show that where the bump was before, they now see nothing.  In fact there’s a small dip in the data there.

So — it seems that what we saw in those December plots was a fluke.  It happens.  I’m certainly disappointed, but hardly surprised.  Funny things happen with small amounts of data.

At the ICHEP 2016 conference, which started today, official presentation of the updated ATLAS and CMS two-photon results will come on Friday, but I think we all know the score.  So instead our focus will be on the many other results (dozens and dozens, I hear) that the experiments will be showing us for the first time.  Already we had a small blizzard of them today.  I’m excited to see what they have to show us … the Standard Model, and naturalness, remain on trial.

The Summer View at CERN

For the first time in some years, I’m spending two and a half weeks at CERN (the lab that hosts the Large Hadron Collider [LHC]). Most of my recent visits have been short or virtual, but this time* there’s a theory workshop that has collected together a number of theoretical particle physicists, and it’s a good opportunity for all of us to catch up with the latest creative ideas in the subject.   It’s also an opportunity to catch a glimpse of the furtive immensity of Mont Blanc, a hulking bump on the southern horizon, although only if (as is rarely the case) nature offers clear and beautiful weather.

More importantly, new results on the data collected so far in 2016 at the LHC are coming very soon!  They will be presented at the ICHEP conference that will be held in Chicago starting August 3rd. And there’s something we’ll be watching closely.

You may remember that in a post last December I wrote:

“Everybody wants to know. That bump seen on the ATLAS and CMS two-photon plots!  What… IS… it…?”

Why the excitement? A bump of this type can be a signal of a new particle (as was the case for the Higgs particle itself.) And since a new particle that would produce a bump of this size was both completely unexpected and completely plausible, there was hope that we were seeing a hint of something new and important.

However, as I wrote in the same post,

  “Well, to be honest, probably it’s just that: a bump on a plot. But just in case it’s not…”

and I went on to discuss briefly what it might mean if it wasn’t just a statistical fluke. But speculation may be about to end: finally, we’re about to find out if it was indeed just a fluke — or a sign of something real.

Since December the amount of 13 TeV collision data available at ATLAS and CMS (the two general purpose experiments at the LHC) has roughly quadrupled, which means that typical bumps and wiggles on their 2015-2016 plots have decreased in relative size by about a factor of two (= square root of four). If the December bump is just randomness, it should also decrease in relative size. If it’s real, it should remain roughly the same relative size, but appear more prominent relative to the random bumps and wiggles around it.
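
To see the scaling spelled out: the relative size of random wiggles shrinks like one over the square root of the amount of data, while a real signal’s statistical significance grows like that same square root. A two-line sketch:

```python
import math

n_old, n_new = 1.0, 4.0  # relative amounts of data: roughly quadrupled
print(math.sqrt(n_old / n_new))  # 0.5 -> random bumps halve in relative size
print(math.sqrt(n_new / n_old))  # 2.0 -> a real bump's significance doubles
```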

Now, there’s a caution to be added here. The December ATLAS bump was so large and fat compared to what was seen at CMS that (since reality has to appear the same at both experiments, once enough data has been collected) it was pretty obvious that even if there were a real bump there, at ATLAS it was probably in combination with a statistical fluke that made it look larger and fatter than its true nature. [Something similar happened with the Higgs; the initial bump that ATLAS saw was twice as big as expected, which is why it showed up so early, but it has gradually shrunk as more data has been collected and it is now close to its expected size.  In retrospect, that tells us that ATLAS’s original signal was indeed combined with a statistical fluke that made it appear larger than it really was.] What that means is that even if the December bumps were real, we would expect the ATLAS bump to shrink in size (but not in statistical significance) and we would expect the CMS bump to remain of similar size (but grow in statistical significance). Remember, though, that “expectation” is not certainty, because at every stage statistical flukes (up or down) are possible.

In about a week we’ll find out where things currently stand. But the mood, as I read it here in the hallways and cafeteria, is not one of excitement. Moreover, the fact that the update to the results is (at the moment) unobtrusively scheduled for a parallel session of the ICHEP conference next Friday, afternoon time at CERN, suggests we’re not going to see convincing evidence of anything exciting. If so, then the remaining question will be whether the reverse is true: whether the data will show convincing evidence that the December bump was definitely a fluke.

Flukes are guaranteed; with limited amounts of data, they can’t be avoided.  Discoveries, on the other hand, require skill, insight, and luck: you must ask a good question, address it with the best available methods, and be fortunate enough that (as is rarely the case) nature offers a clear and interesting answer.


*I am grateful for the CERN theory group’s financial support during this visit.