Category Archives: Particle Physics

So What Is It???

So What Is It? That’s the question one hears in all the bars and on all the street corners and on every Twitter feed and in the whispering of the wind. Everybody wants to know. That bump seen on the ATLAS and CMS two-photon plots! What… IS… it…?

[Image: ATLAS_CMS_diphoton_2015]

The two-photon results from ATLAS (top) and CMS (bottom) aligned, so that the 600, 700 and 800 GeV locations (blue vertical lines) line up almost perfectly. The peaks in the two data sets are in about the same location. ATLAS’s is larger and also wider. Click here for more commentary.

Well, to be honest, probably it’s just that: a bump on a plot. But just in case it’s not — just in case it really is the sign of a new particle in Large Hadron Collider [LHC] data — let me (start to) address the question.

First: what it isn’t. It can’t simply be a second Higgs particle (a heavier version of the one found in 2012) appended to the known particles, with no other particles added.   Continue reading

Is This the Beginning of the End of the Standard Model?

Was yesterday the day when a crack appeared in the Standard Model that will lead to its demise?  Maybe. It was a very interesting day, that’s for sure. [Here’s yesterday’s article on the results as they appeared.]

I find the following plot useful… it shows the results on photon pairs from ATLAS and CMS superposed for comparison.  [I take only the central events from CMS because the events that have a photon in the endcap don’t show much (there are excesses and deficits in the interesting region) and because it makes the plot too cluttered; suffice it to say that the endcap photons show nothing unusual.]  The challenge is that ATLAS uses a linear horizontal axis while CMS uses a logarithmic one, but in the interesting region of 600-800 GeV you can more or less line them up.  Notice that CMS’s bins are narrower than ATLAS’s by a factor of 2.

[Image: ATLAS_CMS_diphoton_2015]

The diphoton results from ATLAS (top) and CMS (bottom) arranged so that the 600, 700 and 800 GeV locations (blue vertical lines) line up almost perfectly. (The plots do not line up away from this region!)  The data are the black dots (ignore the bottom section of CMS’s plot for now). Notice that the obvious bumps in the two data sets appear in more or less the same place. The bump in ATLAS’s data is both higher (more statistically significant) and significantly wider.

Both plots definitely show a bump.  The two experiments have rather similar amounts of data, so we might have hoped for something more similar in the bumps, but the number of events in each bump is small and statistical flukes can play all sorts of tricks.

Of course your eye can play tricks too. A bump of low significance with a small number of events looks much more impressive on a logarithmic plot than a bump of equal significance built from a larger number of events — so beware that bias, which also makes the curves to the left of the bump appear smoother and more featureless than they actually are.  [For instance, in the lower register of CMS’s plot, notice the bump around 350 GeV.]
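To see how easily small-number statistics can mimic a signal, here is a minimal sketch of the "local" probability that a Poisson background fluctuates up to a bump-like count. The numbers are purely illustrative, not the actual ATLAS or CMS yields:

```python
from math import exp, factorial

def poisson_tail(n_obs, b):
    """Probability of observing n_obs or more events when the
    background prediction is b (a Poisson 'local p-value')."""
    return 1.0 - sum(exp(-b) * b**k / factorial(k) for k in range(n_obs))

# Illustrative numbers: a bin where 2 background events are
# expected but 6 are seen.
p = poisson_tail(6, 2.0)
print(f"local p-value = {p:.4f}")  # a roughly 1.7% fluke
```

Since a bump could have appeared in any of dozens of bins, this local p-value overstates the surprise; the "look-elsewhere" correction is one reason the experiments' quoted global significances are lower than the bumps suggest by eye.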

We’re in that interesting moment when all we can say is that there might be something real and new in this data, and we have to take it very seriously.  We also have to take the statistical analyses of these bumps seriously, and they’re not as promising as these bumps look by eye.  If I hadn’t seen the statistical significances that ATLAS and CMS quoted, I’d have been more optimistic.

Also disappointing is that ATLAS’s new search is not very different from their Run 1 search of the same type, and only uses 3.2 inverse femtobarns of data, less than the 3.5 that they can use in a few other cases… and CMS uses 2.6 inverse femtobarns.  So this makes ATLAS less sensitive and CMS more sensitive than I was originally estimating… and makes it even less clear why ATLAS would be more sensitive in Run 2 to this signal than they were in Run 1, given the small amount of Run 2 data.  [One can check that if the events really have 750 GeV of energy and come from gluon collisions, the sensitivity of the Run 1 and Run 2 searches are comparable, so one should consider combining them, which would reduce the significance of the ATLAS excess. Not to combine them is to “cherry pick”.]
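The cost of combining a strong Run 2 excess with a quiet Run 1 search can be sketched with the standard rule for combining two independent Gaussian significances; the significances below are hypothetical illustrations, not the experiments' actual quoted values:

```python
from math import sqrt

def combine_significances(z1, z2, w1=1.0, w2=1.0):
    """Combine two independent Gaussian significances z1, z2 with
    sensitivity weights w1, w2. Equal weights correspond to
    comparable sensitivity, as argued above for the ATLAS
    Run 1 and Run 2 diphoton searches."""
    return (w1 * z1 + w2 * z2) / sqrt(w1**2 + w2**2)

# Hypothetical example: a ~3.5 sigma Run 2 bump combined with a
# ~1 sigma Run 1 result at the same mass.
z = combine_significances(3.5, 1.0)
print(f"combined significance = {z:.2f} sigma")
```

The point of the sketch: when the earlier data set shows little at the same mass, the combination dilutes the excess rather than strengthening it, which is why declining to combine comparable searches amounts to cherry-picking.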

By the way, we heard that the excess events do not look very different from the events seen on either side of the bump; they don’t, for instance, have much higher total energy.  That means that a higher-energy process, one that produces a new particle at 750 GeV indirectly, can’t be the cause of a big jump in the 13 TeV production rate relative to 8 TeV.  So one can’t hide behind this possible explanation for why a putative signal is seen brightly in Run 2 but was barely seen, if at all, in Run 1.

Of course the number of events is small and so these oddities could just be due to statistical flukes doing funny things with a real signal.  The question is whether it could just be statistical flukes doing funny things with the known background, which also has a small number of events.

And we should also, in tempering our enthusiasm, remember this plot: the diboson excess that so many were excited about this summer.  Bumps often appear, and they usually go away.  R.I.P.

[Image: ATLAS_dibosonXS]

The most dramatic of the excesses in the production of two W or Z bosons from Run 1 data, as seen in ATLAS work published earlier this year. That bump excited a lot of people. But it doesn’t appear to be supported by Run 2 data. A cautionary tale.

Nevertheless, there’s nothing about this diphoton excess which makes it obvious that one should be pessimistic about it.  It’s inconclusive: depending on the statistical questions you ask (whether you combine ATLAS and CMS Run 2, whether you try to combine ATLAS Run 1 and Run 2, whether you worry about whether the resonance is wide or narrow), you can draw positive or agnostic conclusions.  It’s hard to draw entirely negative conclusions… and that’s a reason for optimism.

Six months or so from now — or less, if we can use this excess as a clue to find something more convincing within the existing data — we’ll likely say “R.I.P.” again.  Will we bury this little excess, or the Standard Model itself?

Exciting Day Ahead at LHC

At CERN, the laboratory that hosts the Large Hadron Collider [LHC], four years ago almost to the day, Fabiola Gianotti, spokesperson for the ATLAS experiment, delivered the first talk in a presentation on 2011 LHC data. Speaking to the assembled scientists and dignitaries, she presented the message that energized the physics community: a little bump had shown up on a plot. Continue reading

First Big Results from LHC at 13 TeV

A few weeks ago, the Large Hadron Collider [LHC] ended its 2015 data taking of 13 TeV proton-proton collisions.  This month we’re getting our first look at the data.

Already the ATLAS experiment has put out two results that are significant and impressive contributions to human knowledge.  CMS has one as well (sorry to have overlooked it the first time, but it isn’t posted on the usual Twiki page for some reason). Continue reading

Dark Matter: How Could the Large Hadron Collider Discover It?

Dark Matter. Its existence is still not 100% certain, but if it exists, it is exceedingly dark, both in the usual sense — it doesn’t emit light or reflect light or scatter light — and in a more general sense — it doesn’t interact much, in any way, with ordinary stuff, like tables or floors or planets or  humans. So not only is it invisible (air is too, after all, so that’s not so remarkable), it’s actually extremely difficult to detect, even with the best scientific instruments. How difficult? We don’t even know, but certainly more difficult than neutrinos, the most elusive of the known particles. The only way we’ve been able to detect dark matter so far is through the pull it exerts via gravity, which is big only because there’s so much dark matter out there, and because it has slow but inexorable and remarkable effects on things that we can see, such as stars, interstellar gas, and even light itself.

About a week ago, the mainstream press was reporting, inaccurately, that the leading aim of the Large Hadron Collider [LHC], after its two-year upgrade, is to discover dark matter. [By the way, on Friday the LHC operators made the first beams with energy-per-proton of 6.5 TeV, a new record and a major milestone in the LHC’s restart.]  There are many problems with such a statement, as I commented in my last post, but let’s leave all that aside today… because it is true that the LHC can look for dark matter.   How?

When people suggest that the LHC can discover dark matter, they are implicitly assuming

  • that dark matter exists (very likely, but perhaps still with some loopholes),
  • that dark matter is made from particles (which isn’t established yet) and
  • that dark matter particles can be commonly produced by the LHC’s proton-proton collisions (which need not be the case).

You can question these assumptions, but let’s accept them for now.  The question for today is this: since dark matter barely interacts with ordinary matter, how can scientists at an LHC experiment like ATLAS or CMS, which is made from ordinary matter of course, have any hope of figuring out that they’ve made dark matter particles?  What would have to happen before we could see a BBC or New York Times headline that reads, “Large Hadron Collider Scientists Claim Discovery of Dark Matter”?

Well, to address this issue, I’m writing an article in three stages. Each stage answers one of the following questions:

  1. How can scientists working at ATLAS or CMS be confident that an LHC proton-proton collision has produced an undetected particle — whether this be simply a neutrino or something unfamiliar?
  2. How can ATLAS or CMS scientists tell whether they are making something new and Nobel-Prizeworthy, such as dark matter particles, as opposed to making neutrinos, which they do every day, many times a second?
  3. How can we be sure, if ATLAS or CMS discovers they are making undetected particles through a new and unknown process, that they are actually making dark matter particles?

My answer to the first question is finished; you can read it now if you like.  The second and third answers will be posted later during the week.

But if you’re impatient, here are highly compressed versions of the answers, in a form which is accurate, but admittedly not very clear or precise.

  1. Dark matter particles, like neutrinos, would not be observed directly. Instead their presence would be indirectly inferred, by observing the behavior of other particles that are produced alongside them.
  2. It is impossible to directly distinguish dark matter particles from neutrinos or from any other new, equally undetectable particle. But the equations used to describe the known elementary particles (the “Standard Model”) predict how often neutrinos are produced at the LHC. If the number of neutrino-like objects is larger than the prediction, that will mean something new is being produced.
  3. To confirm that dark matter is made from LHC’s new undetectable particles will require many steps and possibly many decades. Detailed study of LHC data can allow properties of the new particles to be inferred. Then, if other types of experiments (e.g. LUX or COGENT or Fermi) detect dark matter itself, they can check whether it shares the same properties as LHC’s new particles. Only then can we know if LHC discovered dark matter.
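The logic of answer 2 can be sketched as a simple counting experiment: compare the observed number of "invisible" (missing-energy) events to the Standard Model's neutrino prediction. All numbers below are invented for illustration:

```python
from math import sqrt

def excess_significance(n_obs, b, sigma_b):
    """Rough significance of an excess of n_obs observed events over a
    predicted background b whose systematic uncertainty is sigma_b
    (a simple Gaussian approximation, valid for large counts)."""
    return (n_obs - b) / sqrt(b + sigma_b**2)

# Hypothetical numbers: 100 neutrino-like events predicted by the
# Standard Model (known to within 10 events), 140 observed.
z = excess_significance(140, 100.0, 10.0)
print(f"excess of about {z:.1f} sigma")
```

Note how the systematic uncertainty on the neutrino prediction enters directly: the better the Standard Model calculation, the smaller an excess the LHC experiments can detect.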

I realize these brief answers are cryptic at best, so if you want to learn more, please check out my new article.

The LHC restarts — in a manner of speaking —

As many of you will have already read, the Large Hadron Collider [LHC], located at the CERN laboratory in Geneva, Switzerland, has “restarted”. Well, a restart of such a machine, after two years of upgrades, is not a simple matter, and perhaps we should say that the LHC has “begun to restart”. The process of bringing the machine up to speed begins with one weak beam of protons at a time — with no collisions, and with energy per proton at less than 15% of where the beams were back in 2012. That’s all that has happened so far.

If that all checks out, then the LHC operators will start trying to accelerate a beam to higher energy — eventually to record energy, 40% more than in 2012, when the LHC last was operating.  This is the real test of the upgrade; the thousands of magnets all have to work perfectly. If that all checks out, then two beams will be put in at the same time, one going clockwise and the other counterclockwise. Only then, if that all works, will the beams be made to collide — and the first few collisions of protons will result. After that, the number of collisions per second will increase, gradually. If everything continues to work, we could see the number of collisions become large enough — approaching 1 billion per second — to be scientifically interesting within a couple of months. I would not expect important scientific results before late summer, at the earliest.

This isn’t to say that the current milestone isn’t important. There could easily have been (and there almost were) magnet problems that could have delayed this event by a couple of months. But delays could also occur over the coming weeks… so let’s not expect too much in 2015. Still, the good news is that once the machine gets rolling, be it in May, June, July or beyond, we have three to four years of data ahead of us, which will offer us many new opportunities for discoveries, anticipated and otherwise.

One thing I find interesting and odd is that many of the news articles reported that finding dark matter is the main goal of the newly upgraded LHC. If this is truly the case, then I, and most theoretical physicists I know, didn’t get the memo. After all,

  • dark matter could easily be of a form that the LHC cannot produce (for example, axions, or particles that interact only gravitationally, or non-particle-like objects)
  • and even if the LHC finds signs of something that behaves like dark matter (i.e. something that, like neutrinos, cannot be directly detected by LHC’s experiments), it will be impossible for the LHC to prove that it actually is dark matter.  Proof will require input from other experiments, and could take decades to obtain.

What’s my own understanding of LHC’s current purpose? Well, based on 25 years of particle physics research and ten years working almost full time on LHC physics, I would say (and I do say, in my public talks) that the coming several-year run of the LHC is for the purpose of

  1. studying the newly discovered Higgs particle in great detail, checking its properties very carefully against the predictions of the “Standard Model” (the equations that describe the known apparently-elementary particles and forces)  to see whether our current understanding of the Higgs field is complete and correct, and
  2. trying to find particles or other phenomena that might resolve the naturalness puzzle of the Standard Model, a puzzle which makes many particle physicists suspicious that we are missing an important part of the story, and
  3. seeking either dark matter particles or particles that may be shown someday to be “associated” with dark matter.

Finding dark matter itself is a worthy goal, but the LHC may simply not be the right machine for the job, and certainly can’t do the job alone.

Why the discrepancy between these two views of LHC’s purpose? One possibility is that since everybody has heard of dark matter, the goal of finding it is easier for scientists to explain to journalists, even though it’s not central.  And in turn, it is easier for journalists to explain this goal to readers who don’t care to know the real situation.  By the time the story goes to press, all the modifiers and nuances uttered by the scientists are gone, and all that remains is “LHC looking for dark matter”.  Well, stay tuned to this blog, and you’ll get a much more accurate story.

Fortunately a much more balanced story did appear in the BBC, due to Pallab Ghosh…, though as usual in Europe, with rather too much supersymmetry and not enough of other approaches to the naturalness problem.   Ghosh also does mention what I described in the latter part of point 3 above — the possibility of what he calls the “wonderfully evocatively named `dark sector’ ”.  [Mr. Ghosh: back in 2006, well before these ideas were popular, Kathryn Zurek and I named this a “hidden valley”, potentially relevant either for dark matter or the naturalness problem. We like to think this is a much more evocative name.]  A dark sector/hidden valley would involve several types of particles that interact with one another, but interact hardly at all with anything that we and our surroundings are made from.  Typically, one of these types of particles could make up dark matter, but the others would be unsuitable for making dark matter.  So why are these others important?  Because if they are produced at the LHC, they may decay in a fashion that is easy to observe — easier than dark matter itself, which simply exits the LHC experiments without a trace, and can only be inferred from something recoiling against it.   In other words, if such a dark sector [or more generally, a hidden valley of any type] exists, the best targets for LHC’s experiments (and other experiments, such as APEX or SHiP) are often not the stable particles that could form dark matter but their unstable friends and associates.

But this will all be irrelevant if the collider doesn’t work, so… first things first.  Let’s all wish the accelerator physicists success as they gradually bring the newly powerful LHC back into full operation, at a record energy per collision and eventually a record collision rate.

How a Trigger Can Potentially Make or Break an LHC Discovery

Triggering is an essential part of the Large Hadron Collider [LHC]; there are so many collisions happening each second at the LHC, compared to the number that the experiments can afford to store for later study, that the data about most of the collisions (99.999%) have to be thrown away immediately, completely and permanently within a second after the collisions occur.  The automated filter, partly hardware and partly software, that is programmed to make the decision as to what to keep and what to discard is called “the trigger”.  This all sounds crazy, but it’s necessary, and it works.   Usually.
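As a toy illustration of the idea (not the actual ATLAS or CMS trigger logic, and with made-up thresholds), a trigger is just a fast yes/no filter applied to every collision, tuned so that only a tiny fraction of events survive:

```python
import random

def toy_trigger(event):
    """Toy trigger: keep an event only if it contains at least one
    object with high transverse momentum, in GeV. The 50 GeV
    threshold is illustrative, not a real experimental value."""
    return max(event["pt"]) > 50.0

random.seed(42)
# Simulate collisions: most produce only low-momentum objects
# (mean 10 GeV here), mimicking the overwhelming background.
events = [{"pt": [random.expovariate(1 / 10.0) for _ in range(3)]}
          for _ in range(100_000)]

kept = [e for e in events if toy_trigger(e)]
print(f"kept {len(kept)} of {len(events)} events")
```

In this toy a few percent of events survive; the real trigger must be far harsher, keeping only about one collision in a hundred thousand, which is exactly why a signal that fails every trigger condition is lost forever.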

Let me give you one very simple example of how things can go wrong, and how the ATLAS and CMS experiments [the two general purpose experiments at the LHC] attempted to address the problem.  Before you read this, you may want to read my last post, which gives an overview of what I’ll be talking about in this one.

Click here to read the rest of the article…

Final Days of Busy Visit to CERN

I’m a few days behind (thanks to an NSF grant proposal that had to be finished last week) but I wanted to write a bit more about my visit to CERN, which concluded Nov. 21st in a whirlwind of activity. I was working full tilt on timely issues related to Run 2 of the Large Hadron Collider [LHC], currently scheduled to start early next May.   (You may recall the LHC has been shut down for repairs and upgrades since the end of 2012.)

A certain fraction of my time for the last decade has been taken up by concerns about the LHC experiments’ ability to observe new long-lived particles, specifically ones that aren’t affected by the electromagnetic or strong nuclear forces. (Long-lived particles that are affected by those forces are easier to search for, and are much more constrained by the LHC experiments.  More about them some other time.)

This subject is important to me because it is a classic example of how the trigger systems at LHC experiments could fail us — whereby a spectacular signal of a new phenomenon could be discarded and lost in the very process of taking and storing the data! If no one thinks carefully about the challenges of finding long-lived particles in advance of running the LHC, we can end up losing a huge opportunity, unnecessarily. Fortunately some of us are thinking about it, but we are small in number. It is an uphill battle for those experimenters within ATLAS and CMS [the two general purpose experiments at the LHC] who are working hard to make sure they have the required triggers available. I can’t tell you how many times people within the experiments — even at the Naturalness conference I wrote about recently — have told me “such efforts are hopeless”… despite the fact that their own experiments have actually shown, already in public, and in some cases published, measurements (including this, this, this, this, this, and this), that it is not. Conversely, many completely practical searches for long-lived particles have not been carried out, often because there was no trigger strategy able to capture them, or because, despite the events having been recorded, no one at ATLAS or CMS has had time or energy to actually search through their data for this signal.

Now what is meant by “long-lived particles”? Continue reading