Tag Archives: jets

Final Day of SEARCH 2013

Day 3 of the SEARCH workshop (see here for an introduction and overviews of Day 1 and Day 2) opened with my own talk, entitled “On The Frontier: Where New Physics May Be Hiding”. The issue I was addressing is this:

Even though dozens of different strategies have been used by the experimenters at ATLAS and CMS (the two general purpose experiments at the Large Hadron Collider [LHC]) to look for various types of new particles, there are still many questions that haven’t been asked and many aspects of the data that haven’t been studied. My goal was to point out a few of these unasked or incompletely asked questions, ones that I think are very important for ATLAS and CMS experts to investigate… both in the existing data and also in the data that the LHC will start producing, with a higher energy per proton-proton collision, in 2015.

I covered four topics — I’ll be a bit long-winded here, so just skip over this part if it bores you.

1. Non-Standard-Model (or “exotic”) Higgs Decays: a lightweight Higgs particle, such as the one we’ve recently discovered, is very sensitive to novel effects, and can reveal them by decaying in unexpected ways. One class of possibilities, studied by a very wide range of theorists over the past decade, is that the Higgs might decay to unknown lightweight particles (possibly related in some way to dark matter). I’ve written about these possible Higgs decays a lot (here, here, here, here, here, here and here). This was a big topic of mine at the last SEARCH workshop, and is related to the issue of data parking/delaying. In recent months, a bunch of young theorists (with some limited help and advice from me) have been working to write an overview article, going systematically through the most promising non-Standard-Model decay modes of the Higgs, and studying how easy or difficult it will be to measure them. Discoveries using the 2011-2012 data are certainly possible! And at least at CMS, the parked data is going to play an important role.

2. What Variants of “Natural” Supersymmetry (And Related Models) Are Still Allowed By ATLAS and CMS Searches? A natural variant of supersymmetry (see my discussion of “naturalness”=genericity here) is one in which the Higgs particle’s mass and the Higgs field’s value (and therefore the W and Z particles’ masses) wouldn’t change drastically if you were somehow to vary the masses of superpartner particles by small amounts. Such variants tend to have the superpartner particle of the Higgs (called the “Higgsino”) relatively light (a few hundred GeV/c² or below), the superpartner of the top quark (the “top squark”, with which the Higgs interacts very strongly) also relatively light, and the superpartner of the gluon (the “gluino”) up in the 1-2 TeV/c² range. If the gluino is heavier than 1.4 TeV/c² or so, then it is too heavy to have been produced during the 2011-2012 LHC run; for variants with such a heavy gluino, we may have to wait until 2015 and beyond to discover or rule them out. But it turns out that if the gluino is light enough (generally a bit above 1 TeV/c²), it is possible to make very general arguments, without resorting to the three assumptions that go into the most classic searches for supersymmetry, that almost all such natural and currently accessible variants are now ruled out. I say “almost” because there is at least one class of important exceptions where the case is clearly not yet closed, and for which the gluino mass could be well below 1 TeV/c². [Research to completely characterize the situation is still in progress; I'm working on it with Rutgers faculty member David Shih and postdocs Yevgeny Kats and Jared Evans.] What we’ve learned is applicable beyond supersymmetry to certain other classes of speculative ideas.

3. Long-Lived Particles: In most LHC studies, it is assumed that any currently unknown particles that are produced in LHC collisions will decay in microscopic times to particles we know about. But it is also possible that one or more new types of particles will decay only after traveling a measurable distance (about 1 millimeter or greater) from the collision point. Searching for such “long-lived” particles (with lifetimes longer than a trillionth of a second!) is complicated; there are many cases to consider, a non-standard search strategy is almost always required, and sometimes specialized trigger strategies are needed. Until recently, only a few studies had been carried out, many with only 2011 data. A very important advance occurred very recently, however, when CMS produced a study, using the full 2011-2012 data set, looking for a long-lived particle that decays to two jets (or to anything that looks to the detector like two jets, which is a bit more general) after traveling up to a large fraction of a meter. The specialized trigger that was used requires about 300 GeV of energy or more to be produced in the proton-proton collision in the form of jets (or things that look like jets to the triggering system). This is too much for the search to detect a Higgs particle decaying to one or two long-lived particles, because a Higgs particle’s mass-energy [E=mc² energy] is only 125 GeV, and so it is rather rare for 300 GeV of energy in jets and jet-like objects to be observed when a Higgs is produced. But in many speculative theories with long-lived particles, this amount of energy is easily obtained. As a result, this new CMS search clearly wipes out, at one stroke, many variants of a number of speculative models. It will take theorists a little while to fully understand the impact of this new search, but it will be big. Still, it’s by no means the final word.
We need to push harder, improving and broadening the use of these methods, so that decays of the Higgs itself to long-lived particles can be searched for. This has been done already in a handful of cases (for example, if the long-lived particle decays not to jets but to a muon/anti-muon pair or an electron/positron pair, or if the long-lived particle travels several meters before it decays), and in some cases it is already possible to show that at most 1 in 100 to 1000 Higgs particles produce long-lived particles of this type. For some other cases, the triggers developed for the parked data may be crucial.

4. “Soft” Signals: A frontier that has never been explored, but which theorists have been talking about for some years, is one in which a high-energy process associated with a new particle is typically accompanied by an unusually large number of very low-energy particles (typically photons or hadrons with energy below a few GeV). The high-energy process is mimicked by certain common processes that occur in the Standard Model, and consequently the signal is drowned out, like a child’s voice in a crowded room. But the haze of a large number of low-energy particles that accompanies the signal is rare in the mimicking processes, so by keeping only those collisions that show something like this haze, it becomes possible to throw out the mimicking process most of the time, making the signal stand out — as though, in trying to find the child, one could identify a way to get most of the people to leave the room, reducing the noise enough for the child’s voice to be heard. [For experts: The most classic example of this situation arises in certain types of objects called "quirks", though perhaps there are other examples. For non-experts: I'll explain what quirks are some other time; it's a sophisticated story.]

I was pleased that there was lively discussion on all of these four points; that’s essential for a good workshop.

After me there were talks by ATLAS expert Erez Etzion and CMS’s Steve Wurm, surveying a large number of searches for new particles and other phenomena by the two experiments. One new result that particularly caught my eye was a set of CMS searches for new very heavy particles that decay to pairs of W and/or Z particles.  The W and Z particles go flying outwards with tremendous energy, and form the kind of jet-like objects I mentioned yesterday in the context of Jesse Thaler’s talk on “jet substructure”.  This and a couple of other related measurements are reflective of our moving into a new era, in which detection of jet-like W and Z particles and jet-like top quarks has become part of the standard toolbox of a particle physicist.

The workshop concluded with three hour-long panel discussions:

  1. on the possible interplay between dark matter and LHC research (for instance: how production of “friends” of dark matter [i.e., particles that are somehow related to dark matter particles] may be easier to detect at the LHC than production of dark matter itself)
  2. on the highest priorities for the 2013-2014 shutdown period before the LHC restarts (for instance, conversations between theorists and experimentalists about the trigger strategies that should be used in the next LHC run)
  3. on what the opportunities of the 2015-2020 run of the LHC are likely to be, and what their implications may be (for instance, the ability to finally reach the 3 TeV/c² mass range for the types of particles one would expect in the so-called “Randall-Sundrum” class of extra-dimensions models; the opportunities to look for very rare Higgs, top and W decays; and the potential to complete the program I outlined above of ruling out all but a very small class of natural variants of supersymmetry.)

All in all, a useful workshop — but its true value will depend on how much we all follow up on what we discussed.

Higgs Results from The First Week of the Moriond Conference

[UPDATE: Tevatron results start a few paragraphs down; LHC results will appear soon]

[2nd UPDATE: New ATLAS results added: the big unexpected news. As far as I can tell, CMS, which got its results out much earlier in the year, didn’t add anything very new in its talk today.]

[3rd UPDATE: some figures from the talks added]

[4th UPDATE: more understanding of the ATLAS lack of excesses in new channels, and what it does to the overall excess at 125 GeV; reduction in local significance from about 3.5 sigma to about 2.5, and with the look-elsewhere effect, the probability that the whole thing is an accident is now 10%, not 1%. Thanks to a commenter for pointing out how large the effect was.]

This morning there were several talks about the Higgs at the Moriond Electroweak conference. There were talks from the Tevatron experiments CDF and DZero; we expected new results on the search for the Higgs particle from each experiment separately, and combined together. There were also talks from the Large Hadron Collider [LHC] experiments CMS and ATLAS. It wasn’t widely known how much new we’d see; they don’t have any more data than they had in December, since the LHC has been on winter shut-down since then, but ATLAS especially still hadn’t presented all of the results based on its 2011 data, so it might present new information. The expectation was that the impact of today’s new results would be incremental; whatever we learned today wouldn’t dramatically change the situation. The Tevatron results would certainly cause a minor ruckus, though, because there would surely be controversy about them, by their very nature. I gave you a sense for that yesterday. They aren’t likely to convince doubters. But they might provide more pieces of evidence in favor of a lightweight Higgs (though not necessarily at the value of around 125 GeV/c² currently preferred by ATLAS and CMS; see below.)

There are two things I didn’t explain yesterday that are probably worth knowing about.

First, if you look at Figure 2 in my post from yesterday, you’ll notice that the shape of the Higgs signal at the Tevatron experiments is very broad. It doesn’t have a nice sharp peak at the mass of the Higgs (115 GeV in the figure). This is because (as I discussed yesterday) it is hard to measure jets very precisely. For this reason CDF and DZero will be able to address the question “is there or is there not a lightweight Higgs-like particle”, but they will not easily be able to address the question “is its mass 115 GeV, 120 GeV, 125 GeV or 130 GeV?” So we’re really talking about them addressing something only slightly beyond a Yes-No question — and one which requires them to understand their backgrounds really well. This is to be contrasted with the two-photon and four-lepton results from ATLAS and CMS, which with more data are the only measurements, in my view, that can really hope to establish a signal of a Higgs particle in a completely convincing way. These are the only measurements that will see something that could not be mimicked by a mis-estimated background.

Second, the key to the CDF and DZero measurements is being able to identify jets that come from a bottom quark or anti-quark — a technique which is called “b-tagging the jets” — because, as I described yesterday, they are looking for Higgs decays to a bottom quark and a bottom antiquark, so they want to keep events that have two b-tagged jets and throw away others. I have finished a new short article that explains the basic principles behind b-tagging, so you can get an idea of what the experimenters are actually doing to enhance the Higgs signal and reduce their backgrounds. Now b-tagging is never perfect; you will miss some jets from bottom quarks, and accidentally pick up some that don’t come from bottom quarks. But one part of making the Tevatron measurement involves making their b-tagging techniques better and better. CDF, at least, has already claimed in public that they’ve done this.

Will update this after information becomes available and when time permits.

UPDATES: New Tevatron Results and New ATLAS Results

New Tevatron Results

Tevatron claims a lightweight Higgs; to be precise, the combination of the two experiments CDF and DZero is incompatible with the absence of a lightweight Higgs at 2.2 standard deviations (or “sigmas”), after the look-elsewhere effect. CDF sees a larger effect than DZero, but the CDF data analysis method seems more aggressive. Both methods are far too complicated for me to evaluate.

The combination of DZero and CDF results from the Tevatron shows that their observed limit on the Higgs production rate as a function of its mass (solid line) lies about two sigma above the expected limit in the absence of any Higgs (dashed line), indicating an excess of events that appears consistent with a Higgs signal roughly in the 115-135 GeV mass range. By itself this result is not confidence-inspiring, but it does add weight to what we know from ATLAS and CMS at the LHC.

2.2 sigma is not much, and excesses of this size come and go all the time.  We even saw that several times this past year. But you can certainly view today’s result from the Tevatron experiments as another step forward toward a convincing case, when you combine it with what ATLAS and CMS currently see.  At minimum, assuming that the Higgs particle is of Standard Model type (the simplest possible type of Higgs particle), what CDF and DZero claim is certainly consistent with the moderate evidence that ATLAS and CMS are observing.  

There’s more content in that statement than you might think.  For example, if there were two Higgs particles, rather than one, the rate for the process CDF and DZero are measuring could easily be reduced somewhat relative to the Standard Model.  In this case they wouldn’t have found even the hint they’ve got.  (I explained why yesterday, toward the end of the post.)  Meanwhile the process that ATLAS and CMS are measuring might not be reduced in such a scenario, and could even be larger — so it would certainly be possible, if there were a non-Standard-Model-like Higgs at 125 GeV, for ATLAS and CMS to see some evidence, and CDF and DZero to see none.  That has not happened.  If you take the CDF and DZero hint seriously, it points — vaguely — toward a lightweight Standard-Model-like Higgs.  Or more accurately, it does not point away from a lightweight Standard-Model-like Higgs.

However, we do have to keep in mind that, as I noted, CDF and DZero can only say the Higgs mass seems as though it might be in the range 115 to 135 GeV; they cannot nail it down better than that, using their methods, for the reasons I explained earlier. So their result is consistent with a Standard Model Higgs particle at 125 GeV, which would agree with the hints at ATLAS and CMS, but it is also consistent with one at 120 GeV, which would not agree. Thus the Tevatron result bolsters the case for a lightweight Higgs, but it is consistent both with the current hints at the LHC and with other parts of the range that the LHC experiments have not yet excluded. If the current ATLAS and CMS hints went away with more data, the Tevatron results might still be correct, and in that case ATLAS and CMS would start seeing hints at a different mass.

But given what ATLAS and CMS see (the evidence from December, and the step forward in January with the CMS update in their two-photon data), something around 125 GeV remains the most likely mass for a Standard Model Higgs. The issue cannot be considered settled yet, but so far nothing has gotten in the way of this hypothesis.

Now, the inevitable caveats.

First, as with any measurement, these results cannot automatically be assumed to be correct; indeed most small excesses go away when more data is accumulated, either because they are statistical fluctuations or because of errors that get tracked down — but unfortunately we will not get any more data from the now-closed Tevatron to see if that will happen.  The plausibility of Tevatron’s claims needs to be evaluated, and (in contrast to the two photon and four lepton results from ATLAS and CMS, which are relatively straightforward to understand) this won’t be easy or uncontroversial.  The CDF and DZero people did a very fancy analysis with all sorts of clever tricks, which has the advantage that it makes the measurement much more powerful, but the disadvantage of making it obscure to those who didn’t perform it.

One other caveat is that we will have to be a little cautious literally combining results from Tevatron with those from the LHC. There’s no sense in which [this statement is factually incorrect as stated, as commenters from CDF are pointing out; there are indeed several senses in which it was done blind. I should have been more precise about what was meant, which was more of a general knowledge of how difficult it is to avoid bias in determining the backgrounds for this measurement. Let me add that this is not meant to suggest anything about CDF, or DZero, in particular; doing any measurement of this type is extraordinarily difficult, and those who did it deserve applause. But they're still human.] the Tevatron result was done “blind”; it was done with full knowledge that LHC already has a hint at 125, and since the Tevatron is closed and all its data is final, this is Tevatron’s last chance (essentially) to contribute to the Higgs particle search. Combining experiments is fine if they are truly independent; if they are not, you are at risk of bolstering what you believe because you believe it, rather than because nature says it.
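To see why independence matters, consider the textbook (Stouffer-style) way of combining two independent Gaussian significances: the combined significance is (z₁ + z₂)/√2. This is only a toy illustration of the logic, not the experiments' actual likelihood-based combination procedure:

```python
import math

def combine_significances(z_values):
    """Stouffer combination of independent Gaussian significances
    (equal weights). A toy formula, not the experiments' actual
    likelihood-based combination procedure."""
    return sum(z_values) / math.sqrt(len(z_values))

# Two independent 2-sigma hints combine to about 2.8 sigma...
print(combine_significances([2.0, 2.0]))
# ...but the formula assumes independence: if one analysis was tuned
# with knowledge of the other's result, the combined number
# overstates the evidence.
```

The payoff and the danger are both visible here: independent hints genuinely reinforce each other, but the reinforcement is spurious if the measurements are correlated.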

New ATLAS results 

ATLAS has now almost caught up with CMS, in that its searches for Higgs particles decaying to two photons and to two lepton/anti-lepton pairs (or “four leptons” for short) have now been supplemented by (preliminary! i.e., not yet publication-ready) results in searches for Higgs particles decaying to

  • a lepton, anti-lepton, neutrino and anti-neutrino
  • a tau lepton/anti-lepton pair
  • a bottom quark/anti-quark pair (which is what CDF and DZero looked for too)

(The only analysis ATLAS is missing is the one that CMS added in January, separating out events with two photons along with two jets.) In contrast to the CMS experiment, which found small excesses (just 1 sigma) above expectation in each of these three channels, ATLAS does not.  [And I've been reminded to point out that the first channel has changed; in December, with 40% of the data analyzed, there was a small excess.] So CDF and DZero’s results from today take us a step forward toward a convincing case, while ATLAS’s result takes us a small step backward.  That’s par for the course in science when you’re squinting to see something that’s barely visible.

In the same search as performed by CDF and DZero, and in the same region where they see an excess, ATLAS sees no excess at all; but ATLAS has less data and is currently less sensitive to this channel than CDF and DZero, so there is no clear contradiction.

But one can’t get too excited about this.  Statistics are still so low in these measurements that it would be easy for this to happen.  And determining the backgrounds in these measurements is tough.  If you make a mistake in a background estimation, you could make a small excess appear where there really isn’t one, or you could make a real excess disappear.  It cuts both ways.

But actually there is a really important result coming out of ATLAS today; it is the deficit of events in the search for the Higgs decaying to a tau lepton/anti-lepton pair. For a putative Higgs below 120 GeV, ATLAS sees even fewer tau lepton/anti-lepton events than it expected from pure background — in other words, the background appears to have fluctuated low. But this means there is not likely to be a Standard Model-like Higgs signal there, because the likelihood that the background plus a Higgs signal would have fluctuated very low is small. [UPDATE: actually, looking again, I think I am somewhat overstating the importance of this deficit in taus compared to the lack of excess in the other two channels, which is also important. To be quantitative about this would require more information. In any case, the conclusion is the same.] And this allows ATLAS to exclude new regions in the mass range for the Standard Model Higgs, at 95% confidence!

This is very important! One of the things that I have complained about with regard to those who’ve overplayed the December Higgs hints is that you can’t really say that the evidence for a Higgs around 125 GeV is good if you can’t start excluding both above and below that mass. Well, ATLAS has started to do that. Granted, it isn’t 99% exclusion, and since this is the Higgs we’re talking about, we need high standards. But at 95% confidence, ATLAS now excludes, for a Standard Model Higgs, 110-117.5, 118.5-122.5, and 129-539 GeV. Said better, if there is a Standard Model Higgs in nature, ATLAS alone restricts it (to 95% confidence only, however) to the range 117.5 – 118.5 GeV or 122.5 – 129 GeV.
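The arithmetic behind those two allowed windows is just interval complementation. A minimal sketch in Python, plugging in the quoted ATLAS endpoints:

```python
def allowed_windows(search_range, excluded):
    """Return the sub-ranges of `search_range` not covered by the
    (sorted, non-overlapping) excluded intervals."""
    lo, hi = search_range
    windows = []
    for a, b in sorted(excluded):
        if lo < a:
            windows.append((lo, a))
        lo = max(lo, b)
    if lo < hi:
        windows.append((lo, hi))
    return windows

# ATLAS 95%-confidence excluded regions for a Standard Model Higgs (GeV):
excluded = [(110, 117.5), (118.5, 122.5), (129, 539)]
print(allowed_windows((110, 539), excluded))
# -> [(117.5, 118.5), (122.5, 129)]
```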

ATLAS, just from its own data alone, excludes (pink-shaded regions) the Standard Model Higgs particle at 95% confidence (but not yet at 99%) across the entire allowed range except around 118 GeV and between 122.5 and 129 GeV, where the two-photon and four-lepton searches provide some positive evidence. What is shown is how large a Higgs signal can be excluded, in units of the Standard Model expectation, as a function of the Higgs mass. Anywhere the solid line dips below the dotted line marked "1" is a place where the Standard Model is 95% excluded. The red dotted line indicates how well this experiment would perform, on average, if there were no Standard Model Higgs signal.

The window is closing. Not only has ATLAS completely excluded the old hints of a Standard Model Higgs at 115 GeV from the LEP collider, it seems it has probably excluded CMS’s hint around 120 GeV, which was the next best option for the Higgs after 125. And as far as I can tell, this is coming mainly from the tau lepton/anti-lepton measurement. As I said above in an update, I think it is really a mix of all three channels; hard to be quantitative about that without talking to the experts.

So if the Standard Model Higgs is what nature has to offer us, we’re probably down to a tiny little slice around 118 GeV for which there’s no evidence, and a window that has 125 GeV smack in the middle of it, where the evidence, once we include both the Tevatron and ATLAS results, is not much stronger today than it was yesterday, but is certainly no weaker.

UPDATE: Well, it’s been pointed out to me by the first commenter that the last statement is misleading, because it doesn’t emphasize how the ATLAS excess at 126 GeV has decreased substantially in significance. Somehow I thought originally that the decrease was marginal. But it isn’t.

The statistics numbers as I think I have them now: What was previously about 3.5 sigma local significance for the Higgs-like peak at 125 GeV is now down to 2.5, and what seemed only 1% likely in December to be a fluctuation is now 10% likely.
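Those numbers can be roughly reproduced with a toy calculation: convert a local significance to a one-sided Gaussian tail probability, then apply a crude look-elsewhere correction. The trials factor of 15 below is purely illustrative (the real correction is derived from the analysis itself), so treat this as a sketch of the logic, not ATLAS's procedure:

```python
import math

def local_p_value(z):
    """One-sided Gaussian tail probability for a z-sigma local excess."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def global_p_value(p_local, trials):
    """Crude look-elsewhere correction: the chance that a fluctuation at
    least this large appears in any of `trials` independent mass windows."""
    return 1.0 - (1.0 - p_local) ** trials

p_dec = local_p_value(3.5)   # about 2e-4: the December local excess
p_now = local_p_value(2.5)   # about 6e-3: after adding the new channels

# With the illustrative trials factor of 15 independent mass windows,
# a 2.5-sigma local excess is roughly a 10% global fluctuation probability:
print(global_p_value(p_now, 15))
```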

There is an issue, however, with combining many measurements.  Of course the two-photon and four-lepton results from ATLAS are the same as before, and they are just as significant; nothing changed.  But the other three measurements came in low, and that pulls the significance of the combination down.  However, I must remind you again how difficult the last three measurements are.  I would trust the first two before the last three.  So I think we should be careful not to overinterpret this change.   When you combine what you trust most with what you trust least, you reduce your confidence in what you have.

That said, it also indicates why one should be very cautious with small amounts of data.

Comparison of the December ATLAS results (left), combining all measurements that were available at the time, with the March 2012 ATLAS results (right). I've lined them up as best I could, given the scales were slightly different. What is shown is how large a Higgs signal can be excluded, in units of the Standard Model expectation, as a function of the Higgs mass. Anywhere the solid line dips below the dotted line marked "1" is a place where the Standard Model is 95% excluded. Compared to December, there is much more excluded and the height of the peak at 126 GeV is noticeably lower.

How Do We KNOW a Proton Is So Complicated? (Data!)

Among the bridges that I hope to build, as I develop this website, is one connecting what we know today about nature with how we know it. After all, you’re reading my depiction of nature, based on how I think nature works. I can try to assure you that my depiction is the mainstream viewpoint at the forefront of the research field — but you may still wonder if this website is legitimate, or if I might just be full of hot air, or if I might simply be mistaken. Well, my confidence in what I’m saying doesn’t come from having trained at some fancy university, or from my degree, or from having been in the business for over 20 years. It comes from the data… in short, from nature itself.

So it’s important, I think, to link the data to the ideas and concepts, when it’s possible to do that.

You’ve heard the famous statement that “a proton is made from two up quarks and a down quark”.  But in this basic article, and this somewhat more advanced one, and in Wednesday’s post where I went into some details about what we know about proton structure, I’ve claimed to you that protons are actually chock full of particles, most of which carry a tiny fraction of the proton’s energy, and most of which are gluons, with a lot of quarks and antiquarks. [If this sounds unfamiliar, you should read those articles and posts before reading this one, which is a follow-up.]  And I claimed that these complications make a big difference at the Large Hadron Collider [LHC].

So should you take my word for this? You don’t have to. Let me show you evidence. From LHC data. Here’s an article defending the main claims of Wednesday’s post. It’s a near-final draft, still needing some proofreading perhaps, and probably some clarification, but I think it is fully readable now. Enjoy it (and please feel free to give me feedback on its clarity, so I can improve it), or wait for the final version next week, as you see fit. And have a great weekend!

The Benefits of 8 TeV Collisions Over 7 TeV.

Yesterday, a commenter asked me a very good question that I realized I hadn’t yet addressed on this site. Answering it gives us a chance to look at real data from the Large Hadron Collider [LHC], and to see what differences will arise when the machine’s energy is increased from 7 TeV to 8 TeV.

The protons that are smashed together at the LHC are made from many quarks, gluons and antiquarks. The proton-proton collisions take place at a definite energy: 7 TeV = 7000 GeV in 2011, 8 TeV = 8000 GeV in 2012. But what we’re mainly interested in — what can really create new physical phenomena for us to observe — are the collisions of a quark in one proton with an antiquark in the other proton, or the collision of two gluons, etc. These “mini-collisions” carry only a fraction — typically a very small fraction — of the total proton-proton collision energy. How high a fraction can they carry? And what are the motivations for increasing the energy from 7 TeV per collision to 8 TeV? Click here for the answer.
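As a rough guide to those fractions: if one parton carries a fraction x₁ of its proton's energy and the other carries x₂, the mini-collision energy is √(x₁x₂) times the proton-proton collision energy (a standard kinematic relation, ignoring the partons' masses). A small sketch:

```python
import math

def mini_collision_energy(x1, x2, e_pp):
    """Energy of a parton-parton "mini-collision" when the two partons
    carry momentum fractions x1 and x2 of their protons' energies."""
    return math.sqrt(x1 * x2) * e_pp

# Two partons each carrying 10% of their proton's energy:
print(mini_collision_energy(0.1, 0.1, 7.0))  # about 0.7 TeV (2011 LHC)
print(mini_collision_energy(0.1, 0.1, 8.0))  # about 0.8 TeV (2012 LHC)
```

So even a modest energy increase at the proton level raises the reach of the rarest, highest-fraction mini-collisions, which is where new heavy particles would show up.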