Tag Archives: CDF

It’s (not) The End of the World

The December solstice has come and gone at 11:11 a.m. London time (6:11 a.m New York time). That’s the moment when the north pole of the Earth points most away from the sun, and the south pole points most toward it. Because it’s followed by a weekend and then Christmas Eve, it marks the end of the 2012 blogging season, barring a major event between now and year’s end. But although 11:11 London time is the only moment of astronomical significance during this day (clearly the universe does not care where humans set our international date line and exactly how we set our time zones, so destruction was never going to be at local midnight — something the media doesn’t seem to get) it obviously wasn’t the end of the world.

A lot of people do put a lot of stock in prophecy, including prophecies of the end of the world that nobody ever made (such as the one not made for today by the Mayans, through their calendar) and others that people made but were wrong (such as those made by Harold Camping last year and by many throughout history who preceded him.) If anyone were any good at prophecy they’d be able to use their special knowledge to become billionaires, so maybe we should be watching Bill Gates and Michael Bloomberg and the Koch brothers and people like that. I haven’t heard any rumors of them building bunkers or spaceships yet. Of course at the end of the year they may get a small tax hike, but that wouldn’t be the end of the world.

The Large Hadron Collider [LHC], meanwhile, has triumphantly reached the end of its first run of proton-proton collisions. Goal #1 of the LHC was to allow physicists at the ATLAS and CMS experiments to discover the Higgs particle, or particles, or whatever took their place in nature; and it would appear that, in a smashing success, they have co-discovered one.  But no Higgs particles, or anything like them, will be produced again until 2015. Although the LHC will run for a short while in early 2013, it will do so in a different mode, smashing not protons but the nuclei of lead atoms together, in order to study the properties of extremely hot and dense matter, under conditions the universe hasn’t seen since the earliest stages of the Big Bang that launched the current era of our universe.  Then it will be closed down for repairs and upgrades.  So until 2015, any additional information we’re going to learn about the Higgs particle, or any other unknown particle that might have been produced at the LHC, is going to be obtained by analyzing the data that has been collected in 2011 and 2012. The total amount of data is huge; what was collected in 2012 was about 4.5 times as much as in 2011, and it was taken at 8 TeV of energy per proton-proton collision rather than 7 TeV as in 2011. I can assure you there will be many new things learned from analyzing that data throughout 2013 and 2014.

Of course a lot of people prophesied confidently that we’d discover supersymmetry, or something else dramatic, very early on at the LHC. Boy, were they wrong! Those of us who were cautioning against such optimistic statements are not sure whether to laugh or cry, because of course it would have been great to have such a discovery early in the LHC program. But there was ample reason to believe (despite what other bloggers sometimes say) that even if supersymmetry exists and is accessible to the LHC experiments, discovering it could take a lot longer than just two years!  For instance, see this paper written in 2006 pointing out that the search strategies being planned for seeking supersymmetry might fail in the presence of a few extra lightweight particles not predicted in the minimal variants of supersymmetry. As far as I can tell at present, this very big loophole has only partly been closed by the LHC studies done up to now. The same loophole applies for other speculative ideas, including certain variants of LHC-accessible extra dimensions. I am hopeful that these loopholes can be closed in 2013 and 2014, with additional analysis on the current data, but until they are, you should be very cautious believing those who claim that reasonable variants of LHC-accessible supersymmetry (meaning “natural variants of supersymmetry that resolve the hierarchy problem”) are ruled out by the LHC experiments. It’s just not true. Not yet. The only classes of theories that have been almost thoroughly ruled out by LHC data are those predict on general grounds that there should be no observable Higgs particle at all (e.g. classic technicolor).

While we’re on the subject, I’ve been looking back at how I did on prophecy this year. It’s been a remarkably good year, probably my best ever — though admittedly I only made very easy (though not necessarily common) predictions. First, the really easy one:  I assured you, as did most of my colleagues, that 2012 would be the Year of the Higgs — at least, the Year of the Simplest Possible Higgs particle, called the “Standard Model Higgs”. It would be the year when Phase 1 of the Higgs Search would end — when we’d either find a Higgs particle of Standard Model type (or something looking vaguely like it), or, if not, we’d know we’d have to move to a more aggressive search in Phase 2, in which we’d look for more complicated versions of the Higgs particle that would have been much harder to find. We started the year with ambiguous hints of the Higgs particle, too flimsy to be sure of, but certainly tantalizing, at around a mass of 125 GeV/c2. In July the hints turned into a discovery — somewhat faster than expected for a Standard Model Higgs particle, because the rate for this particle to appear in collisions that produce two photons was higher than anticipated. The excess in the photon signal means either the probability for the Higgs particle to decay to photons is larger than predicted for a Higgs of Standard Model type, or both CMS and ATLAS experienced a fortunate statistical fluctuation that made the discovery easier. We still don’t know which it was; though we’ll know more by March, this ambiguity may remain with us until 2015.

One prophecy I made all the way back at the beginning of this blog, July 2011, was that the earliest search strategy for the Higgs, through its decays to a lepton, anti-lepton, neutrino and anti-neutrino, wouldn’t end up being crucial in the discovery; it was just too difficult. (In this experimental context, “lepton” refers only to “electron” or “muon”; taus don’t count, for technical reasons.) In the end, I said, it would be decays of the Higgs to two photons and to two lepton/anti-lepton pairs that would be the critical ones, because they would provide a clean signal that would be uncontroversial. And that prophesy was correct; the photon-based and lepton-based searches were the signals that led to discovery.

Now we’ve reached December, and the data seems to imply that except possibly for this overabundance of photons, which still tantalizes us, the various measurements of how the Higgs-like particle is produced and decays are starting to agree, to a precision which is still only moderate, with the predictions of the Standard Model for a Higgs of this mass. Fewer and fewer experts are still suggesting that this is not a Higgs particle. But it will be some years yet — 2018 or later — before measurements are precise enough to start convincing people that this Higgs particle is really of Standard Model type. Many variants of the Standard Model, with new particles and forces, predict that the difference of the real Higgs from a Standard Model Higgs may be subtle, with deviations at the ten percent level or even less. Meanwhile, other Higgs-like particles, with different masses and different properties, might be hiding in the data, and it may take quite a while to track them down. Many years of data collecting and data analysis lie ahead, in Phase 2 of the Higgs search.

Another prophecy I made at the beginning of the year was that Exotic Decays of the Higgs would be a high priority for 2012. You might think this prophesy was wrong, because in fact, so far, there have been very few searches at ATLAS, CMS and LHCb for such decays. But the challenge that required prioritizing these decays wasn’t data analysis; it was the problem of even collecting the data. The problem is that many exotic decays of the Higgs would lead to events that might not be selected by the all-important trigger system that determines which tiny fraction of the LHC’s collisions to store permanently for analysis! At the beginning of 2012 there was a risk that some of these processes would have been dumped by the trigger and irretrievably lost from the 2012 data, making future searches for such decays impossible or greatly degraded. At a hadron collider like the LHC, you have to think ahead! If you don’t consider carefully the analyses you’ll want to do a year or two from now, you may not set the trigger properly today. So although the priority for data analysis in 2012 was to find the Higgs particle and measure its bread-and-butter properties, the fact that the Higgs has come out looking more or less Standard Model-like in 2012 means that focusing on exotic possibilities, including exotic decays, will be one of the obvious places to look for something new, and thus a very high priority for data analysis, in 2013 and 2014. And that’s why, for the trigger — for the collection of the data — exotic decays were a very high priority for 2012. Indeed, one significant use of the new strategy of delayed data streaming at ATLAS and of data parking at CMS (two names for the same thing) was to address this priority. [My participation in this effort, working with experimentalists and with several young theorists, was my most rewarding project of 2012.]  
As I explained to you, a Higgs particle with a low mass, such as 125 GeV/c2, is very sensitive to the presence of new particles and forces that are otherwise very difficult to detect, and it easily could exhibit one or more types of exotic decays.  So there will be a lot of effort put into looking for signs of exotic decays in 2013 and 2014! I’m very excited about all the work that lies ahead of us.

Now, the prophecy I’d like to make, but cannot — because I do not have any special insight into the answer — is on the question of whether the LHC will make great new discoveries in the future, or whether the LHC has already made its last discovery: a Higgs particle of Standard Model type. Even if the latter is the case, we will need years of data from the LHC in order to distinguish these two possibilities; there’s no way for us to guess. It’s clear that Nature’s holding secrets from us.  We know the Standard Model (the equations we use to describe all the known particles and forces) is not a complete theory of nature, because it doesn’t explain things like dark matter (hey, were dark matter particles perhaps discovered in 2012?), and it doesn’t tell us why, for example, there are six types of quarks, or why the heaviest quark has a mass that is more than 10,000 times larger than the mass of the lightest quarks, etc. What we don’t know is whether the answers to those secrets are accessible to the LHC; does it have enough energy per collision, and enough collisions, for the job?  The only way to find out is to run the LHC, and to dig thoroughly through its data for any sign of anything amiss with the predictions of the Standard Model. This is very hard work, and it will take the rest of the decade (but not until the end of the world.)

In the meantime, please do not fret about the quiet in the tunnel outside Geneva, Switzerland. The LHC will be back, bigger and better (well, at least with more energy per collision) in 2015. And while we wait during the two year shutdown, the experimentalists at ATLAS, CMS, and LHCb will be hard at work, producing many new results from the 2011 and 2012 proton collision data! Even the experiments CDF and DZero from the terminated Tevatron are still writing new papers. In short, fear not: not only isn’t the December solstice of 2012 the end of the world, it doesn’t even signal a temporary stop to the news about the Higgs particle!


One last personal note (just for those with some interest in my future.)

The First Human-Created Higgs-Like Particle: 1988 or 89, at the Tevatron

Yesterday’s Quiz Question: when was the first Higgs particle produced by humans? (where admittedly “Higgs” should have read “Higgs-like”) got many answers, but not the one I think is correct. Here’s what I believe is the answer.


[UPDATE: After this post was written, but before it went live, commenter bobathon got the right answer — at 6:30 Eastern, just under the wire! Well done!]

The first human-produced Higgs particle [more precisely, the Higgs-like particle with a mass of about 125 GeV/c2 whose discovery was reported earlier this month, and which I’ll refer to as “`H”– but I’ve told you why I think it is a Higgs of some sort] was almost certainly created in the United States, at the Fermilab National Accelerator Center outside Chicago. Back in 1988 and 1989, Fermilab’s accelerator called the Tevatron created collisions within the then-new CDF experiment, during the often forgotten but very important “Run Zero”.  The energy per collision, and the total data collected, were just enough to make it nearly certain that an H particle was created during this run.

Run Zero, though short, was important because it allowed CDF to prove that precision mass measurements were possible at a proton collider.  They made a measurement of the Z particle’s mass that almost rivaled the one made simultaneously at the SLC electron-positron collider.  This surprised nearly everyone. [Unfortunately I was out of town and missed the scene of disbelief, back in 1989, when CDF dropped this bombshell during a conference at SLAC, the SLC’s host laboratory.] Nowadays we take it for granted that the best measurement of the W particle’s mass comes from the Tevatron experiments, and that the Large Hadron Collider [LHC] experiments will measure the H particle’s mass to better than half a percent — but up until Run Zero it was widely assumed to be impossible to make measurements of such quality in the messy environment of collisions that involve protons.

Anyway, it is truly astonishing that we have to go back to 1988-1989 for the first artificially produced Higgs(-like) particle!! I was a first-year graduate student, and had just learned what Higgs particles were; precision measurements of the Z particle were just getting started, and the top quark hadn’t been found yet. It took 23 years to make enough of these Higgs(-like) particles to convince ourselves that they were there, using the power of the CERN laboratory’s Large Hadron Collider [LHC]!

[Perhaps this remarkable history will help you understand why I keep saying that although the LHC experiments haven’t yet found something unexpected in their data, that absolutely doesn’t mean that nothing unexpected is there. What’s new just may be hard to see, waiting to be noticed with more sophisticated methods and/or more data.] Continue reading

Taking Stock: Where is the Higgs Search Now?

Today, we got new information at the Moriond conference on the search for the Higgs particle (in particular, Phase 1 of the search, which involves the search for the simplest possible Higgs particle, called the “Standard Model Higgs”) from the Tevatron and the Large Hadron Collider [LHC], the Tevatron’s successor.  With those results in hand, and having had a little time to mull them over, let me give you a short summary.  If you want more details, read today’s earlier post and yesterday’s preparatory post.

Before I do that, let me make a remark.  There is a big difference between healthy skepticism and political denialism.  I get the impression that some people who are writing or reading other blogs misinterpret my caution with regard to experimental results as being somehow a political and unreasonable bias against the Higgs particle being present, either at a mass of 125 GeV/c2 or at all.  That’s ridiculous.  All that is going on is that I simply am not convinced yet by the data.  I’m a careful scientist… period.  And you’ll see that I’m consistent; later in this post I will advise you not to over-react negatively to what ATLAS didn’t see.

What happened today at the Moriond conference?

What did we learn?

The Tevatron experiments see a combined 2.2 standard deviation [2.2 “sigma”] excess in their search, consistent with a Standard Model Higgs particle with a mass anywhere in the range of 115 to 135 GeV/c2.  This is not inconsistent with the Higgs hints that we saw in December from the LHC experiments.  Here I am being perhaps overly careful in not saying, more positively, “it is consistent with the Higgs hints…” only because this measurement is intrinsically too crude to allow us to narrow in on 124-126 GeV, where ATLAS and CMS see their hints.  In short, the Tevatron measurement could, in the end, turn out to indicate a Higgs at a different mass than the one indicated by the current ATLAS and CMS hints.  Anyway, it’s a minor and mostly a semantic point.

The results from ATLAS were a bit of a shock.  In all three processes on which ATLAS reported, CMS has presented results already, and in each case CMS saw a small excess (1 standard deviation [1″sigma”], which is  small indeed.)  But ATLAS reported today that it sees essentially no excess in any of the three, and even a deficit in one of them for low mass.  This has a big effect.

  • First, it allows ATLAS to exclude a Standard Model Higgs all the way up to 122 GeV/c2 (except for a little window 1 GeV/c2 wide centered at 118) and down to 129 GeV/c2.  The only large window left for the Standard Model Higgs particle is 122-129, more or less centered around the hint at 126 GeV/c2 that they saw in December.
  • But second, the significance of the December hint, when combined with the new data that shows no excesses in these three new processes, drops by about a full standard deviation.  That’s a pretty big drop.

What does it all mean?

I think it basically means, roughly, status quo.  We got some positive information and some negative information today, and none of it is that easy to interpret.  So I think we are roughly where we were before, except that we probably no longer have to worry about any Standard Model Higgs below 122 GeV/c2.  Before today we had a decent hint of a Standard Model-like Higgs particle with a mass around 125 GeV/c2; we still have it.  Let me explain what I mean.

There are easy (relatively!) searches for the Higgs, and there are hard ones.  The easy searches are the ones where the backgrounds are relatively simple and the signal is a narrow peak on a plot.  There are two:

  1. Higgs decaying to  photons
  2. Higgs decaying to two lepton/anti-lepton pairs (often called “four leptons” for short)

Results on these were presented by both ATLAS and CMS back in December.  The hard searches are the ones where the backgrounds are rather complicated and the signal is quite broad, so that a mistake in estimating a background can either create a fake signal or hide a real one.    There are three (mainly) for a lightweight Higgs:

  1. Higgs decaying to a lepton, an anti-lepton, a neutrino and an anti-neutrino
  2. Higgs decaying to a tau lepton/anti-lepton pair
  3. Higgs decaying to a bottom quark/anti-quark pair

These are the three that ATLAS reported on today (where they saw no sign of a Higgs signal), and that CMS presented back in December (and saw a small excess in all three.)  [ATLAS presented a result on the first one in December, but only using part of their data; it showed a small excess at the time, but not now.]  The third process is the main one in which CDF and DZero reported an excess today, though the first one also plays a role in interpreting that excess.

In other words, everything we learned today had to do with the difficult searches — the ones that are hard to perform, hard to interpret, and hard to check.  And everything we learned was 1 or 2 sigma information; not very compelling even statistically.

For this reason,

  • I would not conclude that the new Tevatron results make the 125 GeV Higgs case much stronger
  • I would not conclude that the new ATLAS results make the 125 GeV Higgs case much weaker

For the same reason, when I explained why I was skeptical of the evidence back in December, I told you that in my view the CMS excesses in the difficult searches did not make the case for a 125 GeV Higgs much more compelling.  Since the easy searches at CMS do not show as large excesses as ATLAS’s do, I wasn’t really comfortable with the whole case from CMS.   Their case improved in January, when they added a bit more information from their easy search for two photons.

If, like me, you discount the difficult Higgs searches somewhat relative to the easy Higgs ones, then almost nothing has changed, as far as the current Higgs hints, after today’s up and down information.  The excess in the two easy searches at ATLAS is still there, and there are excesses at CMS at least in the two-photon search.  Even from the beginning, I gave you good reasons to think the ATLAS’s easy-search excesses were a bit larger than they should be, probably due to an upward statistical fluctuation in the background.    Conversely I think now that one should not overstate how bad today’s ATLAS news is for the Higgs hints.  It’s still quite reasonable to think there may be a Standard Model Higgs there at 125 GeV/c2.  There’s some evidence in its favor, and it’s certainly not ruled out at this point. (Whereas now, almost all other masses are.)

So as usual I advise patience and calm and no hyperventilating; the 2012 data will settle the issue.  Either there is a Standard Model Higgs with a mass within a few percent of 125 GeV/c2 , or we’ll soon be fanning out in Phase 2 of the Higgs search, looking for all the other types of Higgs particles that might be out there.

Higgs Results from The First Week of the Moriond Conference

[UPDATE: Tevatron results start a few paragraphs down; LHC results will appear soon]

[2nd UPDATE: ATLAS  new results added: the big unexpected news.   As far as I can tell CMS, which got its results out much earlier in the year, didn’t add anything very new in its talk today.]

[3rd UPDATE: some figures from the talks added]

[4th UPDATE: more understanding of the ATLAS lack of excesses in new channels, and what it does to the overall excess at 125 GeV; reduction in local significance from about 3.5 sigma to about 2.5, and with look-elsewhere effect, now the probability the whole thing is an accident is 10%, not 1%.  Thanks to a comment for pointing out how large the effect was.]

This morning there are were several talks about the Higgs at the Moriond Electroweak conference.  There will be were talks coming from the Tevatron experiments CDF and DZero; we expected new results on the search for the Higgs particle from each experiment separately, and combined together.  There were also talks from the Large Hadron Collider [LHC] experiments CMS and ATLAS.  It wasn’t widely known how much new we’d see; they don’t have any more data than they had in December, since the LHC has been on winter shut-down since then, but ATLAS especially still hasn’t presented all of the results based on its 2011 data, so they may present new information.  The expectation was that the impact of today’s new results would be incremental; whatever we learned today wouldn’t dramatically change the situation.  The Tevatron results will certainly cause a minor ruckus, though, because there will surely be controversy about them, by their very nature.  I gave you a sense for that yesterday.  They aren’t likely to convince doubters.  But they might provide more pieces of evidence in favor of a lightweight Higgs (though not necessarily at the value of around 125 GeV/c2 currently preferred by ATLAS and CMS; see below.)

There are two things I didn’t explain yesterday that are probably worth knowing about.

First, if you look at Figure 2 in my post from yesterday, you’ll notice that the shape of the Higgs signal at the Tevatron experiments is very broad.  It doesn’t have a nice sharp peak at the mass of the Higgs (115 GeV in the figure).  This is because (as I discussed yesterday) it is hard to measure jets very precisely.  For this reason CDF and DZero will be able to address the question: “is there or is there not a lightweight Higgs-like particle”, but they will not easily be able to address the question “is its mass 115 GeV, 120 GeV, 125 GeV or 130 GeV?” very well.  So we’re really talking about them addressing something only slightly beyond a Yes-No question — and one which requires them to understand their backgrounds really well.  This is to be contrasted with the two-photon and four-lepton results from ATLAS and CMS, which with more data are the only measurements, in my view, that can really hope to establish a signal of a Higgs particle in a completely convincing way.  These are the only measurements that will see something that could not be mimicked by a mis-estimated background.

Second, the key to the CDF and DZero measurements is being able to identify jets that come from a bottom quark or anti-quark — a technique which is called “b-tagging the jets” — because, as I described yesterday, they are looking for Higgs decays to a bottom quark and a bottom antiquark, so they want to keep events that have two b-tagged jets and throw away others.  I have finished a new short article that explains the basic principles are behind b-tagging, so you can get an idea of what the experimenters are actually doing to enhance the Higgs signal and reduce their backgrounds.  Now b-tagging is never perfect; you will miss some jets from bottom quarks, and accidentally pick up some that don’t come from bottom quarks.  But one part of making the Tevatron measurement  involves making their b-tagging techniques better and better.  CDF, at least, has already claimed in public that they’ve done this.

Will update this after information becomes available and when time permits.

UPDATES: New Tevatron Results and New ATLAS Results

New Tevatron Results

Tevatron claims a lightweight Higgs; to be precise, the combination of the two experiments CDF and DZero is incompatible with the absence of a lightweight Higgs at 2.2 standard deviations (or “sigmas”), after the look elsewhere effect.  CDF sees a larger effect than DZero; but the CDF data analysis method seems more aggressive.   But both methods are far too complicated for me to evaluate.

The combination of DZero and CDF results from the Tevatron shows that their observed limit on the Higgs production rate as a function of its mass (solid line) lies about two sigma above the expected limit in the absence of any Higgs (dashed line) indicating an excess of events that appears consistent with a Higgs signal roughly in the 115-135 GeV mass range. By itself this result is not confidence-inspiring, but it does add weight to what we know from ATLAS and CMS at the LHC.

2.2 sigma is not much, and excesses of this size come and go all the time.  We even saw that several times this past year. But you can certainly view today’s result from the Tevatron experiments as another step forward toward a convincing case, when you combine it with what ATLAS and CMS currently see.  At minimum, assuming that the Higgs particle is of Standard Model type (the simplest possible type of Higgs particle), what CDF and DZero claim is certainly consistent with the moderate evidence that ATLAS and CMS are observing.  

There’s more content in that statement than you might think.  For example, if there were two Higgs particles, rather than one, the rate for the process CDF and DZero are measuring could easily be reduced somewhat relative to the Standard Model.  In this case they wouldn’t have found even the hint they’ve got.  (I explained why yesterday, toward the end of the post.)  Meanwhile the process that ATLAS and CMS are measuring might not be reduced in such a scenario, and could even be larger — so it would certainly be possible, if there were a non-Standard-Model-like Higgs at 125 GeV, for ATLAS and CMS to see some evidence, and CDF and DZero to see none.  That has not happened.  If you take the CDF and DZero hint seriously, it points — vaguely — toward a lightweight Standard-Model-like Higgs.  Or more accurately, it does not point away from a lightweight Standard-Model-like Higgs.

However, we do have to keep in mind that, as I noted, CDF and DZero can only say the Higgs mass seems as though it might be in the range 115 to 135 GeV; they cannot nail it down better than that, using their methods, for the reasons I explained earlier.  So their result is consistent with a Standard Model Higgs particle  at 125 GeV, which would agree with the hints at ATLAS and CMS, but it is also consistent with one at 120 GeV, which would not agree.   Thus Tevatron bolsters the case for a lightweight Higgs, but would be consistent both with the current hints at LHC and with other parts of the range that the LHC experiments have not yet excluded.  If the current ATLAS and CMS hints went away with more data, the Tevatron results might still be correct, and in that case ATLAS and CMS would start  seeing hints at a different mass.

But given what ATLAS and CMS see: the evidence from December, and the step forward in January with the CMS update in their two-photon data, something around 125 GeV remains the most likely value mass for a Standard Model Higgs.  The issue cannot be considered settled yet, but so far nothing has gotten in the way of this hypothesis.

Now, the inevitable caveats.

First, as with any measurement, these results cannot automatically be assumed to be correct; indeed most small excesses go away when more data is accumulated, either because they are statistical fluctuations or because of errors that get tracked down — but unfortunately we will not get any more data from the now-closed Tevatron to see if that will happen.  The plausibility of Tevatron’s claims needs to be evaluated, and (in contrast to the two photon and four lepton results from ATLAS and CMS, which are relatively straightforward to understand) this won’t be easy or uncontroversial.  The CDF and DZero people did a very fancy analysis with all sorts of clever tricks, which has the advantage that it makes the measurement much more powerful, but the disadvantage of making it obscure to those who didn’t perform it.

One other caveat is that we will have to be a little cautious literally combining results from Tevatron with those from the LHC.  There’s no sense in which [this statement is factually incorrect as stated, as commenters from CDF are pointing out; there are indeed several senses in which it was done blind.  I should have been more precise about what was meant, which was more of a general knowledge of how difficult it is to avoid bias in determining the backgrounds for this measurement.  Let me add that this is not meant to suggest anything about CDF, or DZero, in particular; doing any measurement of this type is extraordinarily difficult, and those who did it deserve applause.  But they’re still human.] the Tevatron result was done `blind’; it was done with full knowledge that LHC already has a hint at 125,  and since the Tevatron is closed and all its data is final, this is Tevatron’s last chance (essentially) to contribute to the Higgs particle search.  Combining experiments is fine if they are truly independent; if they are not, you are at risk of bolstering what you believe because you believe it, rather than because nature says it.

New ATLAS results 

ATLAS has now almost caught up with CMS, in that its searches for Higgs particles decaying to two photons and to two lepton/anti-lepton pairs (or “four leptons” for short) have now been supplemented by (preliminary! i.e., not yet publication-ready) results in searches for Higgs particles decaying to

  • a lepton, anti-lepton, neutrino and anti-neutrino
  • a tau lepton/anti-lepton pair
  • a bottom quark/anti-quark pair (which is what CDF and DZero looked for too)

(The only analysis ATLAS is missing is the one that CMS added in January, separating out events with two photons along with two jets.) In contrast to the CMS experiment, which found small excesses (just 1 sigma) above expectation in each of these three channels, ATLAS does not.  [And I’ve been reminded to point out that the first channel has changed; in December, with 40% of the data analyzed, there was a small excess.] So CDF and DZero’s results from today take us a step forward toward a convincing case, while ATLAS’s result takes us a small step backward.  That’s par for the course in science when you’re squinting to see something that’s barely visible.

In the same search as performed by CDF and DZero, and in the same region where they see an excess, ATLAS sees no excess at all; but ATLAS has less data and is currently less sensitive to this channel than CDF and DZero, so there is no clear contradiction.

But one can’t get too excited about this.  Statistics are still so low in these measurements that it would be easy for this to happen.  And determining the backgrounds in these measurements is tough.  If you make a mistake in a background estimation, you could make a small excess appear where there really isn’t one, or you could make a real excess disappear.  It cuts both ways.

But actually there is a really important result coming out of ATLAS today; it is the deficit of events in the search for the Higgs decaying to a tau lepton/anti-lepton pair.  For a putative Higgs below 120 GeV, ATLAS sees even fewer tau lepton/anti-lepton events than it expected from pure background — in other words, the background appears to have fluctuated low.  But this means there is not likely to be a Standard Model-like Higgs signal there, because the likelihood that the background plus a Higgs signal would have fluctuated very low is small.  [UPDATE: actually, looking again, I think I am somewhat overstating the importance of this deficit in taus compared to the lack of excess in the other two channels, which is also important. To be quantitative about this would require more information.  In any case, the conclusion is the same.]  And this allows ATLAS to exclude new regions in the mass range for the Standard Model Higgs, at 95% confidence!

This is very important!  One of the things that I have complained about with regard to those who've overplayed the December Higgs hints is that you can't really say that the evidence for a Higgs around 125 GeV is good if you can't start excluding both above and below that mass.  Well, ATLAS has started to do that.  Granted, it isn't 99% exclusion, and since this is the Higgs we're talking about, we need high standards.  But at 95% confidence, ATLAS now excludes, for a Standard Model Higgs, 110-117.5, 118.5-122.5, and 129-539 GeV.  Said better, if there is a Standard Model Higgs in nature, ATLAS alone restricts it (to 95% confidence only, however) to the range 117.5 – 118.5 GeV or 122.5 – 129 GeV.
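The allowed windows quoted here are nothing more than the complement of the excluded intervals within the full search range. A minimal sketch of that interval arithmetic (this is just bookkeeping, not anything from the actual ATLAS analysis):

```python
def allowed_windows(full_range, excluded):
    """Return the sub-ranges of full_range not covered by any excluded interval."""
    lo, hi = full_range
    windows = []
    for a, b in sorted(excluded):
        if a > lo:
            windows.append((lo, a))
        lo = max(lo, b)
    if lo < hi:
        windows.append((lo, hi))
    return windows

# ATLAS 95%-confidence exclusions for a Standard Model Higgs, in GeV
excluded = [(110, 117.5), (118.5, 122.5), (129, 539)]
print(allowed_windows((110, 539), excluded))
# [(117.5, 118.5), (122.5, 129)]
```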

ATLAS, just from its own data alone, excludes (pink-shaded regions) the Standard Model Higgs particle at 95% confidence (but not yet at 99%) across the entire allowed range except around 118 GeV and between 122 and 129 GeV, where the two-photon and four-lepton searches provide some positive evidence. What is shown is how large a Higgs signal can be excluded, in units of the Standard Model expectation, as a function of the Higgs mass. Anywhere the solid line dips below the dotted line marked "1" is a place where the Standard Model is 95% excluded. The red dotted line indicates how well this experiment would perform, on average, if there were no Standard Model Higgs signal.

The window is closing.  Not only has ATLAS completely excluded the old hints of a Standard Model Higgs at 115 GeV from the LEP collider, it seems it has probably excluded CMS's hint around 120 GeV, which was the next best option for the Higgs after 125.  And as far as I can tell, this is coming mainly from the tau lepton/anti-lepton measurement.  [As I said above in an update, I think it is really a mix of all three channels; it's hard to be quantitative about that without talking to the experts.]

So if the Standard Model Higgs is what nature has to offer us, we're probably down to a tiny little slice around 118 GeV for which there's no evidence, and a window with 125 GeV smack in the middle of it, where the evidence, if we include both the Tevatron and ATLAS, is not much stronger today than it was yesterday, but is certainly no weaker.

UPDATE: Well, it’s been pointed out to me by the first commenter that the last statement is misleading, because it doesn’t emphasize how the ATLAS excess at 126 GeV has decreased substantially  in significance. Somehow I thought originally that the decrease was marginal. But it isn’t.

The statistics numbers as I think I have them now: What was previously about 3.5 sigma local significance for the Higgs-like peak at 125 GeV is now down to 2.5, and what seemed only 1% likely in December to be a fluctuation is now 10% likely.
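For reference, the translation from "sigma" to the probability of a background fluctuation at a fixed mass (the local significance; the 1% and 10% figures quoted above are global probabilities, which also fold in the look-elsewhere effect) is just a one-sided Gaussian tail integral. A quick sketch:

```python
from math import erfc, sqrt

def local_p_value(n_sigma):
    """One-sided Gaussian tail probability for an excess of n_sigma."""
    return 0.5 * erfc(n_sigma / sqrt(2))

# December's 3.5-sigma local peak vs today's 2.5 sigma
print(round(local_p_value(3.5), 6))   # about 0.000233
print(round(local_p_value(2.5), 4))   # about 0.0062
```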

There is an issue, however, with combining many measurements.  Of course the two-photon and four-lepton results from ATLAS are the same as before, and they are just as significant; nothing changed.  But the other three measurements came in low, and that pulls the significance of the combination down.  However, I must remind you again how difficult the last three measurements are.  I would trust the first two before the last three.  So I think we should be careful not to overinterpret this change.   When you combine what you trust most with what you trust least, you reduce your confidence in what you have.

That said, it also indicates why one should be very cautious with small amounts of data.

Comparison of the December ATLAS results (left), combining all measurements that were available at the time, with the March 2012 ATLAS results (right). I've lined them up as best I could, given the scales were slightly different. What is shown is how large a Higgs signal can be excluded, in units of the Standard Model expectation, as a function of the Higgs mass. Anywhere the solid line dips below the dotted line marked "1" is a place where the Standard Model is 95% excluded. Compared to December, there is much more excluded and the height of the peak at 126 GeV is noticeably lower.

Awaiting Higgs News from the Tevatron Experiments

The search for the Higgs particle has been dominated recently by the new kids on the block, the ATLAS and CMS experiments at the Large Hadron Collider [LHC], who benefit from the LHC's record high energy per collision. But at its predecessor, the now-closed Tevatron, the CDF and DZero experiments still have a few tricks up their sleeves. Though the energy per collision in recent years at the Tevatron was 3.5 times smaller than the LHC's in 2011, CDF and DZero have twice as much data as do ATLAS and CMS right now. And there's one more thing going for them. In contrast to the LHC, where protons collide with protons, at the Tevatron protons collided with antiprotons. That gives the Tevatron a little edge in one particular search mode for the Higgs. It won't be enough to beat the LHC at the game for which it was designed, but it's enough that the Tevatron experiments can at least play. And we'll see results from the two experiments tomorrow (Wednesday) — with a preview already publicly available, as you'll see below.

News from La Thuile, with Much More to Come

At various conferences in the late fall, the Large Hadron Collider [LHC] experiments ATLAS and CMS showed us many measurements that they made using data they took in spring and summer of 2011. But during the fall their data sets increased in size by a factor of two and a half!  So far this year the only results we’d seen that involved the 2011 full data set had been ones needed in the search for the Higgs particle. Last week, that started to change.

The spring flood is just beginning. Many new experimental results from the LHC were announced at La Thuile this past week, some only using part of the 2011 data but a few using all of it, and more and more will be coming every day for the next couple of weeks. And there are also new results coming from the (now-closed) Tevatron experiments CDF and DZero, which are completing many analyses that use their full data set. In particular, we’re expecting them to report on their best crack at the Higgs particle later this week. They can only hope to create controversy; they certainly won’t be able to settle the issue as to whether there is or isn’t a Higgs particle with a mass of about 125 GeV/c2, as hints from ATLAS and CMS seem to indicate.  But all indications are that it will be an interesting week on the Higgs front.

The Top Quark Checks In

Fig. 1: In the Higgs mechanism, the W particle gets its mass from the non-zero average value of the Higgs field. A precise test of this idea arises as follows. When the top quark decays to a bottom quark and a W particle, and the W then decays to an anti-neutrino and an electron or muon, the probability that the electron or muon travels in a particular direction can be predicted assuming the Higgs mechanism. The data above shows excellent agreement between theory and experiment, validating the notion of the Higgs field.

There are now many new measurements of the properties of the top quark, poking and prodding it from all sides (figuratively)  to see if it behaves as expected within the “Standard Model of particle physics” [the equations that we use to describe all of the known particles and forces of nature.] And so far, disappointingly for those of us hoping for clues as to why the top quark is so much heavier than the other quarks, there’s no sign of anything amiss with those equations. Top quarks and anti-quarks are produced in pairs more or less as expected, with the expected rate, and moving in the expected directions with the expected amount of energy. Top quark decay to a W particle and a bottom quark also agrees, in detail, with theoretical expectation.  Specifically (see Figure 1) the orientation of the W’s intrinsic angular momentum (called its “spin”, technically), a key test of the Standard Model in general and of the Higgs mechanism in particular, agrees very well with theoretical predictions.  Meanwhile there’s no sign that there are unexpected ways of producing top quarks, nor any sign of particles that are heavy cousins of the top quark.

One particularly striking result from CMS relates to the unexpectedly large asymmetry in the production of top quarks observed at the Tevatron experiments, which I’ve previously written about in detail. The number of top quarks produced moving roughly in the same direction as the proton beam is expected theoretically to be only very slightly larger than the number moving roughly in the same direction as the anti-proton beam, but instead both CDF and DZero observe a much larger effect. This significant apparent discrepancy between their measurement and the prediction of the Standard Model has generated lots of interest and hope that perhaps we are seeing a crack in the Standard Model’s equations.

Well, it isn’t so easy for CMS and ATLAS to make the same measurement, because the LHC has two proton beams, so it is symmetric front-to-back, unlike the Tevatron with its proton beam and anti-proton beam.  But still, there are other related asymmetries that LHC experiments can measure. And CMS has now looked with its full 2011 data set, and observes… nothing: for a particular charge asymmetry that they can measure, they find an asymmetry of 0.4% ± 1.0% ± 1.2% (the first number is the best estimate and the latter two numbers are the statistical and systematic uncertainties on that estimate).  The Standard Model predicts something of order a percent or so, while many attempts to explain the Tevatron result might have predicted an effect of several percent.  (ATLAS has presented a similar measurement but only using part of the 2011 data set, so it has much larger uncertainties at present.)
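When a result is quoted with separate statistical and systematic uncertainties like this, the usual rough-and-ready practice is to add the two in quadrature (assuming they are independent) to get a total uncertainty, and then compare the central value to it. A sketch using the CMS numbers above:

```python
from math import hypot

asym, stat, syst = 0.4, 1.0, 1.2   # CMS charge asymmetry and uncertainties, in percent
total = hypot(stat, syst)          # quadrature sum, assuming independent uncertainties
print(round(total, 2))             # ~1.56, i.e. the result is 0.4% +/- 1.56% overall
print(round(asym / total, 2))      # ~0.26 sigma from zero: fully consistent with nothing
```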

Now CMS is not measuring quite the same thing as CDF and DZero, so the CMS result is not in direct conflict with the Tevatron measurements. But if new phenomena were present that were causing the CDF and DZero’s anomalously large asymmetry, we’d expect that by now they’d be starting to show up, at least a little bit, in this CMS measurement.  The fact that CMS sees not a hint of anything unexpected considerably weakens the overall case that the Tevatron excess asymmetry might have an exciting explanation. It suggests rather that the whole effect is really a problem with the interpretation of the Tevatron measurements themselves, or with the ways that the equations of the Standard Model are used to predict them. That is of course disappointing, but it is still far too early to declare the case closed.

There’s also a subtle connection here with the recent bolstering by CDF of the LHCb experiment’s claim that CP violation is present in the decays of particles called “D mesons”. (D mesons are hadrons containing a charm quark [or anti-quark], an up or down anti-quark [or quark], and [as for all hadrons] lots of additional gluons and quark/anti-quark pairs.) The problem is that theorists, who used to be quite sure that any such CP violation in D mesons would indicate the presence of new phenomena not predicted by the Standard Model, are no longer so sure. So one needs corroborating information from somewhere, showing some other related phenomenon, before getting too excited.

One place that such information might have come from is the top quark.  If there is something surprising in charm quarks (but not in bottom quarks) one might easily imagine that perhaps there is something new affecting all up-type quarks (the up quark, charm quark and top quark) more than the down-type quarks (down, strange and bottom.)  [Read here about the known elementary particles and how they are organized.] In other words, if the charm quark is different from expectations and the bottom quark is not, it would seem quite reasonable that the top quark would be even more different from expectations. But  unfortunately, the results from this week suggest the top quark, to the level of precision that can currently be mustered, is behaving very much as the Standard Model predicted it would.

Meanwhile Nothing Else Checks In

Meanwhile, in the direct search for new particles not predicted by the Standard Model, there were a number of new results from CMS and ATLAS at La Thuile. The talks on these subjects went flying by; there was far too little information presented to allow understanding of any details, and so without fully studying the corresponding papers I can’t say anything more intelligent yet than that they didn’t see anything amiss. But of course, as I’ve suggested many times, searches of this type wouldn’t be shown so soon after the data was collected if they indicated any discrepancy with theoretical prediction, unless the discrepancy was spectacularly convincing. More likely, they would be delayed a few weeks or even months, while they were double- and triple-checked, and perhaps even held back for more data to be collected to clarify the situation. So we are left with the question as to which of the other measurements that weren’t shown are appearing later because, well, some things take longer than others, and which ones (if any) are being actively held back because they are more … interesting. At this preliminary stage in the conference season it’s too early to start that guessing game.

Fig. 2: The search for a heavy particle that, like a Z particle, can decay to an electron/positron pair or a muon/anti-muon pair now excludes such particles to well over 1.5 TeV/c2. The Z particle itself is the bump at 90 GeV; any new particle would appear as a bump elsewhere in the plot. But above the Z mass, the data (black dots) show a smooth curve with no significant bumps.

So here are a few words about what ATLAS and CMS didn’t see. Several classic searches for supersymmetry and other theories that resemble it (in that they show signs of invisible particles, jets from high-energy quarks and gluons, and something rare like a lepton or two or a photon), were updated by CMS for the full or near-full data set. Searches for heavy versions of the top and bottom quark were shown by ATLAS and CMS. ATLAS sought heavy versions of the Z particle (see Figure 2) that decay to a high energy electron/positron pair or muon/anti-muon pair; with their full 2011 data set, they now exclude particles of this type up to masses (depending on the precise details of the particle) of 1.75-1.95 TeV/c2. Meanwhile CMS looked for heavy versions of the W particle that can decay to an electron or muon and something invisible; the exclusions reach out above 2.5 TeV/c2. Other CMS searches using the full data set included ones seeking new particles decaying to two Z particles, or to a W and a Z.  ATLAS looked for a variety of exotic particles, and CMS looked for events that are very energetic and produce many known particles at once.  Most of these searches were actually ones we’d seen before, just updated with more data, but a few of them were entirely new.

Two CMS searches worth noting involved looking for new undetectable particles recoiling against a single jet or a single photon. These put very interesting constraints on dark matter that are complementary to the searches that have been going on elsewhere, deep underground.  Using vats of liquid xenon or bubble chambers or solid-state devices, physicists have been looking for the very rare process in which a dark matter particle, one among the vast ocean of dark matter particles in which our galaxy is immersed, bumps into an atomic nucleus inside a detector and makes a tiny little signal for physicists to detect. Remarkable and successful as their search techniques are, there are two obvious contexts in which they work very poorly. If dark matter particles are very lightweight, much lighter than a few GeV/c2, the effect of one hitting a nucleus becomes very hard to detect. Or if the nature of the interaction of dark matter with ordinary matter is such that it depends on the spin (the intrinsic angular momentum) of a nucleus rather than on how many protons and neutrons the nucleus contains, then the probability of a collision becomes much, much lower. But in either case, as long as dark matter is affected by the weak nuclear force, the LHC can produce dark matter particles, and though ATLAS and CMS can’t detect them, they can detect particles that might sometimes recoil against them, such as a photon or a jet. So CMS was quite proud to show that their results are complementary to those other classes of experiments.

Fig. 3: Limits on dark matter candidates that feel the weak nuclear force and can interact with ordinary matter. The horizontal axis gives the dark matter particle's mass, the vertical axis its probability to hit a proton or neutron. The region above each curve is excluded. All curves shown other than those marked "CMS" are from underground experiments searching for dark matter particles hitting an atomic nucleus. CMS searches for a jet or a photon recoiling against something undetectable provide (left) the best limits on "spin-independent" interactions for masses below 3.5 GeV/c2, and (right) the best limits on "spin-dependent" interactions for all masses up to a TeV/c2.

Finally, I made a moderately big deal back in October about a small excess in multi-leptons (collisions that produce three or more electrons, muons, positrons [anti-electrons] or antimuons, which are a good place to look for new phenomena), though I warned you in bold red letters that most small excesses go away with more data. A snippet of an update was shown at La Thuile, and from what I said earlier about results that appear early in the conference season, you know that’s bad news. Suffice it to say that although discrepancies with theoretical predictions remain, the ones seen in October apparently haven’t become more striking. The caveat that most small excesses go away applies, so far, to this data set as well. We’ll keep watching.

Fig. 4: The updated multilepton search at CMS shows (black solid curve) a two standard deviation excess compared to expectations (black dotted curve) in at least some regimes in the plane of the gluino mass (vertical axis) versus the chargino mass (horizontal axis) in a particular class of models. But had last fall's excess been a sign of new physics, the current excess would presumably have been larger.

Stay tuned for much more in the coming weeks!

LHCb’s Result From November Appears Confirmed by CDF

Back in November, I described a surprising result from the LHCb experiment at the Large Hadron Collider [LHC] concerning “CP violation” in the decays of particles called “D mesons” (which are hadrons that contain an unpaired charm quark or an unpaired charm antiquark) at a level much larger than expected by theorists.  Rather than rehashing the explanation for what that was all about, I’m going to point you to what I wrote in November.

There’s news this week that the CDF experiment at the now-closed Tevatron has updated its result for a measurement of the same quantity, using the full CDF data set.  And they now find a very similar result to what LHCb found.   This is indicated on the slide shown below, taken from the talk given by Angelo Di Canto at the La Thuile conference (but edited by me to fix a big typo; I hope he does not mind.)  You see that while LHCb found a CP-violating asymmetry of -0.82%, with statistical and systematic uncertainties of 0.21% and 0.11%, CDF finds -0.62%, with almost identical uncertainties — a little closer to zero, but still well away from it.

A slide from the CDF presentation on its measurement of CP violation in D meson decays (with an edit by me to fix a glaring typo.) The CDF result is in orange at the top; the LHCb result is in black just below it. In the figure, the LHCb result is in blue, the CDF result in orange, and the traditional expectation for the Standard Model is very close to the point (0,0), the isolated red dot at dead center.

This lends support to LHCb’s result, and putting the two results (and a couple of other weaker ones) together makes their combination discrepant from zero by about 3.8 standard deviations.  That’s great, but not as great as it would have been if what theorists thought a few years ago was still considered reliable.  Back then, the relevant experts (and I should emphasize I am not one of them) would have told you that they were pretty darn certain that the Standard Model [the equations we use to describe the known particles and forces] could not produce CP violation of this type, and any observation of a non-zero signal would imply the existence of previously unknown particles.  But the experts  have been backing away from this point of view recently, worrying that maybe they know less about how to calculate this in the Standard Model than they used to think.  If we’re to be sure this is really a sign of new particles in nature, and not just a sign that theorists have trouble predicting this quantity, we’re going to need additional evidence from another quarter.  And so far, we haven’t got any.
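To get a rough sense of where a combined significance like this comes from, one can do a naive inverse-variance-weighted average of the LHCb and CDF numbers, adding each experiment's statistical and systematic uncertainties in quadrature and treating the two results as independent. (As an assumption, CDF's uncertainties are set equal to LHCb's here, since the slide describes them as almost identical; the official combination also folds in the weaker measurements and handles correlations properly, so it comes out a bit lower, at about 3.8 sigma.)

```python
from math import sqrt

def combine(measurements):
    """Naive inverse-variance-weighted average of (value, uncertainty) pairs."""
    weights = [1.0 / u**2 for _, u in measurements]
    mean = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
    return mean, sqrt(1.0 / sum(weights))

lhcb = (-0.82, sqrt(0.21**2 + 0.11**2))  # asymmetry and total uncertainty, in percent
cdf  = (-0.62, sqrt(0.21**2 + 0.11**2))  # CDF uncertainties assumed equal to LHCb's
mean, unc = combine([lhcb, cdf])
print(round(mean, 2), round(abs(mean) / unc, 1))  # -0.72, ~4.3 sigma by this naive estimate
```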