Tag Archives: LHCb

Details Behind Last Week’s Supersymmetry Story

Last week, I promised you I’d fill in the details of my statement that the recent measurement (of the rare process in which a Bs meson decays to a muon and an anti-muon — read here for the physics behind this process) by the LHCb experiment at the Large Hadron Collider [LHC] had virtually no effect on the constraints on any speculative theories, including supersymmetry, contrary to the statements in the press and by a certain LHCb member. Today I’m providing you with some sources for this statement.

A number of my colleagues have tasked themselves with keeping track of how measurements at the Large Hadron Collider and elsewhere are affecting certain subclasses of variants of supersymmetry. They call themselves the “MasterCode Project”; here’s their website. They’re not the only ones looking at this, but among them is Professor Gino Isidori, whom I was talking to last week, so I’ve gotten this information from him. I quote from the MasterCode website regarding last week’s result from LHCb: “The new measurement provides a valuable new constraint on the supersymmetric parameter space, but the observation of a Standard Model-like branching fraction for the Bs→μ⁺μ⁻ decay is quite consistent with supersymmetry. In fact, a Standard Model-like branching fraction of this decay was expected in constrained supersymmetric models like the CMSSM or NUHM1 (see, e.g., the recent MasterCode results for further details). As a result, the favoured regions in the parameter space of these models do not change significantly after the inclusion of the new constraint.”

Now before I explain what this means, it’s important to have some terminology, running from most general to most specific.

Keep in mind that

  • ruling out the CMSSM or NUHM1 does not mean that the MSSM is ruled out;
  • ruling out the MSSM does not mean that supersymmetry at the TeV scale is ruled out;
  • ruling out supersymmetry at the TeV scale does not mean that supersymmetry is ruled out.

Among the many goals of the LHC is to find or rule out supersymmetry at the TeV scale. (It cannot hope to rule out supersymmetry altogether; that would presumably require a vastly more powerful collider that won’t likely be built for centuries, if ever.) It’s not enough to rule out the CMSSM, or the NUHM1, or even the MSSM. Similar statements apply for other speculative ideas that propose as yet unknown particles and forces; it’s not enough for the LHC to rule out just the simplest variants of these ideas.

Now if it turns out that supersymmetry is part of nature, rather few of my colleagues expect the variant we find to be contained within the CMSSM or NUHM1; and personally (though I’m probably in the minority) I have long doubted that it would be contained within the MSSM. Nevertheless, it is instructive to look at how LHC data is impacting the CMSSM and NUHM1 subclasses of supersymmetry variants. One must just be careful not to over-interpret; the exclusion of most variants in the CMSSM is not an indication that most variants of TeV-scale supersymmetry as a whole are excluded.

Now in this context, let’s see how the new measurement that was announced last week affects the CMSSM and the NUHM1. Figure 1 shows the allowed variants of the CMSSM and the NUHM1, as a function of two quantities: on the horizontal axis, MA, which if large is (to a good approximation) the mass of four of the five Higgs particles in the MSSM, and on the vertical axis, tan β, the ratio of the values of the two non-zero Higgs fields that are required in the MSSM. In solid red and solid blue are the one-standard-deviation and two-standard-deviation allowed regions after the new LHCb measurement is accounted for; any variant of the theory not sitting inside the blue region is excluded by the data. The dashed bands show the same thing before the new LHCb measurement. Since the dashed and solid blue bands are right on top of each other, you can see there’s almost no effect at all. That’s what was behind my claim last Friday.

Fig. 1: Constraints on the CMSSM (left) and NUHM1 (right) subclasses of supersymmetric theories, before and after the HCP conference of last week. The quantities on the horizontal and vertical axes are explained in the text. In both plots: solid red (blue) give the constraints at one (two) standard deviations; variants outside the blue curve are excluded. Dashed red (blue) are the same limits before the new LHCb measurement. Notice there is almost no change.

But please, don’t misinterpret what I’m saying (or my colleagues) as suggesting that the LHC’s data has had no impact on the list of possible variants of supersymmetry! Far from it! Many variants are excluded, and many popular (but not necessarily more likely) subclasses of variants of supersymmetry have been pushed into regions that many would consider corners. The only statement being made in Figure 1 is that the new LHCb measurement didn’t make these corners smaller. But to see how things have changed since before the LHC began, look at Figure 2, which shows how the LHC as a whole — all the measurements from LHCb, ATLAS and CMS taken together — has affected the CMSSM and NUHM1 since 2009. (The CMSSM and NUHM1 also make assumptions about where dark matter comes from, so even effects of the dark matter measurements from the XENON100 experiment are included here.)

Fig. 2: as in Figure 1, except that the dashed lines give the constraints on the CMSSM and NUHM1 before the LHC began taking data, and the solid line gives the constraints after the data taken through early summer 2011 was analyzed. Notice the scale on the horizontal axis is different from that of Figure 1.

Figure 2 is a plot similar to Figure 1 — but this time, solid blue and red indicate the impact of LHC data as of summer 2011, and the dashed blue and red indicate the situation before the LHC started. Now compare the dashed blue line in Figure 2 (before the LHC) with the solid blue line in Figure 1 (now); note the scale on the horizontal axis is different! You’ll see that in the CMSSM it was possible before the LHC to have MA as low as 350 GeV/c², but now it has to be over 900 GeV/c², which many would consider a rather high value. In the NUHM1 there’s been a similar shift from 150 to about 300 GeV/c², not yet so high but still a significant increase. Meanwhile, while almost any value of tan β from 2 up to 60 was allowed before the LHC, this number is now limited to a smaller range. For example, if MA were below 900 GeV/c², then the CMSSM would be excluded and the NUHM1 would be allowed only for tan β < 30 or so. This upper limit on tan β is mainly caused by the similar LHCb measurement presented back in March (and mentioned by me on Friday), and by similar ones from the CMS, CDF and ATLAS experiments.

But clearly there are plenty of variants within the NUHM1 that remain viable.  And the NUHM1 is not representative of the full range of possibilities within the MSSM, so even if the NUHM1 were excluded, we’d still have a long way to go to exclude the MSSM, much less all of TeV-scale supersymmetry. In short, it’s neither all nor nothing. Yes, a lot of progress has been made; LHC data (and data from other sources) have ruled out a lot of variants of TeV-scale supersymmetry.  But no, we’re not yet close to ruling out the full range of variants.

Please note that I’m not telling you this because I’m some devotee of supersymmetry who believes deeply in his heart that we’ll someday find it, and is trying to persuade you not to give up. I’m just laying out for you the facts on the ground. Do you imagine that I’m happy that a long, painful slog lies ahead, during which particle physicists — theorists and experimentalists — will painstakingly cover all the possible variants of supersymmetry, and slowly but surely determine whether or not supersymmetry is absent at the TeV scale? Don’t you think my life and that of my colleagues would be a lot easier if we could snap our fingers and with one or two quick measurements settle the question of whether supersymmetry is a fact of nature or not? Unfortunately, things don’t work that way.  You should simply ignore the irresponsible grand statements you will see in the press and on various blogs; indeed, sweeping remarks are a sign of careless thinking, and you should beware. The truth is that only through very hard work — by the experts who make the measurements, by those who advise them on which measurements to make, and by those who do the calculations that are the ingredients for studies like the MasterCode Project — can we hope to settle profound questions about nature.

Remember That “Blow” to Supersymmetry (And Other Theories)?

According to the BBC, it was a heavy blow.  According to a member of the LHCb experiment quoted in the article, it put the theory “in the hospital.”  The reality?  Nobody even suffered a scratch.

On Monday I wrote about a new measurement by the LHCb experiment at the Large Hadron Collider [LHC] of a rare process, reported at the HCP 2012 conference in Kyoto (the talks can be found at this link), in which a B_s meson decays to a muon and an anti-muon (click here for more details of the physics process). It’s a very important measurement, definitely! But while listening to theorist Gino Isidori’s talk in which he briefly discussed this measurement, I was a little puzzled about an inconsistency between what LHCb had done and said in the past, and what they had done and were saying now, in particular as was reported/implied by the BBC.

When I chatted with him later, Isidori reminded me exactly what LHCb reported in March, and how it compares to what they report now.

  • March: LHCb reported that at most 4.5 per billion B_s mesons decay this way (at 95% confidence)
  • November: LHCb reports about 3.5 per billion B_s mesons decay this way, and at 95% confidence the rate is at least 1.1 per billion and at most 6.4 per billion.

Notice that the new measurement raises the upper limit on how often this process occurs.  This upward shift is not an indication of a problem; it’s probably just an ordinary statistical effect that arises from having small amounts of data.  But it makes the constraints from this measurement — on the many variants of supersymmetry, and on other theories — a little weaker, if anything.  Most of the supersymmetric (and other) models that would be constrained by this measurement lead to a higher rate than in the Standard Model (where it is predicted to be about 3.2 ± 0.3 per billion).  So a higher upper limit means fewer of these variants are excluded by the data.  All in all,  the constraints on supersymmetric (and other) models are little changed, and perhaps somewhat weaker than they were in March.  Isidori and his colleagues have worked this out, and I’ll try to get details from them next week.
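
To make the arithmetic behind “a little weaker, if anything” concrete, here is a small sketch. The rates are taken from the text above; the “headroom” framing is just my own way of putting it, not a calculation from LHCb or MasterCode.

```python
# Numbers from the text, in units of decays per billion B_s mesons.
sm_rate = 3.2          # Standard Model prediction (quoted above as 3.2 +/- 0.3)
march_upper = 4.5      # March 2012: 95% confidence upper limit
nov_lower, nov_upper = 1.1, 6.4   # November 2012: 95% confidence interval

# The SM prediction sits comfortably inside the new 95% interval...
assert nov_lower < sm_rate < nov_upper

# ...and the headroom above the SM prediction -- the room left for
# supersymmetric (or other) enhancements of the rate -- has grown, not shrunk.
march_headroom = march_upper - sm_rate   # 1.3 per billion
nov_headroom = nov_upper - sm_rate       # 3.2 per billion
print(f"headroom: March {march_headroom:.1f}, November {nov_headroom:.1f}")
```

Since most constrained models push the rate upward, it is this headroom above the prediction that matters for exclusions, which is why the new result, if anything, loosens them slightly.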

No need to take my word for it, or even Isidori’s.  Professor Michelangelo Mangano of CERN, apparently having spoken independently to other experts, made exactly the same point on page 41 of his summary talk concluding the conference.  (By the way, it was a great talk, and I recommend that experts read it.) 

Well!  So much for the big BBC headline!  [Most likely the public will never learn about this; a news report describing more accurately what this really means for supersymmetry etc. probably will not appear at all on the BBC, and even if it does, it certainly won’t get a big attention-grabbing headline.   It’s sad that this inherent bias in media reporting ensures the public gets an unhealthy dose of incorrect scientific information and rarely gets the antidote.]

None of this at all diminishes LHCb’s accomplishment!  They’ve made a great measurement, for which they deserve big congratulations.  A round of applause, please!!  But let’s not overstate its immediate impact.  As the measurement becomes more precise, its impact will gradually become greater, and even more so when it is combined with similar measurements from ATLAS and CMS.

But meanwhile, there were powerful and truly new constraints on supersymmetry (and other theories) reported at this conference, and they came from ATLAS and CMS, in their searches for effects from superpartner (and other) particles.  I told you about a small number of these searches a couple of days ago (by the way, I learned meanwhile that CMS has a search that is similar to the one I mentioned from ATLAS  involving bottom quarks) and maybe I’ll point out a few others next week, if I have the energy.

“Supersymmetry Dealt a Blow”?

One of the challenges of being a science journalist is conveying not only the content of a new scientific result but also the feel of what it means.  The prominent article in the BBC about the new measurement by the LHCb experiment at the Large Hadron Collider [LHC]  (reported yesterday at the HCP conference in Kyoto — I briefly described this result yesterday) could have been worse.  But it has a couple of real problems characterizing the implications of the new measurement, so I’d like to comment on it.

The measurement is of how often B_s mesons (hadrons containing a bottom quark and a strange anti-quark, or vice versa, along with many quark/anti-quark pairs and gluons) decay to a muon and an anti-muon.  This process (which I described last year — only about one in 300,000,000 B_s mesons decays this way) has three nice features:

Yesterday the LHCb experiment reported evidence for this process, at a rate that is consistent (but see below) with the prediction of the Standard Model.

The worst thing about the BBC article is the headline, “Supersymmetry theory dealt a blow” (though that’s presumably the editor’s fault, as much as or more than the author’s) and the ensuing prose, “The finding deals a significant blow to the theory of physics known as supersymmetry.”  What’s wrong with it?  It’s certainly true that the measurement means that many variants of supersymmetry (of which there are a vast number) are now inconsistent with what we know about nature.  But what does it mean to say a theory has suffered a blow? and why supersymmetry?

First of all, whatever this new measurement means, there’s rather little scientific reason to single out supersymmetry.  The rough consistency of the measurement with the prediction of the Standard Model is a “blow” (see below) against a wide variety of speculative ideas that introduce new particles and forces.  It would be better simply to say that it is a blow for the Standard Model — the model to beat — and not against any speculative idea in particular.  Supersymmetry is by no means the only idea that is now more constrained than before.  The only reason to single it out is sociological — there are an especially large number of zealots who love supersymmetry and an equal number of zealots who hate it.

Now about the word “blow”.  New measurements usually don’t deal blows to ideas, or to a general theory like supersymmetry.  That’s just not what they do.  They might deal blows to individual physicists who might have a very particular idea of exactly which variant of the general idea might be present in nature; certain individuals are surely more disappointed than they were before yesterday.   But typically, great ideas are relatively flexible.  (There are exceptions — the discovery of a Higgs particle was a huge blow to the idea behind “technicolor” — but in my career I’ve seen very few.)  It is better to think of each new measurement as part of a process of cornering a great idea, not striking and injuring it — the way a person looking for treasure might gradually rule out possibilities for where it might be located.

Then there’s the LHCb scientist who is quoted as saying that “Supersymmetry may not be dead but these latest results have certainly put it into hospital”; well… Aside from the fact that this isn’t accurate scientifically (as John Ellis points out at the end of the article), it’s just not a meaningful or helpful way to think about what’s going on at the LHC.

First News from Kyoto Conference

The HCP [Hadron Collider Physics] 2012 conference in Kyoto is underway.  After opening talks laying out the field’s future, the main topics today have been

  • Collisions of heavy ions (specifically of lead or gold nuclei) and generic proton-proton collisions
  • Processes involving “heavy flavor” (meaning in this case the properties of hadrons containing bottom and charm quarks).

Although there were a number of interesting new results from several experiments, today’s highlight so far has been a presentation on a new measurement by the LHCb experiment, one of the special-purpose experiments at the Large Hadron Collider [LHC], of a rare decay of a B_s meson to a muon and an antimuon.  I described this process in some detail, and claims and counterclaims about it, in the first portion of an article last year; the details of the measurements are out of date, but the physics process is, of course, the same.

Today the LHCb experiment, for the first time, announced evidence for the existence of this process, using their data collected in both 2011 and 2012. In the Standard Model (the equations that describe the known particles and forces), it is predicted that about one in 300,000,000 B_s mesons (hadrons containing a bottom quark and a strange anti-quark, or vice versa) should decay in this fashion. The measurement that LHCb has made is completely consistent with this prediction.

[In detail, the Standard Model predicts (3.54 ± 0.30) × 10⁻⁹ {including mixing effects}, and LHCb measures (3.2 +1.5/−1.2) × 10⁻⁹. The measurement is at the level of 3.5 standard deviations — evidence, but not yet a convincing observation. LHCb excludes a rate of zero at much better than 95% confidence; their 95% confidence lower bound on the process is 1.1 × 10⁻⁹.]
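
As a rough sanity check on the claim of consistency, one can compare the measurement with the prediction directly. This is a naive sketch only: since the measurement lies below the prediction, I use its upward error bar, combined in quadrature with the theory uncertainty; LHCb’s real analysis uses the full likelihood, not this shortcut.

```python
import math

# Values from the text, in units of 1e-9.
sm_pred, sm_err = 3.54, 0.30       # Standard Model prediction (with mixing)
meas, err_up, err_down = 3.2, 1.5, 1.2   # LHCb measurement, asymmetric errors

# Crude compatibility estimate: measurement is below the prediction, so use
# the upward error bar, combined in quadrature with the theory uncertainty.
z = (sm_pred - meas) / math.sqrt(err_up**2 + sm_err**2)
print(f"deviation from SM prediction: {z:.2f} standard deviations")  # 0.22
```

A fifth of a standard deviation is about as consistent as measurements ever get; the 3.5 standard deviations quoted above refer to the significance of the signal over zero, not to any tension with the Standard Model.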

The detailed implications of this result will take a while to work through, but the general implication is easy to state: the Standard Model has survived another test.  And the constraints from LHC data on speculative ideas that predict particles and forces beyond those of the Standard Model have become tighter.  Many variants of these speculative ideas would have affected this process, and the more precisely the data matches the Standard Model prediction, the more of these variants are excluded by the data.

Note Added: CMS says they have a very good chance of being able to confirm LHCb’s result using the full 2012 data; I’m not sure what ATLAS says about their chances.

2nd Note Added: Gino Isidori, in his talk on the theoretical perspective on heavy flavor physics, emphasized that a future high-precision measurement of this process will be very important; it can be predicted with high precision, and interesting variants of speculative ideas often have effects on its rate that are between 10% and 100%.

The First Human-Created Higgs-Like Particle: 1988 or 89, at the Tevatron

Yesterday’s Quiz Question, “when was the first Higgs particle produced by humans?” (where admittedly “Higgs” should have read “Higgs-like”), got many answers, but not the one I think is correct. Here’s what I believe is the answer.


[UPDATE: After this post was written, but before it went live, commenter bobathon got the right answer — at 6:30 Eastern, just under the wire! Well done!]

The first human-produced Higgs particle [more precisely, the Higgs-like particle with a mass of about 125 GeV/c² whose discovery was reported earlier this month, and which I’ll refer to as “H” — but I’ve told you why I think it is a Higgs of some sort] was almost certainly created in the United States, at the Fermi National Accelerator Laboratory (Fermilab) outside Chicago. Back in 1988 and 1989, Fermilab’s accelerator called the Tevatron created collisions within the then-new CDF experiment, during the often forgotten but very important “Run Zero”. The energy per collision, and the total data collected, were just enough to make it nearly certain that an H particle was created during this run.
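
The “nearly certain” claim is, at heart, a Poisson estimate: multiply a production cross section by the integrated luminosity of the run and ask how likely it is that at least one H was made. Here is a back-of-the-envelope version; be warned that the cross section and luminosity below are my own illustrative assumptions for the scale of the numbers involved, not figures from the text.

```python
import math

# Back-of-the-envelope sketch with assumed (illustrative) inputs:
sigma_pb = 1.0      # assumed cross section, in picobarns, for producing a
                    # ~125 GeV/c^2 Higgs in Tevatron-era proton-antiproton
                    # collisions (order-of-magnitude guess, not a measured value)
lumi_inv_pb = 4.0   # assumed integrated luminosity of CDF's Run Zero,
                    # in inverse picobarns (again, illustrative)

expected = sigma_pb * lumi_inv_pb          # mean number of H particles produced
p_at_least_one = 1 - math.exp(-expected)   # Poisson: P(N >= 1) = 1 - e^(-mean)
print(f"P(at least one H produced) ~ {p_at_least_one:.0%}")  # 98%
```

With a mean of a few, the probability of producing none at all is tiny, which is exactly the sense in which Run Zero was “nearly certain” to have made an H — even though no one could possibly have detected it in so small a data set.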

Run Zero, though short, was important because it allowed CDF to prove that precision mass measurements were possible at a proton collider.  They made a measurement of the Z particle’s mass that almost rivaled the one made simultaneously at the SLC electron-positron collider.  This surprised nearly everyone. [Unfortunately I was out of town and missed the scene of disbelief, back in 1989, when CDF dropped this bombshell during a conference at SLAC, the SLC’s host laboratory.] Nowadays we take it for granted that the best measurement of the W particle’s mass comes from the Tevatron experiments, and that the Large Hadron Collider [LHC] experiments will measure the H particle’s mass to better than half a percent — but up until Run Zero it was widely assumed to be impossible to make measurements of such quality in the messy environment of collisions that involve protons.

Anyway, it is truly astonishing that we have to go back to 1988-1989 for the first artificially produced Higgs(-like) particle!! I was a first-year graduate student, and had just learned what Higgs particles were; precision measurements of the Z particle were just getting started, and the top quark hadn’t been found yet. It took 23 years to make enough of these Higgs(-like) particles to convince ourselves that they were there, using the power of the CERN laboratory’s Large Hadron Collider [LHC]!

[Perhaps this remarkable history will help you understand why I keep saying that although the LHC experiments haven’t yet found something unexpected in their data, that absolutely doesn’t mean that nothing unexpected is there. What’s new just may be hard to see, waiting to be noticed with more sophisticated methods and/or more data.]

At a CERN Workshop

Today I’m attending the start of a several-day workshop at the CERN laboratory (host of the Large Hadron Collider [LHC]). This is bringing LHC experimentalists and theoretical particle physicists together to hear about and discuss not only results from the (successful) search for the Higgs particle but also from many other searches (so far unsuccessful, but still important and instructive for our understanding of nature) for other new particles and/or forces, as well as relatively high-precision tests of the Standard Model itself. This should help those of us who were distracted for the past week by the discovery of the Higgs-like particle to catch up with everything else that the experiments reported at the ICHEP conference. I’ll update today or over the next few days if anything striking is presented.

LHC Producing 8 TeV Data

Still early days in the 2012 data-taking run, which just started a couple of weeks ago, but already the Large Hadron Collider [LHC] accelerator wizards, operating the machine at 8 TeV of energy per proton-proton collision (compared to last year’s 7 TeV), have brought the collision rates back up nearly to where they were last year. This is very good news, in that it indicates there are no significant unexpected technical problems preventing the accelerator from operating at the high collision rates that are required this year. And the experiments are already starting to collect useful data at 8 TeV.

The challenges for the experiments of operating at 8 TeV and at the 2012 high collision rate are significant.  One challenge is modeling. To understand how their experiments are working, well enough that they can tell the difference between a new physical phenomenon and a badly understood part of their detector, the experimenters have to run an enormous amount of computer simulation, modeling the beams, the collisions, and the detector itself.  Well, 8 TeV isn’t 7 TeV; all of last year’s modeling was fine for last year’s data, but not for this year’s.  So a lot of computers are running at full tilt right now, helping to ensure that all of the needed simulations for 8 TeV are finished before they’re needed for the first round of 2012 data analysis that will be taking place in the late spring and early summer.

Another challenge is “pile-up.”  The LHC proton beams are not continuous; they consist of up to about 1300 bunches of protons, each bunch containing something like 100,000,000,000 protons.  Collisions in each detector occur whenever two bunches pass through each other, every 50 nanoseconds (billionths of a second).  With the beam settings that were seen late in 2011 and that will continue to intensify in 2012, every time two bunches cross at the center of the big experiments ATLAS and CMS, an average of 10 to 20 proton-proton collisions occur essentially simultaneously.  That means that every proton-proton collision in which something interesting happens is doused in the debris from a dozen uninteresting ones.  Moreover, some of the debris from all these collisions hangs around for a while, creating electronic noise that obscures measurements of future collisions.  One of the questions for 2012 is how much of a nagging problem the increasing pile-up will pose for some of the more delicate measurements — especially study of Higgs particle decays, both expected ones and exotic ones, and searches for relatively light-weight new particles with low production rates, such as particles created only via the weak nuclear force (e.g. supersymmetric partners of the W, Z and Higgs particles.)
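
For a sense of scale, one can turn the numbers in this paragraph into an overall collision rate. The only ingredient not in the text is the LHC’s revolution frequency — each bunch circles the 27 km ring about 11,245 times per second — which is a standard figure for the machine.

```python
# Back-of-the-envelope collision rate from the numbers in the text.
rev_freq_hz = 11_245   # LHC revolution frequency: orbits per second
n_bunches = 1_300      # "up to about 1300 bunches" (from the text)
pileup = 15            # 10-20 collisions per crossing; take the middle

crossing_rate = n_bunches * rev_freq_hz   # bunch crossings per second (~15 million)
collision_rate = crossing_rate * pileup   # proton-proton collisions per second
print(f"~{crossing_rate/1e6:.0f} million crossings/s, "
      f"~{collision_rate/1e6:.0f} million collisions/s")
```

Hundreds of millions of proton-proton collisions per second, nearly all of them uninteresting debris: that is the haystack in which the experiments must find their needles, and it is why pile-up is such a serious concern for delicate measurements.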

But I have a lot of confidence in my colleagues; barring a really nasty surprise, they’ll manage pretty well, as they did last year.  And so far, so good!

News from La Thuile, with Much More to Come

At various conferences in the late fall, the Large Hadron Collider [LHC] experiments ATLAS and CMS showed us many measurements that they made using data they took in spring and summer of 2011. But during the fall their data sets increased in size by a factor of two and a half!  So far this year the only results we’d seen that involved the 2011 full data set had been ones needed in the search for the Higgs particle. Last week, that started to change.

The spring flood is just beginning. Many new experimental results from the LHC were announced at La Thuile this past week, some only using part of the 2011 data but a few using all of it, and more and more will be coming every day for the next couple of weeks. And there are also new results coming from the (now-closed) Tevatron experiments CDF and DZero, which are completing many analyses that use their full data set. In particular, we’re expecting them to report on their best crack at the Higgs particle later this week. They can only hope to create controversy; they certainly won’t be able to settle the issue as to whether there is or isn’t a Higgs particle with a mass of about 125 GeV/c², as hints from ATLAS and CMS seem to indicate. But all indications are that it will be an interesting week on the Higgs front.

The Top Quark Checks In

Fig. 1: In the Higgs mechanism, the W particle gets its mass from the non-zero average value of the Higgs field. A precise test of this idea arises as follows. When the top quark decays to a bottom quark and a W particle, and the W then decays to an anti-neutrino and an electron or muon, the probability that the electron or muon travels in a particular direction can be predicted assuming the Higgs mechanism. The data above shows excellent agreement between theory and experiment, validating the notion of the Higgs field.

There are now many new measurements of the properties of the top quark, poking and prodding it from all sides (figuratively)  to see if it behaves as expected within the “Standard Model of particle physics” [the equations that we use to describe all of the known particles and forces of nature.] And so far, disappointingly for those of us hoping for clues as to why the top quark is so much heavier than the other quarks, there’s no sign of anything amiss with those equations. Top quarks and anti-quarks are produced in pairs more or less as expected, with the expected rate, and moving in the expected directions with the expected amount of energy. Top quark decay to a W particle and a bottom quark also agrees, in detail, with theoretical expectation.  Specifically (see Figure 1) the orientation of the W’s intrinsic angular momentum (called its “spin”, technically), a key test of the Standard Model in general and of the Higgs mechanism in particular, agrees very well with theoretical predictions.  Meanwhile there’s no sign that there are unexpected ways of producing top quarks, nor any sign of particles that are heavy cousins of the top quark.

One particularly striking result from CMS relates to the unexpectedly large asymmetry in the production of top quarks observed at the Tevatron experiments, which I’ve previously written about in detail. The number of top quarks produced moving roughly in the same direction as the proton beam is expected theoretically to be only very slightly larger than the number moving roughly in the same direction as the anti-proton beam, but instead both CDF and DZero observe a much larger effect. This significant apparent discrepancy between their measurement and the prediction of the Standard Model has generated lots of interest and hope that perhaps we are seeing a crack in the Standard Model’s equations.

Well, it isn’t so easy for CMS and ATLAS to make the same measurement, because the LHC has two proton beams, so it is symmetric front-to-back, unlike the Tevatron with its proton beam and anti-proton beam. But still, there are other related asymmetries that LHC experiments can measure. And CMS has now looked with its full 2011 data set, and observes… nothing: for a particular charge asymmetry that they can measure, they find an asymmetry of 0.4% ± 1.0% ± 1.2% (the first number is the best estimate and the latter two numbers are the statistical and systematic uncertainties on that estimate). The Standard Model predicts something of order a percent or so, while many attempts to explain the Tevatron result might have predicted an effect of several percent. (ATLAS has presented a similar measurement but only using part of the 2011 data set, so it has much larger uncertainties at present.)
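
To see quantitatively what that “0.4% ± 1.0% ± 1.2%” means, combine the statistical and systematic uncertainties in quadrature (a standard, if simplified, treatment). The 3% benchmark below is my own illustrative stand-in for “several percent”, not a specific model’s prediction.

```python
import math

# CMS charge asymmetry from the text: 0.4% +- 1.0% (stat) +- 1.2% (syst).
central, stat, syst = 0.4, 1.0, 1.2
total = math.sqrt(stat**2 + syst**2)   # combined uncertainty: ~1.56%

# The result is entirely consistent with zero...
z_zero = central / total               # ~0.26 standard deviations
# ...while a several-percent asymmetry, of the rough size some new-physics
# explanations of the Tevatron result suggested, is already mildly disfavored:
z_three = (3.0 - central) / total      # ~1.7 standard deviations
print(f"{z_zero:.2f} sigma from zero, {z_three:.1f} sigma from a 3% effect")
```

Not yet an exclusion of anything, but enough that a large new-physics effect “ought to be starting to show up” and hasn’t.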

Now CMS is not measuring quite the same thing as CDF and DZero, so the CMS result is not in direct conflict with the Tevatron measurements. But if new phenomena were present that were causing CDF’s and DZero’s anomalously large asymmetry, we’d expect that by now they’d be starting to show up, at least a little bit, in this CMS measurement. The fact that CMS sees not a hint of anything unexpected considerably weakens the overall case that the Tevatron excess asymmetry might have an exciting explanation. It suggests rather that the whole effect is really a problem with the interpretation of the Tevatron measurements themselves, or with the ways that the equations of the Standard Model are used to predict them. That is of course disappointing, but it is still far too early to declare the case closed.

There’s also a subtle connection here with the recent bolstering by CDF of the LHCb experiment’s claim that CP violation is present in the decays of particles called “D mesons”. (D mesons are hadrons containing a charm quark [or anti-quark], an up or down anti-quark [or quark], and [as for all hadrons] lots of additional gluons and quark/anti-quark pairs.) The problem is that theorists, who used to be quite sure that any such CP violation in D mesons would indicate the presence of new phenomena not predicted by the Standard Model, are no longer so sure. So one needs corroborating information from somewhere, showing some other related phenomenon, before getting too excited.

One place that such information might have come from is the top quark.  If there is something surprising in charm quarks (but not in bottom quarks) one might easily imagine that perhaps there is something new affecting all up-type quarks (the up quark, charm quark and top quark) more than the down-type quarks (down, strange and bottom.)  [Read here about the known elementary particles and how they are organized.] In other words, if the charm quark is different from expectations and the bottom quark is not, it would seem quite reasonable that the top quark would be even more different from expectations. But  unfortunately, the results from this week suggest the top quark, to the level of precision that can currently be mustered, is behaving very much as the Standard Model predicted it would.

Meanwhile Nothing Else Checks In

Meanwhile, in the direct search for new particles not predicted by the Standard Model, there were a number of new results from CMS and ATLAS at La Thuile. The talks on these subjects went flying by; there was far too little information presented to allow understanding of any details, and so without fully studying the corresponding papers I can’t say anything more intelligent yet than that they didn’t see anything amiss. But of course, as I’ve suggested many times, searches of this type wouldn’t be shown so soon after the data was collected if they indicated any discrepancy with theoretical prediction, unless the discrepancy was spectacularly convincing. More likely, they would be delayed a few weeks or even months, while they were double- and triple-checked, and perhaps even held back for more data to be collected to clarify the situation. So we are left with the question as to which of the other measurements that weren’t shown are appearing later because, well, some things take longer than others, and which ones (if any) are being actively held back because they are more … interesting. At this preliminary stage in the conference season it’s too early to start that guessing game.

Fig. 2: The search for a heavy particle that, like a Z particle, can decay to an electron/positron pair or a muon/anti-muon pair now excludes such particles up to masses well over 1.5 TeV/c². The Z particle itself is the bump at 90 GeV; any new particle would appear as a bump elsewhere in the plot. But above the Z mass, the data (black dots) show a smooth curve with no significant bumps.

So here are a few words about what ATLAS and CMS didn't see. Several classic searches for supersymmetry and other theories that resemble it (in that they show signs of invisible particles, jets from high-energy quarks and gluons, and something rare like a lepton or two or a photon) were updated by CMS for the full or near-full data set. Searches for heavy versions of the top and bottom quark were shown by ATLAS and CMS. ATLAS sought heavy versions of the Z particle (see Figure 2) that decay to a high-energy electron/positron pair or muon/anti-muon pair; with their full 2011 data set, they now exclude particles of this type up to masses (depending on the precise details of the particle) of 1.75-1.95 TeV/c². Meanwhile CMS looked for heavy versions of the W particle that can decay to an electron or muon and something invisible; the exclusions reach above 2.5 TeV/c². Other CMS searches using the full data set included ones seeking new particles decaying to two Z particles, or to a W and a Z. ATLAS looked for a variety of exotic particles, and CMS looked for events that are very energetic and produce many known particles at once. Most of these searches were ones we'd seen before, just updated with more data, but a few of them were entirely new.

Two CMS searches worth noting involved looking for new undetectable particles recoiling against a single jet or a single photon. These put very interesting constraints on dark matter that are complementary to the searches that have been going on elsewhere, deep underground. Using vats of liquid xenon or bubble chambers or solid-state devices, physicists have been looking for the very rare process in which a dark matter particle, one among the vast ocean of dark matter particles in which our galaxy is immersed, bumps into an atomic nucleus inside a detector and makes a tiny little signal for physicists to detect. Remarkable and successful as their search techniques are, there are two obvious contexts in which they work very poorly. If dark matter particles are very lightweight, much lighter than a few GeV/c², the effect of one hitting a nucleus becomes very hard to detect. Or if the nature of the interaction of dark matter with ordinary matter is such that it depends on the spin (the intrinsic angular momentum) of a nucleus rather than on how many protons and neutrons the nucleus contains, then the probability of a collision becomes much, much lower. But in either case, as long as dark matter is affected by the weak nuclear force, the LHC can produce dark matter particles, and though ATLAS and CMS can't detect them, they can detect particles that might sometimes recoil against them, such as a photon or a jet. So CMS was quite proud to show that their results are complementary to those other classes of experiments.
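To see concretely why very lightweight dark matter evades the underground detectors, here is a rough back-of-the-envelope estimate (the numbers below are illustrative ones of my own, not taken from the CMS talk). In an elastic collision, the maximum energy a dark matter particle of mass m and speed v can deposit on a nucleus of mass M is set by elementary kinematics:

```latex
% Maximum nuclear recoil energy in an elastic collision
% (masses in energy units, v expressed as a fraction of c):
E_R^{\mathrm{max}} \;=\; \frac{2\,\mu^2 v^2}{M},
\qquad
\mu \;\equiv\; \frac{m\,M}{m + M}.
% For a light dark matter particle, mu ~ m, so the recoil energy
% falls off as m^2.  Illustrative numbers: m = 1 GeV/c^2, a xenon
% nucleus with M ~ 122 GeV/c^2, and a typical galactic speed
% v ~ 10^{-3} c:
E_R^{\mathrm{max}}
\;\approx\; \frac{2 \times (1~\mathrm{GeV})^2 \times (10^{-3})^2}{122~\mathrm{GeV}}
\;\approx\; 16~\mathrm{eV}.
```

That is far below the keV-scale thresholds typical of underground detectors, which is why their sensitivity collapses for light dark matter; the LHC, by contrast, produces any such particles with enormous momentum and has no analogous problem.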

Fig. 3: Limits on dark matter candidates that feel the weak nuclear force and can interact with ordinary matter. The horizontal axis gives the dark matter particle's mass, the vertical axis its probability to hit a proton or neutron. The region above each curve is excluded. All curves shown other than those marked "CMS" are from underground experiments searching for dark matter particles hitting an atomic nucleus. CMS searches for a jet or a photon recoiling against something undetectable provide (left) the best limits on "spin-independent" interactions for masses below 3.5 GeV/c², and (right) the best limits on "spin-dependent" interactions for all masses up to a TeV/c².

Finally, I made a moderately big deal back in October about a small excess in multi-leptons (collisions that produce three or more electrons, muons, positrons [anti-electrons] or antimuons, which are a good place to look for new phenomena), though I warned you in bold red letters that most small excesses go away with more data. A snippet of an update was shown at La Thuile, and from what I said earlier about results that appear early in the conference season, you know that’s bad news. Suffice it to say that although discrepancies with theoretical predictions remain, the ones seen in October apparently haven’t become more striking. The caveat that most small excesses go away applies, so far, to this data set as well. We’ll keep watching.

Fig. 4: The updated multilepton search at CMS shows (black solid curve) a two standard deviation excess compared to expectations (black dotted curve) in at least some regimes in the plane of the gluino mass (vertical axis) versus the chargino mass (horizontal axis) in a particular class of models. But had last fall's excess been a sign of new physics, the current excess would presumably have been larger.

Stay tuned for much more in the coming weeks!