A Neutrino Success Story

Almost all the news about neutrinos in the mainstream press these past few months has been about the OPERA experiment and a possible violation of Einstein’s foundational theory of relativity. That the experiment turned out to be wrong didn’t surprise experts. But one concern scientists have about how this story unfolded and was reported in the press is that many non-experts may come away with the impression that science is so full of mistakes that you can’t trust it at all. That would be a very unhappy conclusion, and in fact a dangerous one, at least for anyone who would like to keep their economy strong, their planet well-treated and their nation well-defended.

So it is important to balance the OPERA mini-fiasco with another hot-off-the-presses neutrino story that illustrates why, even though mistakes in individual scientific experiments are common, collective mistakes in science are rare. A discipline such as physics has intrinsic checks and balances that significantly reduce the probability of errors going unrecognized for long. In the story I’m about to relate, one can recognize how and why scientists start to come to consensus.  Though quite suspicious of any individual experiment, scientists generally take a different view of a group of experiments that buttress one another.

The context of this story, though much less revolutionary than a violation of Einstein’s speed limit, still represents a milestone in our understanding of neutrinos, which has been advancing very rapidly over the past fifteen years or so. When I was a starting graduate student in the late 1980s, almost all we knew about neutrinos was that there were at least three types and that they were much lighter than electrons, and perhaps massless. Today we know much, much more about neutrinos and how they behave. And in just the last few months and weeks and days, one of the missing entries in the Encyclopedia Neutrinica appears to have been filled in.

News from La Thuile, with Much More to Come

At various conferences in the late fall, the Large Hadron Collider [LHC] experiments ATLAS and CMS showed us many measurements that they made using data they took in the spring and summer of 2011. But during the fall their data sets increased in size by a factor of two and a half!  So far this year, the only results we had seen that involved the full 2011 data set were the ones needed in the search for the Higgs particle. Last week, that started to change.

The spring flood is just beginning. Many new experimental results from the LHC were announced at La Thuile this past week, some only using part of the 2011 data but a few using all of it, and more and more will be coming every day for the next couple of weeks. And there are also new results coming from the (now-closed) Tevatron experiments CDF and DZero, which are completing many analyses that use their full data set. In particular, we’re expecting them to report on their best crack at the Higgs particle later this week. They can only hope to create controversy; they certainly won’t be able to settle the issue as to whether there is or isn’t a Higgs particle with a mass of about 125 GeV/c², as hints from ATLAS and CMS seem to indicate.  But all indications are that it will be an interesting week on the Higgs front.

The Top Quark Checks In

Fig. 1: In the Higgs mechanism, the W particle gets its mass from the non-zero average value of the Higgs field. A precise test of this idea arises as follows. When the top quark decays to a bottom quark and a W particle, and the W then decays to an anti-neutrino and an electron or muon, the probability that the electron or muon travels in a particular direction can be predicted assuming the Higgs mechanism. The data above shows excellent agreement between theory and experiment, validating the notion of the Higgs field.

There are now many new measurements of the properties of the top quark, poking and prodding it from all sides (figuratively) to see if it behaves as expected within the “Standard Model of particle physics” [the equations that we use to describe all of the known particles and forces of nature]. And so far, disappointingly for those of us hoping for clues as to why the top quark is so much heavier than the other quarks, there’s no sign of anything amiss with those equations. Top quarks and anti-quarks are produced in pairs more or less as expected, with the expected rate, and moving in the expected directions with the expected amount of energy. Top quark decay to a W particle and a bottom quark also agrees, in detail, with theoretical expectation.  Specifically (see Figure 1), the orientation of the W’s intrinsic angular momentum (called its “spin”, technically), a key test of the Standard Model in general and of the Higgs mechanism in particular, agrees very well with theoretical predictions.  Meanwhile there’s no sign that there are unexpected ways of producing top quarks, nor any sign of particles that are heavy cousins of the top quark.
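
For the curious: the test shown in Figure 1 boils down to comparing the data with a one-variable distribution whose shape is fixed by the three W helicity fractions. Here is a rough sketch of that distribution (just for illustration, not the experiments’ actual analysis code), with the fractions set to approximate Standard Model values of about 70% longitudinal and 30% left-handed.

```
# A minimal sketch (not the experiments' analysis code) of the lepton angular
# distribution behind the W-helicity test in Fig. 1. Here theta is the angle of the
# charged lepton in the W rest frame, measured relative to the W's direction of motion
# in the top rest frame; F0, FL, FR are the longitudinal, left- and right-handed
# helicity fractions, set below to approximate Standard Model values (an assumption
# for illustration, not the measured numbers).

def lepton_angle_pdf(cos_theta, F0=0.70, FL=0.30, FR=0.00):
    """Normalized distribution in cos(theta); integrates to 1 when F0 + FL + FR = 1."""
    return (0.75 * (1.0 - cos_theta**2) * F0
            + 0.375 * (1.0 - cos_theta)**2 * FL
            + 0.375 * (1.0 + cos_theta)**2 * FR)

# crude midpoint-rule check that the distribution is normalized, plus sample values
step = 0.001
print("normalization:", sum(lepton_angle_pdf(-1.0 + (i + 0.5) * step) * step
                            for i in range(2000)))
for c in (-0.9, 0.0, 0.9):
    print(f"cos(theta) = {c:+.1f}:  pdf = {lepton_angle_pdf(c):.3f}")
```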

One particularly striking result from CMS relates to the unexpectedly large asymmetry in the production of top quarks observed at the Tevatron experiments, which I’ve previously written about in detail. The number of top quarks produced moving roughly in the same direction as the proton beam is expected theoretically to be only very slightly larger than the number moving roughly in the same direction as the anti-proton beam, but instead both CDF and DZero observe a much larger effect. This significant apparent discrepancy between their measurement and the prediction of the Standard Model has generated lots of interest and hope that perhaps we are seeing a crack in the Standard Model’s equations.
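
For readers who like to see how a number like this is defined: the asymmetry is just a normalized difference of event counts, with a statistical uncertainty that shrinks like the square root of the total number of events. A minimal sketch, with purely hypothetical counts rather than CDF’s or DZero’s actual numbers:

```
# A minimal sketch of how a forward-backward asymmetry is defined; the event counts
# below are hypothetical, purely to show the arithmetic, not CDF or DZero yields.
import math

def asymmetry(n_forward, n_backward):
    n = n_forward + n_backward
    a = (n_forward - n_backward) / n
    stat = math.sqrt((1.0 - a**2) / n)   # binomial statistical uncertainty
    return a, stat

a, sigma = asymmetry(n_forward=550, n_backward=450)   # hypothetical counts
print(f"A_FB = {a:.3f} +- {sigma:.3f} (stat only)")
```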

Well, it isn’t so easy for CMS and ATLAS to make the same measurement, because the LHC has two proton beams, so it is symmetric front-to-back, unlike the Tevatron with its proton beam and anti-proton beam.  But still, there are other related asymmetries that LHC experiments can measure. And CMS has now looked with its full 2011 data set, and observes… nothing: for a particular charge asymmetry that they can measure, they find an asymmetry of 0.4% ± 1.0% ± 1.2% (the first number is the best estimate and the latter two numbers are the statistical and systematic uncertainties on that estimate).  The Standard Model predicts something of order a percent or so, while many attempts to explain the Tevatron result might have predicted an effect of several percent.  (ATLAS has presented a similar measurement but only using part of the 2011 data set, so it has much larger uncertainties at present.)
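
To get a feel for what those numbers mean, here is a back-of-the-envelope sketch: add the statistical and systematic uncertainties in quadrature (treating them, as a rough approximation, as uncorrelated) and ask how far the CMS measurement sits from zero and from a Standard Model value of roughly one percent.

```
# Reading the CMS number quoted above: combine the statistical and systematic
# uncertainties in quadrature (assumed uncorrelated) and compare the measured charge
# asymmetry with zero and with a ~1% value standing in for the Standard Model.
import math

measured, stat, syst = 0.004, 0.010, 0.012   # CMS: 0.4% +- 1.0% +- 1.2%
total = math.hypot(stat, syst)               # quadrature sum of the two uncertainties

print(f"total uncertainty: {total:.4f}")
print(f"pull from zero:    {measured / total:+.2f} sigma")
print(f"pull from ~1% SM:  {(measured - 0.01) / total:+.2f} sigma")
```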

Now CMS is not measuring quite the same thing as CDF and DZero, so the CMS result is not in direct conflict with the Tevatron measurements. But if new phenomena were present that were causing CDF and DZero’s anomalously large asymmetry, we’d expect that by now they’d be starting to show up, at least a little bit, in this CMS measurement.  The fact that CMS sees not a hint of anything unexpected considerably weakens the overall case that the Tevatron excess asymmetry might have an exciting explanation. It suggests rather that the whole effect is really a problem with the interpretation of the Tevatron measurements themselves, or with the ways that the equations of the Standard Model are used to predict them. That is of course disappointing, but it is still far too early to declare the case closed.

There’s also a subtle connection here with the recent bolstering by CDF of the LHCb experiment’s claim that CP violation is present in the decays of particles called “D mesons”. (D mesons are hadrons containing a charm quark [or anti-quark], an up or down anti-quark [or quark], and [as for all hadrons] lots of additional gluons and quark/anti-quark pairs.) The problem is that theorists, who used to be quite sure that any such CP violation in D mesons would indicate the presence of new phenomena not predicted by the Standard Model, are no longer so sure. So one needs corroborating information from somewhere, showing some other related phenomenon, before getting too excited.

One place that such information might have come from is the top quark.  If there is something surprising in charm quarks (but not in bottom quarks) one might easily imagine that perhaps there is something new affecting all up-type quarks (the up quark, charm quark and top quark) more than the down-type quarks (down, strange and bottom).  [Read here about the known elementary particles and how they are organized.] In other words, if the charm quark is different from expectations and the bottom quark is not, it would seem quite reasonable that the top quark would be even more different from expectations. But unfortunately, the results from this week suggest the top quark, to the level of precision that can currently be mustered, is behaving very much as the Standard Model predicted it would.

Meanwhile Nothing Else Checks In

Meanwhile, in the direct search for new particles not predicted by the Standard Model, there were a number of new results from CMS and ATLAS at La Thuile. The talks on these subjects went flying by; there was far too little information presented to allow understanding of any details, and so without fully studying the corresponding papers I can’t say anything more intelligent yet than that they didn’t see anything amiss. But of course, as I’ve suggested many times, searches of this type wouldn’t be shown so soon after the data was collected if they indicated any discrepancy with theoretical prediction, unless the discrepancy was spectacularly convincing. More likely, they would be delayed a few weeks or even months, while they were double- and triple-checked, and perhaps even held back for more data to be collected to clarify the situation. So we are left with the question as to which of the other measurements that weren’t shown are appearing later because, well, some things take longer than others, and which ones (if any) are being actively held back because they are more … interesting. At this preliminary stage in the conference season it’s too early to start that guessing game.

Fig. 2: The search for a heavy particle that, like a Z particle, can decay to an electron/positron pair or a muon/anti-muon pair now excludes such particles to well over 1.5 TeV/c². The Z particle itself is the bump at 90 GeV; any new particle would appear as a bump elsewhere in the plot. But above the Z mass, the data (black dots) show a smooth curve with no significant bumps.

So here are a few words about what ATLAS and CMS didn’t see. Several classic searches for supersymmetry and other theories that resemble it (in that they show signs of invisible particles, jets from high-energy quarks and gluons, and something rare like a lepton or two or a photon) were updated by CMS for the full or near-full data set. Searches for heavy versions of the top and bottom quark were shown by ATLAS and CMS. ATLAS sought heavy versions of the Z particle (see Figure 2) that decay to a high-energy electron/positron pair or muon/anti-muon pair; with their full 2011 data set, they now exclude particles of this type up to masses (depending on the precise details of the particle) of 1.75–1.95 TeV/c². Meanwhile CMS looked for heavy versions of the W particle that can decay to an electron or muon and something invisible; the exclusions reach out above 2.5 TeV/c². Other CMS searches using the full data set included ones seeking new particles decaying to two Z particles, or to a W and a Z.   ATLAS looked for a variety of exotic particles, and CMS looked for events that are very energetic and produce many known particles at once.  Most of these searches were actually ones we’d seen before, just updated with more data, but a few of them were entirely new.
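
For dilepton searches like the one in Figure 2, the key variable is the invariant mass of the lepton pair; a new heavy particle would pile up as a bump at its own mass. Here is a rough sketch of that quantity, computed in the usual massless-lepton approximation from each lepton’s transverse momentum, pseudorapidity and azimuthal angle; the kinematic values at the end are hypothetical, just to show the arithmetic.

```
# A minimal sketch of the variable behind Fig. 2: the invariant mass of a lepton pair,
# computed from each lepton's transverse momentum pt (GeV), pseudorapidity eta and
# azimuthal angle phi, with the lepton masses neglected.
import math

def dilepton_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass in the massless-lepton approximation:
    m^2 = 2 * pt1 * pt2 * (cosh(eta1 - eta2) - cos(phi1 - phi2))."""
    m2 = 2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2))
    return math.sqrt(m2)

# hypothetical back-to-back leptons, roughly what a ~1.8 TeV resonance might give
print(dilepton_mass(900.0, 0.1, 0.0, 900.0, -0.1, math.pi))   # about 1800 GeV
```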

Two CMS searches worth noting involved looking for new undetectable particles recoiling against a single jet or a single photon. These put very interesting constraints on dark matter that are complementary to the searches that have been going on elsewhere, deep underground.  Using vats of liquid xenon or bubble chambers or solid-state devices, physicists have been looking for the very rare process in which a dark matter particle, one among the vast ocean of dark matter particles in which our galaxy is immersed, bumps into an atomic nucleus inside a detector and makes a tiny little signal for physicists to detect. Remarkable and successful as their search techniques are, there are two obvious contexts in which they work very poorly. If dark matter particles are very lightweight, much lighter than a few GeV/c², the effect of one hitting a nucleus becomes very hard to detect. Or if the nature of the interaction of dark matter with ordinary matter is such that it depends on the spin (the intrinsic angular momentum) of a nucleus rather than on how many protons and neutrons the nucleus contains, then the probability of a collision becomes much, much lower. But in either case, as long as dark matter is affected by the weak nuclear force, the LHC can produce dark matter particles, and though ATLAS and CMS can’t detect them, they can detect particles that might sometimes recoil against them, such as a photon or a jet. So CMS was quite proud to show that their results are complementary to those other classes of experiments.
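
The trick behind these monojet and monophoton searches is simply momentum balance in the plane transverse to the beams: whatever the detector cannot see is inferred from what it can. A minimal sketch of that bookkeeping, with a made-up event rather than real CMS data:

```
# A minimal sketch of the idea behind the monojet/monophoton searches: anything
# invisible (such as a pair of dark matter particles) is inferred from the momentum
# imbalance of the visible objects in the plane transverse to the beams.
import math

def missing_pt(visible):
    """visible: list of (pt, phi) pairs for the reconstructed jets/photons/leptons."""
    px = sum(pt * math.cos(phi) for pt, phi in visible)
    py = sum(pt * math.sin(phi) for pt, phi in visible)
    return math.hypot(px, py)   # magnitude of the missing transverse momentum

# hypothetical event: one hard jet and nothing else visible, so a large imbalance
print(missing_pt([(450.0, 0.3)]))   # about 450 GeV of missing transverse momentum
```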

Fig. 3: Limits on dark matter candidates that feel the weak nuclear force and can interact with ordinary matter. The horizontal axis gives the dark matter particle's mass, the vertical axis its probability of hitting a proton or neutron. The region above each curve is excluded. All curves shown other than those marked "CMS" are from underground experiments searching for dark matter particles hitting an atomic nucleus. CMS searches for a jet or a photon recoiling against something undetectable provide (left) the best limits on "spin-independent" interactions for masses below 3.5 GeV/c², and (right) the best limits on "spin-dependent" interactions for all masses up to a TeV/c².

Finally, I made a moderately big deal back in October about a small excess in multi-leptons (collisions that produce three or more electrons, muons, positrons [anti-electrons] or antimuons, which are a good place to look for new phenomena), though I warned you in bold red letters that most small excesses go away with more data. A snippet of an update was shown at La Thuile, and from what I said earlier about results that appear early in the conference season, you know that’s bad news. Suffice it to say that although discrepancies with theoretical predictions remain, the ones seen in October apparently haven’t become more striking. The caveat that most small excesses go away applies, so far, to this data set as well. We’ll keep watching.
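
Why is a non-growing excess bad news? Roughly speaking, the significance of a real signal should grow like the square root of the size of the data set, so a two-standard-deviation hint in the summer data ought to have strengthened noticeably in a data set two and a half times larger. A toy illustration, with invented numbers:

```
# A toy illustration (generic numbers, not the actual CMS yields) of why a real
# excess should strengthen with more data: for a signal S on a background B, a crude
# significance estimate S / sqrt(B) grows like the square root of the data-set size.
import math

def naive_significance(signal, background):
    return signal / math.sqrt(background)

s0, b0 = 8.0, 16.0                 # hypothetical summer-2011 yields: about 2 sigma
for scale in (1.0, 2.5):           # summer data set, then 2.5 times more data
    print(f"x{scale}: {naive_significance(s0 * scale, b0 * scale):.1f} sigma")
```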

Fig. 4: The updated multilepton search at CMS shows (black solid curve) a two standard deviation excess compared to expectations (black dotted curve) in at least some regimes in the plane of the gluino mass (vertical axis) versus the chargino mass (horizontal axis) in a particular class of models. But had last fall's excess been a sign of new physics, the current excess would presumably have been larger.

Stay tuned for much more in the coming weeks!

LHCb’s Result From November Appears Confirmed by CDF

Back in November, I described a surprising result from the LHCb experiment at the Large Hadron Collider [LHC] concerning “CP violation” in the decays of particles called “D mesons” (which are hadrons that contain an unpaired charm quark or an unpaired charm antiquark) at a level much larger than expected by theorists.  Rather than rehashing the explanation for what that was all about, I’m going to point you to what I wrote in November.
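
As a quick reminder of the type of quantity involved (the details are in the November post): what is measured is, roughly speaking, a difference of CP asymmetries between two D meson decay modes, chosen so that most production and detection effects cancel. A schematic sketch, with hypothetical yields:

```
# A schematic sketch (hypothetical yields, not LHCb or CDF data) of a difference of
# CP asymmetries between two decay modes; taking the difference cancels most
# production and detection effects that fake an asymmetry in a single mode.
def acp(n_d0, n_d0bar):
    """Raw asymmetry between D0 and anti-D0 decays to the same final state."""
    return (n_d0 - n_d0bar) / (n_d0 + n_d0bar)

def delta_acp(kk_d0, kk_d0bar, pipi_d0, pipi_d0bar):
    return acp(kk_d0, kk_d0bar) - acp(pipi_d0, pipi_d0bar)

# hypothetical event counts in the two modes, just to show the arithmetic
print(f"{delta_acp(100_000, 101_000, 50_000, 50_200):+.4f}")
```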

There’s news this week that the CDF experiment at the now-closed Tevatron has updated its result for a measurement of the same quantity, using the full CDF data set.  And they now find a very similar result to what LHCb found.   This is indicated on the slide shown below, taken from the talk given by Angelo Di Canto at the La Thuile conference (but edited by me to fix a big typo; I hope he does not mind.)  You see that while LHCb found a CP-violating asymmetry of -0.82%, with statistical and systematic uncertainties of 0.21% and 0.11%, CDF finds -0.62%, with almost identical uncertainties — a little closer to zero, but still well away from it.

A slide from the CDF presentation on its measurement of CP violation in D meson decays (with an edit by me to fix a glaring typo.) The CDF result is quoted in orange at the top of the slide; the LHCb result in black just below it. In the figure itself, the LHCb result is in blue, the CDF result in orange, and the traditional expectation for the Standard Model is very close to the point (0,0), the isolated red dot at dead center.

This lends support to LHCb’s result, and putting the two results (and a couple of other weaker ones) together makes their combination discrepant from zero by about 3.8 standard deviations.  That’s great, but not as great as it would have been if what theorists thought a few years ago were still considered reliable.  Back then, the relevant experts (and I should emphasize I am not one of them) would have told you that they were pretty darn certain that the Standard Model [the equations we use to describe the known particles and forces] could not produce CP violation of this type, and any observation of a non-zero signal would imply the existence of previously unknown particles.  But the experts have been backing away from this point of view recently, worrying that maybe they know less about how to calculate this in the Standard Model than they used to think.  If we’re to be sure this is really a sign of new particles in nature, and not just a sign that theorists have trouble predicting this quantity, we’re going to need additional evidence from another quarter.  And so far, we haven’t got any.
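
Just to illustrate how two measurements like these reinforce each other, here is a toy inverse-variance-weighted average of the LHCb and CDF numbers quoted above. This is not the real combination (that one folds in other, weaker measurements and their correlations, which is where the 3.8 standard deviations comes from), and I have had to assume CDF’s uncertainties are equal to LHCb’s, since only their rough size is quoted.

```
# A toy inverse-variance-weighted average of the two results quoted above, with
# statistical and systematic uncertainties added in quadrature. CDF's uncertainties
# are assumed equal to LHCb's here (the text only says they are "almost identical"),
# and the naive pull from zero printed below differs from the quoted 3.8 sigma, which
# comes from a more careful combination including other, weaker measurements.
import math

def combine(results):
    """results: list of (value, stat, syst), all in percent."""
    weights = [1.0 / (stat**2 + syst**2) for _, stat, syst in results]
    mean = sum(w * v for w, (v, _, _) in zip(weights, results)) / sum(weights)
    sigma = math.sqrt(1.0 / sum(weights))
    return mean, sigma

mean, sigma = combine([(-0.82, 0.21, 0.11),    # LHCb
                       (-0.62, 0.21, 0.11)])   # CDF (uncertainties assumed)
print(f"combined: {mean:.2f}% +- {sigma:.2f}%  (naive pull from zero: {abs(mean)/sigma:.1f} sigma)")
```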