Tag Archives: top_quarks

Quantum Field Theory, String Theory and Predictions (Part 3)

[This is the third post in a series; here’s #1 and #2.]

The quantum field theory that we use to describe the known particles and forces is called the “Standard Model”, whose structure is shown schematically in Figure 1. It involves an interweaving of three quantum field theories — one for the electromagnetic force, one for the weak nuclear force, and one for the strong nuclear force — into a single more complex quantum field theory.

Fig. 1: The three non-gravitational forces, in the colored boxes, affect different combinations of the known apparently-elementary particles. For more details see http://profmattstrassler.com/articles-and-posts/particle-physics-basics/the-known-apparently-elementary-particles/

We particle physicists are extremely fortunate that this particular quantum field theory is just one step more complicated than the very simplest quantum field theories. If this hadn’t been the case, we might still be trying to figure out how it works, and we wouldn’t be able to make detailed and precise predictions for how the known elementary particles will behave in our experiments, such as those at the Large Hadron Collider [LHC].

In order to make predictions for processes that we can measure at the LHC, using the equations of the Standard Model, we employ a method of successive approximation (with the jargon name “method of perturbations”, or “perturbation theory”). It’s a very common method in math and science, in which

  • we make an initial rough estimate,
  • and then correct the estimate,
  • and then correct the correction,
  • etc.,

until we have a prediction that is precise enough for our needs.
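
Here is a little illustrative sketch, in Python, of that successive-approximation workflow; the numbers are made up purely for illustration, not taken from any real Standard Model calculation:

```python
# Illustrative only: a successive approximation with made-up correction terms.
# We keep adding corrections until the latest one is smaller than the
# precision we actually need, then stop.

def successive_approximation(terms, needed_precision):
    """Add corrections one at a time until the last one is small enough."""
    prediction = 0.0
    for step, term in enumerate(terms):
        prediction += term
        print(f"step {step}: prediction = {prediction:.4f} (this correction: {term:+.4f})")
        if abs(term) < needed_precision:
            break
    return prediction

# A toy series: an initial rough estimate, then ever-smaller corrections.
toy_terms = [2.0, 0.31, -0.042, 0.0057, -0.00078]
successive_approximation(toy_terms, needed_precision=0.01)
# Stops after the +0.0057 correction: good enough if ~1% precision suffices.
```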

What are those needs? Well, the precision of any measurement, in any context, is always limited by having

  • a finite amount of data (so small statistical flukes are common)
  • imperfect equipment (so small mistakes are inevitable).

What we need, for each measurement, is a prediction a little more precise than the measurement will be, but not much more so. In the difficult environment of the LHC, where measurements are really hard, we often need only the first correction to the original estimate; sometimes we need the second (see Figure 2).
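
As for the first of those limitations (the finite amount of data), there is a simple rule of thumb, standard counting statistics rather than anything specific to the LHC: a measurement based on counting N events has a relative statistical uncertainty of roughly 1/√N.

```python
# Rough counting-statistics rule of thumb: relative uncertainty ~ 1/sqrt(N).
from math import sqrt

for n_events in (100, 10_000, 1_000_000):
    relative_uncertainty = 1.0 / sqrt(n_events)
    print(f"{n_events:>9} events -> roughly {100 * relative_uncertainty:.1f}% statistical uncertainty")
# 100 events: ~10%; a million events: ~0.1%.  More data, smaller flukes.
```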

Fig. 2: Top quark/anti-quark pair production rate as a function of the energy of the LHC collisions, as measured by the LHC experiments ATLAS and CMS, and compared with the prediction within the Standard Model.  The measurements are the colored points, with bars indicating their uncertainties.  The prediction is given by the colored bands — purple for the initial estimate, red after the first correction, grey after the second correction — whose widths indicate how uncertain the prediction is at each stage.  The grey band is precise enough to be useful, because its uncertainties are comparable to those of the data.  And the data and Standard Model prediction agree!

Until recently the calculations were done by starting with Feynman’s famous diagrams, but the diagrams are not as efficient as one would like, and new techniques have made them mostly obsolete for really hard calculations.

The method of successive approximation works as long as all the forces involved are rather “weak”, in a technical sense. Now this notion of “weak” is complicated enough (and important enough) that I wrote a whole article on it, so those who really want to understand this should read that article. The brief summary suitable for today is this: suppose you took two particles that are attracted to each other by a force, and allowed them to glom together, like an electron and a proton, to form an atom-like object. Then if the relative velocity of the two particles is small compared to the speed of light, the force is weak. The stronger the force, the faster the particles will move around inside their “atom”. (For more details see this article.)
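
A concrete example of this criterion, using the standard textbook estimate for hydrogen (my own illustration, not part of the linked article): the electron in a hydrogen atom moves at roughly 1/137 of the speed of light, so electromagnetism indeed counts as technically “weak”.

```python
# Hydrogen as the "atom-like object": the electron's typical speed is roughly
# alpha * c, where alpha ~ 1/137 measures the strength of electromagnetism.
# Since v/c << 1, electromagnetism is "weak" in the technical sense used here.

alpha = 1 / 137.036          # fine-structure constant
c_km_per_s = 299_792.458     # speed of light in km/s

typical_speed = alpha * c_km_per_s
print(f"typical electron speed in hydrogen: about {typical_speed:.0f} km/s")
print(f"which is only about {100 * alpha:.1f}% of the speed of light")
```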

For a weak force, the method of successive approximation is very useful, because the correction to the initial estimate is small, and the correction to the correction is smaller, and the correction to the correction to the correction is even smaller. So for a weak force, the first or second correction is usually enough; one doesn’t have to calculate forever in order to get a sufficiently precise prediction. The “stronger” the force, in this technical sense, the harder you have to work to get a precise prediction, because the corrections to your estimate are larger.

If a force is truly strong, though, everything breaks down. In that case, the correction to the estimate is as big as the estimate, and the next correction is again just as big, so no method of successive approximation will get you close to the answer. In short, for truly strong forces, you need a completely different approach if you are to make predictions.
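
A toy numerical comparison makes the distinction vivid (again, made-up numbers, not a real calculation): if each successive correction carries one more power of the force’s strength, then a strength of 0.1 gives rapidly shrinking corrections, while a strength of 1 gives corrections that never shrink at all.

```python
# Toy comparison: each successive correction carries one more power of the
# force's "strength".  For 0.1 the corrections shrink fast; for 1.0 every
# correction is as large as the original estimate, and the method fails.

def correction_sizes(strength, n_terms=5):
    return [round(strength**n, 6) for n in range(n_terms)]

print("weak force   (strength 0.1):", correction_sizes(0.1))
print("strong force (strength 1.0):", correction_sizes(1.0))
# weak:   [1.0, 0.1, 0.01, 0.001, 0.0001] -> settles down quickly
# strong: [1.0, 1.0, 1.0, 1.0, 1.0]       -> never settles down
```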

In the Standard Model, the electromagnetic force and the weak nuclear force are “weak” in all contexts. However, the strong nuclear force is (technically) “strong” for any processes that involve distances comparable to or larger than a proton’s size (about 100,000 times smaller than an atom) or energies comparable to or smaller than a proton’s mass-energy (about 1 GeV). For such processes, successive approximation does not work at all; it can’t be used to calculate a proton’s size or mass or internal structure. In fact the first step in that method would estimate that quarks and anti-quarks and gluons are free to roam independently and the proton should not exist at all… which is so obviously completely wrong that no method of correcting it will ever give the right answer. I’ll get back to how we show protons are predicted by these equations, using big computers, in a later post.
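
The reason “distances comparable to a proton’s size” and “energies around 1 GeV” are two ways of saying the same thing is the standard quantum-mechanical conversion between distance and energy, E ~ ħc/r, with ħc ≈ 0.197 GeV·fm; here is that conversion spelled out (a textbook estimate, included only for orientation):

```python
# Converting a distance scale into an energy scale: E ~ hbar * c / r.
HBAR_C_GEV_FM = 0.1973     # hbar * c, in GeV * femtometers

proton_radius_fm = 0.9     # roughly a femtometer (about 100,000 times smaller than an atom)
energy_scale_gev = HBAR_C_GEV_FM / proton_radius_fm

print(f"distance ~ {proton_radius_fm} fm  <->  energy ~ {energy_scale_gev:.2f} GeV")
# ~0.2 GeV, the same ballpark as the proton's ~1 GeV mass-energy: the two
# statements in the text ("proton-sized distances", "GeV-scale energies")
# are really one statement.
```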

But there’s a remarkable fact about the strong nuclear force. As I said, at distances the size of a proton or larger, the strong nuclear force is so strong that successive approximation doesn’t work. Yet, at distances shorter than this, the force actually becomes “weak”, in the technical sense, and successive approximation does work there.

Let me make sure this is absolutely clear, because what we think of colloquially as “weak” is different from “weak” in the sense I’m using it here. Suppose you put two quarks very close together, separated by a distance r smaller than the radius R of a proton. In Figure 3 I’ve plotted how big the strong nuclear force (purple) and the electromagnetic force (blue) would be between two quarks, as a function of the distance between them. Notice both forces are very strong (colloquially) at short distances (r << R), but (I assert) both forces are weak (technically) there. The electromagnetic force is much the weaker of the two, which is why its curve is lower in the plot.

Now if you move the two quarks apart a bit (increasing r, but still with r << R), both forces become smaller; in fact both decrease almost like 1/r², which would be your first, naive estimate, same as in your high school science class. If this naive estimate were correct, both forces would maintain the same strength (technically) at all distances r.  

But this isn’t quite right. It has been known since the 1950s that the correction to this estimate (using successive approximation methods) makes the electromagnetic force decrease just a little faster than that; it becomes a tiny bit weaker (technically) at longer distances. In the 1960s, that’s what most people thought any force described by quantum field theory would do. But they were wrong. In 1973, David Politzer, and David Gross and Frank Wilczek, showed that for the quantum field theory of quarks and gluons, the correction to the naive estimate goes the other direction; it makes the force decrease just a little more slowly than 1/r². [Gerard ’t Hooft had also calculated this, but apparently without fully recognizing its importance...?] It is the small, accumulating excess above the naive estimate — the gradual deviation of the purple curve from its naive 1/r² form — that leads us to say that this force becomes technically “stronger” and “stronger” at larger distances. Once the distance r becomes comparable to a proton’s size R, the force becomes so “strong” that successive approximation methods fail. As shown in the figure, we have some evidence that the force becomes constant for r >> R, independent of distance. It is this effect that, as we’ll see next time, is responsible for the existence of protons and neutrons, and therefore of all ordinary matter.
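
In modern language this is phrased as a distance-dependent (equivalently, energy-dependent) strength for each force. The following is a rough one-loop sketch of that behavior, using the standard leading-order formulas; the anchor values and the flavor choice are my own illustrative inputs, not a precision calculation.

```python
# A rough one-loop sketch of how the two forces' strengths "run" with distance.
# Anchor values (alpha_s ~ 0.12 near the Z mass, alpha_em ~ 1/137 at low energy)
# and the five-flavor choice are illustrative inputs, not a careful calculation.
from math import log, pi

HBAR_C = 0.1973   # GeV * fm; converts a distance r to an energy scale Q ~ hbar*c / r

def alpha_strong(r_fm, alpha_ref=0.12, q_ref_gev=91.0, n_flavors=5):
    """Strong-force strength at distance r (one-loop running, QCD)."""
    q = HBAR_C / r_fm
    b0 = 11 - 2 * n_flavors / 3
    return alpha_ref / (1 + (b0 / (2 * pi)) * alpha_ref * log(q / q_ref_gev))

def alpha_electromagnetic(r_fm, alpha_ref=1 / 137.036, q_ref_gev=0.000511):
    """Electromagnetic strength at distance r (one-loop running, electron loop only)."""
    q = HBAR_C / r_fm
    return alpha_ref / (1 - (2 / (3 * pi)) * alpha_ref * log(q / q_ref_gev))

for r in (0.001, 0.01, 0.1):   # distances in femtometers, all well inside a proton
    print(f"r = {r:5} fm:  strong ~ {alpha_strong(r):.3f},  electromagnetic ~ {alpha_electromagnetic(r):.5f}")
# The strong coupling grows noticeably as r increases, and the formula blows up
# (and stops being trustworthy) once r approaches a proton's size; the
# electromagnetic coupling barely budges, shrinking slightly at larger r.
# (The force between the quarks is roughly this strength times hbar*c / r^2.)
```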

Fig. 3: How the electromagnetic force (blue) and the strong nuclear force (purple) vary as a function of the distance r between two quarks. The horizontal axis shows r in units of the proton’s radius R; the vertical axis shows the force in units of the constant value that the strong nuclear force takes for r >> R.  Both forces are “weak” at short distances, but the strong nuclear force becomes “strong” once r is comparable to, or larger than, R.

So: at very short distances and high energies, the strong nuclear force is a somewhat “weak” force, still stronger than the electromagnetic and weak nuclear forces, but similar to them. And therefore, successive approximation can tell you what happens when a quark inside one speeding proton hits a quark in a proton speeding the other direction, as long as the quarks collide with energy far more than 1 GeV. If this weren’t true, we could make scarcely any predictions at the LHC, and at similar proton-proton and proton-antiproton colliders! (We do also need to know about the proton’s structure, but we don’t calculate that: we simply measure it in other experiments.) In particular, we would never have been able to calculate how often we should be making top quarks, as in Figure 2. And we would not have been able to calculate what the Standard Model, or any other quantum field theory, predicts for the rate at which Higgs particles are produced, so we’d never have been sure that the LHC would either find or exclude the Standard Model Higgs particle. Fortunately, it is true, and that is why precise predictions can be made, for so many processes, at the LHC. And the success of those and other predictions underlies our confidence that the Standard Model correctly describes most of what we know about particle physics.

But still, the equations of the strong nuclear force have only quarks and anti-quarks and gluons in them — no protons, neutrons, or other hadrons.  Our understanding of the real world would certainly be incomplete if we didn’t know why there are protons.  Well, it turns out that if we want to know whether protons and neutrons and other hadrons are actually predicted by the strong nuclear force’s equations, we have to test this notion using big computers. And that’s tricky, even trickier than you might guess.

Continued here

 

What is the “Strength” of a Force?

Particle physicists, cataloging the fundamental forces of nature, have named two of them the strong nuclear force and the weak nuclear force. [A force is simply any phenomenon that pushes or pulls on objects.] More generally they talk about strong and weak forces, speaking of electromagnetism as rather weak and gravity as extremely weak.  What do the words “strong” and “weak” mean here?  Don’t electric forces become strong at short distances? Isn’t gravity a pretty strong force, given that it makes it hard to lift a bar of gold?

Well, these words don’t mean what you think. Yes, the electric force between two electrons becomes stronger (in absolute terms) as you bring them closer together; the force grows as one over the square of the distance between them. Yet physicists, when speaking their own language to each other, will view this behavior as what is expected of a typical force, and so will say that “electromagnetism’s strength is unchanging with distance — and it is rather weak at all distances.”

And the strength of gravity between the Earth and a bar of gold isn’t relevant either; physicists are interested in the strength of forces between individual elementary (or at least small) particles, not between large objects containing enormous numbers of particles.
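
To put rough numbers on “rather weak” and “extremely weak” (these are the standard dimensionless combinations physicists quote, shown here only for orientation): between two electrons, electromagnetism’s strength is about 1/137, while gravity’s is about 10^-45, no matter how far apart the electrons are.

```python
# Dimensionless "strengths" of two forces acting between a pair of electrons.
G     = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2
m_e   = 9.109e-31    # electron mass, kg
hbar  = 1.055e-34    # reduced Planck constant, J s
c     = 2.998e8      # speed of light, m/s
e     = 1.602e-19    # electron charge, C
eps0  = 8.854e-12    # vacuum permittivity, F/m
pi    = 3.141592653589793

alpha_em   = e**2 / (4 * pi * eps0 * hbar * c)   # electromagnetism: ~1/137
alpha_grav = G * m_e**2 / (hbar * c)             # gravity between two electrons

print(f"electromagnetism between two electrons: about 1/{1 / alpha_em:.0f}")
print(f"gravity between two electrons:          about {alpha_grav:.1e}")
# Neither number depends on how far apart the electrons are, which is the
# sense in which physicists call these strengths "unchanging with distance".
```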

Clearly there is a language difference here… as is often the case with words in English and words in Physics-ese.  It requires translation.  So I have now written an article explaining the language of “strong” and “weak” forces used by particle physicists, describing how it works, why it is useful, and what it teaches us about the known forces: gravity, electromagnetism, the strong nuclear force, the weak nuclear force, and the (still unobserved but surely present) Higgs force. Continue reading

Higgs Symposium: A More Careful Summary

My rather hasty, breathless and inconsistent summaries (#1, #2 and #3) of last week’s talks at the excellent Higgs Symposium (held at the University of Edinburgh, as part of the new Higgs Center for Theoretical Physics) clearly had their limitations.  So I thought it might be useful to give a more organized overview, with more careful language appropriate for non-expert readers, of our current knowledge and ignorance concerning the recently discovered Higgs-like particle (which most of us do believe is a Higgs particle of some type, though not necessarily of the simplest, “Standard Model” type.)

I’m therefore writing an article that tries to put the questions about the Higgs-like particle into a sensible order, and then draws upon the talks that were given at the Symposium to provide the current best answers. About half of the article is done, and you’re welcome to read it. Due to other commitments, I probably won’t get back to finish it until next week. But “Part 1” is long enough that it will take some time for most readers to absorb anyway…

TIME for a Little Soul-Searching

Yes, it was funny, as I hope you enjoyed in my post from Saturday; but really, when we step back and look at it, something is dreadfully wrong and quite sad.  Somehow TIME magazine, fairly reputable on the whole, in the process of reporting the nomination of a particle (the Higgs Boson; here’s my FAQ about it and here’s my layperson’s explanation of why it is important) as a Person (?) of the Year, explained the nature of this particle with a disastrous paragraph of five astoundingly erroneous sentences.   Treating this as a “teaching moment” (yes, always the professor — can’t help myself) I want to go through those sentences carefully and fix them, not to string up or further embarrass the journalist but to be useful to my readers.  So that’s coming in a moment.

But first, a lament.

Who’s at fault here, and how did this happen?  There’s plenty of blame to go around; some lies with the journalist, who would have been wise to run his prose past a science journalist buddy; some lies with the editors, who didn’t do basic fact checking, even of the non-science issues; some lies with a public that (broadly) doesn’t generally care enough about science for editors to make it a priority to have accurate reporting on the subject.  But there’s a history here.  How did it happen that we ended up a technological society, relying heavily on the discoveries of modern physics and other sciences over the last century, and yet we have a public that is at once confused by, suspicious of, bored by, and unfamiliar with science?   I think a lot of the blame also lies with scientists, who collectively over generations have failed to communicate both what we do and why it’s important — and why it’s important for journalists not to misrepresent it. Continue reading

A Real Workshop

In the field of particle physics, the word “workshop” has a rather broad usage; some workshops are just conferences with a little bit of time for discussion or some other additional feature.  But some workshops are about WORK…. typically morning-til-night work.  This includes the one I just attended at the Perimeter Institute (PI) in Waterloo, Canada, which brought particle experimentalists from the CMS experiment (one of the two general-purpose experiments at the Large Hadron Collider [LHC] — the other being ATLAS) together with some particle theorists like myself.  In fact, it was one of the most productive workshops I’ve ever participated in.

The workshop was organized by the PI’s young theoretical particle physics professors, Philip Schuster and Natalia Toro, along with CMS’s current spokesman Joseph Incandela and physics coordinator Greg Landsberg. (Incandela, professor at the University of California at Santa Barbara, is now famous for giving CMS’s talk July 4th announcing the observation of a Higgs-like particle; ATLAS’s talk was given by Fabiola Gianotti. Landsberg is a senior professor at Brown University.) Other participants included many of the current “conveners” from CMS — typically very experienced and skilled people who’ve been selected to help supervise segments of the research program — and a couple of dozen LHC theorists, mostly under the age of 40, who are experienced in communicating with LHC experimenters about their measurements.  Continue reading

Is Supersymmetry Ruled Out Yet?

[A Heads Up: I’m giving a public lecture about the LHC on Saturday, April 28th, 1 p.m. New York time/10 a.m. Pacific, through the MICA Popular Talks series, held online at the Large Auditorium on StellaNova, Second Life; should you miss it, both audio and slides will be posted for you to look at later.]

Is supersymmetry, as a symmetry that might explain some of the puzzling aspects of particle physics at the energy scales accessible to the Large Hadron Collider [LHC], ruled out yet? If the only thing you’re interested in is the answer to precisely that question, let me not waste your time: the answer is “not yet”. But a more interesting answer is that many simple variants of supersymmetry are either ruled out or near death.

Still, the problem with supersymmetry — and indeed with any really good idea, such as extra dimensions, or a composite Higgs particle — is that such a basic idea typically can be realized in many different ways. Pizza is a great idea too, but there are a million ways to make one, so you can’t conclude that nobody makes pizza in town just because you can’t smell tomatoes. Similarly, to rule out supersymmetry as an idea, you can’t be satisfied by ruling out the most popular forms of supersymmetry that theorists have invented; you have to rule out all its possible variants. This will take a while, probably a decade.

That said, many of the simplest and popular variants of supersymmetry no longer work very well or at all. This is because of two things: (click here to read the rest of the article.)

Professor Peskin’s Four Slogans: Advice for the 2012 LHC

On Monday, during the concluding session of the SEARCH Workshop on Large Hadron Collider [LHC] physics (see also here for a second post), and at the start of the panel discussion involving a group of six theorists, Michael Peskin, professor of theoretical particle physics at the Stanford Linear Accelerator Center [and my Ph.D. advisor] opened the panel with a few powerpoint slides.  He entitled them: “My Advice in Four Slogans” — the advice in question being aimed at experimentalists at ATLAS and CMS (the two general-purpose experiments at the LHC) as to how they ought best to search for new phenomena at the LHC in 2012, and beyond. Since I agree strongly with his points (as I believe most LHC theory experts do), I thought I’d tell you those four slogans and explain what they mean, at least to me. [I'm told the panel discussion will be posted online soon.]

1. No Boson Left Behind

There is a tendency in the LHC experimental community to assume that the new particles that we are looking for are heavy — heavier than any we’ve ever produced before. However, it is equally possible that there are unknown particles that are rather lightweight, but have evaded detection because they interact very weakly with the particles that we already know about, and in particular very weakly with the quarks and antiquarks and gluons that make up the proton.

Peskin’s advice is thus a warning: don’t just rush ahead to look for the heavy particles; remember the lightweight but hard-to-find particles you may have missed.

The word “boson” here is a minor point, I think. All particles are either fermions or bosons; I’d personally say that Peskin’s slogan applies to certain fermions too.

2. Exclude Triangles Not Points

The meaning of this slogan is less obscure than the slogan itself. Its general message is this: if one is looking for signs of a new hypothetical particle which

  • is produced mostly or always in particle-antiparticle pairs, and
  • can decay in multiple ways,

one has to remember to search for collisions where the particle decays one way and the antiparticle decays a different way; the probability for this to occur can be high.  Most LHC searches have so far been aimed at those cases where both particle and anti-particle decay in the same way.  This approach can in some cases be quite inefficient.   In fact, to search efficiently, one must combine all the different search strategies.

Now what does this have to do with triangles and points?  If you’d like to know, jump to the very end of this post, where I explain the example that motivated this wording of the slogan.  For those not interested in those technical details, let’s go to the next slogan.

3. Higgs Implies Higgs in BSM

[The Standard Model is the set of equations used to predict the behavior of all the known particles and forces, along with the simplest possible type of Higgs particle (the Standard Model Higgs.) Any other phenomenon is by definition Beyond the Standard Model: BSM.]

 [And yes, one may think of the LHC as a machine for converting theorists' B(SM) speculations into (BS)M speculations.]

One of the main goals of the LHC is to find evidence of one or more types of Higgs particles that may be found in nature.  There are two main phases to this search, Phase 1 being the search for the “Standard Model Higgs”, and Phase 2 depending on the result of Phase 1.  You can read more about this here.

Peskin’s point is that the Higgs particle may itself be a beacon, signalling new phenomena not predicted by the Standard Model. It is common in many BSM theories that there are new ways of producing the Higgs particle, typically in decays of as-yet-unknown heavy particles. Some of the resulting phenomena may be quite easy to discover, if one simply remembers to look!

Think what a coup it would be to discover not only the Higgs particle but also an unexpected way of making it! Two Nobel prize-winning discoveries for the price of one!!

Another equally important way to read this slogan (and I’m not sure why Peskin didn’t mention it — maybe it was too obvious, and indeed every panel member said something about this during the following discussion) is that everything about the Higgs particle needs to be studied in very great detail. Most BSM theories predict that the Higgs particle will behave differently from what is predicted in the Standard Model, possibly in subtle ways, possibly in dramatic ways. Either its production mechanisms or its decay rates, or both, may easily be altered. So we should not assume that a Higgs particle that looks at first like a Standard Model Higgs actually is a Standard Model Higgs. (I’ve written about this here, here and here.)  Even a particle that looks very much like a Standard Model Higgs may offer, through precise measurements, the first opportunity to dethrone the Standard Model.

4. BSM Hides Beneath Top

At the Tevatron, the LHC’s predecessor, top quark/anti-quark pairs were first produced and the top quark discovered, but such pairs were rather rare there. The LHC, by contrast, has so much energy per collision that it has no trouble producing these particles. ATLAS and CMS have each witnessed about 800,000 top quark/anti-quark pairs so far.

Of course, this is great news, because the huge amount of LHC data on top quarks from 2011 allowed measurements of the top quark’s properties that are far more precise than we had previously. (I wrote about this here.) But there’s a drawback. Certain types of new phenomena that might be present in nature may be very hard to recognize, because the rare collisions that contain them look too similar to the common collisions that contain a top quark/anti-quark pair.

Peskin’s message is that the LHC experimenters need to do very precise measurements of all the data from collisions that appear to contain the debris from top quarks, just in case it’s a little bit different from what the Standard Model predicts.

A classic example of this problem involves the search for a supersymmetric partner of a top quark, the “top squark”. Unlike the t’ quark that I described a couple of slogans back, which would be produced with a fairly high rate and would be relatively easy to notice, top squarks would be produced with a rate that is several times smaller. [Technically, this has to do with the fact that the t' would have spin-1/2 and the top squark would have spin 0.] Unfortunately, if the mass of the top squark is not very different from the mass of the top quark, then collisions that produce top squarks may look very similar indeed to ones that produce top quarks, and it may be a big struggle to separate them in the data. The only way to do it is to work hard — to make very precise measurements and perhaps better calculations that can allow one to tell the subtle differences between a pile of data that contains both top quark/anti-quark pairs and top squark/anti-squark pairs, and a pile of data that contains no squarks at all.

Following up on slogan #2: An example with a triangle.

Ok, now let’s see why the second slogan has something to do with triangles.

One type of particle that has been widely hypothesized over the years is a heavy version of the top quark, often given the unimaginative name of “top-prime.” For short, top is written t, so top-prime is written t’. The t’ may decay in various possible ways. I won’t list all of them, but three important ones that show up in many speculative theories are

  • t’ → W particle + bottom quark   (t’ → Wb)
  • t’ → Z particle + top quark      (t’ → Zt)
  • t’ → Higgs particle + top quark    (t’ → ht)

But we don’t know how often t’ quarks decay to Wb, or to Zt, or to ht; that’s something we’ll have to measure. [Let’s call the probability that a t’ decays to Wb “P1”, and similarly define P2 and P3 for Zt and ht].

Of course we have to look for the darn thing first; maybe there is no t’. Unfortunately, how we should look for it depends on P1, P2, and P3, which we don’t know. For instance, if P1 is much larger than P2 and P3, then we should look for collisions that show signs of producing a t’ quark and a t̄’ antiquark, decaying as t’ → W+ b and t̄’ → W- b̄. Or if P2 is much larger than P1 and P3, we should look for t’ → Z t and t̄’ → Z t̄.

Peskin’s triangle for a t’ quark; at each vertex the probability for the decay labeling the vertex is 100%, while at dead center all three decays are equally probable. One must search in a way that is sensitive to all the possibilities.

Peskin has drawn this problem of three unknown probabilities, whose sum is 1, as a triangle.  The three vertices of the triangle, labeled by Wb, Zt and ht, represent three extreme cases: P1=1 and P2=P3=0; P2=1 and P1=P3=0; and P3=1, P1=P2=0. Each point inside this triangle represents different possible non-zero values for P1, P2 and P3 (with P1+P2+P3 assumed to be 1.)  The center of the triangle is P1=P2=P3=1/3.

Peskin’s point is that if the experiments only look for collisions where both quark and antiquark decay in the same way

  • t’ → W+ b and t̄’ → W- b̄;
  • t’ → Z t and t̄’ → Z t̄;
  • t’ → h t and t̄’ → h t̄;

which is what they’ve done so far, then they’ll only be sensitive to the cases for which P1 is by far the largest, P2 is by far the largest, or P3 is by far the largest — the regions near the vertices of the triangle.  But we know a number of very reasonable theories with P1=1/2 and P2=P3=1/4 — a point deep inside the triangle.  So the experimenters are not yet looking efficiently for this case.  Peskin is saying that to cover the whole triangle, one has to add three more searches, for

  • t’ → W+ b and t̄’ → Z t̄, or t’ → Z t and t̄’ → W- b̄;
  • t’ → W+ b and t̄’ → h t̄, or t’ → h t and t̄’ → W- b̄;
  • t’ → Z t and t̄’ → h t̄, or t’ → h t and t̄’ → Z t̄;

so as to cover that case (and more generally, the whole triangle) efficiently. Moreover, no one search is very effective; one has to combine all six searches together.

His logic is quite general.  If you have a particle that decays in four different ways, the same logic applies but for a tetrahedron, and you need ten searches; if two different ways, it’s a line segment, and you need three searches.
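
Here is a small sketch of the bookkeeping behind the triangle, using the P1 = 1/2, P2 = P3 = 1/4 example mentioned above (my own illustrative code, not anything from Peskin’s slides): it lists how often each combination of decays occurs, and how many distinct searches a particle with a given number of decay modes requires.

```python
# Bookkeeping for pair production: given the decay probabilities of a particle
# (and hence of its antiparticle), how often does each combination of decays
# occur, and how many distinct searches does that imply?
from itertools import combinations_with_replacement

def channel_fractions(probs):
    """probs maps decay mode -> probability (summing to 1); returns the
    fraction of pair-production events in each unordered pair of modes."""
    fractions = {}
    for mode1, mode2 in combinations_with_replacement(sorted(probs), 2):
        p = probs[mode1] * probs[mode2]
        if mode1 != mode2:
            p *= 2   # mixed decays happen two ways (particle/antiparticle swapped)
        fractions[(mode1, mode2)] = p
    return fractions

# The example from the text: P(Wb) = 1/2, P(Zt) = P(ht) = 1/4.
for channel, fraction in channel_fractions({"Wb": 0.5, "Zt": 0.25, "ht": 0.25}).items():
    print(f"{channel[0]} + {channel[1]}: {fraction:.4f} of events")
# The three same-mode channels cover only 0.375 of the events; the three
# mixed channels cover the remaining 0.625, which is why ignoring them
# is so inefficient in this case.

def searches_needed(n_modes):
    """Same-mode searches plus mixed-mode searches: n + n*(n-1)/2."""
    return n_modes + n_modes * (n_modes - 1) // 2

print([searches_needed(n) for n in (2, 3, 4)])   # [3, 6, 10]: segment, triangle, tetrahedron
```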

News from La Thuile, with Much More to Come

At various conferences in the late fall, the Large Hadron Collider [LHC] experiments ATLAS and CMS showed us many measurements that they made using data they took in spring and summer of 2011. But during the fall their data sets increased in size by a factor of two and a half!  So far this year the only results we’d seen that involved the 2011 full data set had been ones needed in the search for the Higgs particle. Last week, that started to change.

The spring flood is just beginning. Many new experimental results from the LHC were announced at La Thuile this past week, some only using part of the 2011 data but a few using all of it, and more and more will be coming every day for the next couple of weeks. And there are also new results coming from the (now-closed) Tevatron experiments CDF and DZero, which are completing many analyses that use their full data set. In particular, we’re expecting them to report on their best crack at the Higgs particle later this week. They can only hope to create controversy; they certainly won’t be able to settle the issue as to whether there is or isn’t a Higgs particle with a mass of about 125 GeV/c2, as hints from ATLAS and CMS seem to indicate.  But all indications are that it will be an interesting week on the Higgs front.

The Top Quark Checks In

Fig. 1: In the Higgs mechanism, the W particle gets its mass from the non-zero average value of the Higgs field. A precise test of this idea arises as follows. When the top quark decays to a bottom quark and a W particle, and the W then decays to an anti-neutrino and an electron or muon, the probability that the electron or muon travels in a particular direction can be predicted assuming the Higgs mechanism. The data above shows excellent agreement between theory and experiment, validating the notion of the Higgs field.

There are now many new measurements of the properties of the top quark, poking and prodding it from all sides (figuratively)  to see if it behaves as expected within the “Standard Model of particle physics” [the equations that we use to describe all of the known particles and forces of nature.] And so far, disappointingly for those of us hoping for clues as to why the top quark is so much heavier than the other quarks, there’s no sign of anything amiss with those equations. Top quarks and anti-quarks are produced in pairs more or less as expected, with the expected rate, and moving in the expected directions with the expected amount of energy. Top quark decay to a W particle and a bottom quark also agrees, in detail, with theoretical expectation.  Specifically (see Figure 1) the orientation of the W’s intrinsic angular momentum (called its “spin”, technically), a key test of the Standard Model in general and of the Higgs mechanism in particular, agrees very well with theoretical predictions.  Meanwhile there’s no sign that there are unexpected ways of producing top quarks, nor any sign of particles that are heavy cousins of the top quark.

One particularly striking result from CMS relates to the unexpectedly large asymmetry in the production of top quarks observed at the Tevatron experiments, which I’ve previously written about in detail. The number of top quarks produced moving roughly in the same direction as the proton beam is expected theoretically to be only very slightly larger than the number moving roughly in the same direction as the anti-proton beam, but instead both CDF and DZero observe a much larger effect. This significant apparent discrepancy between their measurement and the prediction of the Standard Model has generated lots of interest and hope that perhaps we are seeing a crack in the Standard Model’s equations.

Well, it isn’t so easy for CMS and ATLAS to make the same measurement, because the LHC has two proton beams, so it is symmetric front-to-back, unlike the Tevatron with its proton beam and anti-proton beam.   But still, there are other related asymmetries that LHC experiments can measure. And CMS has now looked with its full 2011 data set, and observes… nothing: for a particular charge asymmetry that they can measure, they find an asymmetry of 0.4% ± 1.0% ± 1.2% (the first number is the best estimate and the latter two numbers are the statistical and systematic uncertainties on that estimate).  The Standard Model predicts something of order a percent or so, while many attempts to explain the Tevatron result might have predicted an effect of several percent.  (ATLAS has presented a similar measurement but only using part of the 2011 data set, so it has much larger uncertainties at present.)
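
For readers wondering how to compare that number with the Standard Model’s percent-level expectation: the usual convention (standard practice, not specific to CMS) is to combine the statistical and systematic uncertainties in quadrature.

```python
# Combining statistical and systematic uncertainties in quadrature (standard practice).
from math import sqrt

central, stat, syst = 0.4, 1.0, 1.2    # all in percent, as quoted above
total = sqrt(stat**2 + syst**2)
print(f"measured asymmetry: {central}% +/- {total:.1f}%")
# About 0.4% +/- 1.6%: comfortably consistent with a Standard-Model-sized effect
# of order a percent, and leaving little room for the several-percent effects
# that some explanations of the Tevatron anomaly predicted.
```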

Now CMS is not measuring quite the same thing as CDF and DZero, so the CMS result is not in direct conflict with the Tevatron measurements. But if new phenomena were present that were causing the CDF and DZero’s anomalously large asymmetry, we’d expect that by now they’d be starting to show up, at least a little bit, in this CMS measurement.  The fact that CMS sees not a hint of anything unexpected considerably weakens the overall case that the Tevatron excess asymmetry might have an exciting explanation. It suggests rather that the whole effect is really a problem with the interpretation of the Tevatron measurements themselves, or with the ways that the equations of the Standard Model are used to predict them. That is of course disappointing, but it is still far too early to declare the case closed.

There’s also a subtle connection here with the recent bolstering by CDF of the LHCb experiment’s claim that CP violation is present in the decays of particles called “D mesons”. (D mesons are hadrons containing a charm quark [or anti-quark], an up or down anti-quark [or quark], and [as for all hadrons] lots of additional gluons and quark/anti-quark pairs.) The problem is that theorists, who used to be quite sure that any such CP violation in D mesons would indicate the presence of new phenomena not predicted by the Standard Model, are no longer so sure. So one needs corroborating information from somewhere, showing some other related phenomenon, before getting too excited.

One place that such information might have come from is the top quark.  If there is something surprising in charm quarks (but not in bottom quarks) one might easily imagine that perhaps there is something new affecting all up-type quarks (the up quark, charm quark and top quark) more than the down-type quarks (down, strange and bottom.)  [Read here about the known elementary particles and how they are organized.] In other words, if the charm quark is different from expectations and the bottom quark is not, it would seem quite reasonable that the top quark would be even more different from expectations. But  unfortunately, the results from this week suggest the top quark, to the level of precision that can currently be mustered, is behaving very much as the Standard Model predicted it would.

Meanwhile Nothing Else Checks In

Meanwhile, in the direct search for new particles not predicted by the Standard Model, there were a number of new results from CMS and ATLAS at La Thuile. The talks on these subjects went flying by; there was far too little information presented to allow understanding of any details, and so without fully studying the corresponding papers I can’t say anything more intelligent yet than that they didn’t see anything amiss. But of course, as I’ve suggested many times, searches of this type wouldn’t be shown so soon after the data was collected if they indicated any discrepancy with theoretical prediction, unless the discrepancy was spectacularly convincing. More likely, they would be delayed a few weeks or even months, while they were double- and triple-checked, and perhaps even held back for more data to be collected to clarify the situation. So we are left with the question as to which of the other measurements that weren’t shown are appearing later because, well, some things take longer than others, and which ones (if any) are being actively held back because they are more … interesting. At this preliminary stage in the conference season it’s too early to start that guessing game.

Fig. 2: The search for a heavy particle that, like a Z particle, can decay to an electron/positron pair or a muon/anti-muon pair now excludes such particles to well over 1.5 TeV/c2. The Z particle itself is the bump at 90 GeV; any new particle would appear as a bump elsewhere in the plot. But above the Z mass, the data (black dots) show a smooth curve with no significant bumps.

So here’s a few words about what ATLAS and CMS didn’t see. Several classic searches for supersymmetry and other theories that resemble it (in that they show signs of invisible particles, jets from high-energy quarks and gluons, and something rare like a lepton or two or a photon), were updated by CMS for the full or near-full data set. Searches for heavy versions of the top and bottom quark were shown by ATLAS and CMS. ATLAS sought heavy versions of the Z particle (see Figure 2) that decay to a high energy electron/positron pair or muon/anti-muon pair; with their full 2011 data set, they now exclude particles of this type up to masses (depending on the precise details of the particle) of 1.75-1.95 TeV/c2. Meanwhile CMS looked for heavy versions of the W particle that can decay to an electron or muon and something invisible; the exclusions reach out above 2.5 TeV/c2. Other CMS searches using the full data set included ones seeking new particles decaying to two Z particles, or to a W and a Z.   ATLAS looked for a variety of exotic particles, and CMS looked for events that are very energetic and produce many known particles at once.  Most of these searches were actually ones we’d seen before, just updated with more data, but a few of them were entirely new.

Two CMS searches worth noting involved looking for new undetectable particles recoiling against a single jet or a single photon. These put very interesting constraints on dark matter that are complementary to the searches that have been going on elsewhere, deep underground.  Using vats of liquid xenon or bubble chambers or solid-state devices, physicists have been looking for the very rare process in which a dark matter particle, one among the vast ocean of dark matter particles in which our galaxy is immersed, bumps into an atomic nucleus inside a detector and makes a tiny little signal for physicists to detect. Remarkable and successful as their search techniques are, there are two obvious contexts in which they work very poorly. If dark matter particles are very lightweight, much lighter than a few GeV/c2, the effect of one hitting a nucleus becomes very hard to detect. Or if the nature of the interaction of dark matter with ordinary matter is such that it depends on the spin (the intrinsic angular momentum) of a nucleus rather than on how many protons and neutrons the nucleus contains, then the probability of a collision becomes much, much lower. But in either case, as long as dark matter is affected by the weak nuclear force, the LHC can produce dark matter particles, and though ATLAS and CMS can’t detect them, they can detect particles that might sometimes recoil against them, such as a photon or a jet. So CMS was quite proud to show that their results are complementary to those other classes of experiments.

Fig. 3: Limits on dark matter candidates that feel the weak nuclear force and can interact with ordinary matter. The horizontal axis gives the dark matter particle's mass, the vertical axis its probability to hit a proton or neutron. The region above each curve is excluded. All curves shown other than those marked "CMS" are from underground experiments searching for dark matter particles hitting an atomic nucleus. CMS searches for a jet or a photon recoiling against something undetectable provide (left) the best limits on "spin-independent" interactions for masses below 3.5 GeV/c2, and (right) the best limits on "spin-dependent" interactions for all masses up to a TeV/c2.

Finally, I made a moderately big deal back in October about a small excess in multi-leptons (collisions that produce three or more electrons, muons, positrons [anti-electrons] or antimuons, which are a good place to look for new phenomena), though I warned you in bold red letters that most small excesses go away with more data. A snippet of an update was shown at La Thuile, and from what I said earlier about results that appear early in the conference season, you know that’s bad news. Suffice it to say that although discrepancies with theoretical predictions remain, the ones seen in October apparently haven’t become more striking. The caveat that most small excesses go away applies, so far, to this data set as well. We’ll keep watching.

Fig. 4: The updated multilepton search at CMS shows (black solid curve) a two standard deviation excess compared to expectations (black dotted curve) in at least some regimes in the plane of the gluino mass (vertical axis) versus the chargino mass (horizontal axis) in a particular class of models. But had last fall's excess been a sign of new physics, the current excess would presumably have been larger.

Stay tuned for much more in the coming weeks!