
A Black Day (and a Happy One) In Scientific History

Wow.

Twenty years ago, astronomers Heino Falcke, Fulvio Melia and Eric Agol (a former colleague of mine at the University of Washington) pointed out that the black hole at the center of our galaxy, the Milky Way, was probably big enough to be observed — not with an ordinary camera using visible light, but with radio waves and the clever techniques known as “interferometry”.  Soon it was pointed out that the black hole in M87, farther away but larger, could also be observed.  [How? I explained this yesterday in this post.]

And today, an image of the latter, looking quite similar to what we expected, was presented to humanity.  Just as with the discovery of the Higgs boson, and with LIGO’s first discovery of gravitational waves, nature, captured by the hard work of an international group of many scientists, gives us something definitive, uncontroversial, and spectacularly in line with expectations.


An image of the dead center of the huge galaxy M87, showing a glowing ring of radio waves from a disk of rapidly rotating gas, and the dark quasi-silhouette of a solar-system-sized black hole.  Congratulations to the Event Horizon Telescope team!

I’ll have more to say about this later [have to do non-physics work today 😦 ] and in particular about the frustration of not finding any helpful big surprises during this great decade of fundamental science — but for now, let’s just enjoy this incredible image for what it is, and congratulate those who proposed this effort and those who carried it out.


LHCb experiment finds another case of CP violation in nature

The LHCb experiment at the Large Hadron Collider is dedicated mainly to the study of mesons [objects made from a quark of one type, an anti-quark of another type, plus many other particles] that contain bottom quarks (hence the “b” in the name).  But it can also be used to study many other things, including mesons containing charm quarks.

By examining large numbers of mesons that contain a charm quark and an up anti-quark (or a charm anti-quark and an up quark) and studying carefully how they decay, the LHCb experimenters have discovered a new example of violation of the combined transformation known as CP (C: exchange of particles with anti-particles; P: reflection of the world in a mirror), of the sort previously seen in mesons containing strange quarks and in mesons containing bottom quarks.  Here’s the press release.

Congratulations to LHCb!  This important addition to our basic knowledge is consistent with expectations; CP violation of roughly this size is predicted by the formulas that make up the Standard Model of Particle Physics.  However, our predictions are very rough in this context; it is sometimes difficult to make accurate calculations when the strong nuclear force, which holds mesons (as well as protons and neutrons) together, is involved.  So this is a real coup for LHCb, but not a game-changer for particle physics.  Perhaps, sometime in the future, theorists will learn how to make predictions as precise as LHCb’s measurement!

The Importance and Challenges of “Open Data” at the Large Hadron Collider

A little while back I wrote a short post about some research that some colleagues and I did using “open data” from the Large Hadron Collider [LHC]. We used data made public by the CMS experimental collaboration — about 1% of their current data — to search for a new particle, using a couple of twists (as proposed over 10 years ago) on a standard technique.  (CMS is one of the two general-purpose particle detectors at the LHC; the other is called ATLAS.)  We had two motivations: (1) even if we didn’t find a new particle, we wanted to prove that our search method was effective; and (2) we wanted to stress-test the CMS Open Data framework, to ensure that it really does provide all the information needed for a search for something unknown.
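
For readers who like nuts and bolts: the standard technique in question is a “bump hunt”. One computes an invariant mass for each collision and looks for a localized excess above a smooth background. Below is a minimal sketch in Python; the two-muon final state and the randomly generated arrays are stand-ins chosen for illustration, not the actual CMS Open Data or our actual analysis.

```python
import numpy as np

# Hypothetical stand-ins for per-event muon kinematics (pt in GeV, eta, phi);
# a real analysis would read these from CMS Open Data files.
rng = np.random.default_rng(0)
n = 100_000
pt1, pt2 = rng.exponential(20, n) + 5, rng.exponential(15, n) + 5
eta1, eta2 = rng.uniform(-2.4, 2.4, n), rng.uniform(-2.4, 2.4, n)
phi1, phi2 = rng.uniform(-np.pi, np.pi, n), rng.uniform(-np.pi, np.pi, n)

def dimuon_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass for two (approximately massless) muons:
    m^2 = 2 pt1 pt2 (cosh(eta1 - eta2) - cos(phi1 - phi2))."""
    return np.sqrt(2 * pt1 * pt2 * (np.cosh(eta1 - eta2) - np.cos(phi1 - phi2)))

m = dimuon_mass(pt1, eta1, phi1, pt2, eta2, phi2)

# Histogram the spectrum and scan for a localized excess over the average of
# nearby bins -- a crude stand-in for a real background fit and statistics.
counts, edges = np.histogram(m, bins=np.arange(10, 70, 0.5))
for i in range(2, len(counts) - 2):
    background = (counts[i - 2] + counts[i + 2]) / 2
    if background > 0 and (counts[i] - background) / np.sqrt(background) > 4:
        print(f"possible bump near {edges[i]:.1f} GeV")
```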

Recently I discussed (1), and today I want to address (2): to convey why open data from the LHC is useful but controversial, and why we felt it was important, as theoretical physicists (i.e. people who perform particle physics calculations, but do not build and run the actual experiments), to do something with it that is usually the purview of experimenters.

The Importance of Archiving Data

In many subfields of physics and astronomy, data from experiments is made public as a matter of routine. Usually this occurs after a substantial delay, to allow the experimenters who collected the data to analyze it first for major discoveries. That’s as it should be: the experimenters spent years of their lives proposing, building and testing the experiment, and they deserve an uninterrupted opportunity to investigate its data. To force them to release data immediately would create a terrible disincentive for anyone to do all the hard work!

Data from particle physics colliders, however, has not historically been made public. More worrying, it has rarely been archived in a form that is easy for others to use at a later date. I’m not the right person to tell you the history of this situation, but I can give you a sense for why this still happens today.

A Broad Search for Fast Hidden Particles

A few days ago I wrote a quick summary of a project that we just completed (and you may find it helpful to read that post first). In this project, we looked for new particles at the Large Hadron Collider (LHC) in a novel way, in two senses. Today I’m going to explain what we did, why we did it, and what was unconventional about our search strategy.

The first half of this post will be appropriate for any reader who has been following particle physics as a spectator sport, or in some similar vein. In the second half, I’ll add some comments for my expert colleagues that may be useful in understanding and appreciating some of our results.  [If you just want to read the comments for experts, jump here.]

Why did we do this?

Motivation first. Why, as theorists, would we attempt to take on the role of our experimental colleagues — to try on our own to analyze the extremely complex and challenging data from the LHC? We’re by no means experts in data analysis, and we were very slow at it. And on top of that, we only had access to 1% of the data that CMS has collected. Isn’t it obvious that there is no chance whatsoever of finding something new with just 1% of the data, since the experimenters have had years to look through much larger data sets?

Lights in the Sky (maybe…)

The Sun is busy this summer. The upcoming eclipse on August 21 will turn day into deep twilight and transfix millions across the United States.  But before we get there, we may, if we’re lucky, see darkness transformed into color and light.

On Friday July 14th, a giant sunspot in our Sun’s upper regions, easily visible if you project the Sun’s image onto a wall, generated a powerful flare.  A solar flare is a sort of magnetically powered explosion; it produces powerful electromagnetic waves and often, as in this case, blows a large quantity of subatomic particles from the Sun’s corona. The latter is called a “coronal mass ejection.” It appears that the cloud of particles from Friday’s flare is large, and headed more or less straight for the Earth.

Light, visible and otherwise, is an electromagnetic wave, and so the electromagnetic waves generated in the flare — mostly ultraviolet light and X-rays — travel through space at the speed of light, arriving at the Earth in eight and a half minutes. They cause effects in the Earth’s upper atmosphere that can disrupt radio communications, or worse.  That’s another story.

But the cloud of subatomic particles from the coronal mass ejection travels a few hundred times slower than light, taking about two or three days to reach the Earth.  The wait is on.
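
If you want to check the arithmetic yourself, it’s just distance over speed; here’s a quick sketch (the speeds are illustrative round numbers, not NOAA’s measurement of this particular cloud):

```python
# Rough travel times from Sun to Earth (illustrative numbers, not a forecast).
AU_KM = 1.496e8    # Sun-Earth distance in km
C_KM_S = 3.0e5     # speed of light in km/s

light_minutes = AU_KM / C_KM_S / 60
print(f"light / X-rays: {light_minutes:.1f} minutes")   # ~8.3 minutes

for v in (500, 1000, 1500):  # plausible coronal mass ejection speeds, km/s
    days = AU_KM / v / 86400
    print(f"CME at {v:4d} km/s: {days:.1f} days")        # ~1.2 to 3.5 days
```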

Bottom line: a huge number of high-energy subatomic particles may arrive in the next 24 to 48 hours. If and when they do, the electrically charged particles among them will be trapped in, and shepherded by, the Earth’s magnetic field, which will drive them spiraling into the atmosphere close to the Earth’s polar regions. And when they hit the atmosphere, they’ll strike atoms of nitrogen and oxygen, which in turn will glow. Aurora Borealis, Northern Lights.

So if you live in the upper northern hemisphere (Europe, Canada, and much of the United States), keep your eyes turned to the north over the next couple of nights; if you’re in Australia or southern South America, look to the south instead. Dark skies may be crucial; the glow may be very faint.

You can also keep abreast of the situation, as I will, using NOAA data, available for instance at

http://www.swpc.noaa.gov/communities/space-weather-enthusiasts

The plot on the upper left of that website, an example of which is reproduced below, shows three types of data. The top graph shows the amount of X-rays impacting the atmosphere; the big jump on the 14th is Friday’s flare. And if and when the Earth’s magnetic field goes nuts and auroras begin, the bottom plot will show the so-called “Kp Index” climbing to 5, 6, or hopefully 7 or 8. When the index gets that high, there’s a much greater chance of seeing auroras much further away from the poles than usual.

[Image: NOAA’s space weather overview plot.]

Keep an eye also on the data from the ACE satellite, lower down on the website; it’s placed to give Earth an early warning, so when its data gets busy, you’ll know the cloud of particles is not far away.
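
If you prefer numbers to plots, NOAA also publishes machine-readable feeds. The little sketch below assumes a JSON endpoint for the planetary Kp index on NOAA’s services site; the URL and its row format are my assumption, so check the site above if it has moved:

```python
import json
import urllib.request

# Assumed NOAA SWPC feed for the planetary Kp index; the URL and its
# list-of-rows format (header row first) may change -- verify on swpc.noaa.gov.
URL = "https://services.swpc.noaa.gov/products/noaa-planetary-k-index.json"

with urllib.request.urlopen(URL) as resp:
    rows = json.load(resp)

header, data = rows[0], rows[1:]
time_tag, kp = data[-1][0], float(data[-1][1])
print(f"{time_tag}: Kp = {kp}")
if kp >= 5:
    print("Geomagnetic storm levels -- auroras may be visible well away from the poles.")
```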

Wishing you all a great sky show!

What’s all this fuss about having alternatives?

I don’t know what all the fuss is about “alternative facts.” Why, we scientists use them all the time!

For example, because of my political views, I teach physics students that gravity pulls down. That’s why the students I teach, when they go on to be engineers, put wheels on the bottom corners of cars, so that the cars don’t scrape on the ground. But in some countries, the physicists teach them that gravity pulls whichever way the country’s leaders instruct it to. That’s why their engineers build flying carpets as transports for their country’s troops. It’s a much more effective way to bring an army into battle, if your politics allows it.  We ought to consider it here.

Another example: in my physics class I claim that energy is “conserved” (in the physics sense) — it is never created out of nothing, nor is it ever destroyed. In our daily lives, energy is taken in with food, converted into special biochemicals for storage, and then used to keep us warm, maintain the pumping of our hearts, allow us to think, walk, breathe — everything we do. Those are my facts. But in some countries, the facts and laws are different, and energy can be created from nothing. The citizens of those countries never need to eat; it is a wonderful thing to be freed from this requirement. It’s great for their military, too, to not have to supply food for troops, or fuel for tanks and airplanes and ships. Our only protection against invasion from these countries is that if they crossed our borders they’d suddenly need fuel tanks.

Facts are what you make them; it’s entirely up to you. You need a good, well-thought-out system of facts, of course; otherwise they won’t produce the answers that you want. But first figure out what you want to be true, and then go out and find the facts that make it true. That’s the way science has always been done, and the best scientists all insist upon this strategy.  As a simple illustration, compare the photos below.  Which picture has more people in it?  Obviously, the answer depends on what facts you’ve chosen to use.  [Picture copyright Reuters]  If you can’t understand that, you’re not ready to be a serious scientist!

A third example: when I teach physics to students, I instill in them the notion that quantum mechanics controls the atomic world, and underlies the transistors in every computer and every cell phone. But the uncertainty principle that arises in quantum mechanics just isn’t acceptable in some countries, so they don’t factualize it. They don’t use seditious and immoral computer chips there; instead they use proper vacuum tubes. One curious result is that their computers are the size of buildings. The CDC advises you not to travel to these countries, and certainly not to take electronics with you. Not only might your cell phone explode when it gets there, you yourself might too, since your own molecules are held together with quantum mechanical glue. At least you should bring a good-sized bottle of our local facts with you on your travels, and take a good handful before bedtime.

Hearing all the naive cries that facts aren’t for the choosing, I became curious about what our schools are teaching young people. So I asked a friend’s son, a bright young kid in fourth grade, what he’d been learning about alternatives and science. Do you know what he answered?!  I was shocked. “Alternative facts?”, he said. “You mean lies?” Sheesh. Kids these days… What are we teaching them? It’s a good thing we’ll soon have a new secretary of education.

An Interesting Result from CMS, and its Implications

UPDATE 10/26: In the original version of this post, I stupidly forgot to include an effect, causing an error of a factor of about 5 in one of my estimates below. I had originally suggested that a recent result using ALEPH data was probably more powerful than a recent CMS result.  But once the error is corrected, the two experiments appear to have comparable sensitivity. However, I was very conservative in my analysis of ALEPH, and my guess concerning CMS has a big uncertainty band — so it might go either way.  It’s up to ALEPH experts and CMS experts to show us who really wins the day.  Added reasoning and discussion marked in green below.

In Friday’s post, I highlighted the importance of looking for low-mass particles whose interactions with known particles are very weak. I referred to a recent preprint in which an experimental physicist, Dr. Arno Heister, reanalyzed ALEPH data in such a search.

A few hours later, Harvard Professor Matt Reece pointed me to a paper that appeared just two weeks ago: a very interesting CMS analysis of 2011-2012 data that did a search of this type — although it appears that CMS [one of the two general purpose detectors at the Large Hadron Collider (LHC)] didn’t think of it that way.

The title of the paper is obscure:  “Search for a light pseudo-scalar Higgs boson produced in association with bottom quarks in pp collisions at 8 TeV”.  Such spin-zero “pseudo-scalar” particles, which often arise in speculative models with more than one Higgs particle, usually decay to bottom quark/anti-quark pairs or tau/anti-tau pairs.  But they can have a very rare decay to muon/anti-muon, which is much easier to measure. The title of the paper gives no indication that the muon/anti-muon channel is the target of the search; you have to read the abstract. Shouldn’t the words “in the dimuon channel” or “dimuon resonance” appear in the title?  That would help researchers who are interested in dimuons, but not in pseudo-scalars, find the paper.

Here’s the main result of the paper:

At left is shown a plot of the number of events as a function of the invariant mass of the muon/anti-muon pairs.  CMS data is in black dots; estimated background is shown in the upper curve (with top quark backgrounds in the lower curve); and the peak at bottom shows what a simulated particle decaying to muon/anti-muon with a mass of 30 GeV/c² would look like. (Imagine sticking the peak on top of the upper curve to see how a signal would affect the data points).  At right are the resulting limits on the rate for such a resonance to be produced and then decay to muon/anti-muon, if it is radiated off of a bottom quark. [A limit of 100 femtobarns means that at most two thousand collisions of this type could have occurred during the year 2012.  But note that only about 1 in 100 of these collisions would have been observed, due to the difficulty of triggering on these collisions and some other challenges.]
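
The conversion in that bracketed note is simply (number of collisions) = (cross-section limit) × (integrated luminosity); here’s the arithmetic, taking the roughly 20 inverse femtobarns that CMS collected in 2012:

```python
# Cross-section limit -> event count: N = sigma * integrated luminosity.
sigma_fb = 100     # upper limit on the production rate, in femtobarns
lumi_fb_inv = 20   # approximate CMS integrated luminosity in 2012, in fb^-1
efficiency = 0.01  # rough fraction of such collisions observed (from the text)

n_produced = sigma_fb * lumi_fb_inv
print(f"at most ~{n_produced:.0f} such collisions produced in 2012")       # ~2000
print(f"of which ~{n_produced * efficiency:.0f} would appear in the data")  # ~20
```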

[Note also the restriction of the mass of the dimuon pair to the range 25 GeV to 60 GeV. This may have been done purely for technical reasons, but if it was due to the theoretical assumptions, that restriction should be lifted.]

While this plot places moderate limits on spin-zero particles produced with a bottom quark, it’s equally interesting, at least to me, in other contexts. Specifically, it puts limits on any light spin-one particle (call it V) that mixes (either via kinetic or mass mixing) with the photon and Z and often comes along with at least one bottom quark… because for such particles the decay to muon/anti-muon is not rare.  This is very interesting for hidden valley models specifically; as I mentioned on Friday, new spin-one and spin-zero particles often are produced together, giving a muon/anti-muon pair along with one or more bottom quark/anti-quark pairs.

But CMS interpreted its measurement only in terms of radiation of a new particle off a bottom quark.  Now, what if a V particle decaying sometimes to muon/anti-muon were produced in a Z particle decay (a possibility alluded to already in 2006)?  For a different production process, the angles and energies of the particles would be different, and since many events would be lost (due to triggering, transverse momentum cuts, and b-tagging inefficiencies at low transverse momentum) the limits would have to be fully recalculated by the experimenters.  It would be great if CMS could add such an analysis before they publish this paper.

Still, we can make a rough back-of-the-envelope estimate, with big caveats. The LHC produced about 600 million Z particles at CMS in 2012. The plot at right tells us that if the V were radiated off a bottom quark, the maximum number of produced V’s decaying to muons would be about 2000 to 8000, depending on the V mass.  Now if we could take those numbers directly, we’d conclude that the fraction of Z’s that could decay to muon/anti-muon plus bottom quarks in this way would be 3 to 12 per million. But the sensitivity of this search to a Z decay to V is probably much less than for a V radiated off bottom quarks [because (depending on the V mass) either the bottom quarks in the Z decay would be less energetic and more difficult to tag, or the muons would be less energetic on average, or both.] So I’m guessing that the limits on Z decays to V are always worse than one per hundred thousand, for any V mass.  (Thanks to Wei Xue for catching an error as I was finalizing my estimate.)
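
Here’s that back-of-the-envelope division spelled out (all inputs are the numbers quoted in the paragraph above):

```python
# Back-of-the-envelope: what fraction of Z decays could this limit allow?
n_z = 600e6                   # Z particles produced at CMS in 2012 (from the text)
for n_v_max in (2000, 8000):  # limit on produced V -> muon pairs, vs. V mass
    print(f"{n_v_max} / {n_z:.0e} = {n_v_max / n_z * 1e6:.0f} per million Z decays")
# ~3 per million at one end, ~13 per million at the other (the "3 to 12" above,
# up to rounding), before degrading the limit for the lost sensitivity just discussed.
```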

If that guess/estimate is correct, then the CMS search does not rule out the possibility of a hundred or so Z decays to V particles at each of the various LEP experiments.  That said, old LEP searches might rule this possibility out; if anyone knows of such a search, please comment or contact me.

As for whether Heister’s analysis of the ALEPH experiment’s data shows signs of such a signal, I think it unlikely (though some people seemed to read my post as saying the opposite).  As I pointed out in Friday’s post, not only is the excess too small for excitement on its own, it also is somewhat too wide, and its angular correlations look like the background (which comes, of course, from bottom quarks that decay to charm quarks plus a muon and a neutrino).  The point of Friday’s post, and of today’s, is that we should be looking.

In fact, because of Heister’s work (which, by the way, is his own, not endorsed by the ALEPH collaboration), we can draw interesting if rough conclusions.  Ignore for now the bump at 30 GeV/c²; that’s more controversial.  What about the absence of a bump between 35 and 50 GeV/c²? Unless there are subtleties with his analysis that I don’t understand, we learn that at ALEPH there were fewer than ten Z decays to a V particle (plus a source of bottom quarks) for V in this mass range.  That limits such Z decays to about 2 to 3 per million.  OOPS: Dumb mistake!! At this step, I forgot to include the fact that requiring bottom quarks in the ALEPH events only works about 20% of the time (thanks to Imperial College Professor Oliver Buchmuller for questioning my reasoning!) The real number is therefore about 5 times larger, more like 10 to 15 per million. If that rough estimate is correct, it would provide a constraint roughly comparable to the current CMS analysis.
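
Spelled out, with one loudly flagged assumption: I take ALEPH’s recorded Z sample to be roughly four million, a round number chosen because it reproduces the “2 to 3 per million” quoted above:

```python
# ALEPH estimate: fewer than ~10 candidate Z -> V decays in the 35-50 GeV range.
n_candidates_max = 10
n_z_aleph = 4e6   # assumed number of Z decays recorded by ALEPH (my round number)
btag_eff = 0.20   # rough efficiency for requiring the bottom quarks (from the text)

naive = n_candidates_max / n_z_aleph   # ~2.5 per million: the original estimate
corrected = naive / btag_eff           # the forgotten factor of ~5
print(f"naive: {naive * 1e6:.1f} per million")
print(f"corrected: {corrected * 1e6:.0f} per million")  # ~12, i.e. "10 to 15 per million"
```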

[[BUT: In my original argument I was very conservative.  When I said “fewer than 10”, I was trying to be brief; really, looking at the invariant mass plot, the allowed number of excess events for a V with mass above 36 GeV is typically fewer than 7 or even 5.  And that doesn’t include any angular information, which for many signals would reduce the numbers to 3.  Including these effects properly brings the ALEPH bound back down to something close to my initial estimate.  Anyway, it’s clear that CMS is nipping at ALEPH’s heels, but I’m still betting they haven’t passed ALEPH — yet.]]

So my advice would be to set Heister’s bump aside and instead focus on the constraints that one can obtain, and the potential discoveries that one could make, with this type of analysis, either at LEP or at LHC. That’s where I think the real lesson lies.

The 2016 Data Kills The Two-Photon Bump

Results for the bump seen in December have been updated, and indeed, with the new 2016 data — four times as much as was obtained in 2015 — neither ATLAS nor CMS [the two general purpose detectors at the Large Hadron Collider] sees an excess where the bump appeared in 2015. Not even a hint, as we already learned inadvertently from CMS yesterday.

All indications so far are that the bump was a garden-variety statistical fluke, probably (my personal guess! there’s no evidence!) enhanced slightly by minor imperfections in the 2015 measurements. Should we be surprised? No. If you look back at the history of the 1970s and 1980s, or at the recent past, you’ll see that it’s quite common for hints — even strong hints — of new phenomena to disappear with more data. This is especially true for hints based on small amounts of data (and there were not many two-photon events in the bump — just a couple of dozen).  There’s a reason why particle physicists have very high standards for statistical significance before they believe they’ve seen something real.  (Many other fields, notably medical research, have much lower standards.  Think about that for a while.)  History has useful lessons, if you’re willing to learn them.
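
To put numbers behind “very high standards”: particle physics demands roughly five-sigma significance before claiming a discovery, precisely because so many places are searched at once. A quick sketch (the 3000 bins are a hypothetical count, just for illustration):

```python
from scipy.stats import norm

# One-sided p-values for common significance thresholds.
for n_sigma in (3, 5):
    print(f"{n_sigma} sigma: p = {norm.sf(n_sigma):.1e}")
# 3 sigma: p ~ 1.3e-03;  5 sigma: p ~ 2.9e-07

# With many independent places to look, modest flukes are *expected*:
n_bins = 3000  # hypothetical number of independent mass bins across many searches
print(f"expected 3-sigma flukes in {n_bins} bins: ~{n_bins * norm.sf(3):.0f}")  # ~4
```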

Back in December 2011, a lot of physicists were persuaded that the data shown by ATLAS and CMS was convincing evidence that the Higgs particle had been discovered. It turned out the data was indeed showing the first hint of the Higgs. But their confidence in what the data was telling them at the time — what was called “firm evidence” by some — was dead wrong. I took a lot of flak for viewing that evidence as a 50-50 proposition (70-30 by March 2012, after more evidence was presented). Yet the December 2015 (March 2016) evidence for the bump at 750 GeV was comparable to what we had in December 2011 for the Higgs. Where’d it go?  Clearly such a level of evidence is not so firm as people claimed. I, at least, would not have been surprised if that original Higgs hint had vanished, just as I am not surprised now… though disappointed of course.

Was this all much ado about nothing? I don’t think so. There’s a reason to have fire drills, to run live-fire exercises, to test out emergency management procedures. A lot of new ideas, both in terms of new theories of nature and new approaches to making experimental measurements, were generated by thinking about this bump in the night. The hope for a quick 2016 discovery may be gone, but what we learned will stick around, and make us better at what we do.