What’s all this fuss about having alternatives?

I don’t know what all the fuss is about “alternative facts.” Why, we scientists use them all the time!

For example, because of my political views, I teach physics students that gravity pulls down. That’s why the students I teach, when they go on to be engineers, put wheels on the bottom corners of cars, so that the cars don’t scrape on the ground. But in some countries, the physicists teach them that gravity pulls whichever way the country’s leaders instruct it to. That’s why their engineers build flying carpets as transports for their country’s troops. It’s a much more effective way to bring an army into battle, if your politics allows it.  We ought to consider it here.

Another example: in my physics class I claim that energy is “conserved” (in the physics sense) — it is never created out of nothing, nor is it ever destroyed. In our daily lives, energy is taken in with food, converted into special biochemicals for storage, and then used to keep us warm, maintain the pumping of our hearts, allow us to think, walk, breathe — everything we do. Those are my facts. But in some countries, the facts and laws are different, and energy can be created from nothing. The citizens of those countries never need to eat; it is a wonderful thing to be freed from this requirement. It’s great for their military, too, to not have to supply food for troops, or fuel for tanks and airplanes and ships. Our only protection against invasion from these countries is that if they crossed our borders they’d suddenly need fuel tanks.

Facts are what you make them; it’s entirely up to you. You need a good, well-thought-out system of facts, of course; otherwise they won’t produce the answers that you want. But just first figure out what you want to be true, and then go out and find the facts that make it true. That’s the way science has always been done, and the best scientists all insist upon this strategy.  As a simple illustration, compare the photos below.  Which picture has more people in it?   Obviously, the answer depends on what facts you’ve chosen to use.   [Picture copyright Reuters]  If you can’t understand that, you’re not ready to be a serious scientist!

A third example: when I teach physics to students, I instill in them the notion that quantum mechanics controls the atomic world, and underlies the transistors in every computer and every cell phone. But the uncertainty principle that arises in quantum mechanics just isn’t acceptable in some countries, so they don’t factualize it. They don’t use seditious and immoral computer chips there; instead they use proper vacuum tubes. One curious result is that their computers are the size of buildings. The CDC advises you not to travel to these countries, and certainly not to take electronics with you. Not only might your cell phone explode when it gets there, you yourself might too, since your own molecules are held together with quantum mechanical glue. At least you should bring a good-sized bottle of our local facts with you on your travels, and take a good handful before bedtime.

Hearing all the naive cries that facts aren’t for the choosing, I became curious about what our schools are teaching young people. So I asked a friend’s son, a bright young kid in fourth grade, what he’d been learning about alternatives and science. Do you know what he answered?!  I was shocked. “Alternative facts?”, he said. “You mean lies?” Sheesh. Kids these days… What are we teaching them? It’s a good thing we’ll soon have a new secretary of education.

An Interesting Result from CMS, and its Implications

UPDATE 10/26: In the original version of this post, I stupidly forgot to include an effect, causing an error of a factor of about 5 in one of my estimates below. I had originally suggested that a recent result using ALEPH data was probably more powerful than a recent CMS result.  But once the error is corrected, the two experiments appear to have comparable sensitivity. However, I was very conservative in my analysis of ALEPH, and my guess concerning CMS has a big uncertainty band — so it might go either way.  It’s up to ALEPH experts and CMS experts to show us who really wins the day.  Added reasoning and discussion marked in green below.

In Friday’s post, I highlighted the importance of looking for low-mass particles whose interactions with known particles are very weak. I referred to a recent preprint in which an experimental physicist, Dr. Arno Heister, reanalyzed ALEPH data in such a search.

A few hours later, Harvard Professor Matt Reece pointed me to a paper that appeared just two weeks ago: a very interesting CMS analysis of 2011-2012 data that did a search of this type — although it appears that CMS [one of the two general purpose detectors at the Large Hadron Collider (LHC)] didn’t think of it that way.

The title of the paper is obscure:  “Search for a light pseudo–scalar Higgs boson produced in association with bottom quarks in pp collisions at 8 TeV“.  Such spin-zero “pseudo-scalar” particles, which often arise in speculative models with more than one Higgs particle, usually decay to bottom quark/anti-quark pairs or tau/anti-tau pairs.  But they can have a very rare decay to muon/anti-muon, which is much easier to measure. The title of the paper gives no indication that the muon/anti-muon channel is the target of the search; you have to read the abstract. Shouldn’t the words “in the dimuon channel” or “dimuon resonance” appear in the title?  That would help researchers who are interested in dimuons, but not in pseudo-scalars, find the paper.

Here’s the main result of the paper:

At left is shown a plot of the number of events as a function of the invariant mass of the muon/anti-muon pairs.  CMS data is in black dots; estimated background is shown in the upper curve (with top quark backgrounds in the lower curve); and the peak at bottom shows what a simulated particle decaying to muon/anti-muon with a mass of 30 GeV/c² would look like. (Imagine sticking the peak on top of the upper curve to see how a signal would affect the data points).  At right are the resulting limits on the rate for such a resonance to be produced and then decay to muon/anti-muon, if it is radiated off of a bottom quark. [A limit of 100 femtobarns means that at most two thousand collisions of this type could have occurred during the year 2012.  But note that only about 1 in 100 of these collisions would have been observed, due to the difficulty of triggering on these collisions and some other challenges.]
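The bracketed conversion from a cross-section limit to an event count can be sketched in a few lines. The integrated luminosity figure (roughly 20 inverse femtobarns for the 2012 CMS dataset) is my own round number, not one taken from the paper:

```python
# Back-of-the-envelope: converting a cross-section limit to an event count.
# Assumed input (not from the CMS paper): the 2012 dataset corresponds to
# an integrated luminosity of roughly 20 inverse femtobarns.

limit_fb = 100.0          # upper limit on (cross section x branching ratio), in femtobarns
luminosity_inv_fb = 20.0  # approximate 2012 integrated luminosity, in fb^-1

max_produced = limit_fb * luminosity_inv_fb  # max collisions of this type in 2012
efficiency = 0.01                            # rough fraction surviving trigger and cuts
max_observed = max_produced * efficiency

print(max_produced)  # 2000.0 collisions produced at most
print(max_observed)  # only about 20 of them would actually be recorded
```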

[Note also the restriction of the mass of the dimuon pair to the range 25 GeV to 60 GeV. This may have been done purely for technical reasons, but if it was due to theoretical assumptions, that restriction should be lifted.]

While this plot places moderate limits on spin-zero particles produced with a bottom quark, it’s equally interesting, at least to me, in other contexts. Specifically, it puts limits on any light spin-one particle (call it V) that mixes (either via kinetic or mass mixing) with the photon and Z and often comes along with at least one bottom quark… because for such particles the rate to decay to muons is not rare.  This is very interesting for hidden valley models specifically; as I mentioned on Friday, new spin-one and spin-zero particles often are produced together, giving a muon/anti-muon pair along with one or more bottom quark/anti-quark pairs.

But CMS interpreted its measurement only in terms of radiation of a new particle off a bottom quark.  Now, what if a V particle decaying sometimes to muon/anti-muon were produced in a Z particle decay (a possibility alluded to already in 2006)?  For a different production process, the angles and energies of the particles would be different, and since many events would be lost (due to triggering, transverse momentum cuts, and b-tagging inefficiencies at low transverse momentum), the limits would have to be fully recalculated by the experimenters.  It would be great if CMS could add such an analysis before they publish this paper.

Still, we can make a rough back-of-the-envelope estimate, with big caveats. The LHC produced about 600 million Z particles at CMS in 2012. The plot at right tells us that if the V were radiated off a bottom quark, the maximum number of produced V’s decaying to muons would be about 2000 to 8000, depending on the V mass.  Now if we could take those numbers directly, we’d conclude that the fraction of Z’s that could decay to muon/anti-muon plus bottom quarks in this way would be 3 to 12 per million. But the sensitivity of this search to a Z decay to V is probably much less than for a V radiated off bottom quarks [because (depending on the V mass) either the bottom quarks in the Z decay would be less energetic and more difficult to tag, or the muons would be less energetic on average, or both]. So I’m guessing that the limits on Z decays to V are always worse than one per hundred thousand, for any V mass.  (Thanks to Wei Xue for catching an error as I was finalizing my estimate.)
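As a sanity check on the fractions quoted above, here is the same arithmetic written out; all inputs are the rough figures already given in the text:

```python
# Rough estimate: what fraction of Z decays could hide a V -> mu mu signal?
# All numbers are order-of-magnitude inputs quoted in the surrounding text.

n_z_produced = 600e6            # Z particles produced at CMS in 2012 (approximate)
n_v_min, n_v_max = 2000, 8000   # max produced V's decaying to muons, from the limit plot

frac_min = n_v_min / n_z_produced
frac_max = n_v_max / n_z_produced

# Roughly 3 to 13 per million Z decays, matching the "3 to 12 per million"
# quoted above to the precision this estimate deserves.
print(frac_min * 1e6, frac_max * 1e6)
```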

If that guess/estimate is correct, then the CMS search does not rule out the possibility of a hundred or so Z decays to V particles at each of the various LEP experiments.  That said, old LEP searches might rule this possibility out; if anyone knows of such a search, please comment or contact me.

As for whether Heister’s analysis of the ALEPH experiment’s data shows signs of such a signal, I think it unlikely (though some people seemed to read my post as saying the opposite.)  As I pointed out in Friday’s post, not only is the excess too small for excitement on its own, it also is somewhat too wide and its angular correlations look like the background (which comes, of course, from bottom quarks that decay to charm quarks plus a muon and neutrino.)  The point of Friday’s post, and of today’s, is that we should be looking.

In fact, because of Heister’s work (which, by the way, is his own, not endorsed by the ALEPH collaboration), we can draw interesting if rough conclusions.  Ignore for now the bump at 30 GeV/c²; that’s more controversial.  What about the absence of a bump between 35 and 50 GeV/c²? Unless there are subtleties with his analysis that I don’t understand, we learn that at ALEPH there were fewer than ten Z decays to a V particle (plus a source of bottom quarks) for V in this mass range.  That limits such Z decays to about 2 to 3 per million.  OOPS: Dumb mistake!! At this step, I forgot to include the fact that requiring bottom quarks in the ALEPH events only works about 20% of the time (thanks to Imperial College Professor Oliver Buchmuller for questioning my reasoning!) The real number is therefore about 5 times larger, more like 10 to 15 per million. If that rough estimate is correct, it would provide a constraint roughly comparable to the current CMS analysis.
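The factor-of-5 correction above is easy to mishandle, so here it is as explicit arithmetic, using only the numbers quoted in the paragraph:

```python
# The ALEPH estimate redone as arithmetic, with the numbers quoted above.

n_z_aleph = 4e6      # Z decays recorded by ALEPH (approximate)
n_events_max = 10    # allowed excess events for V masses in the 35-50 GeV window
btag_eff = 0.20      # rough efficiency for requiring bottom quarks in the event

naive = n_events_max / n_z_aleph   # ~2.5 per million: the first (mistaken) estimate
corrected = naive / btag_eff       # ~12.5 per million after the factor-of-5 fix

print(naive * 1e6, corrected * 1e6)  # 2.5 and 12.5 per million Z decays
```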

[[BUT: In my original argument I was very conservative.  When I said “fewer than 10”, I was trying to be brief; really, looking at the invariant mass plot, the allowed numbers of excess events for a V with mass above 36 GeV is typically fewer than 7 or even 5.  And that doesn’t include any angular information, which for many signals would reduce the numbers to 3.   Including these effects properly brings the ALEPH bound back down to something close to my initial estimate.  Anyway, it’s clear that CMS is nipping at ALEPH’s heels, but I’m still betting they haven’t passed ALEPH — yet.]]

So my advice would be to set Heister’s bump aside and instead focus on the constraints that one can obtain, and the potential discoveries that one could make, with this type of analysis, either at LEP or at LHC. That’s where I think the real lesson lies.

A Hidden Gem At An Old Experiment?

This summer there was a blog post claiming that “The LHC `nightmare scenario’ has come true” — implying that the Large Hadron Collider [LHC] has found nothing but a Standard Model Higgs particle (the simplest possible type), and will find nothing more of great importance. With all due respect for the considerable intelligence and technical ability of the author of that post, I could not disagree more; not only are we not in a nightmare, it isn’t even night-time yet, and hardly time for sleep or even daydreaming. There’s a tremendous amount of work to do, and there may be many hidden discoveries yet to be made, lurking in existing LHC data.  Or elsewhere.

I can defend this claim (and have done so as recently as this month; here are my slides). But there’s evidence from another quarter that it is far too early for such pessimism.  It has appeared in a new paper (a preprint, so not yet peer-reviewed) by an experimentalist named Arno Heister, who is evaluating 20-year-old data from the experiment known as ALEPH.

In the early 1990s the Large Electron-Positron (LEP) collider at CERN, in the same tunnel that now houses the LHC, produced nearly 4 million Z particles at the center of ALEPH; the Z’s decayed immediately into other particles, and ALEPH was used to observe those decays.  Of course the data was studied in great detail, and you might think there couldn’t possibly be anything still left to find in that data, after over 20 years. But a hidden gem wouldn’t surprise those of us who have worked in this subject for a long time — especially those of us who have worked on hidden valleys. (Hidden Valleys are theories with a set of new forces and low-mass particles, which, because they aren’t affected by the known forces excepting gravity, interact very weakly with the known particles.  They are also often called “dark sectors” if they have something to do with dark matter.)

For some reason most experimenters in particle physics don’t tend to look for things just because they can; they stick to signals that theorists have already predicted. Since hidden valleys only hit the market in a 2006 paper I wrote with then-student Kathryn Zurek, long after the experimenters at ALEPH had moved on to other experiments, nobody went back to look in ALEPH or other LEP data for hidden valley phenomena (with one exception.) I didn’t expect anyone to ever do so; it’s a lot of work to dig up and recommission old computer files.

This wouldn’t have been a problem if the big LHC experiments (ATLAS, CMS and LHCb) had looked extensively for the sorts of particles expected in hidden valleys. ATLAS and CMS especially have many advantages; for instance, the LHC has made over a hundred times more Z particles than LEP ever did. But despite specific proposals for what to look for (and a decade of pleading), only a few limited searches have been carried out, mostly for very long-lived particles, for particles with mass of a few GeV/c² or less, and for particles produced in unexpected Higgs decays. And that means that, yes, hidden physics could certainly still be found in old ALEPH data, and in other old experiments. Kudos to Dr. Heister for taking a look.

The 2016 Data Kills The Two-Photon Bump

Results for the bump seen in December have been updated, and indeed, with the new 2016 data — four times as much as was obtained in 2015 — neither ATLAS nor CMS [the two general purpose detectors at the Large Hadron Collider] sees an excess where the bump appeared in 2015. Not even a hint, as we already learned inadvertently from CMS yesterday.

All indications so far are that the bump was a garden-variety statistical fluke, probably (my personal guess! there’s no evidence!) enhanced slightly by minor imperfections in the 2015 measurements. Should we be surprised? No. If you look back at the history of the 1970s and 1980s, or at the recent past, you’ll see that it’s quite common for hints — even strong hints — of new phenomena to disappear with more data. This is especially true for hints based on small amounts of data (and there were not many two photon events in the bump — just a couple of dozen).  There’s a reason why particle physicists have very high standards for statistical significance before they believe they’ve seen something real.  (Many other fields, notably medical research, have much lower standards.  Think about that for a while.)  History has useful lessons, if you’re willing to learn them.

Back in December 2011, a lot of physicists were persuaded that the data shown by ATLAS and CMS was convincing evidence that the Higgs particle had been discovered. It turned out the data was indeed showing the first hint of the Higgs. But their confidence in what the data was telling them at the time — what was called “firm evidence” by some — was dead wrong. I took a lot of flak for viewing that evidence as a 50-50 proposition (70-30 by March 2012, after more evidence was presented). Yet the December 2015 (March 2016) evidence for the bump at 750 GeV was comparable to what we had in December 2011 for the Higgs. Where’d it go?  Clearly such a level of evidence is not so firm as people claimed. I, at least, would not have been surprised if that original Higgs hint had vanished, just as I am not surprised now… though disappointed of course.

Was this all much ado about nothing? I don’t think so. There’s a reason to have fire drills, to run live-fire exercises, to test out emergency management procedures. A lot of new ideas, both in terms of new theories of nature and new approaches to making experimental measurements, were generated by thinking about this bump in the night. The hope for a quick 2016 discovery may be gone, but what we learned will stick around, and make us better at what we do.

A Flash in the Pan Flickers Out

Back in the California Gold Rush, many people panning for gold saw a yellow glint at the bottom of their pans, and thought themselves lucky.  But more often than not, it was pyrite — iron sulfide — fool’s gold…

Back in December 2015, a bunch of particle physicists saw a bump on a plot.  The plot showed the numbers of events with two photons (particles of light) as a function of the “invariant mass” of the photon pair.  (To be precise, they saw a big bump on one ATLAS plot, and a bunch of small bumps in similar plots by CMS and ATLAS [the two general purpose experiments at the Large Hadron Collider].)  What was that bump?  Was it a sign of a new particle?

A similar bump was the first sign of the Higgs boson, though that was far from clear at the time.  What about this bump?

As I wrote in December,

  “Well, to be honest, probably it’s just that: a bump on a plot. But just in case it’s not…”

and I went on to describe what it might be if the bump were more than just a statistical fluke.  A lot of us — theoretical particle physicists like me — had a lot of fun, and learned a lot of physics, by considering what that bump might mean if it were a sign of something real.  (In fact I’ll be giving a talk here at CERN next week entitled “Lessons from a Flash in the Pan,” describing what I learned, or remembered, along the way.)

But updated results from CMS, based on a large amount of new data taken in 2016, have been seen.   (Perhaps these have leaked out early; they were supposed to be presented tomorrow along with those from ATLAS.)  They apparently show that where the bump was before, they now see nothing.  In fact there’s a small dip in the data there.

So — it seems that what we saw in those December plots was a fluke.  It happens.  I’m certainly disappointed, but hardly surprised.  Funny things happen with small amounts of data.

At the ICHEP 2016 conference, which started today, official presentation of the updated ATLAS and CMS two-photon results will come on Friday, but I think we all know the score.  So instead our focus will be on  the many other results (dozens and dozens, I hear) that the experiments will be showing us for the first time.  Already we had a small blizzard of them today.  I’m excited to see what they have to show us … the Standard Model, and naturalness, remain on trial.

The Summer View at CERN

For the first time in some years, I’m spending two and a half weeks at CERN (the lab that hosts the Large Hadron Collider [LHC]). Most of my recent visits have been short or virtual, but this time* there’s a theory workshop that has collected together a number of theoretical particle physicists, and it’s a good opportunity for all of us to catch up with the latest creative ideas in the subject.   It’s also an opportunity to catch a glimpse of the furtive immensity of Mont Blanc, a hulking bump on the southern horizon, although only if (as is rarely the case) nature offers clear and beautiful weather.

More importantly, new results on the data collected so far in 2016 at the LHC are coming very soon!  They will be presented at the ICHEP conference that will be held in Chicago starting August 3rd. And there’s something we’ll be watching closely.

You may remember that in a post last December I wrote:

  “Everybody wants to know. That bump seen on the ATLAS and CMS two-photon plots!  What… IS… it…?”

Why the excitement? A bump of this type can be a signal of a new particle (as was the case for the Higgs particle itself.) And since a new particle that would produce a bump of this size was both completely unexpected and completely plausible, there was hope that we were seeing a hint of something new and important.

However, as I wrote in the same post,

  “Well, to be honest, probably it’s just that: a bump on a plot. But just in case it’s not…”

and I went on to discuss briefly what it might mean if it wasn’t just a statistical fluke. But speculation may be about to end: finally, we’re about to find out if it was indeed just a fluke — or a sign of something real.

Since December the amount of 13 TeV collision data available at ATLAS and CMS (the two general purpose experiments at the LHC) has roughly quadrupled, which means that typical bumps and wiggles on their 2015-2016 plots have decreased in relative size by about a factor of two (= square root of four). If the December bump is just randomness, it should also decrease in relative size. If it’s real, it should remain roughly the same relative size, but appear more prominent relative to the random bumps and wiggles around it.
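The square-root scaling in the parenthesis is worth making concrete. Here is a tiny sketch (the event count is an illustrative made-up number, not from either experiment): Poisson fluctuations grow like the square root of the number of events N, so their relative size shrinks like 1/sqrt(N).

```python
# Why quadrupling the data halves the relative size of random bumps:
# a typical Poisson fluctuation is ~sqrt(N), so fluctuation/N ~ 1/sqrt(N).
import math

n_2015 = 10_000        # illustrative event count in some mass bin (invented number)
n_2016 = 4 * n_2015    # four times the data

rel_fluct_2015 = math.sqrt(n_2015) / n_2015
rel_fluct_2016 = math.sqrt(n_2016) / n_2016

print(rel_fluct_2015 / rel_fluct_2016)  # 2.0: typical wiggles shrink by a factor of two
```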

Now, there’s a caution to be added here. The December ATLAS bump was so large and fat compared to what was seen at CMS that (since reality has to appear the same at both experiments, once enough data has been collected) it was pretty obvious that even if there were a real bump there, at ATLAS it was probably in combination with a statistical fluke that made it look larger and fatter than its true nature. [Something similar happened with the Higgs; the initial bump that ATLAS saw was twice as big as expected, which is why it showed up so early, but it has gradually shrunk as more data has been collected and is now close to its expected size.  In retrospect, that tells us that ATLAS’s original signal was indeed combined with a statistical fluke that made it appear larger than it really was.] What that means is that even if the December bumps were real, we would expect the ATLAS bump to shrink in size (but not in statistical significance) and we would expect the CMS bump to remain of similar size (but grow in statistical significance). Remember, though, that “expectation” is not certainty, because at every stage statistical flukes (up or down) are possible.

In about a week we’ll find out where things currently stand. But the mood, as I read it here in the hallways and cafeteria, is not one of excitement. Moreover, the fact that the update to the results is (at the moment) unobtrusively scheduled for a parallel session of the ICHEP conference next Friday afternoon (CERN time) suggests we’re not going to see convincing evidence of anything exciting. If so, then the remaining question will be whether the reverse is true: whether the data will show convincing evidence that the December bump was definitely a fluke.

Flukes are guaranteed; with limited amounts of data, they can’t be avoided.  Discoveries, on the other hand, require skill, insight, and luck: you must ask a good question, address it with the best available methods, and be fortunate enough that (as is rarely the case) nature offers a clear and interesting answer.


*I am grateful for the CERN theory group’s financial support during this visit.

Spinoffs from Fundamental Science

I find that some people just don’t believe scientists when we point out that fundamental research has spin-off benefits for modern society.  The assumption often seems to be that it’s just a bunch of egghead esoteric researchers trying to justify their existence.  It’s a real problem when those scoffing at our evidence are congresspeople of the United States and their staffers, or other members of governmental funding agencies around the world.

So I thought I’d point out an example, reported on Bloomberg News.  It’s a good illustration of how these things often work out, and it is very rare indeed that they are discussed in the press.

Gravitational waves are usually incredibly tiny effects [typically squeezing the radius of our planet by less than the width of an atomic nucleus] that can be produced only by monster black holes and neutron stars.   There’s not much hope of using them in technology.  So what good could an experiment to discover them, such as LIGO, possibly be for the rest of the world?
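The bracketed claim is easy to check yourself. Both inputs below are round illustrative values of my own choosing: a typical gravitational-wave strain of about one part in 10²¹, applied across the Earth’s radius, and a heavy nucleus roughly 10⁻¹⁴ meters across:

```python
# Sanity check: how much does a typical gravitational wave squeeze the Earth?
# Both numbers are round illustrative values, not measured quantities.

strain = 1e-21            # typical fractional stretching/squeezing from a passing wave
earth_radius_m = 6.4e6    # Earth's radius, in meters
nucleus_size_m = 1e-14    # rough diameter of a heavy atomic nucleus, in meters

squeeze = strain * earth_radius_m
print(squeeze)                   # ~6.4e-15 meters
print(squeeze < nucleus_size_m)  # True: less than the width of a nucleus
```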

Well, Shell Oil seems to have found some value in it.   It’s not in the gravitational waves themselves, of course; instead, it is in the technology that has to be developed to detect something so delicate.   http://www.bloomberg.com/news/articles/2016-07-07/shell-is-using-innoseis-s-sensors-to-detect-gravitational-waves

Score another one for investment in fundamental scientific research.


LIGO detects a second merger of black holes

There’s additional news from LIGO (the Laser Interferometer Gravitational-Wave Observatory) about gravitational waves today. What was a giant discovery just a few months ago will soon become almost routine… but for now it is still very exciting…

LIGO got a Christmas present: on Dec 25th/26th, 2015, two more black holes were detected coalescing 1.4 billion light years away — changing the length of LIGO’s arms by 300 parts in a trillion trillion, even less than in the first merger observed in September. The black holes had 14 solar masses and 8 solar masses, and merged into a black hole with 21 solar masses, emitting 1 solar mass of energy in gravitational waves. In contrast to the September event, which was short and showed just a few orbits before the merger, in this event nearly 30 orbits over a full second were observed, making more information available to scientists about the black holes, the merger, and general relativity.  (Apparently one of the incoming black holes was spinning with at least 20% of the maximum possible rotation rate for a black hole.)
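The energy bookkeeping in that paragraph is simple conservation of mass-energy, and it checks out, using only the solar-mass figures quoted above:

```python
# Energy bookkeeping for the Dec 2015 merger, in units of solar masses.
# All inputs are the approximate values quoted in the text.

m1, m2 = 14.0, 8.0   # masses of the two incoming black holes
m_final = 21.0       # mass of the final black hole

e_radiated = (m1 + m2) - m_final  # mass-energy carried off by gravitational waves
print(e_radiated)  # 1.0 solar mass, as stated above
```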

The signal is not so “bright” as the first one, so it cannot be seen by eye if you just look at the data; to find it, some clever mathematical techniques are needed. But the signal, after signal processing, is very clear. (The signal-to-noise ratio is 13; it was 24 for the September detection.) The chance that random noise would produce such a clear signal corresponds to 5 standard deviations, making this officially a detection. The corresponding “chirp” is nowhere near so obvious as the first one’s, but there is a faint trace.

This gives two detections of black hole mergers over about 48 days of 2015 quality data. There’s also a third “candidate”, not so clear — signal-to-noise of just under 10. If it is really due to gravitational waves, it would be merging black holes again… midway in size between the September and December events… but it is borderline, and might just be a statistical fluke.

It is interesting that we already have two, maybe three, mergers of large black holes… and no mergers of neutron stars with black holes or with each other, which are harder to observe. It seems there really are a lot of big black holes in binary pairs out there in the universe. Incidentally, the question of whether they might form the dark matter of the universe has been raised; it’s still a long-shot idea, since there are arguments against it for black holes of this size, but seeing these merger rates one has to reconsider those arguments carefully and keep an open mind about the evidence.

Let’s remember also that advanced-LIGO is still not running at full capacity. When LIGO starts its next run, six months long starting in September, the improvements over last year’s run will probably give a 50% to 100% increase in the rate of observed mergers.   In the longer term, a rate of one merger per week is possible.

Meanwhile, VIRGO in Italy will come on line soon too, early in 2017. Japan and India are getting into the game too over the coming years. More detectors will allow scientists to know where on the sky the merger took place, which then can allow normal telescopes to look for flashes of light (or other forms of electromagnetic radiation) that might occur simultaneously with the merger… as is expected for neutron star mergers but not widely expected for black hole mergers.  The era of gravitational wave astronomy is underway.