The Large Hadron Collider (LHC) is moving into a new era. Up to now, the experimenters at ATLAS and CMS have mostly been looking either for the Higgs particle or for relatively large and spectacular signs of new phenomena. But with the Higgs particle apparently found, and with the rate at which data is being gathered beginning to level off, the LHC is gradually entering a new phase, in which the dominant efforts will involve precision measurements of the Higgs particle’s properties and careful searches for new phenomena that generate only small and subtle effects. This will be the story for the next couple of years.
The LHC began gathering data in 2010, so we’re well into year three of its operations. Let me remind you of the history, showing for each year the number of proton-proton collisions (“collisions” for short) generated, or projected to be generated, for the ATLAS and CMS experiments, and the energy of each collision (in units of TeV):
- 2009: a small number of collisions at up to 2.2 TeV energy
- 2010: about 3.5 million million collisions at 7 TeV energy
- 2011: about 500 million million collisions at 7 TeV energy
- through 6/2012: about 600 million million collisions at 8 TeV energy
- all 2012: about 3,000 million million collisions at 8 TeV
- 2013-2014: shutdown for upgrades/repairs (though data analysis will continue)
- 2015-2018: at least 30,000 million million collisions at around 13 TeV
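(For a rough sense of where such collision counts come from: the experiments usually quote “integrated luminosity”, in units of inverse femtobarns; multiplying by the total proton-proton inelastic cross-section, roughly 70 millibarns, i.e. 7 × 10¹³ femtobarns, at these energies, gives a number of collisions. For the roughly 5 inverse femtobarns that each experiment recorded in 2011, that works out to

$$ N \;=\; \sigma_{\rm inelastic} \times \int \! L \, dt \;\approx\; \left(7 \times 10^{13}\ {\rm fb}\right) \times \left(5\ {\rm fb}^{-1}\right) \;\approx\; 3.5 \times 10^{14}, $$

a few hundred million million collisions, consistent with the ballpark figures above. These particular numbers are illustrative on my part, not official.)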
The number of collisions obtained at the LHCb experiment is somewhat smaller, as planned; that experiment has been aimed mostly at precision measurements from the beginning. There’s also the one-month-a-year program to collide large atomic nuclei, but I’ll describe that elsewhere.
Let’s look at what’s happened so far and what’s projected for the rest of the year. 2010 was a pilot run; only very spectacular phenomena would have shown up. 2011 was a huge jump in the number of collisions, by a factor of 150; it was this data that started really ruling out many possibilities for new particles and forces of various types, and that brought us the first hints of the Higgs particle. Roughly doubling that data set at a slightly higher energy per collision, as has been done so far in 2012, brought us convincing evidence of the Higgs in both the ATLAS and CMS experiments, enough for a discovery. The total data set may be tripled again before the end of the year.
In an experiment, rapid and easy discoveries can occur when there’s a qualitative change in what the experiment is doing:
- Turning it on and running it for the first time, as in 2010.
- Increasing the amount of data by a huge factor, as in 2011.
- Changing the energy by a lot, as planned for 2015.
But 2012 is different. The energy has increased by only 15%. The amount of data is going up by perhaps a factor of 6, which in many cases improves the statistical significance of a measurement only by the square root of 6, or about a factor of 2.5 (see the arithmetic below).
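Where does that square root come from? Here is a minimal sketch, assuming the simplest case of a counting experiment (a simplification on my part; real LHC analyses are considerably more sophisticated). If S signal events sit on top of B background events, the statistical significance is roughly S divided by the square root of B, and both S and B grow in proportion to the number of collisions N:

$$ \text{significance} \;\sim\; \frac{S}{\sqrt{B}} \;\propto\; \frac{N}{\sqrt{N}} \;=\; \sqrt{N}, $$

so multiplying the data set by 6 improves the significance only by √6 ≈ 2.45.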
Up until now, rapid and easy discoveries have been possible. Except for the Higgs search, which was a high-precision search aimed at a known target (the simplest type of Higgs particle, the “Standard Model Higgs”, or one of a type that roughly resembles it), most of the other searches done so far at the LHC have been broad-brush searches, somewhat analogous to quickly scanning a new scene with binoculars, looking for something that’s obviously out of place. Not that anything done at the LHC is really easy, but it is fair to say that these are the easiest types of searches. You have a pretty good idea, in advance, of what you ought to see; you do your homework, in various ways, using both experimental data and theoretical prediction, to make sure in advance that you’re right about that; and then you look at the data and see if anything large and unexpected turns up. If not, you can say that you have excluded various possibilities for new particles or forces; if something does turn up, you look at it carefully to decide whether it is a statistical fluke or a problem with your experiment, and you try to find more evidence for it in other parts of your data. This technique has been used for most of the searches done so far at the LHC, and given how rapidly the data rates were increasing until recently, that was perfectly appropriate.
But nothing has yet turned up in those “easy” searches; the only thing that’s been found is a Higgs-like particle. And now, as we head into the latter half of 2012, we are not going to get so much more data that an easy discovery is likely (though it is still possible). If an easy search of a particular sort has already been done, and if it hasn’t already revealed a hint of something a little bit out of place, then increasing the data by a factor of a few, while not significantly increasing the energy, isn’t likely to be enough for a convincing signal of something new to emerge in the data by the end of the year.
So instead, experimenters and theorists involved with the LHC are gearing up for more difficult searches. (Not that such searches haven’t been going on at all; searches for top squarks are tough, for instance, and are well underway. But there haven’t been that many.) Instead of quickly scanning a scene for something amiss, we now need to know the expected scene in detail, so that a careful study can allow us to recognize something that is just slightly out of place. To do this, we need to know much more precisely what is supposed to be there.
In principle, what we expect to be there is predicted by the “Standard Model”, the equations that we use to predict the behavior of all the known elementary particles and non-gravitational forces, including a Higgs particle of the simplest type (which so far is consistent with existing data). But getting a prediction to compare with real data isn’t easy or straightforward. The equations are completely known, but predictions are often needed with a precision of 10% or better, and that’s not easily obtained. Here are the basic problems:
- Calculations of the basic collisions of quarks, gluons and anti-quarks with precision of much better than 50% are technically difficult, and require considerable cleverness, finesse and sweat; for sufficiently complicated collisions, such calculations are often not yet possible.
- The prediction also requires an understanding of how quarks, gluons and anti-quarks are distributed within the proton; our imprecise knowledge of these distributions (which comes from previous experiments’ data, not from theoretical calculation) often limits the overall precision of the prediction of what the LHC will observe (see the schematic formula after this list).
- The predictions need to take account of the complexities of real LHC detectors, which have complicated shapes and many imperfections that the experimenters have to correct for.
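To make the first two of these points a bit more concrete, here is the schematic form that such predictions take (a rough sketch on my part; the full “factorization” formula that theorists actually use has additional ingredients and corrections). The rate for producing some set of particles X is obtained by combining the calculated rates for quark, anti-quark and gluon collisions with measured functions, often called parton distribution functions, that describe how each proton’s momentum is shared among its constituents:

$$ \sigma(pp \to X) \;\approx\; \sum_{a,b} \int_0^1 \! dx_1 \int_0^1 \! dx_2 \; f_a(x_1, \mu)\, f_b(x_2, \mu)\; \hat{\sigma}_{ab \to X}(x_1 x_2 s;\, \mu) $$

Here a and b run over the types of quarks, anti-quarks and gluons; x₁ and x₂ are the fractions of each proton’s momentum carried by the two colliding constituents; s is the square of the total collision energy; and μ is a reference energy scale. The difficulty described in the first bullet lives in calculating the quark/gluon collision rate (the σ̂ factor); the difficulty in the second lives in measuring the f’s.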
In other words, the Standard Model, as a set of equations, is very easy to write down, but reality requires suffering through the complexities of real experimental detectors, the messiness of protons, and the hard work required to get a precise prediction for the basic underlying physical process.
And so, as we move through the rest of 2012 and into the data-analysis period of 2013-2014, the real key to progress will be combining predictions from the Standard Model with measurements that check that these predictions have been made with accuracy as well as precision, in order to carry out many searches for particles or forces that generate small and/or subtle signals. It’s the next phase of the LHC program: careful study of the Higgs particle (checking both what it is expected to do and what it is not expected to do) and methodical searching through the data for signs of something that violates the predictions of the Standard Model. We’ll go back to easy-search mode after the 2013-2014 shutdown, when the LHC turns on at higher collision energy. But until then, the hard work of highly precise measurements will increasingly be the focus.