Of Particular Significance

Opening of LHCP Conference

POSTED BY Matt Strassler

ON 05/13/2013

Greetings from Barcelona, where the LHCP 2013 conference is underway. I wanted to mention a couple of the opening remarks made by CERN’s Sergio Bertolucci and Mirko Pojer, both of whom spoke about the near-term and medium-term future of the Large Hadron Collider [LHC].

It’s worth taking a moment to review what happened in the LHC’s first run. The original plan was for the LHC, during its first few years, to run at around 14 TeV of energy in each proton-proton collision, and at a moderate collision rate. But shortly after beams were first circulated, and before there were any collisions, came the famous accident of September 19, 2008. The ensuing investigation revealed flaws in the connections between the superconducting magnets, as well as in the system that protects the machine when a magnet loses its superconductivity (called a “quench”; quenches are expected to happen occasionally, but they have to be controlled). To keep the machine safe from further problems, it was decided to run it at 7 TeV per collision, and to make up (in part) for the lower energy by running at a higher collision rate. Then:

  • Late 2009: beams were restarted, with the first collisions at 2.36 TeV per collision.
  • 2010: a small number of collisions, and a few new experimental results, were obtained at 7 TeV per collision.
  • 2011: a large number of collisions (corresponding to nearly 100,000 Higgs particles per experiment [i.e. in ATLAS and CMS]) were obtained at 7 TeV per collision.
  • 2012: an even larger number of collisions (corresponding to over 400,000 Higgs particles per experiment) were obtained at 8 TeV per collision.
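
For a rough sense of where such counts come from: the number of Higgs particles produced is simply the production cross section times the amount of data collected (the integrated luminosity). Here is a minimal sketch in Python; the cross sections and data-set sizes are approximate, illustrative values of mine, not official figures, but they land close to the numbers quoted above.

```python
# Rough check of the Higgs-per-experiment counts quoted above.
# The cross sections and integrated luminosities are approximate,
# illustrative values, not official ATLAS/CMS or LHC figures.

def n_higgs(sigma_pb, lumi_inv_fb):
    """Higgs bosons produced = cross section x integrated luminosity (1/fb = 1000/pb)."""
    return sigma_pb * lumi_inv_fb * 1000.0

print(f"2011 (7 TeV, ~5/fb, ~17 pb):  ~{n_higgs(17, 5):,.0f} Higgs per experiment")
print(f"2012 (8 TeV, ~20/fb, ~22 pb): ~{n_higgs(22, 20):,.0f} Higgs per experiment")
```

With these assumed values the sketch gives roughly 85,000 and 440,000 Higgs particles, consistent with the “nearly 100,000” and “over 400,000” quoted above.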

All in all, this “Run 1” of the LHC is widely viewed as enormously successful. For one thing, it showed that (excepting only the flawed but fixable magnet connections) the LHC is an excellent machine and works beautifully.  A high collision rate was indeed achieved, and this, combined with the quality of the experimental detectors and the cleverness of the experimental physicists, was sufficient for the discovery and initial study of what is now referred to as a “Standard Model-like Higgs particle”, as well as for ruling out a wide range of variants of certain speculative ideas [here are a couple of examples].

Currently, the LHC is shut down for repairs and upgrades, in preparation for Run 2, which will begin in 2015. The machine has been warmed up to room temperature (normally its magnets have to be kept at 1.9 Kelvin, i.e. 1.9 degrees Celsius above absolute zero), and, among many adjustments, all of those potentially problematic connections between magnets are being improved, to make it safer for the machine to run at higher energy per collision.

So here’s the update — I hesitate to call this “news”, since none of this is very surprising to those who’ve been following events in detail. The plan, according to Bertolucci and to Pojer, includes the following:

  • When Run 2 starts in 2015, the energy per collision will probably be 13 TeV, with the possibility of increasing this toward the design energy of 14 TeV later in Run 2. This was more or less expected, given what was learned about the LHC’s superconducting magnets a few years ago: some of these crucial magnets may have quenches too often when operating at 14 TeV conditions, making the accelerator too inefficient at that energy.
  • A big question that is still not decided (and may not be decided until direct experience is gained in 2015) is whether it is better to run with collisions every 50 nanoseconds [billionths of a second], as in 2011-2012, or every 25 nanoseconds, as was the original design for the LHC.  The latter is better for the operation of the experimental detectors and the analysis of the data, but poses more challenges for operating the LHC, and may cause the proton beams to be less stable. Studies on this question may be ongoing throughout a good part of 2015.
  • Run 2 is currently planned for 2015-2017, but as Pojer reminded us, 2015 will involve starting up the machine at a new energy and collision rate, and so a lot of time in 2015 will be spent on making the machine work properly and efficiently. Somewhat as in 2010, which was a year of pilot running before large amounts of data were obtained in 2011-2012, it is likely that 2015 will also be a year of relatively low data rate. Most of the data in the next run will appear in 2016-2017.  The bottom line is that although there will be new data in 2015, one should not expect too much news in that first year.

Of course the precise dates and plans may shift.  Life being what it is, it would not be surprising if some of the challenges are a bit worse than expected; this could delay the start of Run 2 by a few months, or require a slightly lower energy at the start. Nor would it be surprising if Run 2 extends into 2018.  But if Run 1 (and the experience at other accelerators) is any guide, then even though some things won’t go as well as hoped, others will go better than expected.

15 Responses

  1. thanks… Matt
    I mean Monte Carlo, yes.
    About my question: I meant, what energy above 125.5 GeV can we expect in pp collisions in order to detect the Higgs boson this year?

    1. Your question is still a little confusing to me, and so I think that perhaps you are still very confused about how proton-proton collisions work. May I suggest you read a few articles first? Try this one

      http://profmattstrassler.com/articles-and-posts/largehadroncolliderfaq/whats-a-proton-anyway/proton-collisions-vs-quarkgluonantiquark-mini-collisions/

      and some of the articles that it refers to. The point is that we have far more energy than 125.5 GeV in every pp collision… but that’s not the question you should be asking about. You should be asking how often we have mini-collisions of gluons, quarks and anti-quarks with energy above 125.5 GeV. And then you need to ask what fraction of those minicollisions produce Higgs bosons.

      Monte Carlo programs simply incorporate all the knowledge we have about the internal structure of the proton, and about the known laws of nature. They allow us to calculate how often we have mini-collisions which are energetic enough to make Higgs bosons, and then to calculate how often such mini-collisions actually make Higgs bosons, and anything else.
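
      As a purely illustrative toy, one can sketch that logic in a few lines of Python. The 1/x shape below is a crude stand-in of mine for the real, measured parton distributions, so the fraction it prints is far larger than the true one and should not be taken as a physical number; it only illustrates the sampling idea.

      ```python
      import numpy as np

      # Toy Monte Carlo for 8 TeV proton-proton collisions: each colliding quark or
      # gluon carries a random fraction x of its proton's energy, and the energy
      # available to the mini-collision is sqrt(x1 * x2) times the full 8000 GeV.
      rng = np.random.default_rng(0)
      n, sqrt_s, x_min = 1_000_000, 8000.0, 1e-4

      # Sample momentum fractions from a 1/x distribution on [x_min, 1]
      # (a crude, made-up stand-in for a real parton distribution function).
      x1 = x_min ** rng.random(n)
      x2 = x_min ** rng.random(n)

      mini_energy = np.sqrt(x1 * x2) * sqrt_s
      frac = np.mean(mini_energy > 125.5)
      print(f"Toy fraction of mini-collisions above 125.5 GeV: {frac:.2f}")

      # Only a small further fraction of sufficiently energetic mini-collisions would
      # actually produce a Higgs boson; real event generators fold in the measured
      # parton distributions and the Standard Model to compute that rate.
      ```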

  2. How low an initial data rate in 2015 are they talking about?

    In 2010 they collected 45/pb, compared to 5.5/fb in 2011. It would be very disappointing if they opted similarly for something of the order of 100/pb in 2015.

    1. Nobody has any idea, because it depends on whether the schedule slips much, and on whether a lot of 25 ns studies are needed, and on what the priorities are as set by the lab directorate. But I would not worry much about this; whether 2015 brings us 0.1/fb or 1/fb or 5/fb, it will still pale compared to what will occur in 2016. The really important thing to do in 2015 is to set things up for optimal running in 2016 and beyond… you can’t expect optimal running in 2015. So while I’m sure there will be a lot of pressure to aim for 5/fb in 2015, priorities are clearly to get the best results during Run 2 as a whole, not in 2015 specifically.

  3. From an engineering perspective, it makes a lot of sense to start Run 2 on the safe side and gather a lot of operational data, to learn and reflect on the better pathways for safely increasing the collision energy to 14 TeV.

  4. Sorry, your question isn’t clear. Are you asking how often there are collisions of the quarks, antiquarks and gluons inside the two protons that reach 130 GeV? (Maybe it would help if you clarified *why* you are asking the question.) And by “Monticello” do you mean “Monte Carlo”?

  5. I want to ask about pp collision when energy can be 130 GeV theoretically,
    and how we can expect that by using computational data Monticello.

  6. Could you please fill in just a few more details about the every-25-ns question? For example, is 25 ns equal to one bunch? How long might the machine be running at that rate? How many actual collisions might be observed per 25 ns?

    Thank you.

    Ralph

    1. As you note, each of the two proton beams is made from about 1400 or about 2800 bunches of about 100,000,000,000 protons each. Bunches from the two beams will cross at ATLAS and at CMS every 50 nanoseconds (with 1400 bunches) or 25 nanoseconds (with 2800 bunches). The number of simultaneous proton-proton collisions expected for each bunch crossing will be something like 40 – 80 (for 1400 bunches) or something like 20 – 40 (for 2800 bunches.) This “pile-up” (http://profmattstrassler.com/articles-and-posts/largehadroncolliderfaq/the-trigger-discarding-all-but-the-gold/) makes the measurements more difficult, and reducing the pile-up is one of the reasons that the experiments would like to operate with 2800 bunches and 25 nanoseconds between bunch crossings.

      1. Why will there be fewer collisions per crossing when you add more bunches to the beam? Will there be fewer protons in a bunch?

        1. I think they won’t use fewer protons per bunch, but rather less tightly focused bunches. This may allow the beams to last longer before they have to be dumped and remade. But that’s up to the accelerator physicists to optimize.
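
      To see roughly how the pile-up numbers a few replies up hang together: the average number of simultaneous collisions per crossing is just the total collision rate (luminosity times the inelastic cross section) divided by the bunch-crossing rate, so doubling the number of bunches at fixed luminosity roughly halves the pile-up. In the back-of-the-envelope sketch below, the luminosity and cross section are illustrative assumptions of mine, not official LHC parameters.

      ```python
      # Back-of-the-envelope pile-up estimate; the luminosity and inelastic
      # cross section are illustrative assumptions, not official LHC numbers.
      lumi = 1.2e34         # assumed instantaneous luminosity, in cm^-2 s^-1
      sigma_inel = 7.0e-26  # assumed inelastic pp cross section, in cm^2 (~70 mb)
      f_rev = 11245.0       # LHC revolution frequency, in Hz

      collisions_per_sec = lumi * sigma_inel  # total pp collisions per second

      for n_bunches, spacing in ((1400, "50 ns"), (2800, "25 ns")):
          crossings_per_sec = n_bunches * f_rev  # bunch crossings per second at a detector
          pileup = collisions_per_sec / crossings_per_sec
          print(f"{spacing} spacing, ~{n_bunches} bunches: ~{pileup:.0f} collisions per crossing")
      ```

      With these assumed inputs the estimate comes out around 53 collisions per crossing for ~1400 bunches and around 27 for ~2800 bunches, consistent with the 40 – 80 and 20 – 40 ranges quoted above.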

  7. Welcome to Barcelona! Use suncream, watch your belongings and avoid thimbleriggers… otherwise enjoy 😛

  8. Enjoy Barcelona! It was my entry on my first European tour. Just remember that Columbus points the wrong way!

  9. Well, I just came from a site where people, a few of whom also comment on Matt’s site, did a lot of LHC, Higgs, and particle physics bashing. I must say that I’m at a loss to understand it. I think the LHC is a wise investment that has already borne fruit. Whether or not the Higgs turns out to be the “giver of mass”, it is clear that a fundamental spin-0 boson has been discovered. That is a major discovery that cannot be denied.

    I know that building, operating, maintaining and upgrading colliders like the LHC is not cheap, but I do think that CERN spent money wisely. First of all, they built the LHC in the old LEP tunnel, and this alone saved a fortune in construction costs. (The U.S. has a hole in the ground in Texas where our LHC, the Superconducting Super Collider, should have been… The U.S. Congress, so very frugal (ha!), killed it years ago.) Second, the LHC is still in its infancy, and is only operating at about half its design energy (14 TeV); I think it is very premature to make judgments on the machine’s success at this point. Anyway, sorry to start this one off on a rant…

    Matt, I do have a question. I read a paper recently: “The universal Higgs fit” (arXiv:1303.3570v1 [hep-ph], 14 Mar 2013), where the authors state: “We derive for the first time the SM Higgs boson mass from the measured rates, rather than from the peak positions, obtaining Mh = 124.2 +/- 1.8 GeV.” Is it more accurate to go by the rates?

    1. Using the rates assumes the Standard Model is correct, while directly measuring the mass (from measuring the energy of what the Higgs decays to, and using energy and momentum conservation) doesn’t make that assumption. So I would view these techniques as complementary. I am also not sure that you should trust the quoted uncertainty bar; it may be a bit optimistically small.
