Of Particular Significance

LHC Producing 8 TeV Data

POSTED BY Matt Strassler

ON 04/11/2012

Still early days in the 2012 data-taking run, which started just a couple of weeks ago, but already the Large Hadron Collider [LHC] accelerator wizards, operating the machine at 8 TeV of energy per proton-proton collision (compared to last year’s 7 TeV), have brought the collision rates back up nearly to where they were last year. This is very good news, in that it indicates there are no significant unexpected technical problems preventing the accelerator from operating at the high collision rates that are required this year. And the experiments are already starting to collect useful data at 8 TeV.

The challenges for the experiments of operating at 8 TeV and at the 2012 high collision rate are significant.  One challenge is modeling. To understand how their experiments are working, well enough that they can tell the difference between a new physical phenomenon and a badly understood part of their detector, the experimenters have to run an enormous amount of computer simulation, modeling the beams, the collisions, and the detector itself.  Well, 8 TeV isn’t 7 TeV; all of last year’s modeling was fine for last year’s data, but not for this year’s.  So a lot of computers are running at full tilt right now, helping to ensure that all of the needed simulations for 8 TeV are finished before they’re needed for the first round of 2012 data analysis that will be taking place in the late spring and early summer.

Another challenge is “pile-up.” The LHC proton beams are not continuous; they consist of up to about 1300 bunches of protons, each bunch containing something like 100,000,000,000 protons. Collisions in each detector occur whenever two bunches pass through each other, every 50 nanoseconds (billionths of a second). With the beam settings that were seen late in 2011 and that will continue to intensify in 2012, every time two bunches cross at the center of the big experiments ATLAS and CMS, an average of 10 to 20 proton-proton collisions occur essentially simultaneously. That means that every proton-proton collision in which something interesting happens is doused in the debris from a dozen uninteresting ones. Moreover, some of the debris from all these collisions hangs around for a while, creating electronic noise that obscures measurements of future collisions. One of the questions for 2012 is how much of a nagging problem the increasing pile-up will pose for some of the more delicate measurements — especially the study of Higgs particle decays, both expected ones and exotic ones, and searches for relatively lightweight new particles with low production rates, such as particles created only via the weak nuclear force (e.g. supersymmetric partners of the W, Z and Higgs particles).
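
For readers who like to see where such rates come from, here is a minimal back-of-the-envelope sketch in Python; the bunch count and revolution frequency used below are approximate outside assumptions (roughly the nominal 2012 values), not precise machine parameters:

    # Rough estimate of the proton-proton collision rate at ATLAS or CMS.
    # ~1380 bunches per beam was roughly the 2012 maximum with 50-nanosecond
    # spacing, and each bunch circles the ring about 11245 times per second.
    bunches_per_beam = 1380
    revolution_frequency_hz = 11245
    pileup_per_crossing = 15  # middle of the "10 to 20" range quoted above

    bunch_crossings_per_second = bunches_per_beam * revolution_frequency_hz
    collisions_per_second = bunch_crossings_per_second * pileup_per_crossing

    print(f"bunch crossings per second: {bunch_crossings_per_second:.1e}")      # ~1.6e+07
    print(f"proton-proton collisions per second: {collisions_per_second:.1e}")  # ~2.3e+08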

But I have a lot of confidence in my colleagues; barring a really nasty surprise, they’ll manage pretty well, as they did last year.  And so far, so good!

24 Responses

  1. Right, it wouldn’t solve everything. But it would solve some things. For inspiration one can look perhaps at the MathOverflow project and Tim Gowers’s Polymath endeavour. Both of these were clearly impossible. 😉 But despite that, somebody started them and they are now working extremely well for the mathematical community. I’d argue that centralizing an LHC to-do list is an awful lot less speculative than orchestrating a massively-collaborative proof of the density Hales-Jewett theorem, which Gowers succeeded in doing. (Or so I read on Wikipedia. 🙂 I’m such an idiot at math I can scarcely even understand what the theorem’s trying to say beyond the obvious, let alone follow their proof!!!)

    I’d venture that if the math community can do this sort of thing, the LHC community should be able to as well. Especially since the LHC is in Europe, and that’s where the web was invented. I’d offer to put up a starting list myself on Google Docs, but if *I* built it, I don’t think they’d come. 🙁

    1. I see no reason why my colleagues shouldn’t think about it.

      Nevertheless, my guess is that it only works for certain types of simple problems that can be farmed out. Very few important Large Hadron Collider projects are of this sort; there are so many ways to make errors and get garbage results that in my experience it is hopeless to ask a non-expert to contribute, unless that non-expert is being supervised by an expert. There is no way, for instance, that we could do anything resembling crowd-science in the LHC context. It would take more experts more time to keep an eye on quality control than it would take for them to do the project themselves. That’s precisely the problem: if you gave me 100 bright undergraduates eager to work on LHC physics, but not the 30 graduate students to supervise them and the 10 postdocs to supervise the graduate students, I would not be able to use them.

      Moreover, I know enough mathematics to be confident that most problems in math also cannot be done this way. Just because there are some exceptions does not mean that the exceptions are the rule.

      1. By the way, you can see a test case on this site. I essentially tried to do what Gowers did, in the context of Exotic Higgs Decays, back in January. I think it is urgent and I hoped some of my colleagues would react. But nobody did at the time. Perhaps this is partly because it required a very long blog post just to explain the issue. Perhaps it is because very few of the readers would have been experienced enough to get the point, and even fewer in a position to do something about it.

        You realize LHC theorists make up a very small community. A few hundred, many of whom are students. It’s not a lot, for such a complex undertaking.

  2. That’s very interesting. You say “we’re understaffed in critical areas…”. Which areas precisely? If, for example, 50 well-trained ex-theorists retired from their hedge funds and showed up at your office tomorrow to offer a helping hand, with no strings attached, what problems would you point them at? And my follow-on question: if they had retired not from hedge funds but from the Peace Corps, they’d probably need salaries. Could you find the money to pay them? 🙂 I heard on the radio that most areas of theoretical physics are constrained not by the number of trained applicants, but by the number of funded jobs. That’s a rather different problem.

    1. First, we don’t have 50 well-trained ex-string-theorists at hedge funds who could start doing Large Hadron Collider physics tomorrow. It took me five to ten years to prepare for this experiment. Even with 50, these very smart folks would still need two years of hard work and of building their knowledge base before they’d be able to assist at the forefront.

      Second, funding is a big issue, of course. I’ve seen moments of good funding and moments of terrible funding in the last few years. There were critical moments where I raised lots of money and our university cut our budget so much that it didn’t do me any good.

      But there have also been critical moments where there’s been enough money and not enough highly-trained experienced people to fill positions.

      What are the problems that need attention? Well, I’ll give you a few examples (and note these are my personal viewpoints, which are not widely shared by my colleagues.)

      As a full professor, I’ve spent the last four months very inefficiently studying exotic Higgs decays, which we are in some danger of throwing away due to the very difficult triggering conditions at the 2012 LHC. Making sure we keep as many of these events as possible is one of the most important issues for the LHC, in my view, in 2012. Instead of giving this problem to a battalion of postdocs and students, I’ve been writing all the code and making all the plots and analyzing all the results myself, until very recently. (Arguably this is not the best use of time for a full professor.) And as one person I’m only able to study part of the problem; you also need to know how to analyze the data in order to decide which classes of events to trigger on, and we don’t have nearly enough people able and willing to do this. Recently I did convince about 8 people, mostly postdocs from other universities, to help out with the analysis issues. But time is running out. We have to have these studies done very, very soon, or the majority of the 2012 data will be collected without anywhere near optimal sensitivity to exotic Higgs decays.

      And the same issue applies for various supersymmetry studies as well, and other things like them. You need the right triggers in place. It is not clear we have all of them. We do not have nearly enough trained theorists who both recognize the problem and could possibly help in solving it.

      Another class of problems: so far, the experiments have mostly done the obvious classes of data analyses — bread and butter. But nature may not be bread and butter; it may be subtle. The experimentalists are gradually branching out and doing more types of studies. But the actual list of analyses that ought to be done is very, very long — and some of these cannot even be done until other analyses of the data on standard processes are carried out first. I know of perhaps a dozen interesting and complicated analyses that need work by a combination of theorists and experimentalists, for which there are not enough experienced theoretical experts.

      Then there are real hard-core calculations of backgrounds from known processes that the experimentalists need theorists to do. There have been a lot of big improvements in the last few years, but more are needed, and more personnel would help a lot. We can’t do the high-precision measurements that should be possible at the LHC down the road without more precision on theoretical calculations, and extraction of information by theorists from experimental data.

      We need more sophisticated ways of analyzing LHC data, so that we can increase our sensitivity to rare new phenomena. But we are reaching the point where the LHC has collected so much data that any theoretical study of any real sophistication needs to run a huge amount of simulation of known processes. A coordinated effort to develop a repository of these known processes would have been immensely helpful to theorists — but it would have needed coordination, and personnel. We didn’t and don’t have it, and that reduces our collective efficiency… every study I do now takes months longer than it needs to. And of course we need more theorists actively thinking about how to be more sophisticated at the LHC.

      I could go on, but you’re bored by now. So is everyone else. I’ve been complaining about this for years, but you shouldn’t think that most people in my field agree with me.

      1. Oh yep I see …:-(

        What you say is not boring at all; on the contrary, it is very important and urgent!

        Dear Prof. Strassler, I wish you all the best for successfully getting enough attention and colleagues who help with these important tasks, such that the situation gets better.

        1. Well, I am afraid it is probably too late for me personally to do very much about it; but there is a new generation of bright young scientists at major universities now, and I hope they will be given the support needed to address the issues.

      2. Ok, so let’s fix stuff:

        Suppose there was a centralized database of important LHC problems (imagine a spreadsheet, though ultimately it should be a lot more complex; there’s a rough sketch below). Column A lists the problem (e.g., compute the following Higgs amplitudes, write code to create the following plots, test these CMS triggers, or whatever). Column B lists estimated difficulty / skills required. Column C lists what other items depend on it (recursively, of course). Column D lists votes on how important/urgent it is. Yep, you’ll also want to multiply by the voter’s credibility, but that’s easy. Column E lists who’s signed up to solve it, updated with how far they think they’ve got. Column F lists the final solution, and refereed signoff.

        Now when a postdoc wants a project, he sorts by Column B and C, and chooses the most important item he’s skilled in that’s not close to being solved by someone he respects much ;). Everyone gets a birds-eye view of what’s needed for various stages of the Higgs Hunt, and what’s holding stuff up. There’s a lot of subjectivity, but the voting will make that ok. Lots of obvious ways to do that. When you’ve done a calculation, just post the raw result. Don’t waste time writing a paper for the arXiv wrapped in a wordy explanation of its importance. Everyone knows why it’s important: It’s line 3243 on The List.

        And this same list should be made readable by the public. Summarized on the front page, sure, but drillable to an arbitrary level. It’s a spectator sport, right? I assume nobody would claim that group rivalries would make this unworkable. But if so, that’s probably something they’d want to keep very quiet about in this funding climate. 😉

        Is there already such a centralized list? If so, what’s the URL please? I’d be really interested to play around with it. 😉
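
        Just to make the idea concrete, here is a minimal sketch (in Python) of what one row of such a list might look like; every field name and example value here is purely illustrative, not a real task:

        # Hypothetical schema for one entry in the proposed LHC to-do list.
        # Nothing here is real; it just mirrors the columns described above.
        from dataclasses import dataclass, field

        @dataclass
        class Task:
            problem: str                  # Column A: what needs doing
            difficulty: str               # Column B: estimated difficulty / skills required
            needed_for: list[str] = field(default_factory=list)   # Column C: items that depend on it
            votes: int = 0                # Column D: importance/urgency votes
            assigned_to: list[str] = field(default_factory=list)  # Column E: who has signed up, and progress
            solution: str = ""            # Column F: final solution and refereed sign-off

        example = Task(
            problem="Write code to create some (illustrative) trigger-efficiency plots",
            difficulty="needs basic familiarity with event simulation",
            needed_for=["hypothetical line 3243 on The List"],
            votes=12,
        )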

        Angelina

        1. I admire your enthusiasm!

          There is no centralized database. That is partly because it isn’t as easy to make one as you suggest. Sure, some calculations can indeed just be described simply (as in the examples you gave) and for some of these there are lists, though not centralized. There is a similar list of relatively straightforward experimental analyses that need to be done:

          http://arxiv.org/abs/1105.2838

          So sure, we could make a bigger list for theorists. It shouldn’t be a problem, except for maintaining it. It would be very long and always changing. [There would be some difficulties with people not wanting to reveal what they are working on to their competitors, but let’s ignore that.]

          What worries me about the idea is that there are many things that cannot easily be described, certainly not in a few lines. You’d need a number of paragraphs, or pages, just to illustrate the problem. And that’s the stuff that typically doesn’t get done.

          Moreover, many of the things that need to be done (“think hard about how to use the data from 2011 to allow us to dig deeper into the 2012 data for challenging signals”) are vague — they aren’t easily described problems that you can divide up in some obvious way into sub-problems that are so well-specified that a postdoc can just pick one, carry it out, write up the answer and go to the next one. If I knew what they should calculate, the problem would already be mostly solved. Often I need a team of people to do a scouting expedition before I can plot the course through the territory.

          I could give you many other types of examples… and I’m concerned such a list would be badly biased toward certain classes of problems and leave off very important ones. It would be a lot of work to keep it properly balanced.

          In any case, I don’t think the problem is that we don’t have any way of communicating the problems that exist to the personnel we have. I’ve been giving talks on this stuff for years now. The problems are

          1) for many non-glamorous problems, especially ones that can take a year to do, we don’t have enough people who are both able and willing to take them on; putting those problems on a list will not change that.

          2) for the most sophisticated and difficult problems, we do not have enough trained personnel at the mid-levels to work effectively with the faculty who actually understand the problem, and we do not have enough faculty at top universities (though this is improving) to train the graduate students who will become the future personnel at the mid-levels. I don’t think listing the problems would help much with getting those problems done, because nothing you could put in a list would actually explain the subtleties to a less experienced person. All I could do is write: “come visit me for a couple of weeks and I’ll explain the problem to you.”

          But perhaps one of my colleagues disagrees with me, and has the time and energy to set up and maintain a Wiki on which such a list can be made. While I’m skeptical about its efficacy, I can see it would have some merits, and certainly wouldn’t oppose it being done.

          My own personal approach to this problem had been to try to establish a national institute in Large Hadron Collider physics, to serve as a training ground, knowledge repository, and clearing house. Unfortunately my first (and last) chance to do that was at the end of the Bush administration, and it ran headlong into the 2008 recession. I do occasionally use this website to raise awareness about what I see as important problems, as I did earlier this year in my discussions of exotic Higgs decays, for instance http://profmattstrassler.com/2012/01/27/exotic-decays-of-the-higgs-a-high-priority-for-2012/
          and in disputing claims that somehow supersymmetry had already been excluded back in July of last year.

      3. 🙂

        Hi Angelina, if there were a corresponding button to do it, I would give your comment a “like” or “upvote”.

  3. I wish the LHC wizards all the best in successfully accomplishing their important and difficult tasks; nice to hear it looks good so far 🙂

    Dear Prof. Strassler, I apologize for asking this off-topic question; but since I care a lot about science generally and fundamental physics in particular :-), I’m worried about this Nature article pointed out by Lumo:

    http://www.nature.com/nature/journal/v484/n7393/full/nj7393-279a.html

    I have no access to the full article, but from some explicit quotes and excerpts reposted on TRF, it seems that this Harvard professor intends to use this idea of “rating agencies” for science as a tool to abolish research fields or directions he does not like. In one of the quotes he seems to explicitly pounce on string theory, for example, and so on …

    I’d like to hear your voice of reason on this and what you think about this idea and the article (if you can access it somehow).
    What are the chances that such a thing could really be enforced upon science and scientists? Would anybody (governments or other people who decide about the funding of science) pay any attention to such “rating agencies”, or would they rather listen to the experts, as is hopefully the case up to now?

    1. I think it’s obvious that any such credit rating scheme would be highly political and impossible to maintain, and highly inaccurate if it were applied, in a double-blind test, to the past. General relativity and even quantum field theory would obviously have done very poorly. Fields change rapidly, and what one graduate student thinks right now is the best thing to study may be the worst by the time the next crop of students earns their Ph.D.s. The idea is unlikely to fly, and if it does, it will never work.

      Incidentally, haven’t we noticed what a catastrophe the credit ratings agencies were for the economy?

      That said, my personal view is that there has been an incredible disaster of wasted talent in the string theory community. It’s been a disaster for particle physics, certainly, to have all those bright young people choose string theory, learn no particle physics, get their Ph.D.s, fail to find good faculty jobs, and leave the field. We could really use that talent right now — particle theory is badly understaffed, in my view.

      So I understand the problem that Loeb is trying to address. But I don’t think a rating agency is a good idea. A better mentoring system and better information sharing might be a better idea. We already have a rumor mill (over 15 years old, I believe) about faculty and postdoc jobs that makes it quite clear what’s going on in the job market… and lots of people are sharing information informally over the internet. In other words, social media and decentralization are probably going to be more effective than a centralized system.

      1. Ah ok, thanks for this answer 🙂

        So you really think it is no longer useful for physicists to investigate things going too far beyond the SM (but I thought SUSY is not yet stone dead, for example?), or for other interested people to try to follow or understand more about this just for fun? … I did not know this. And these things should then really be stopped, as Loeb probably wants…?

        Anyway, I think this site largely contributes to making “conventional” particle physics look very cool 🙂

        1. No, that would be far, far too strong a statement. It is a matter of balance… both within the field, and within individuals.

          I’ve done some amount of string theory myself, and knowing string theory has been very important to my theoretical work on other subjects. String theory, conformal field theory, and supersymmetry, as tools for understanding how quantum field theory works, have been incredibly valuable to me, and to the field. They’ve provided numerous technical ideas and conceptual ideas that influence physics at the Large Hadron Collider. Personally I think everyone doing high-energy theoretical physics should be familiar with them, to a greater or lesser degree depending on the nature of their talents.

          The same goes for the field having a certain number of people exploring abstract and/or highly speculative strategies to understanding nature. If we don’t explore then we won’t find. And some people are really, really good at that, and they should do it.

          But the future of the field depends on actually making discoveries, and if we’re understaffed in critical areas that influence our ability to make discoveries, that’s a problem.

          It does not make sense to train lots of people who aren’t all that spectacular at doing formal theoretical work to do formal theory and nothing else. Then, if they fail at doing it, they just leave, and all of their training is lost to the field. If the field had trained them more broadly, it might not have lost their talents, and might perhaps have avoided the understaffing problem at the LHC.

          However, this is all hypothetical.

  4. Dear Prof, been reading your stuff on and off, but I’m missing something. Why is the measure of inverse femtobarns so interesting? Seems to me that it has units of (number of particles going through a unit area). Therefore if I make a really tiny beamline and shoot a single proton through it, I’ll get a high count of inverse femtobarns. BUT I’ll have at most one collision (‘cos there was only one proton), so I won’t learn much, ‘cos quantum is probabilistic. What gives? (Or are you saying I’m only allowed to constrain the beamline size to the Compton wavelength of the proton or something? But then the same question would also apply to pointlike particles like electrons, or very massive particles.) I’m really puzzled!
    Thanks!
    Angelina

    1. What you’re really asking is why we talk about cross-sections (in the form of an area), and the luminosity integrated over time (in the form of an inverse-area), such that

      Number of Collisions that Produce X = Cross Section to Produce X * Time-Integrated-Luminosity.

      If we choose to express Cross-Section in barns or nanobarns or femtobarns, we should express time-integrated-luminosity in terms of inverse barns or nanobarns or femtobarns, so that the number of collisions is a pure number.

      This way of doing things is confusing to everyone the first time they learn it; I spend a significant fraction of a particle physics class on it, making sure students understand the subtle points. And you can indeed make the usual calculation invalid if you do funny things with the beams such as what you just described. (That’s a purely classical issue, by the way — nothing quantum mechanical about it.) I can go into those technicalities at some other time when I have my lecture notes handy (I worked hard to get a clear pedagogical presentation and don’t want to reinvent it today.)

      What is really being expressed by the statement that the total cross-section of proton-proton scattering is 0.11 barns is that if the centers of two protons heading in opposite directions come within about 2 femtometers, they will undergo substantial scattering:

      Region where collision can occur ~
      Pi * r^2 ~ 3 * (2*10^(-15) meters)^2 ~ 0.12 * 10^(-28) meters^2 ~ 0.12 barns

      And what is being expressed by the statement that there were about 5 inverse femtobarns of time-integrated-luminosity last year at both ATLAS and CMS is that there were about

      0.11 barns * 5 femtobarns^(-1) * (10^15 femtobarns/barn) ~ 0.5 * 10^15 proton-proton collisions

      last year. This is really the main important content in saying that there were 5 inverse femtobarns of collisions last year.

      Similarly, since the cross-section to produce 125 GeV Higgs particles is about 17 picobarns = 17000 femtobarns, there were about

      17000 femtobarns * 5 femtobarns^(-1) ~ 85,000 Higgs particles (I’m being very rough here…)

      produced last year at both ATLAS and CMS.

      But we could calculate the same number of Higgs particles if we kept track of the fact that

      a) the number of collisions last year was 0.5 * 10^15
      b) the probability of making a 125 GeV Higgs particle in a 7 TeV proton-proton collision is about 1.6*10^(-10)

      using

      Number of Higgs particles produced = Number of proton-proton collisions * Probability to produce a Higgs particle in a proton-proton collision

      and this would definitely be more intuitive.

      It turns out that calculations of interesting scattering of high-energy quarks and/or gluons in a proton-proton collision are most conveniently and precisely expressed as a cross-section, not as a probability. That’s the reason that we keep track of the number of collisions the way we do, using a cross-section with units of area times a time-integrated-luminosity with units of inverse area.
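
      If it helps, here is that arithmetic written out explicitly as a small Python script, using only the rough numbers quoted above (so everything is approximate):

      # The arithmetic from this comment, written out explicitly.
      barn_to_femtobarn = 1e15

      sigma_total_fb = 0.11 * barn_to_femtobarn   # total pp cross-section, ~0.11 barns
      sigma_higgs_fb = 17e3                       # ~17 picobarns = 17000 femtobarns
      integrated_luminosity_invfb = 5             # ~5 inverse femtobarns in 2011

      n_collisions = sigma_total_fb * integrated_luminosity_invfb   # ~0.55 * 10^15, i.e. roughly 0.5 * 10^15
      n_higgs = sigma_higgs_fb * integrated_luminosity_invfb        # ~85,000

      # The equivalent "probability" formulation:
      p_higgs = sigma_higgs_fb / sigma_total_fb                     # ~1.6 * 10^(-10)
      print(f"{n_collisions:.1e} collisions, about {n_higgs:.0f} Higgs particles,")
      print(f"probability per collision {p_higgs:.1e}")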

      1. Thanks so much! that’s very useful. I hadn’t realized the proton-proton cross section was 0.11 barns. I’m a ditz – that’s my missing datum.
        However it sounds odd to hear you talk about the calculation being invalid when you “do funny things with the beams”. Shouldn’t a precise physical theory predict its own demise somewhat more quantitatively? It would make me feel more satisfied if I was told that the cross-section calculations break down as the beam intensity approaches X, or as the beam diameter approaches Y x (average proton separation), or something like that. I’m left wondering whether my thought experiment is ok with 100 protons squeezed into my tiny tube? Or 10^6? What’s the magic number? I heard on Science Friday that apparent paradoxes can often serve to elucidate underlying principles of physics. (That’s how Einstein discovered Relativity, apparently.) This sounds like one too. Perhaps you could include this Gedanken Experiment as a problem in your particle physics class if the general concept is confusing to actual physics students too. At least I’m in good company 🙂

        1. No, no, I didn’t mean anything so elaborate or profound. I just mean that when you interpret a concept like “cross-section” you make an assumption that you’re dealing with a reasonable beam and a reasonable target, and not taking some funny extreme case. It’s all classical physics, no paradoxes or quantum physics or any of that. It has nothing to do with predictions or a theory; it’s just knowing how to use the concept of cross-section properly.

          What you need to do is study the concept of cross-section in a perfectly ordinary classical scattering problem, where you scatter a beam of area A_1 made of solid balls of radius r_1 off a target of area A_2 made of balls of radius r_2 (or off a second beam of area A_2 made of balls of radius r_2). Then you can see where the area of the two beams comes up, and how you should interpret the result.

      2. Oh! It’s as simple as that! Just a classical probabilistic interpretation based upon ratios of areas. I’m so embarrassed. Well, in that case my silly toy model breaks down *precisely* when the area A_1 of the target/beam is less than the cross sectional area of the individual balls. That’s the point at which the balls completely fill my pipe. It’s a sharp cutoff. For your p-p cross-section of 0.11 barns, that would translate to 10^-14m diameter. So concepts of luminosity are well defined down to beam diameters of 10^-14m. And the LHC has rms beam size of 1.6×10^-5m at the interaction points (according to Wikipedia), so we’re well away from the danger zone. Other scattering amplitudes would obviously give rise to different minimum meaningful beam diameters. Only sense in which this is a quantum effect I suppose is that the cross section for these processes must be computed quantumly, but it’s nothing to do with the Compton wavelength. Now I think I understand. Sorry it took me so long but thanks for helping me discover it for myself.
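
        Spelling that cutoff estimate out in a couple of lines of Python, in case anyone wants to check my arithmetic (same ratio-of-areas picture as above):

        # Beam diameter at which a disk of area 0.11 barns would fill the pipe.
        import math
        sigma_m2 = 0.11e-28                           # 0.11 barns; 1 barn = 1e-28 m^2
        d_cutoff = 2 * math.sqrt(sigma_m2 / math.pi)
        print(f"cutoff diameter ~ {d_cutoff:.1e} m")  # ~4e-15 m, i.e. of order 10^-14 m
        # versus the LHC's ~1.6e-5 m rms beam size at the interaction points
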
        (BTW, I couldn’t reply to your reply^2, so this may be out of sequence.)
        Angelina

  5. Firstly, allow me to gush with gratitude, again.
    I wonder if readers know about meltronx.com? I keep an eye on the LHC via my favorite panel. It amazes me that I can see what’s happening, in this detail, in real time.

  6. Prof, I have a question concerning how the background is calculated (the null hypothesis). Assuming for a moment that there is a Higgs with m = 125 GeV, the background and the signal can be calculated. Is the background-only hypothesis just the background+signal – signal? Or is the energy that would have gone into the signal somehow then “spread out” across the original background? I hope this question makes sense.

    1. It’s background+signal – signal. In the absence of the Higgs, the collisions of the protons that would have made a Higgs particle will make something else very common — typically the most common thing that is made in proton-proton collisions: a splat of hadrons, or at best, a pair of jets. http://profmattstrassler.com/articles-and-posts/particle-physics-basics/the-known-apparently-elementary-particles/jets-the-manifestation-of-quarks-and-gluons/ Such events do not look like the background for the Higgs signal.

      Only a tiny tiny tiny fraction of the no-longer-Higgs collisions will make anything that could possibly be mistaken for a Higgs — and thereby enter into the background for a Higgs signal.
