Of Particular Significance

Higgs Results from The First Week of the Moriond Conference

POSTED BY Matt Strassler

ON 03/07/2012

[UPDATE: Tevatron results start a few paragraphs down; LHC results will appear soon]

[2nd UPDATE: ATLAS's new results added: the big unexpected news. As far as I can tell, CMS, which got its results out much earlier in the year, didn't add anything very new in its talk today.]

[3rd UPDATE: some figures from the talks added]

[4th UPDATE: more understanding of the ATLAS lack of excesses in the new channels, and of what it does to the overall excess at 125 GeV; the local significance drops from about 3.5 sigma to about 2.5, and with the look-elsewhere effect the probability that the whole thing is an accident is now 10%, not 1%. Thanks to a comment for pointing out how large the effect was.]

This morning there were several talks about the Higgs at the Moriond Electroweak conference. There were talks from the Tevatron experiments CDF and DZero; we expected new results on the search for the Higgs particle from each experiment separately, and combined together. There were also talks from the Large Hadron Collider [LHC] experiments CMS and ATLAS. It wasn't widely known how much new we'd see; they don't have any more data than they had in December, since the LHC has been on winter shut-down since then, but ATLAS especially still hadn't presented all of the results based on its 2011 data, so it might present new information. The expectation was that the impact of today's new results would be incremental; whatever we learned today wouldn't dramatically change the situation. The Tevatron results will certainly cause a minor ruckus, though, because there will surely be controversy about them, by their very nature. I gave you a sense for that yesterday. They aren't likely to convince doubters. But they might provide more pieces of evidence in favor of a lightweight Higgs (though not necessarily at the value of around 125 GeV/c2 currently preferred by ATLAS and CMS; see below.)

There are two things I didn’t explain yesterday that are probably worth knowing about.

First, if you look at Figure 2 in my post from yesterday, you'll notice that the shape of the Higgs signal at the Tevatron experiments is very broad.  It doesn't have a nice sharp peak at the mass of the Higgs (115 GeV in the figure).  This is because (as I discussed yesterday) it is hard to measure jets very precisely.  For this reason CDF and DZero will be able to address the question "is there or is there not a lightweight Higgs-like particle?", but they will not easily be able to address the question "is its mass 115 GeV, 120 GeV, 125 GeV or 130 GeV?"  So we're really talking about them addressing something only slightly beyond a Yes-No question, and one which requires them to understand their backgrounds really well.  This is to be contrasted with the two-photon and four-lepton results from ATLAS and CMS, which with more data are the only measurements, in my view, that can really hope to establish a signal of a Higgs particle in a completely convincing way.  These are the only measurements that will see something that could not be mimicked by a mis-estimated background.
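
To make that broadness concrete, here is a toy sketch of how detector resolution smears a narrow resonance into a broad bump. The resolution percentages are rough illustrative assumptions of mine, not the experiments' actual performance numbers.

```python
import numpy as np

# Toy illustration: the same 125 GeV resonance, reconstructed with a coarse
# jet-pair mass resolution versus a fine photon-pair resolution.
# Both percentages below are illustrative assumptions, not CDF/DZero or
# ATLAS/CMS performance figures.
rng = np.random.default_rng(0)
TRUE_MASS = 125.0  # GeV

def reconstructed_masses(relative_resolution, n=100_000):
    """Simulate reconstructed masses for a narrow resonance with Gaussian smearing."""
    return rng.normal(TRUE_MASS, relative_resolution * TRUE_MASS, size=n)

bb = reconstructed_masses(0.15)   # ~15% smearing for a bottom quark/anti-quark pair (assumed)
gg = reconstructed_masses(0.015)  # ~1.5% smearing for a photon pair (assumed)

print(f"b bbar peak:     width ~ {bb.std():4.1f} GeV -> broad bump, hard to pin down the mass")
print(f"two-photon peak: width ~ {gg.std():4.1f} GeV -> sharp peak, mass easy to read off")
```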

Second, the key to the CDF and DZero measurements is being able to identify jets that come from a bottom quark or anti-quark, a technique called "b-tagging the jets."  As I described yesterday, they are looking for Higgs decays to a bottom quark and a bottom antiquark, so they want to keep events that have two b-tagged jets and throw away others.  I have finished a new short article that explains the basic principles behind b-tagging, so you can get an idea of what the experimenters are actually doing to enhance the Higgs signal and reduce their backgrounds.  Now, b-tagging is never perfect; you will miss some jets from bottom quarks, and accidentally pick up some that don't come from bottom quarks.  But one part of improving the Tevatron measurement involves making the b-tagging techniques better and better.  CDF, at least, has already claimed in public that they've done this.
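
As a minimal back-of-the-envelope sketch of why b-tagging helps so much, consider requiring two b-tags. The efficiency and mistag numbers below are placeholders I have chosen for illustration; the real values depend on the detector and the tagging algorithm.

```python
# Requiring two b-tagged jets keeps most of the H -> b bbar signal while
# rejecting nearly all events whose jets come from light quarks or gluons.
# Both numbers below are illustrative assumptions only.
eps_b = 0.6        # assumed chance a genuine bottom-quark jet gets b-tagged
eps_mistag = 0.01  # assumed chance a light-quark/gluon jet is wrongly b-tagged

signal_kept = eps_b ** 2           # both signal jets must be tagged
light_bkg_kept = eps_mistag ** 2   # a light-jet background event needs two mistags

print(f"signal events kept:            {signal_kept:.0%}")
print(f"light-jet background kept:     {light_bkg_kept:.2%}")
print(f"signal-to-background improves: ~{signal_kept / light_bkg_kept:,.0f}x")
```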

I will update this post as more information becomes available and time permits.

UPDATES: New Tevatron Results and New ATLAS Results

New Tevatron Results

Tevatron claims a lightweight Higgs; to be precise, the combination of the two experiments CDF and DZero is incompatible with the absence of a lightweight Higgs at 2.2 standard deviations (or "sigmas"), after the look-elsewhere effect.  CDF sees a larger effect than DZero, but the CDF data analysis method also seems more aggressive.  Both methods are far too complicated for me to evaluate in detail.

The combination of DZero and CDF results from the Tevatron shows that their observed limit on the Higgs production rate as a function of its mass (solid line) lies about two sigma above the expected limit in the absence of any Higgs (dashed line) indicating an excess of events that appears consistent with a Higgs signal roughly in the 115-135 GeV mass range. By itself this result is not confidence-inspiring, but it does add weight to what we know from ATLAS and CMS at the LHC.

2.2 sigma is not much, and excesses of this size come and go all the time.  We even saw that several times this past year. But you can certainly view today’s result from the Tevatron experiments as another step forward toward a convincing case, when you combine it with what ATLAS and CMS currently see.  At minimum, assuming that the Higgs particle is of Standard Model type (the simplest possible type of Higgs particle), what CDF and DZero claim is certainly consistent with the moderate evidence that ATLAS and CMS are observing.  
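
For a rough feel for what "2.2 sigma" means as a probability, here is a minimal conversion sketch. It treats the excess as a simple one-sided Gaussian fluctuation, which glosses over the look-elsewhere effect and the details of the real statistical analysis.

```python
from scipy.stats import norm

# One-sided tail probability for a background-only fluctuation at least as
# large as the quoted number of sigmas (a crude rule of thumb, not the
# experiments' full statistical treatment).
for sigma in (2.2, 3.0, 5.0):
    p = norm.sf(sigma)  # survival function = P(fluctuation >= sigma)
    print(f"{sigma:.1f} sigma -> p ~ {p:.1e}  (roughly 1 in {1/p:,.0f})")
```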

There’s more content in that statement than you might think.  For example, if there were two Higgs particles, rather than one, the rate for the process CDF and DZero are measuring could easily be reduced somewhat relative to the Standard Model.  In this case they wouldn’t have found even the hint they’ve got.  (I explained why yesterday, toward the end of the post.)  Meanwhile the process that ATLAS and CMS are measuring might not be reduced in such a scenario, and could even be larger — so it would certainly be possible, if there were a non-Standard-Model-like Higgs at 125 GeV, for ATLAS and CMS to see some evidence, and CDF and DZero to see none.  That has not happened.  If you take the CDF and DZero hint seriously, it points — vaguely — toward a lightweight Standard-Model-like Higgs.  Or more accurately, it does not point away from a lightweight Standard-Model-like Higgs.

However, we do have to keep in mind that, as I noted, CDF and DZero can only say that the Higgs mass seems as though it might be in the range 115 to 135 GeV; they cannot nail it down better than that, using their methods, for the reasons I explained earlier.  So their result is consistent with a Standard Model Higgs particle at 125 GeV, which would agree with the hints at ATLAS and CMS, but it is also consistent with one at 120 GeV, which would not agree.  Thus the Tevatron bolsters the case for a lightweight Higgs, but is consistent both with the current hints at the LHC and with other parts of the range that the LHC experiments have not yet excluded.  If the current ATLAS and CMS hints went away with more data, the Tevatron results might still be correct; in that case ATLAS and CMS would eventually start seeing hints at a different mass.

But given what ATLAS and CMS see (the evidence from December, and the step forward in January with the CMS update of their two-photon data), something around 125 GeV remains the most likely mass for a Standard Model Higgs.  The issue cannot be considered settled yet, but so far nothing has gotten in the way of this hypothesis.

Now, the inevitable caveats.

First, as with any measurement, these results cannot automatically be assumed to be correct; indeed most small excesses go away when more data is accumulated, either because they are statistical fluctuations or because of errors that get tracked down — but unfortunately we will not get any more data from the now-closed Tevatron to see if that will happen.  The plausibility of Tevatron’s claims needs to be evaluated, and (in contrast to the two photon and four lepton results from ATLAS and CMS, which are relatively straightforward to understand) this won’t be easy or uncontroversial.  The CDF and DZero people did a very fancy analysis with all sorts of clever tricks, which has the advantage that it makes the measurement much more powerful, but the disadvantage of making it obscure to those who didn’t perform it.

One other caveat is that we will have to be a little cautious about literally combining results from the Tevatron with those from the LHC.  There's no sense in which the Tevatron result was done `blind' [this statement is factually incorrect as stated, as commenters from CDF are pointing out; there are indeed several senses in which it was done blind.  I should have been more precise about what was meant, which was more a general concern about how difficult it is to avoid bias in determining the backgrounds for this measurement.  Let me add that this is not meant to suggest anything about CDF, or DZero, in particular; doing any measurement of this type is extraordinarily difficult, and those who did it deserve applause.  But they're still human.]; it was done with full knowledge that the LHC already has a hint at 125 GeV, and since the Tevatron is closed and all its data is final, this is the Tevatron's last chance (essentially) to contribute to the Higgs particle search.  Combining experiments is fine if they are truly independent; if they are not, you are at risk of bolstering what you believe because you believe it, rather than because nature says it.

New ATLAS results 

ATLAS has now almost caught up with CMS, in that its searches for Higgs particles decaying to two photons and to two lepton/anti-lepton pairs (or “four leptons” for short) have now been supplemented by (preliminary! i.e., not yet publication-ready) results in searches for Higgs particles decaying to

  • a lepton, anti-lepton, neutrino and anti-neutrino
  • a tau lepton/anti-lepton pair
  • a bottom quark/anti-quark pair (which is what CDF and DZero looked for too)

(The only analysis ATLAS is missing is the one that CMS added in January, separating out events with two photons along with two jets.) In contrast to the CMS experiment, which found small excesses (just 1 sigma) above expectation in each of these three channels, ATLAS finds none.  [And I've been reminded to point out that the first channel has changed; in December, with 40% of the data analyzed, there was a small excess.] So CDF and DZero's results from today take us a step forward toward a convincing case, while ATLAS's result takes us a small step backward.  That's par for the course in science when you're squinting to see something that's barely visible.

In the same search as performed by CDF and DZero, and in the same region where they see an excess, ATLAS sees no excess at all; but ATLAS has less data and is currently less sensitive to this channel than CDF and DZero, so there is no clear contradiction.

But one can’t get too excited about this.  Statistics are still so low in these measurements that it would be easy for this to happen.  And determining the backgrounds in these measurements is tough.  If you make a mistake in a background estimation, you could make a small excess appear where there really isn’t one, or you could make a real excess disappear.  It cuts both ways.

But actually there is a really important result coming out of ATLAS today: the deficit of events in the search for the Higgs decaying to a tau lepton/anti-lepton pair.  For a putative Higgs below 120 GeV, ATLAS sees even fewer tau lepton/anti-lepton events than it expected from pure background; in other words, the background appears to have fluctuated low.  But this means there is not likely to be a Standard Model-like Higgs signal there, because the likelihood that the background plus a Higgs signal would have fluctuated very low is small.  [UPDATE: actually, looking again, I think I am somewhat overstating the importance of this deficit in taus compared to the lack of excess in the other two channels, which is also important. To be quantitative about this would require more information.  In any case, the conclusion is the same.]    And this allows ATLAS to exclude new regions in the mass range for the Standard Model Higgs, at 95% confidence!
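
To see schematically how a downward fluctuation strengthens an exclusion, here is a heavily simplified sketch. It treats the measured signal strength (in units of the Standard Model rate) as Gaussian and uses a plain one-sided 95% upper limit; all numbers are invented, and the real analyses use the more careful CLs procedure.

```python
# Simplified picture: if the best-fit signal strength mu_hat (in units of the
# Standard Model prediction) is Gaussian with uncertainty sigma, a one-sided
# 95% upper limit is roughly mu_hat + 1.645*sigma. All numbers are invented
# for illustration; real Higgs searches use the CLs method instead.
def upper_limit_95(mu_hat, sigma):
    return mu_hat + 1.645 * sigma

SIGMA = 0.8  # assumed measurement uncertainty on the signal strength
for mu_hat in (0.0, -0.5):  # background exactly as expected vs. a downward fluctuation
    limit = upper_limit_95(mu_hat, SIGMA)
    verdict = "SM Higgs excluded at 95%" if limit < 1 else "SM Higgs not excluded"
    print(f"mu_hat = {mu_hat:+.1f}: 95% upper limit ~ {limit:.2f} x SM  ->  {verdict}")
```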

This is very important!  One of the things that I have complained about with regard to those who've overplayed the December Higgs hints is that you can't really say the evidence for a Higgs around 125 GeV is good if you can't start excluding both above and below that mass.  Well, ATLAS has started to do that.  Granted, it isn't 99% exclusion, and since this is the Higgs we're talking about, we need high standards.  But at 95% confidence, ATLAS now excludes, for a Standard Model Higgs, 110-117.5, 118.5-122.5, and 129-539 GeV.  Said better, if there is a Standard Model Higgs in nature, ATLAS alone restricts it (to 95% confidence only, however) to the range 117.5-118.5 GeV or 122.5-129 GeV.

ATLAS, just from its own data alone, excludes (pink-shaded regions) the Standard Model Higgs particle at 95% confidence (but not yet at 99%) across the entire allowed range except around 118 GeV and between 122 and 129 GeV, where the two-photon and four-lepton searches provide some positive evidence. What is shown is how large a Higgs signal can be excluded, in units of the Standard Model expectation, as a function of the Higgs mass. Anywhere the solid line dips below the dotted line marked "1" is a place where the Standard Model is 95% excluded. The red dotted line indicates how well this experiment would perform, on average, if there were no Standard Model Higgs signal.

The window is closing.  Not only has ATLAS completely excluded the old hints of a Standard Model Higgs at 115 GeV from the LEP collider, it seems it has probably excluded CMS's hint around 120 GeV, which was the next best option for the Higgs after 125.  And as far as I can tell, this is coming mainly from the tau lepton/anti-lepton measurement; though, as I said above in an update, I now think it is really a mix of all three channels… hard to be quantitative about that without talking to the experts.

So if the Standard Model Higgs is what nature has to offer us, we're probably down to a tiny little slice around 118 GeV for which there's no evidence, and a window with 125 GeV smack in the middle of it.  In that window the evidence, if we include both the Tevatron and ATLAS, is not much stronger today than it was yesterday, but it is certainly no weaker.

UPDATE: Well, it’s been pointed out to me by the first commenter that the last statement is misleading, because it doesn’t emphasize how the ATLAS excess at 126 GeV has decreased substantially  in significance. Somehow I thought originally that the decrease was marginal. But it isn’t.

The numbers as I think I now have them: what was previously about a 3.5 sigma local significance for the Higgs-like peak at 125 GeV is now down to about 2.5 sigma, and what seemed only 1% likely in December to be a fluctuation is now 10% likely.
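
For those curious how a "local" significance becomes a "global" probability once the look-elsewhere effect is folded in, here is a crude sketch. The trials factor of roughly 15 independent mass windows is my own guess for illustration; ATLAS computes it properly from its mass resolution and search range, so these numbers only roughly track the ones quoted above.

```python
from scipy.stats import norm

# Crude look-elsewhere estimate: if a bump could have appeared in any of
# roughly N independent mass windows, the chance of a fluctuation somewhere
# is about 1 - (1 - p_local)**N. N = 15 is a guess for illustration only.
def global_p(z_local, n_windows=15):
    p_local = norm.sf(z_local)
    return 1 - (1 - p_local) ** n_windows

for z in (3.5, 2.5):
    print(f"local {z} sigma -> global p ~ {global_p(z):.1%}")
```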

There is an issue, however, with combining many measurements.  Of course the two-photon and four-lepton results from ATLAS are the same as before, and they are just as significant; nothing changed.  But the other three measurements came in low, and that pulls the significance of the combination down.  However, I must remind you again how difficult the last three measurements are.  I would trust the first two before the last three.  So I think we should be careful not to overinterpret this change.   When you combine what you trust most with what you trust least, you reduce your confidence in what you have.
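
As a toy illustration of how channels that come in low drag a combination down, here is a minimal sketch using Stouffer's method of combining independent significances with equal weights. The per-channel values are invented; the experiments combine full likelihoods, not simple z-scores.

```python
import numpy as np

# Toy combination of per-channel significances (Stouffer's method, equal weights).
# The values below are invented purely to illustrate the dilution effect; they
# are not the actual ATLAS channel significances.
def stouffer(z_values):
    z = np.asarray(z_values, dtype=float)
    return z.sum() / np.sqrt(len(z))

strong_channels = [3.0, 2.0]       # e.g. two-photon and four-lepton excesses (invented)
weak_channels = [-0.5, 0.0, -1.0]  # e.g. the three harder channels coming in low (invented)

print(f"strong channels alone: {stouffer(strong_channels):.2f} sigma")
print(f"all channels combined: {stouffer(strong_channels + weak_channels):.2f} sigma")
```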

That said, it also indicates why one should be very cautious with small amounts of data.

Comparison of the December ATLAS results (left), combining all measurements that were available at the time, with the March 2012 ATLAS results (right). I've lined them up as best I could, given the scales were slightly different. What is shown is how large a Higgs signal can be excluded, in units of the Standard Model expectation, as a function of the Higgs mass. Anywhere the solid line dips below the dotted line marked "1" is a place where the Standard Model is 95% excluded. Compared to December, there is much more excluded and the height of the peak at 126 GeV is noticeably lower.

26 Responses

  1. Matt, can you comment on the excess at 200 GeV shown in the Tevatron graph?

    1. Fair question. Unfortunately there’s no way for me to know; they didn’t give enough plots and details, and they cut off the plot at 200. There seem to be little excesses in many channels, not in one particular channel that might be easy to understand. Maybe they have a background estimation problem; maybe they are not including some subtle background and that’s creating an apparent excess. Or it may just be statistical fluctuations, with no real `explanation’. Tevatron’s 2.2 sigma result would be largely uninteresting on its own, because 2 sigma excesses are common; it is important in that it adds some weight to an existing case. Of course the Standard Model Higgs (or anything like it) is completely ruled out up in that region by LHC data.

  2. Hi Matt,
    Thoughtful, as always. But I'm not sure I understand your suggestion that somehow the Tevatron people could mentally nudge their particular brand of analysis into alignment with the LHC results. I hope you can comment on Ben's reply. We have to be careful here in what impressions are left. Sensitivities are on full-alert status on both sides of the Atlantic.

    I think that the point is that the Tevatron has done precisely what the Tevatron can do with the channels, and the resolutions implicit in those channels, that ppbar collisions require. They've done it in as strict a manner as possible. Heroically, as in ATLAS and CMS. The window is closing, and that in and of itself might be as important as the significance of the collective hints, which I think is your point in FB. "Look elsewhere" is maybe no longer a dilution that needs to be applied?

    Anyhow, I’d hate for someone to misunderstand what I think you’re trying to say about history and unintentional bias. I don’t think that it even _could_ apply in these (independent) analyses, let alone whether it might have.

    Like the Persians “sneaking” up on the Greeks over 15 years…this Higgs saga will be the slowest surprise attack in HEP ever!

    Chip

    1. Chip, thanks for your comment.

      I love your Persian analogy!!

      I have to think carefully about the poking and the prodding that one ought to do on a measurement of this type. I’m not so much worried about look-elsewhere effects here; we know from previous exclusions that we’re only looking for the Standard Model Higgs in a narrow range that is comparable to the resolution of the measurement. So as I said in an earlier post, we’re kind of in a Yes-No situation: Either there’s an excess in the window or there isn’t. That’s crude, of course, but it’s not a terrible approximation.

      The issue, then, is how hard does one work to get the backgrounds down, knowing that this is the last chance to get the best possible result. Can the work done to reduce the backgrounds introduce biases? Now I start talking out of my hat — but for fun, and to give Ben a big fat target, I’ll do it. A disclaimer: I don’t really entirely believe what I’m about to say, because I haven’t thought it through, but it’s the kind of thing I worry about.

      For instance: the harder one tries to knock backgrounds down and increase sensitivity by dividing up the data, the more uncertain theoretical calculations become. Now I know that all the backgrounds are “data-driven”, but there can be hidden assumptions that a Monte Carlo’s distribution is roughly correct that go into those data-driven estimates. Those might begin to fail as one improves the experimental measurement techniques; it’s well known that dividing events up by the number of jets can do that. Can there be a bias introduced by not checking all of the different ways that the Monte Carlo could go wrong? In other words, while I am sure that every effort was made to establish the technique and optimize the sensitivity before looking at the data — these folks are all professionals and I trust them to do that — what I worry about is whether certain systematic effects (a) weren’t thought of before the data was examined, and (b) weren’t thought of after the data was examined and before presentation because there was a natural human tendency to like the answer, and not worry at the time, while rushing to finish the result, about whether there might be theory-related problems with the data-driven backgrounds.

      Let me add one more remark: I have the same concerns at CMS and ATLAS, so this is not special to the Tevatron folks. These measurements of broad and small excesses are really, really tough to get right without mucking up the backgrounds somehow. And it’s not as though I could do it.

      Ok, Ben, and others at CDF and DZero: please disabuse me of my misconceptions. We aim for accuracy and precision here.

      1. Hi Matt,

        I think in practice, the opposite of what you say is true (but for almost the same reasons).

        Background modeling is an iterative procedure. As you add more data, you have to check that the backgrounds are still well understood. Tevatron and LHC experiments look at many control regions, and keep trying to update the modeling, adding new systematics as they become necessary. If the overall rates and shapes of the background are consistent with data in kinematically similar control regions, it is hard to get a background contribution that mimics signal but keeps the overall agreement looking good. That is what the colored bands in our plots represent: the degree to which we understand the backgrounds at the 1 sigma and 2 sigma level. Once we start seeing excesses creep out of that range, it is true that we take a hard look at our models to see if they are working properly in the control regions. When the model works, we don't pay as much attention to it. When there is an excess or a deficit, we use a higher level of scrutiny. This has the effect that big experiments typically slow down when we see excesses and take a closer look. And this means that really exciting results take longer to reach the community, which is probably annoying for theorists …

        So, I would say that there is more of a risk of experimentalists changing the modeling to remove excesses, especially if there was a signal present in both the signal region and the control region.

        1. I certainly don’t disagree the argument runs both ways (creating false signals and removing real ones) and we can certainly have the same concerns about whether ATLAS has done this with their latest three searches. The advantage they have is that they have other independent (and easier) search strategies, so if they’ve removed a real signal this time, in the end it won’t matter.

          It’s a bit like baseball — does knowing that you are in danger of removing real signals then bias you toward making one? Look, I don’t have answers. I just know what you’re doing is really, really hard and needs some serious scrutiny by outside experts, as do all of these more difficult Higgs searches.

    2. I agree with Ben that the analyses are done in a blind manner — by being optimized in expected sensitivity — but there’s an additional requirement for a truly blind analysis: reporting exactly what you find, no matter what it is.

      In practice, there is more second-guessing of unexpected results (eg Wjj excesses), leading to cross-checks etc that can potentially get the result scuttled and never reported.

      So if another experiment has reported an interesting hint, it’s not hard to imagine that it would boost the confidence of experiments whose results in data appear to confirm it.

      I'm not suggesting that CDF or D0 did anything other than a perfectly blind analysis, but I see the point Matt is making that the process can't help but be influenced by the LHC results. We can only speculate about what the Tevatron would have reported, when they would have announced it, and how they would have phrased it in the absence of the LHC results.

      Daniel

  3. Hi Daniel,

    You are missing the point. The H->bb excess is more consistent with a 120 GeV signal than with a 135 GeV signal. A Higgs signal at 120 GeV has twice as many events as a signal at 130 GeV. The resolution of the H->bb channels is more than 20 GeV (when using a multivariate analysis technique). Therefore, if there were a 120 GeV signal in the data, you would see an excess at 120 GeV of about the right size for the standard model prediction. But the excess would appear to be even bigger (almost twice the standard model prediction) at 130 GeV.

    It does not go the other way. If there were a 130 GeV signal in the data, then when you do a search at 120 GeV, you would see an excess less than half the size expected from the standard model.

    Therefore, if you look at the limits, it is most interesting to see what is the lowest mass where the excess develops – the most probable location of the excess – rather than where the excess peaks.

    Also, if the CDF H->bb signal were really at 135 GeV, then CDF H->WW would see it. Whereas, if the CDF H->bb signal is really at 120 GeV, this would also explain why CDF H->WW does not see it: H->WW is not as sensitive at 120 GeV as H->bb.

      Hmm… but isn't that only true, Ben, if you *assume* that your signal has an SM Higgs cross-section? Couldn't you have asked: is the H->bb excess more consistent with a 120 GeV particle of unknown cross-section than with a 135 GeV particle of unknown cross-section? Would you then have gotten a different answer?

      1. Hi Matt,

        Well, we are trying to do a SM search, so that is what I was assuming.

        But I think my argument holds qualitatively as long as there is a relationship between the unknown cross section of the particle with mass 120 and the unknown cross section of the particle if it were at 135 GeV.

        As long as the production cross section for the beyond-SM theory is falling as a function of mass, which is in general true for high mass states which are produced at threshold, then my argument holds. One would always see a growing excess as a function of the Higgs mass, due to the experimental resolution.

        One would be hard-pressed to find a cross-section prediction which is flat as a function of Higgs mass.

        Now … if the branching ratios H->bb in the non-SM Higgs theory were not falling as fast as a function of mass because other decay modes were not opening up, the argument would be mitigated. In the SM, the Higgs branching ratios fall faster than the cross section as a function of mass due to the increase in H->WW. If you could find a way to make the H->bb branching ratios increase as a function of mass, you could make sigma*BR be constant as a function of mass, and then this argument would no longer be true.

  4. Hi Matt,

    I take offense to your statement about the blindness of the Tevatron analyses. There is a very good sense as to how blind they are. Analyses are optimized according to expected sensitivity. This is the way it has been done for the last 10 years at the Tevatron for Higgs searches. The expected sensitivity has improved in the last round. As a past convener of the CDF Higgs group, I can assure you that only after the analyses are optimized for expected limits, and have been signed off, are observed limits calculated.

    1. Ben — thanks for your comment. You have every right to defend your experiment and your experimental techniques here. I speak from a general concern about how backgrounds are determined in these measurements, but not from a detailed knowledge of what was done this round, which I will only be able to obtain over time.

  5. Hi Matt,

    Very nice article!

    One clarification:

    The second caveat isn’t relevant. We optimize on expected sensitivity and then look at the data. The Tevatron results are indeed truly independent from the LHC. There is no step where we use the prior knowledge from the LHC in our searches.

    Thanks again for the nice article!

    1. I understand there is no direct reliance on the LHC and you guys do absolutely everything you can to avoid any priors leaking in. But history teaches that even the best scientists can have subtle biases that even they don’t know about. It’s always better when there’s no possibility of a bias anywhere.

  6. The Tevatron results are over-sold. It’s driven by an excess in H->bb channels at CDF. This excess peaks at 135 GeV — and it’s beaten down to 120-ish by the lack of an excess in H->WW.

  7. The evidence provided by ATLAS alone is definitively weaker than yesterday. They quote (slide 25) a global p0 in the low-mass range from 110 to 146 GeV as high as 10%, against the 0.6% they quoted in December. So they have not simply squeezed the region left for the Higgs; they have dramatically decreased the significance of the excess at 125 GeV. This is clearly due to the absence of any excess in the further search channels they have added today. These data are thus totally inconclusive, and today the situation is much more unclear than yesterday.
    Moreover, in the December seminar the H->WW(*) channel, when its analysis included 2.05 fb-1 of data, was said to feature a broad low-significance excess, while now it seems that even that modest excess has vanished completely with the full dataset.
    Until one experiment is able to show a clear peak in the four-lepton golden channel, we will remain in the dark.

    1. Thanks for the comment. I need to look at your first point more carefully… I missed that quantitative statement. And you’re right that I should have mentioned that there used to be an excess in WW with 2 inverse fb!

      However, I never believed that ATLAS’s significance was likely to be as high as it appeared, because the bumps in the two-photon and four-lepton channels don’t look like a signal. They’re too large and too narrow, so it is likely that they are driven by big statistical fluctuations, causing an artificial and temporary inflation of the significance. That’s part of why I was so much more conservative than some of my fellow-bloggers. So I’m not sure that things are really more unclear than yesterday; I think that a lot of people were over-interpreting the data that was in place after December.
