Final Days of Busy Visit to CERN

POSTED BY Matt Strassler

ON 12/02/2014

I’m a few days behind (thanks to an NSF grant proposal that had to be finished last week) but I wanted to write a bit more about my visit to CERN, which concluded Nov. 21st in a whirlwind of activity. I was working full tilt on timely issues related to Run 2 of the Large Hadron Collider [LHC], currently scheduled to start early next May.  (You may recall the LHC has been shut down for repairs and upgrades since the end of 2012.)

A certain fraction of my time for the last decade has been taken up by concerns about the LHC experiments’ ability to observe new long-lived particles, specifically ones that aren’t affected by the electromagnetic or strong nuclear forces. (Long-lived particles that are affected by those forces are easier to search for, and are much more constrained by the LHC experiments.  More about them some other time.)

This subject is important to me because it is a classic example of how the trigger systems at the LHC experiments could fail us — whereby a spectacular signal of a new phenomenon could be discarded and lost in the very process of taking and storing the data! If no one thinks carefully about the challenges of finding long-lived particles in advance of running the LHC, we can end up losing a huge opportunity, unnecessarily. Fortunately some of us are thinking about it, but we are small in number. It is an uphill battle for those experimenters within ATLAS and CMS [the two general-purpose experiments at the LHC] who are working hard to make sure they have the required triggers available. I can’t tell you how many times people within the experiments — even at the Naturalness conference I wrote about recently — have told me “such efforts are hopeless”… despite the fact that their own experiments have already shown, in public and in some cases published measurements (including this, this, this, this, this, and this), that they are not. Conversely, many completely practical searches for long-lived particles have not been carried out, often because there was no trigger strategy able to capture them, or because, despite the events having been recorded, no one at ATLAS or CMS has had the time or energy to actually search through the data for the signal.

Now what is meant by “long-lived particles”?

Most types of particles decay (i.e., disintegrate into lower-mass particles). Any particular type of particle has an average lifetime, though an individual one may live a longer or shorter life than the average.  A neutron, if found outside a stable atomic nucleus, decays into a proton, an electron and an anti-neutrino after about 15 minutes… sometimes shorter, sometimes longer, but 15 minutes on average.

In the context of the LHC, the term “long-lived particle” doesn’t mean a particle that will outlast you and me, or even a neutron!  It merely means that its average lifetime is long enough that, if traveling at a good fraction of the speed of light, it is likely to move a measurable distance before it decays… i.e., that it lives a trillionth of a second or longer. The ATLAS and CMS experiments, despite being the size of small office buildings, are able to detect when a particle’s decay occurs a millimeter (about 1/25 of an inch) away, or even less, from the location of the proton-proton collision where it was created. (Pretty amazing, huh? My experimental colleagues are awesome.) This astounding ability is critical in identifying bottom and charm quarks, which often decay at a location separated from the collision point by as little as a millimeter.
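
As a quick back-of-the-envelope check (my own sketch, not a number from the experiments), a lifetime of a trillionth of a second already translates into a flight distance at the edge of what the detectors can resolve:

```python
# Back-of-the-envelope sketch: flight distance ~ speed x lifetime,
# for a particle moving near light speed (time dilation ignored for simplicity).
c = 3.0e8      # speed of light in meters per second
tau = 1.0e-12  # lifetime: one trillionth of a second

print(f"{c * tau * 1000:.2f} mm")  # ~0.30 mm, comparable to the detectors' resolution
```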

In contrast to a neutron, the Higgs particle, which was discovered at ATLAS and CMS in 2012, is very short-lived; a typical Higgs particle, from its birth to its death at the LHC, won’t even travel an atom’s width from the location at which it was created, living about a billionth of a trillionth of a second. But there are other known particles that are much longer-lived, such as the muon. A muon, once produced, will survive for two millionths of a second on average — during which, if it is traveling at a good fraction of the speed of light*, it can travel a good fraction of a mile.

*Due to time dilation, a consequence of relativity à la Einstein, if the muon is traveling very close to the speed of light relative to us, it will actually, from our point of view, live longer and travel further than average. This is why very high-energy muons created in cosmic-ray collisions high in the atmosphere are often able to travel hundreds of miles, frequently reaching the Earth’s surface.
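
As a rough illustration of the footnote, here is a minimal sketch (the 10 GeV muon energy below is an assumed example value, not a number from the article) of how time dilation stretches a muon’s reach:

```python
import math

# Minimal sketch of time dilation for a cosmic-ray muon (illustrative numbers).
m_mu = 0.1057  # muon rest mass-energy in GeV
tau = 2.2e-6   # muon rest-frame lifetime in seconds (two millionths of a second)
c = 3.0e8      # speed of light in meters per second

E = 10.0                                # assumed example muon energy, in GeV
gamma = E / m_mu                        # time-dilation factor, about 95 here
beta = math.sqrt(1.0 - 1.0 / gamma**2)  # speed as a fraction of c

mean_range = gamma * beta * c * tau     # average lab-frame distance before decay
print(f"{mean_range / 1000:.0f} km")    # ~62 km, versus ~0.66 km with no dilation
```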

But the really important question for the LHC is whether it might be able to produce as-yet unknown particles that have long lifetimes. We haven’t found a long-lived particle for a while — the bottom quark was the most recently discovered long-lived elementary particle, and that was about 40 years ago — but we found many before, and why shouldn’t there be others? Indeed such particles are predicted, with small to moderate certainty, in various speculations that have been considered by theorists over the years, including some creative ideas about the nature of dark matter, certain forms of supersymmetry, various attempts to understand the masses of the known particles, etc. What got me interested in the problem, back in 2001 or so, is that I noticed that if one takes a broad view, and considers a wider set of variants of these speculative ideas, one quickly sees that long-lived particles are far more common than they are in the most popular variants of these ideas. (After some intermittent research, this observation was finally published in 2006 work with Kathryn Zurek.)

In other words, due to cultural biases among scientists, but not due to anything about nature itself, the possibility of long-lived particles was viewed as far more remote than it really is. (Though the biases have improved since 2006, to a certain extent they still remain.) And for this reason, the LHC experiments, as of 2006, had prepared for only a fraction of the possibilities, and were even lacking trigger strategies that could be sensitive to certain types of long-lived particles. The worry was especially acute if the long-lived particles were created mainly in the decays of a lightweight Higgs particle. A Higgs particle of mass 125 GeV/c², such as the one we have in nature, has such low mass-energy (125 GeV) compared to the energy of LHC proton-on-proton collisions (7000 GeV in 2011, 8000 GeV in 2012, and 13,000 GeV starting in 2015) that its decays are generally unspectacular and are difficult to observe; the 2012 discovery of the Higgs required careful trigger design, and this will become even more important for Higgs studies in Run 2. So I have spent a lot of time since 2007 trying to help change the situation, so that we can be sure that, if such particles exist, we will discover them, and if we don’t find them, we’ll be confident we did a thorough search and didn’t just miss one for lack of trying.  To this end, I spent much of my time at CERN giving talks and having extended conversations, both planned and spontaneous, with experimental colleagues at ATLAS and CMS, and discussing the issues with theorist colleagues as well.

I realize this is a bit abstract, so in a day or two I’ll give you a more detailed example of how a new trigger strategy can be crucial in allowing for the possibility of a discovery of long-lived particles.


58 Responses

  1. http://physics.aps.org/articles/v8/6
    Everyday experience tells us that big objects—eggs and humans—do not appear to exist in a superposition of states like that possible for more quantum objects, such as electrons. Does this mean quantum physics fundamentally doesn’t apply to objects beyond a certain size? A new experiment that allows the motion of a large atom in an optical lattice to be tracked could help in the search for a size cutoff. Using this setup, Carsten Robens at the University of Bonn, Germany, and his colleagues demonstrated that a cesium atom travels in a truly nonclassical fashion, moving as a quantum superposition of states and thus occupying more than one distinct location at a time. Larger objects have been observed to have such inherently quantum properties, but the observation of Robens et al. is based on a stringent test considered to be the gold standard for confirming that a superposition exists. As such, their experiment constrains theories of physics that aim to replace quantum mechanics. Their technique could also be used to test superpositions on even more macroscopic scales, such as with larger atoms or molecules.

  2. Great post, a topic I’ve often wondered about.
    Re “so that ….if we don’t find them, we’ll be confident we did a thorough search and didn’t just miss one for lack of trying”….
    Owa, I imagine that’s the hard, hard part. Searching for a needle in a haystack full of needles is bad enough, but if you’re not sure what the ‘needle’ looks like… Even the simplest of experiments get complicated when you try to extract limits of detection for objects with vague properties… as always, I’m filled with admiration for the particle physicists.

  3. Back in the day when bubble chambers were used as “detectors”, neutrino interactions with other particles showed up in very distinct ways: an “imbalanced” set of contrails, to start with; there was mass/energy missing from the picture.

    These bubble-chamber images of neutrino interactions are still very interesting.

    Kind regards, GEN

  4. Is the long-lived nature or not being affected by electromagnetic or strong nuclear forces the harder measurement problem?

    It seems like some hypothetical particles would not be detectable with current detectors at any distance.

    How would you go about designing a collider optimized to search for long lived particles?

    Thanks,
    Mike

    1. Long lifetimes aren’t really a barrier to detection (short ones, however, can be), but interactions are. Consider neutrinos, which are stable but exceedingly difficult to detect because they interact only via the weak force.

      However, if you are able to record the creation event of a particle, you can detect it by what is missing when comparing ‘before’ and ‘after’. This is how the neutrino was discovered (actually, how it was proposed, long before it was detected directly): when looking at beta decays, it seemed as if some invisible, undetectable thing was whizzing away, taking variable amounts of energy with it.

    2. The lack of electromagnetic and strong nuclear interactions means you can’t detect the particle while it is “alive”; i.e., while it is traveling and hasn’t decayed. Neutrinos are like this (while neutrons are not).

      The long lifetime means that when it “dies”, i.e. falls apart into other things, you have to be ready for its decay products at a macroscopic and random distance from where you created it. That’s no problem as long as (a) some of its decay products have electromagnetic or strong nuclear interactions, (b) the decay occurs inside your detector somewhere, and (c) your detector design allows you to find particles that appear at random places inside your detector.

      So a particle of this type can only be seen when it dies, not while it is alive. If its death occurs right away, no problem; the particles from its decay come from the collision point, and your detector (and trigger) is ready for that. It’s when death occurs late that you have a problem.

      Of course, if death occurs after the long-lived particle has traveled a hundred miles, your detector won’t see it. Then your indication that it is present is that something detected appears to be recoiling against something undetected. In the case of Figures 1 and 2, if the X lifetime allowed it to travel a kilometer, the event would have a single jet recoiling against … apparently nothing. Unfortunately, that happens a lot with known particles (neutrinos!), so it’s not a distinctive enough signature for an easy discovery. People do try though: http://arxiv.org/abs/1404.1344
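
      To put a rough number on condition (b) above, here is a minimal sketch with toy numbers of my own choosing (not values from this discussion): the chance that a particle decays inside the detector falls off exponentially with its lab-frame decay length.

```python
import math

# Toy estimate: a particle with lab-frame mean decay length L = gamma*beta * c*tau
# survives past radius r with probability exp(-r / L), so the chance of decaying
# between r_inner and r_outer is the difference of two exponentials.
def decay_in_detector(gamma_beta, c_tau, r_inner, r_outer):
    L = gamma_beta * c_tau  # lab-frame mean decay length, in meters
    return math.exp(-r_inner / L) - math.exp(-r_outer / L)

# Example: c*tau = 1 m, boost gamma*beta = 3, detector spanning 5 cm to 10 m:
print(decay_in_detector(3.0, 1.0, 0.05, 10.0))  # ~0.95: most decays land inside
```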

  5. @veeramohan,

    None of this is within my field of expertise, but it is my understanding that you are confusing two different kinds of processes that involve spontaneous symmetry breaking, in fact, each process involves a different type of symmetry being broken.

    What these two processes have in common (as far as I know), is that they both involve the spontaneous breaking of a given symmetry, and they both contribute to the mass of certain particles through that symmetry being broken, but that is as far as they have some similarity.

    We have to go back to the basics of the generalized analysis of classical mechanics (Lagrangian, Hamiltonian) to understand how symmetries and conservation laws are deeply intertwined.

    A certain kind of conservation law (of a certain physical quantity) is related to only one given kind of symmetry, so they form pairs.

    For any given physical quantity, there is only one type of symmetry that it pairs with: say, the conservation of energy pairs up with the symmetry of time, while the conservation of momentum pairs up with the symmetry of space (position).

    Since quantum field theories cannot explain the masses of particles (in fact, the prediction is that they should be massless!), figuring out a way to explain the “unpredictable” mass of particles involved a hack, a nice trick with the equations.

    Since the mass of particles looked as if one of the conservation laws was broken (the conservation of mass), at the time, people like Nambu and others suggested the idea of looking for a corresponding breaking of symmetry, and that is how this “crazy” idea came to be.

    Kind regards, GEN

    1. “Since the mass of particles looked like one of the conservation laws was broken (the conservation of mass)” – Thanks, Mr. Gastón E. Nusimovich.

  6. Goldstone bosons break away from a heavy Higgs mechanism (FIELD) at short distance (spontaneous symmetry breaking), leaving photons massless, in relativistic FIELD theory. So photons are relatively massless?
    This is like correlation in quantum entanglement – as if Goldstone bosons are a correlation between a photon and a spin-zero particle?

    But by the Philip Anderson mechanism, in a superconductor, it is non-relativistic that “the electromagnetic modes are massive (Meissner effect) despite the gauge invariance” – in which Lorentz invariance was crucial – “a massive collective mode which exists in all superconductors: the oscillation of the amplitude of the superconducting gap”.
    What is the difference between these two Higgs-like bosons, found in experimental instruments and in “experiment”?

  7. “In other words, due to cultural biases among scientists, but not due to anything about nature itself, the possibility of long-lived particles was viewed as far more remote than it really is.”

    Could you expand on that? Let’s say I’m a physicist biased against long-lived particles. How does this affect my work, in practical terms?

    1. You would not suggest that your graduate students look for them, and would not look for them yourself; you would not support (and would argue, in meetings, against) the development of a trigger pathway that looks for them, since that uses up valuable personnel time and also means data would be kept that you think is a waste of time; you would not worry about compromises that are made, say to make data storage more compact, or standard analysis more efficient, even if those compromises make it much more difficult or impossible to discover long-lived particles; and when designing an upgrade to the detector, you wouldn’t worry about cost-saving measures that, again, make it much more difficult to discover long-lived particles, and you certainly wouldn’t necessarily add something to the detector design that would make it easier to find them. It’s all extremely practical and has a direct impact on what those who want to discover these things can do.

      1. The behaviours and consequences of that “cultural bias” are something I frequently see in my engineering profession. I know exactly what you mean.
        If the engineering is product-oriented, one significant consequence is that you get blind-sided by your competitors, lose market share, or miss a big opportunity. Do it enough, and the company becomes irrelevant, goes out of business, or gets acquired by a competitor.

  8. Matt: It is a big surprise for me that there are detectors within a mm of the interaction vertex. From the pictures of the three-story-high ATLAS or CMS, it does not look like that! What am I missing?

    1. Matt: Let me add a question. What is the shortest distance of a piece of detector from the interaction vertex? Thanks.

      1. “Connect-the-dots”. If you make careful measurements of particle locations every 10 centimeters, you can draw a line through these locations (actually a curve, to account for the magnetic field in the detector) and extrapolate backward to figure out where the track’s path was before you started measuring it. Most tracks that you detect, when you extrapolate back, will intersect each other at a point dead center in the detector — the collision point of the two protons. But if you see several tracks that, when you extrapolate them back, intersect at a point which is away from the collision point, you have probably detected tracks of particles that were emitted in the decay of a heavier particle which traveled some distance from the collision point before it decayed. The tracking in the experiments is so fantastically good that they can detect intersecting tracks whose intersection is displaced by less than a millimeter. That is even though the first piece of apparatus is several centimeters away from the collision point.
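
        For readers who want to see the idea in action, here is a toy two-dimensional sketch of “connect-the-dots” (my own illustration with invented hit positions; the real software fits curved tracks in three dimensions, accounting for the magnetic field):

```python
import numpy as np

# Toy 2D version of "connect-the-dots": fit a straight line through each
# track's measured hits, extrapolate backward, and find where the tracks cross.
def fit_line(hits):
    """Least-squares fit of y = a*x + b through an array of (x, y) hits."""
    a, b = np.polyfit(hits[:, 0], hits[:, 1], 1)
    return a, b

def intersection(line1, line2):
    """Crossing point of y = a1*x + b1 and y = a2*x + b2."""
    (a1, b1), (a2, b2) = line1, line2
    x = (b2 - b1) / (a1 - a2)
    return np.array([x, a1 * x + b1])

# Invented hits (in meters): measurements start ~10 cm from the beamline,
# but both tracks point back to a vertex displaced ~1 mm from the collision point.
track1 = np.array([[0.10, 0.051], [0.20, 0.101], [0.30, 0.151]])
track2 = np.array([[0.10, -0.019], [0.20, -0.039], [0.30, -0.059]])

vertex = intersection(fit_line(track1), fit_line(track2))
print(vertex)  # ~[0.000, 0.001]: a decay vertex about 1 mm from the beamline
```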

        1. Is it correct that a packet of protons collides with another packet, increasing the chance of getting some pairs to collide? And when you have multiple collisions, what effect does that have on the accuracy of detection?

          1. Multiple collisions happen in every collision of two packets (or “bunches” as the LHC people call them). The tracker is again used to separate tracks that came from different collisions. Multiple collisions (or “pile-up”) are indeed a source of challenges that the experiments have worked hard to mitigate… but it will get worse as the LHC goes to higher and higher collision rates.

          2. Is the collinearity of the bunches the challenge? The detector (magnets) not being long enough to increase the collinearity, and maybe even the spacing between possible collisions.

            If that is the case, would not the projected linear collider planned for the US be a more accurate experiment?

  9. Matt,
    It sounds perhaps silly but I would search for the Top prime quark and perhaps Dark Matter particles hopefully the origin of already well known UFO dust particles at CERN. See my essay:
    Dark Matter UFO Dust Particles or Quantum Knots in the LHC at CERN ?
    [Editor’s note: link removed; this is not an advertising site.]

  10. Matt, at the risk of asking a silly question, why is this also not a problem for believing we have found the Higgs particle? From what I understand, we believe it is the Higgs because it is a boson, and we believe it is a boson because we have counted the spins of the decay particles, which we think total zero. (If wrong, I apologise, and please correct me.) But how do we know there was not also something like a neutrino given off, which would turn our boson into a fermion?

    1. Conservation of energy. The same way the neutrino was discovered — something was missing from the decay of a neutron when you tried to conserve energy and momentum using just the observed proton and electron. [Slightly oversimplified from the historical truth, but close enough.] Nothing is missing when the Higgs decays to two photons; the energy and momentum match up just right.
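
      In symbols, the bookkeeping check is that the two photons alone reconstruct a consistent parent mass: m^2 = (E1+E2)^2 - |p1+p2|^2 in units where c = 1. A minimal sketch with made-up four-momenta (illustrative numbers, not real event data):

```python
import math

# If nothing invisible is carried away, the two photons' summed energy and
# momentum should reconstruct the parent particle's mass.
def invariant_mass(p1, p2):
    """Parent mass from two massless daughters' four-momenta (E, px, py, pz), in GeV."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

photon1 = (62.5, 62.5, 0.0, 0.0)   # photon moving along +x (made-up values)
photon2 = (62.5, -62.5, 0.0, 0.0)  # photon moving along -x (parent at rest)
print(invariant_mass(photon1, photon2))  # 125.0 GeV, consistent with the Higgs
```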

      1. Thanks. As for background: chemistry, so elementary particles are for interest only. Actually, I have revisited one of the papers and noticed γγ detected, so I guess I was not paying enough attention 🙁

  11. @John Duffield:

    Do you mean that scientists have inferred the existence of particles like the photon, but have not proven their existence?

    So, could we say that our eyes also infer photons?

    It sounds like spooky stuff, since one could argue that our eyes sense the presence of something that may or may not actually be there, because we have only inferred its existence.

    Kind regards, GEN

    1. No. There are no issues with the existence of photons. Or electrons or protons or neutrons or neutrinos. What I mean is that scientists have inferred the existence of the W boson, the top quark, and the Higgs boson. AFAIK the existence of the top quark was inferred from the existence of the W boson, which was itself inferred, and what was actually detected was an electron.

      1. mmm, I see your point regarding the apparent lack of “solid” evidence related to the discoveries of the W’s and the Z, or the Higgs boson.

        Regarding the so-called “lack of evidence”, it might make (some) sense to use such a concept with the discovery of quarks, mostly because of the main consequence of quark confinement: we just can’t have naked, free quarks, so there is no direct evidence of any one of them.

        But the theory of quarks that we have is able to explain, with the least amount of arbitrary parameters (Occam’s razor), all the experimental evidence pertaining to the strong force.

        Getting back to the Ws, the Zs and the Higgs boson, it is not so much that there is a real “lack” of evidence; the most important aspect of the whole thing is that we have a set of interrelated theories, each one of them mathematically and physically consistent, and consistent with one another; we can make predictions from those theories, we can design experiments based on those predictions to either validate or refute the theories, and the experiments have consistently validated the theories.

        Higgs, Englert, Brout and the other three guys independently came up with the same broken-symmetry theory; then Weinberg and Salam independently found a real and solid use for the Higgs boson, the electroweak theory, and in the process they reinvented the BEH theory; and then Veltman and ’t Hooft redefined the entire electroweak theory and the Higgs mechanism by making it renormalizable, which originally it was not.

        The fact that all these things are consistent within this incredible framework of thought that is physics, and that nature favors it with experimental validation, is the real solid evidence that we all agree to be “good enough” to be considered valid, so far.

        Kind regards, GEN

  12. Matt: this is good. I think physicists tend to lose sight of the fact that most of their “particles” are “discovered” by inference only, and are so very ephemeral that they can hardly be said to exist. So I think it’s good to focus on longer-lived particles. But I’d like to see even more focus on longer-lived particles. Because with respect, you can’t explain how pair production actually works, and what the electron actually is. Until you can, IMHO you’re missing the trick. And spotting some new type of flash in the pan just isn’t going to make up for that.

  13. On the large scale… are gravity waves dead now?
    After the usual hype of BICEP2, now with Planck there is anti-hype!
    Someone please explain this…

    1. I’m not an expert on BICEP2, but it is my understanding that a more detailed analysis of Planck data related to space dust around the Milky Way indicates that it cannot be ruled out that what BICEP2 measured regarding polarization (of the CMB signal) could also be explained by an expected interaction of the CMB signal with the space dust all around us.

      So, are gravitational waves from inflation dead?

      I would not rush such a conclusion just yet.

      Scientists will have to figure out and design other experiments that could “walk around” the space dust to be able to capture the polarization of the CMB signal without the annoying effects from space dust and then see what shows up.

      Kind regards, GEN

  14. Well, this is a proper site where we could discuss (with sound arguments) why space is dark and also why the night sky is dark but the sky at daytime is shiny and it presents a nice light-blue tint.

    The whole affair has to do with particle physics, entirely.

    Kind regards, GEN

    1. Jeez… Sorry, moderator. I take your point, GEN, but it was a light-hearted comment at best, and besides, lots of people wander around the main topic (including you). But maybe you’re right; it was a bit early to chime in off topic on this great blog of Matt’s.

      1. @PaulH,

        This site is run by its owner, Matt, and I have nothing to do with the moderation of it, but since we are all eager visitors of this site, I guess that it makes a lot of sense to take care of some of the usual protocols and etiquette that apply on the web in general, and at blog sites like this one, in particular.

        Kind regards, GEN

      2. Sadly, when it comes to this blog you will find commentators who write such posts in all seriousness; a sad thing about the internet in general is that there is no position you can take in jest that isn’t also taken seriously by someone, often whole forums of them.

  15. Recently Dr. Lawrence Krauss was interviewed on CNN, re: NASA’s Black Hole Friday gig, and said that if one takes the density of the entire universe, you only need twice that to condense to a black hole… “We all could be living in a black hole.”

    He also said something which sparked an interesting idea in my amateurish mind, he said, “of course we cannot see a black hole because black holes don’t shine.” (he explained that the escape velocity will need to be greater than c hence light cannot escape, fine) but he also said, “we can only see the effects around the black hole.”

    So, here’s an amateur’s take on that: could the EMR be the effects of what is really going on in the “spacetime” surrounding it? In other words, is a field composed of an array of tiny black holes that fill the entire universe, all oscillating or “jittering” (I don’t know why they would vibrate, but we know it’s there in the quantum jitter)?

    1. Well there are a few problems there. Firstly, small black holes are unstable; they emit all sorts of particles and the BHs you suggest would be small indeed.

      There’s also ‘uniformity’; a field being ‘composed of an array’ immediately causes problems. As far as we know, space is the same in all directions, as are the fields in it, and they are also ‘smooth’, or continuous. An array, however, is distinctly nonuniform. There are nodes where something is, and gaps between them. Something moving in one direction will pass through or near more nodes than something passing in another direction; space would have ‘preferred’ directions at some scale.

      There’s also the uncertainty; if such BHs were ‘real’ and able to jitter, they should occasionally (or not so occasionally) merge. Small BHs are also incredibly dense; they would fill space with massive amounts of, well, mass. All of these objections can be overcome with fancy enough mathematics, but at that point you’re not really dealing with black holes anymore as we usually understand them.

      However there ARE a number of theories involving ‘cellular automata’ and/or quantization of spacetime which basically suggest the universe consists of some kind of array of smallest units in some fashion. They’re an interesting subject and I have some hopes pinned on their reality.

      1. Your points are well taken, but see if an amateur can put some spin on them:

        “… small black holes are unstable …” Yes, but if they were the smallest entities then it would not matter; they will exist, and the instability may be the source of the “quantum jitter”.

        “… uniformity …” “(array of BHs) would be distinctly nonuniform …” Yes and yes, and both could be real phenomena. It all depends on the observer and the sensors which we use to measure “uniformity”. We have physics to somewhat understand down to the Planck scale, but we have nothing below that: neither formulation nor (much to my dismay) sensors (and experiments) to make any attempt to investigate further down the scale. (Instead we are wasting vast amounts of money and human resources building bigger and bigger firecrackers; sorry for the politics.)

        ” … There are nodes where something is and gaps between them. Something moving in one direction will pass through or near more nodes than something passing in another direction, space would have ‘preferred’ directions at some scale. ”

        Very true and an excellent observation, and I would say still consistent with the possible BH-array concept. This non-uniformity of space (variable density) could very well explain the waveform and why EMR are waves. Also note the “continuous” range of frequencies that make up EMR. Here is where “your” uniformity begins, and by coincidence is where our measurements begin as well.

        “… There’s also the uncertainty; if such BHs were ‘real’ and able to jitter they should occasionally (Or not so occasionally) merge. … mass …”

        Obviously true, we are all composed of massive particles. Again, I say still consistent with the BHs array idea. Here I would direct you to “Dirac’s antimatter”. Matter seemed to have won the battle at the “Big Bang” but who is really winning the war with time?

        One thing you left out of your comment is dark matter and dark energy. Why can’t we see dark matter? And what is it that some believe is causing the expansion of the universe itself? That is a big force, maybe the biggest in existence, comparable to the Big Bang. Could it be the same mechanism acting on the universe (massive particles)? Could it be the configuration of space itself, an array of BHs?

        I enjoy these amateurish speculations, better than a boring novel, 🙂

        1. When a star collapses, does that happen FTL? Wouldn’t the moment of singularity cause some kind of time lapse? And does the event horizon formation (inflation) also happen FTL, at least from the perspective of someone inside? We have no idea what happens inside a black hole; maybe inside it’s all expanding out from the singularity (Planck-scale minimum bounce), but from the outside time runs slower (gravity)… I have a deep suspicion black holes will hold deep secrets, like Russian dolls, universes within universes. That’s my amature 2 pence worth, beer speculation as Matt would say, respectfully 🙂

          1. *When a sufficiently massive black hole forming star I mean…….. I also spelt amateur wrong…. Oh dear.

          2. Nothing can collapse FTL. Taking a very, very massive sphere of mass (say, a light-year-wide ball of lead) and letting it collapse, the collapse eventually reaches a maximum speed; there’s only so much gravitational potential energy to go around, and it eventually all gets converted to motion.

            Several physicists have presented a paper on BHs that are expanding internally: http://arxiv.org/abs/1407.0989 . In this case BHs collapse to a minimum, then bounce back in a fraction of a second as a ‘white hole’. However, due to gravity effects, what would be seen on the outside is a very, very long period of slow emission getting slowly faster until the hole evaporates, just what we expect to see from Hawking radiation.

            The fun thing about singularities is that they’re pretty much a word we invented for ‘we know we’re wrong here but we don’t know why.’ You are probably aware of the ‘ultraviolet catastrophe’ where classical physics suggested that every bit of matter should be glowing infinitely brightly. (Thus emitting infinite energy and energy density.) This was solved by the application of quantum mechanical ideas. More recently there was the issue of renormalization.

            When it is said that the laws of physics ‘break down’ what is meant is that they are incomplete, that something else must take over. I think that all our wondering about singularities and infinities is akin to those early physicists trying to figure out how something can glow infinitely bright but still obey energy conservation. As you say, who knows what we may discover on the way to explaining these things.

        2. On your first comment: the ‘smallest entities’ thing immediately moves us away from ‘real’ black holes; it’s like saying the universe is made of the ‘smallest rocks’ (thank you, SMBC); such things must then behave in ways that a larger equivalent does not.

          On your second statement, the thing about ‘bigger firecrackers’ is that they are by and large the only way to sense down to smaller scales. When your light microscope doesn’t work because there are things smaller than the light wavelength, you need something of shorter wavelength and greater energy, such as an electron microscope. To peer into a proton you need high-energy phenomena indeed. I am certainly not sure how one would go about looking into the Planck scale without a big banger indeed.

          The issue of whether we have sensitive enough measurements to detect nonuniformity is one that is still open in physics; like ‘rolled-up dimensions’ or inaccuracies in relativity, all we can do is set limits. (And they don’t care much about what theory you use; if the universe has a distinct center or edge then there will be nonuniformity, even if space itself is uniform ‘all the way down’.)

          On your third comment, I wonder what would keep an array of BHs stable? Surely they could very quickly jitter into each other and start merging, growing larger. Again, math can solve these problems if tweaked enough, but I’m really not sure calling these units of yours ‘black holes’ is honest; they seem to lack all of the properties of said holes. Maybe something more neutral like ‘cells’ or ‘points’ would be better.

          1. Thanks for the reply, kudzu.
            So the idea of a Planck-scale bounce in a collapse isn’t so crazy. Yes, my take is that a singularity is only a mathematical entity until quantum gravity is worked out (an infinity that will be tamed like the UV catastrophe was).

            Thanks for the link also.

  16. That is an interesting and surprising post. I had just assumed that ‘long lived’ particles are the first thing any new collider would search for.

    I’m also curious as to the LHC energy limitations. Is there a simple rule for individual particle production in a Proton/Proton collider like the LHC? I know the proton is a composite particle and I was told that because of this, to get a general picture of the maximum energy of the individual particles produced by a proton/proton collision you take the beam energy and divide by six. Thus if you have for example, beams of 3.6 TeV for a total energy of 7.2 TeV in a PP collider you should expect the maximum energy of individual particles produced to be ~3600 GeV/6 = 600 GeV. Is there any truth to that?

    1. A rule of thumb like “Energy divided by six” leaves out so much crucial detail that it is quite useless. At the 8 TeV LHC, limits on new W’ particles reached 3 TeV, while limits on gluinos reached 1.5 TeV, and limits on higgsinos didn’t even get much above 100 GeV. You have to look at each particle and how it is produced and observed.

      1. Do you have to fold in the energy distribution of quarks and gluons for the calculation of each event? Is this accurately known?

        1. Not for each event, which has a lot of randomness, but for the full distributions of events. The probabilities of finding quarks, antiquarks and gluons inside protons with a certain amount of energy are known in most relevant regions to better than 5 – 10%, but that’s a long story.
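
          As a hedged illustration of why no single rule of thumb works (the steeply falling power law below is a crude stand-in of my own for the measured parton distribution functions), one can sample momentum fractions for the two colliding partons and see how broadly the usable collision energy is spread:

```python
import numpy as np

# Toy Monte Carlo: each proton's energy is shared among its quarks and gluons.
# If the two colliding partons carry fractions x1 and x2, the parton-parton
# collision energy is sqrt(x1 * x2) * E_total, so there is a broad spread
# rather than a single "divide by six" maximum.
rng = np.random.default_rng(0)
E_total = 13000.0  # GeV, full proton-proton collision energy (Run 2 value)

def sample_x(n, x_min=1e-3):
    """Sample momentum fractions from a crude, steeply falling toy distribution."""
    return x_min ** rng.random(n)  # log-uniform between x_min and 1

x1, x2 = sample_x(100_000), sample_x(100_000)
parton_energy = np.sqrt(x1 * x2) * E_total
print(np.percentile(parton_energy, [50, 90, 99]))  # wide spread of collision energies
```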

    1. Indeed you are using my model as a target, which of course tickles me — you’re sending the public off to find what I happen to have predicted, thank you. I will write about this some other time, but am trying to clarify some issues with Alan Barr first.

  17. BTW, Matt, it is my understanding that if this research of yours on “novel” triggering strategies to look for new kinds of long-lived particles leads to an actual discovery, that really looks like “Nobel” material to me.

    The best of wishes on this wonderful endeavour of yours.

    Kind regards, GEN
