
Quantum Field Theory, String Theory, and Predictions (Part 7)

Appropriate for Advanced Non-Experts

[This is the seventh post in a series that begins here.]

In the last post in this series, I pointed out that there’s a lot about quantum field theory [the general case] that we don’t understand.  In particular there are many specific quantum field theories whose behavior we cannot calculate, and others whose existence we’re only partly sure of, since we can’t even write down equations for them. And I concluded with the remark that part of the reason we know about this last case is due to “supersymmetry”.

What’s the role of supersymmetry here? Most of the time you read about supersymmetry in the press, and on this website, it’s about the possible role of supersymmetry in addressing the naturalness problem of the Standard Model [which overlaps with and is almost identical to the hierarchy problem.] But actually (and I speak from personal experience here) one of the most powerful uses of supersymmetry has nothing to do with the naturalness problem at all.

The point is that quantum field theories that have supersymmetry are mathematically simpler than those that don’t. For certain physical questions — not all questions, by any means, but for some of the most interesting ones — it is sometimes possible to solve their equations exactly. And this makes it possible to learn far more about these quantum field theories than about their non-supersymmetric cousins.

Who cares? you might ask. Since supersymmetry isn’t part of the real world in our experiments, it seems of no use to study supersymmetric quantum field theories.

But that view would be deeply naive. It’s naive for three reasons.

Off to Illinois’s National Labs For a Week of Presentations

I have two very different presentations to give this week, on two very similar topics. First I’m going to the LHC Physics Center [LPC], located at the Fermi National Accelerator Laboratory (Fermilab), host of the now-defunct Tevatron accelerator, the predecessor to the Large Hadron Collider [LHC]. The LPC is the local hub for the United States wing of the CMS experiment, one of the two general-purpose experiments at the LHC. [CMS, along with ATLAS, is where the Higgs particle was discovered.] The meeting I’m attending is about supersymmetry, although that’s just its title, really; many of the talks will have implications that go well beyond that specific subject, exploring more generally what we have searched for and still could search for in the LHC’s existing and future data.  I’ll be giving a talk for experts on what we do and don’t know currently about one class of supersymmetry variants, and what we should perhaps be trying to do next to cover cases that aren’t yet well-explored.

Second, I’ll be going to Argonne National Laboratory, to give a talk for the scientists there, most of whom are not particle physicists, about what we have learned so far about nature from the LHC’s current data, and what the big puzzles and challenges are for the future.  So that will be a talk for non-expert scientists, which requires a completely different approach.

Both presentations are more-or-less new and will require quite a bit of work on my part, so don’t be surprised if posts and replies to comments are a little short on details this week…

At a CMS/Theory Workshop in Princeton

For Non-Experts Who've Read a Bit About Particle Physics

I spent yesterday, and am spending today, at Princeton University, participating in a workshop that brings together a group of experts from the CMS experiment, one of the two general purpose experiments at the Large Hadron Collider (where the Higgs particle was discovered.) They’ve invited me, along with a few other theoretical physicists, to speak to them about additional strategies they might use in searching for phenomena that are not expected to occur within the Standard Model (the equations we use to describe the known elementary particles and forces.) This sort of “consulting” is one of the roles of theorists like me. It involves combining a broad knowledge of the surprises nature might have in store for us with a comprehensive understanding of what CMS and its competitor ATLAS (as well as other experiments at and outside the LHC) have and have not searched for already.

A lot of what I’ll have to say is related to what I said in Stony Brook at the SEARCH workshop, but updated, and with certain details adjusted to match the all-CMS audience.

Yesterday afternoon’s back-and-forth between the theorists and the experimentalists was focused on signals that are very hard to detect directly, such as (still hypothetical) dark matter particles. These could perhaps be produced in the LHC’s proton-proton collisions, but could then go undetected, because (like neutrinos) they pass without hitting anything inside of CMS. But even though we can’t detect these particles directly, we can sometimes tell indirectly that they’re present, if the collision simultaneously makes something else that recoils sharply away from them. That something else could be a photon (i.e. a particle of light) or a jet (the spray of particles that tells you that a high-energy gluon or quark was produced) or perhaps something else. There was a lot of interesting discussion about the various possible approaches to searching for such signals more effectively, and about how the trigger strategy might need to be adjusted in 2015, when the LHC starts taking data again at higher energy per collision, so that CMS remains maximally sensitive to their presence. Clearly there is much more work to do on this problem.
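If it helps to see the logic in miniature, here’s a toy sketch in Python (not CMS software; the “visible objects” and the numbers are invented) of how an invisible particle’s presence is inferred from the momentum imbalance in the plane transverse to the beams:

```python
# Toy illustration (not CMS code): inferring invisible particles from recoil.
# Momentum transverse to the beams is conserved, so if the visible particles'
# transverse momenta don't add up to zero, something unseen recoiled away.
import math

# Hypothetical visible objects in one collision: (px, py) components in GeV.
visible = [(55.0, 10.0), (-20.0, 5.0)]   # say, a jet and a photon

# Missing transverse momentum = minus the vector sum of everything seen.
mpx = -sum(px for px, _ in visible)
mpy = -sum(py for _, py in visible)
met = math.hypot(mpx, mpy)

print(f"missing transverse momentum ~ {met:.0f} GeV")
# A large imbalance recoiling against a hard jet or photon is the classic
# signature used in searches for dark-matter-like particles.
```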

A Busy Week at CERN

A week at CERN, the laboratory that hosts the Large Hadron Collider [LHC] (where the Higgs particle was discovered), is always extremely packed, and this one was no exception. It’s very typical that on a given day I’ll have four or five intense one-on-one scientific meetings with colleagues, both theorists and experimenters, and will attend a couple of presentations on hot topics — or perhaps ten, if there’s a conference going on (which is almost always.) Work starts at 9 am and typically ends at 7 pm. And of course I have my own work to do — papers to finish, for instance — so after a break for dinner, I keep working until midnight. Squeezing in time for writing blog posts can be tough under these conditions! But at least it’s for very good reasons.

Just this morning I attended two talks related to a future particle physics collider that people are starting to think seriously about… a collider (currently called T-LEP) that would be built in an 80 kilometer-long [50 mile-long] circular tunnel, and in which electrons and positrons [positron = anti-electron] would be smashed together.  The physics program of such a machine would be quite broad, including intensive studies of the four heaviest known particles in nature: the Z particle, the W particle, the Higgs particle and the top quark. Any one of them might reveal secrets when investigated in detail.  In fact, T-LEP’s extremely precise measurements, made in the 100-500 GeV = 0.1-0.5 TeV energy range, would be used to check, to one part in a thousand, the equations that explain how the Higgs field gives elementary particles their masses, and would potentially be indirectly sensitive to the effects of unknown particles and forces all the way up to energy scales of 10-30 TeV.
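To get a feel for how per-mille precision translates into sensitivity at tens of TeV, here is a rough back-of-the-envelope estimate (my own illustration, not a T-LEP projection). It assumes, as is typical for heavy unknown particles, that their effects shift precision observables by roughly (v/Λ)², where v ≈ 0.246 TeV is the Higgs field’s value and Λ is the unknown mass scale:

```python
# Back-of-the-envelope (an assumed scaling, not a T-LEP result): if unknown
# physics at mass scale Lambda shifts a precision observable by ~(v/Lambda)**2,
# then a measurement with fractional precision delta probes scales up to:
import math

v = 0.246                        # Higgs field's value, in TeV
for delta in (1e-2, 1e-3, 1e-4):
    Lambda = v / math.sqrt(delta)
    print(f"precision {delta:.0e}  ->  sensitive up to ~ {Lambda:.0f} TeV")
# Per-mille precision already reaches ~8 TeV; larger couplings or interference
# effects can extend the reach toward the 10-30 TeV range quoted above.
```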

After that I had a typical meeting with an experimentalist at the CMS experiment, discussing the many ways that one might still make discoveries using the existing 2011-2012 LHC data. The big concern here is that the LHC experimenters are so busy getting ready for the 2015 run of the LHC that they may not fully exploit the data that they already have.

Off to more meetings…

Visiting the Host Lab of the Large Hadron Collider

Greetings from Geneva, and CERN, the laboratory that hosts the Large Hadron Collider [LHC], where the Higgs particle was found by the physicists at the ATLAS and CMS experiments. Between jet lag, preparing a talk for Wednesday, and talking to many experimental and theoretical particle physicists from morning til night, it will be a pretty exhausting week.

The initial purpose of this trip is to participate in a conference held by the LHCb experiment, entitled “Implications of LHCb measurements and future prospects.” Its goal is to bring theoretical particle physicists and LHCb experimenters together, to exchange information about what has been and what can be measured at LHCb.

On this website I’ve mostly written about ATLAS and CMS, partly because LHCb’s measurements are often quite subtle to explain, and partly because the Higgs particle search, the highlight of the early stage of the LHC, was really ATLAS’s and CMS’s task. But this week’s activities give me a nice opportunity to put the focus on this very interesting experiment, which is quite different from ATLAS and CMS both in its design and in its goals, and to explain its important role.

ATLAS and CMS were built as general purpose detectors, whose first goal was to find the Higgs particle and whose second was to find (potentially rare) signs of any other high-energy processes that are not predicted by the Standard Model, the equations we use to describe all the known particles and forces of nature. Crudely speaking, ATLAS and CMS are ideal for looking for new phenomena in the 100 to 5000 GeV energy range (though we won’t reach the upper end of the range until 2015 and beyond.)

LHCb, by contrast, was built to study in great detail the bottom and charm quarks, and the hadrons (particles made from quarks, anti-quarks and gluons) that contain them. These quarks and their antiquarks are produced in enormous abundance at the LHC. They and the hadrons that contain them have masses in the 1.5 to 10 GeV/c² range… not much heavier than protons, and much lower than what ATLAS and CMS are geared to study. And this is why LHCb has been making crucial high-precision tests of the Standard Model using bottom- and charm-containing hadrons.  (Crucial, but not, despite repeated claims by the LHCb press office, capable of ruling out supersymmetry, which no single measurement can possibly do.)

Although this is the rough division of labor among these experiments, it’s too simplistic to describe the experiments this way. ATLAS and CMS can do quite a lot of physics at the low mass range, and in some measurements can compete well with LHCb.   Less well-known is that LHCb may be able to do a small but critical set of measurements involving higher energies than their usual targets.

LHCb is very different from ATLAS and CMS in many ways, and the most obvious is its shape. ATLAS and CMS look like giant barrels centered on the location of the proton-proton collisions, and are designed to measure as many particles as possible that are produced in the collision of two protons. LHCb’s shape is more like a wedge, with one end surrounding the collision point.


Left: Cut-away drawing of CMS, which is shaped like a barrel with proton-proton collisions occurring at its center. ATLAS’s shape is similar. Right: Cut-away drawing of LHCb, which is shaped something like a wedge, with collisions occurring at one end.

This shape only allows it to measure those particles that go in the “forward” direction — close to the direction of one of the proton beams. (“Backward” would be near the other beam; the distinction between forward and backward is arbitrary, because the two proton beams have the same properties. “Central” would be far from either beam.) Unlike ATLAS and CMS, LHCb is not used to reconstruct the whole collision; many of the particles produced in the collision go into backward or central regions which LHCb can’t observe.  This has some disadvantages, and in particular it put LHCb out of the running for the Higgs discovery. But a significant fraction of the bottom and charm quarks produced in proton-proton collisions go “forward” or “backward”, so a forward-looking design is fine if it’s bottom and charm quarks you’re interested in. And such a design is a lot cheaper, too. It also means that LHCb is well positioned to make some other measurements where the forward direction is important. I’ll give you one or two examples later in the week.

To make their measurements of bottom and charm quarks, LHCb makes use of the fact that these quarks decay after about a trillionth of a second (a picosecond) [or longer if, as is commonly the case, there is significant time dilation due to Einstein's relativity effects on very fast particles].  This is long enough for them to travel a measurable distance — typically a millimeter or more. LHCb is designed to measure charged particles with terrific precision, allowing it to infer a slight difference between the proton-proton collision point, from which most low-energy charged particles will emerge, and the location where other charged particles may have been produced in the decay of a bottom hadron or some other particle that travels a millimeter or more before decaying. The ability to do precision “tracking” of the charged particles makes LHCb sensitive to the presence of any as-yet unknown particles that might be produced and then decay after traveling a small or moderate distance. More on that later in the week.


A computer reconstruction of the tracks in a proton-proton collision, as measured by LHCb. Most tracks start at the proton-proton collision point at left, but the two tracks drawn in purple emerge from a different point about 15 millimeters away, the apparent location of the decay of a hadron, whose inferred trajectory is the blue line, and whose mass (measured from the purple tracks) indicates that it contained a bottom quark.
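To see why a picosecond is “long enough”, here’s a quick numerical check (a sketch with assumed boost factors, not LHCb software):

```python
# Decay length = gamma * beta * c * tau, where gamma*beta is the hadron's
# relativistic boost and tau its proper lifetime (time dilation included).
c = 3.0e8            # speed of light, in meters/second
tau = 1.0e-12        # proper lifetime, in seconds (~1 picosecond)

for gamma_beta in (1.0, 5.0, 20.0):      # assumed boosts, for illustration
    length_mm = gamma_beta * c * tau * 1000.0
    print(f"gamma*beta = {gamma_beta:>4}: decay length ~ {length_mm:.1f} mm")
# Even modest boosts give flight distances of a millimeter or more, which
# LHCb's precise tracking can resolve as a displaced decay point.
```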

One other thing to know about LHCb: in order to make their precise measurements possible, and to deal with the fact that they don’t observe a whole collision, they can’t afford to have too many collisions going on at once. ATLAS and CMS have been coping with ten to twenty simultaneous proton-proton collisions; this is part of what is known as “pile-up”. But near LHCb the LHC beams are adjusted so that the number of collisions at LHCb is often limited to just one or two or three simultaneous collisions. This has the downside that the amount of data LHCb collected in 2011 was about 1/5 of what ATLAS and CMS each collected, while for 2012 the number was more like 1/10.  But LHCb can do a number of things to make up for this lower rate; in particular their trigger system is more forgiving than that of ATLAS or CMS, so there are certain things they can measure using data of a sort that ATLAS and CMS have no choice but to throw away.

The Twists and Turns of Hi(gg)story

In sports, as in science, there are two very different types of heroes.  There are the giants who lead their teams and their sport, winning championships and accolades, for years, and whose fame lives on for decades: the Michael Jordans, the Pelés, the Lou Gehrigs, the Joe Montanas. And then there are the unlikely heroes, the ones who just happen to have a really good day at a really opportune time: the substitute player who comes on the field for an injured teammate and scores the winning goal in a championship; the fellow who never hits a home run except on the day it counts; the mediocre receiver who turns a short pass into a long touchdown during the Super Bowl.  We celebrate both types, in awe of the great ones, and in amused pleasure at the inspiring stories of the unlikely ones.

In science we have giants like Newton, Darwin, Boyle, Galileo… The last few decades of particle physics brought us a few, such as Richard Feynman and Ken Wilson, and others we’ll meet today.  Many of these giants received Nobel Prizes.   But then we have the gentlemen behind what is commonly known as the Higgs particle — the little ripple in the Higgs field, a special field whose presence and properties assure that many of the elementary particles of nature have mass, and without which ordinary matter, and we ourselves, could not exist.  Following the discovery of this particle last year, and confirmation that it is indeed a Higgs particle, two of them, François Englert and Peter Higgs, have been awarded the 2013 Nobel Prize in physics.  Had he lived to see the day, Robert Brout would have been the third.

My articles Why The Higgs Particle Matters and The Higgs FAQ 2.0; the particles of nature and what they would be like if the Higgs field were turned off; a link to video of my public talk entitled The Quest for the Higgs Boson; a post about why Higgs et al. didn’t win the 2012 Nobel prize, and about how physicists became convinced since then that the newly discovered particle is really a Higgs particle.

The paper written by Brout and Englert; the two papers written by Higgs; the paper written by Gerald Guralnik, Tom Kibble and Carl Hagen; these tiny little documents, a grand total of five and one half printed pages — these were game-winning singles in the bottom of the 9th, soft goals scored with a minute to play, Hail-Mary passes by backup quarterbacks — crucial turning-point papers written by people you would not necessarily have expected to find at the center of things.  Brout, Englert, Higgs, Guralnik, Kibble and Hagen are (or rather, in Brout’s case, sadly, were) very fine scientists, intelligent and creative and clever, and their papers, written in 1964 when they were young men, are imperfect but pretty gems.  They were lucky: very smart but not extraordinary physicists who just happened to write the right paper at the right time.

History in general, and history of science in particular, is always vastly more complex than the simple stories we tell ourselves and our descendants.  Making history understandable in a few pages always requires erasing complexities and subtleties that are crucial for making sense of the past.  Today, all across the press, there are articles explaining incorrectly what Higgs and the others did and why they did it and what it meant at the time and what it means now.  I am afraid I have a few over-simplified articles of my own. But today I’d like to give you a little sense of the complexities, to the extent that I, who wasn’t even alive at the time, can understand them.  And also, I want to convey a few important lessons that I think the Hi(gg)story can teach both experts and non-experts.  Here are a couple to think about as you read:

1. It is important for theoretical physicists, and others who make mathematical equations that might describe the world, to study and learn from imaginary worlds, especially simple ones.  That is because

  • 1a. one can often infer general lessons more easily from simple worlds than from the (often more complicated) real one, and
  • 1b. sometimes an aspect of an imaginary world will turn out to be more real than you expected!

2. One must not assume that research motivated by a particular goal depends upon the achievement of that goal; even if the original goal proves illusory, the results of the research may prove useful or even essential in a completely different arena.

My summary today is based on a reading of the papers themselves, on comments by John Iliopoulos, on a conversation with Englert, and on reading and hearing Higgs’ own description of the episode.

The story is incompletely but perhaps usefully illustrated in the figure below, which shows a cartoon of how four important scientific stories of the late 1950s and early 1960s came together. They are:

  1. How do superconductors (materials that carry electricity without generating heat) really work?
  2. How does the proton get its mass, and why are pions (the lightest hadrons) so much lighter than protons?
  3. Why do hadrons behave the way they do; specifically, as suggested by J.J. Sakurai (who died rather young, and after whom a famous prize is named), why are there photon-like hadrons, called rho mesons, that have mass?
  4. How does the weak nuclear force work?  Specifically, as suggested by Schwinger and developed further by his student Glashow, might it involve photon-like particles (now called W and Z) with mass?

These four questions converged on a question of principle: “how can mass be given to particles?”, and the first, third and fourth were all related to the specific question of “how can mass be given to photon-like particles?”  This is where the story really begins.  [Almost everyone in the story is a giant with a Nobel Prize, indicated with a parenthetic (NPyear).]


My best attempt at a cartoon history…

In 1962, Philip Anderson (NP1977), an expert on (among other things) superconductors, responded to suggestions and questions of Julian Schwinger (NP1965) on the topic of photon-like particles with mass, pointing out that a photon actually gets a mass inside a superconductor, due to what we today would identify as a sort of “Higgs-type” field made from pairs of electrons.  And he speculated, without showing it mathematically, that very similar ideas could apply to empty space, where Einstein’s relativity principles hold true, and that this could allow elementary photon-like particles in empty space to have mass, if in fact there were a kind of Higgs-type field in empty space.

In all its essential elements, he had the right idea.  But since he didn’t put math behind his speculation, not everyone believed him.  In fact, in 1964 Walter Gilbert (NP1980 for chemistry, due to work relevant in molecular biology — how’s that for a twist?) even gave a proof that Anderson’s idea couldn’t work in empty space!

But Higgs immediately responded, arguing that Gilbert’s proof had an important loophole, and that photon-like particles could indeed get a mass in empty space.

Meanwhile, about a month earlier than Higgs, and not specifically responding to Anderson and Gilbert, Brout and Englert wrote a paper showing how to get mass for photon-like particles in empty space. They showed this in several types of imaginary worlds, using techniques that were different from Higgs’ and were correct though perhaps not entirely complete.

A second paper by Higgs, written before he was aware of Brout and Englert’s work, gave a simple example, again in an imaginary world, that made all of this much easier to understand… though his example wasn’t perhaps entirely convincing, because he didn’t show much detail.  His paper was followed by important theoretical clarifications from Guralnik, Hagen and Kibble that assured that the Brout-Englert and Higgs papers were actually right.  The combination of these papers settled the issue, from our modern perspective.

And in the middle of this, as an afterthought added to his second paper only after it was rejected by a journal, Higgs was the first person to mention something that was, for him and the others, almost beside the point — that in the Anderson-Brout-Englert-Higgs-Guralnik-Hagen-Kibble story for how photon-like particles get a mass, there will also  generally be a spin-zero particle with a mass: a ripple in the Higgs-type field, which today we call a Higgs-type particle.  Not that he said very much!   He noted that spin-one (i.e. photon-like) and spin-zero particles would come in unusual combinations.  (You have to be an expert to even figure out why that counts as predicting a Higgs-type particle!)  Also he wrote the equation that describes how and why the Higgs-type particle arises, and noted how to calculate the particle’s mass from other quantities.  But that was it.  There was nothing about how the particle would behave, or how to discover it in the imaginary worlds that he was considering;  direct application to experiment, even in an imaginary world, wasn’t his priority in these papers.


In his second paper, Higgs considers a simple imaginary world with just a photon-like particle and a Higgs-type field.  Equation 2b is the first place the Higgs-type particle explicitly appears in the context of giving photon-like particles a mass (equation 2c).  From Physical Review Letters, Volume 13, page 508

About the “Higgs-type” particle, Anderson says nothing; Brout and Englert say nothing; Guralnik et al. say something very brief that’s irrelevant in any imaginable real-world application.  Why the silence?  Perhaps because it was too obvious to be worth mentioning?  When what you’re doing is pointing out something really “important” — that photon-like particles can have a mass after all — the spin-zero particle’s existence is so obvious but so irrelevant to your goal that it hardly deserves comment.  And that’s indeed why Higgs added it only as an afterthought, to make the paper a bit less abstract and a bit easier for a journal to publish.  None of them could have imagined the hoopla and public excitement that, five decades later, would surround the attempt to discover a particle of this type, whose specific form in the real world none of them wrote down.

In the minds of these authors, any near-term application of their ideas would probably be to hadrons, perhaps specifically Sakurai’s theory of hadrons, which in 1960 predicted the “rho mesons” (photon-like hadrons with mass, discovered in 1961).  Anderson, Brout-Englert and Higgs specifically mention hadrons at certain moments. But none of them actually considered the real hadrons of nature, as they were just trying to make points of principle; and in any case, the ideas that they developed did not apply to hadrons at all.  (Well, actually, that’s not quite true, but the connection is too roundabout to discuss here.)  Sakurai’s ideas had an element of truth, but fundamentally led to a dead end.  The rho mesons get their mass in another way.

Meanwhile, none of these people wrote down anything resembling the Higgs field which we know today — the one that is crucial for our very existence — so they certainly didn’t directly predict the Higgs particle that was discovered in 2012.   It was Steven Weinberg (NP1979) in 1967, and Abdus Salam (NP1979) in 1968, who did that.  (And it was Weinberg who stuck Higgs’ name on the field and particle, so that everyone else was forgotten.) These giants combined

  • the ideas of Higgs and the others about how to give mass to photon-like particles using a Higgs-type field, with its Higgs-type particle as a consequence…
  • …with the 1960 work of Sheldon Glashow (NP1979), Schwinger’s student, who like Schwinger proposed the weak nuclear force was due to photon-like particles with mass,…
  • …and with the 1960-1961 work of Murray Gell-Mann (NP1969) and Maurice Levy and of Yoichiro Nambu (NP2008) and Giovanni Jona-Lasinio, who showed how proton-like or electron-like particles could get mass from what we’d now call Higgs-type fields.

This combination gave the first modern quantum field theory of particle physics: a set of equations that describe the weak nuclear and electromagnetic forces, and show how the Higgs field can give the W and Z particles and the electron their masses. It is the primitive core of what today we call the Standard Model of particle physics.  Not that anyone took this theory seriously, even Weinberg.  Most people thought quantum field theories of this type were mathematically inconsistent — until in 1971 Gerard ‘t Hooft (NP1999) proved they were consistent after all.

The Hi(gg)story is populated with giants.  I’m afraid my attempt to tell the story has giant holes to match.  But as far as the Higgs particle that was discovered last year at the Large Hadron Collider, the unlikely heroes of the story are the relatively ordinary scientists who slipped in between the giants and actually scored the goals.

Quantum Field Theory, String Theory and Predictions (Part 3)

[This is the third post in a series; here's #1 and #2.]

The quantum field theory that we use to describe the known particles and forces is called the “Standard Model”, whose structure is shown schematically in Figure 1. It involves an interweaving of three quantum field theories — one for the electromagnetic force, one for the weak nuclear force, and one for the strong nuclear force — into a single more complex quantum field theory.


Fig. 1: The three non-gravitational forces, in the colored boxes, affect different combinations of the known apparently-elementary particles. For more details see http://profmattstrassler.com/articles-and-posts/particle-physics-basics/the-known-apparently-elementary-particles/

We particle physicists are extremely fortunate that this particular quantum field theory is just one step more complicated than the very simplest quantum field theories. If this hadn’t been the case, we might still be trying to figure out how it works, and we wouldn’t be able to make detailed and precise predictions for how the known elementary particles will behave in our experiments, such as those at the Large Hadron Collider [LHC].

In order to make predictions for processes that we can measure at the LHC, using the equations of the Standard Model, we employ a method of successive approximation (with the jargon name “method of perturbations”, or “perturbation ‘theory’”). It’s a very common method in math and science, in which

  • we make an initial rough estimate,
  • and then correct the estimate,
  • and then correct the correction,
  • etc.,

until we have a prediction that is precise enough for our needs.
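As a concrete (if cartoonish) illustration, here is a minimal sketch of the procedure in Python; the series of corrections is invented for the example, not taken from any real calculation:

```python
# Minimal sketch of successive approximation: keep adding corrections until
# the latest one is smaller than the precision we actually need.
def predict(terms, tolerance):
    estimate = 0.0
    for order, term in enumerate(terms):
        estimate += term                 # estimate, then correction, then...
        if abs(term) < tolerance:
            return estimate, order       # good enough; stop here
    return estimate, len(terms) - 1

# Invented numbers: initial estimate, correction, correction-to-correction...
toy_series = [1.0, 0.12, 0.015, 0.002]
value, order = predict(toy_series, tolerance=0.01)
print(f"stopped after {order} corrections: prediction = {value:.3f}")
```

Here `order` counts how many corrections beyond the initial estimate were needed; at the LHC, as the next paragraphs explain, that is often one, sometimes two.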

What are those needs? Well, the precision of any measurement, in any context, is always limited by having

  • a finite amount of data (so small statistical flukes are common)
  • imperfect equipment (so small mistakes are inevitable).

What we need, for each measurement, is a prediction a little more precise than the measurement will be, but not much more so. In the difficult environment of the LHC, where measurements are really hard, we often need only the first correction to the original estimate; sometimes we need the second (see Figure 2).


Fig. 2: Top quark/anti-quark pair production rate as a function of the energy of the LHC collisions, as measured by the LHC experiments ATLAS and CMS, and compared with the prediction within the Standard Model.  The measurements are the colored points, with bars indicating their uncertainties.  The prediction is given by the colored bands — purple for the initial estimate, red after the first correction, grey after the second correction — whose widths indicate how uncertain the prediction is at each stage.  The grey band is precise enough to be useful, because its uncertainties are comparable to those of the data.  And the data and Standard Model prediction agree!

Until recently the calculations were done by starting with Feynman’s famous diagrams, but the diagrams are not as efficient as one would like, and new techniques have made them mostly obsolete for really hard calculations.

The method of successive approximation works as long as all the forces involved are rather “weak”, in a technical sense. Now this notion of “weak” is complicated enough (and important enough) that I wrote a whole article on it, so those who really want to understand this should read that article. The brief summary suitable for today is this: suppose you took two particles that are attracted to each other by a force, and allowed them to glom together, like an electron and a proton, to form an atom-like object.  Then if the relative velocity of the two particles is small compared to the speed of light, the force is weak. The stronger the force, the faster the particles will move around inside their “atom”.  (For more details see this article.)
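For orientation, here’s the classic benchmark (a Bohr-model estimate, just to set the scale): in hydrogen, the electron’s speed is roughly α times the speed of light, where α ≈ 1/137 is the fine-structure constant, so electromagnetism passes this “weak” test easily.

```python
# Bohr-model estimate: in hydrogen's ground state, v/c is roughly alpha,
# the fine-structure constant. Since v << c, electromagnetism is (technically)
# a weak force in this context.
alpha = 1.0 / 137.0
print(f"v/c in hydrogen ~ {alpha:.4f}")   # about 0.7% of light speed
```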

For a weak force, the method of successive approximation is very useful, because the correction to the initial estimate is small, and the correction to the correction is smaller, and the correction to the correction to the correction is even smaller. So for a weak force, the first or second correction is usually enough; one doesn’t have to calculate forever in order to get a sufficiently precise prediction. The “stronger” the force, in this technical sense, the harder you have to work to get a precise prediction, because the corrections to your estimate are larger.

If a force is truly strong, though, everything breaks down. In that case, the correction to the estimate is as big as the estimate, and the next correction is again just as big, so no method of successive approximation will get you close to the answer. In short, for truly strong forces, you need a completely different approach if you are to make predictions.
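A toy illustration of the contrast (the corrections here are modeled as simple powers of an effective strength g, an oversimplification that nonetheless captures the convergence problem):

```python
# Toy model: pretend the n-th correction is g**n for an effective strength g.
def partial_sums(g, orders=6):
    total, sums = 0.0, []
    for n in range(orders):
        total += g**n            # estimate, correction, correction-to-...
        sums.append(round(total, 4))
    return sums

print("weak   (g=0.1):", partial_sums(0.1))   # settles quickly near 1.1111
print("strong (g=1.0):", partial_sums(1.0))   # each "correction" is as big
                                              # as the estimate; never settles
```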

In the Standard Model, the electromagnetic force and the weak nuclear force are “weak” in all contexts. However, the strong nuclear force is (technically) “strong” for any processes that involve distances comparable to or larger than a proton‘s size (about 100,000 times smaller than an atom) or energies comparable to or smaller than a proton’s mass-energy (about 1 GeV). For such processes, successive approximation does not work at all; it can’t be used to calculate a proton’s size or mass or internal structure. In fact the first step in that method would estimate that quarks and anti-quarks and gluons are free to roam independently and the proton should not exist at all… which is so obviously completely wrong that no method of correcting it will ever give the right answer.  I’ll get back to how we show protons are predicted by these equations, using big computers, in a later post.

But there’s a remarkable fact about the strong nuclear force. As I said, at distances the size of a proton or larger, the strong nuclear force is so strong that successive approximation doesn’t work. Yet, at distances shorter than this, the force actually becomes “weak”, in the technical sense, and successive approximation does work there.

Let me make sure this is absolutely clear, because what we think of colloquially as “weak” is different from “weak” in the technical sense I’m using here.  Suppose you put two quarks very close together, at a distance r closer together than the radius R of a proton.  In Figure 3 I’ve plotted how big the strong nuclear force (purple) and the electromagnetic force (blue) would be between two quarks, as a function of the distance between them. Notice both forces are very strong (colloquially) at short distances (r << R), but (I assert) both forces are weak (technically) there.  The electromagnetic force is much the weaker of the two, which is why its curve is lower in the plot.

Now if you move the two quarks apart a bit (increasing r, but still with r << R), both forces become smaller; in fact both decrease almost like 1/r², which would be your first, naive estimate, same as in your high school science class. If this naive estimate were correct, both forces would maintain the same strength (technically) at all distances r.  

But this isn’t quite right.  Since the 1950s, it was well-known that the correction to this estimate (using successive approximation methods) is to make the electromagnetic force decrease just a little faster than that; it becomes a tiny bit weaker (technically) at longer distances.  In the 60s, that’s what most people thought any force described by quantum field theory would do. But they were wrong.  In 1973, David Politzer, and David Gross and Frank Wilczek, showed that for the quantum field theory of quarks and gluons, the correction to the naive estimate goes the other direction; it makes the force decrease just a little more slowly than 1/r². [Gerard 't Hooft had also calculated this, but apparently without fully recognizing its importance...?] It is the small, accumulating excess above the naive estimate — the gradual deviation of the purple curve from its naive 1/r² form — that leads us to say that this force becomes technically “stronger” and “stronger” at larger distances. Once the distance r becomes comparable to a proton’s size R, the force becomes so “strong” that successive approximation methods fail.  As shown in the figure, we have some evidence that the force becomes constant for r >> R, independent of distance.  It is this effect that, as we’ll see next time, is responsible for the existence of protons and neutrons, and therefore of all ordinary matter.


Fig. 3: How the electromagnetic force (blue) and the strong nuclear force (purple) vary as a function of the distance r between two quarks. The horizontal axis shows r in units of the proton’s radius R; the vertical axis shows the force in units of the constant value that the strong nuclear force takes for r >> R.  Both forces are “weak” at short distances, but the strong nuclear force becomes “strong” once r is comparable to, or larger than, R.
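For the quantitatively inclined, here is a small sketch of the effect using the standard one-loop formula for the strong force’s strength αₛ as a function of energy Q (high energy corresponds to short distance); the values of Λ and the number of quark types are illustrative choices, not a precision fit:

```python
# One-loop running of the strong coupling: alpha_s shrinks at high energy
# (short distance) -- asymptotic freedom -- and grows as Q falls toward the
# scale Lambda, where successive approximation stops working.
import math

Lambda = 0.2     # QCD scale in GeV, an illustrative value
n_f = 5          # number of quark types treated as light, also illustrative

def alpha_s(Q):
    # Standard one-loop formula; trustworthy only for Q well above Lambda.
    return 12.0 * math.pi / ((33.0 - 2.0 * n_f) * math.log(Q**2 / Lambda**2))

for Q in (1000.0, 100.0, 10.0, 2.0, 1.0):
    print(f"Q = {Q:>6.0f} GeV: alpha_s ~ {alpha_s(Q):.2f}")
```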

So: at very short distances and high energies, the strong nuclear force is a somewhat “weak” force, stronger still than the electromagnetic and weak nuclear forces, but similar to them.  And therefore, successive approximation can tell you what happens when a quark inside one speeding proton hits a quark in a proton speeding the other direction, as long as the quarks collide with energy far more than 1 GeV. If this weren’t true, we could make scarcely any predictions at the LHC, and at similar proton-proton and proton-antiproton colliders! (We do also need to know about the proton’s structure, but we don’t calculate that: we simply measure it in other experiments.)  In particular, we would never have been able to calculate how often we should be making top quarks, as in Figure 2.  And we would not have been able to calculate what the Standard Model, or any other quantum field theory, predicts for the rate at which Higgs particles are produced, so we’d never have been sure that the LHC would either find or exclude the Standard Model Higgs particle. Fortunately, it is true, and that is why precise predictions can be made, for so many processes, at the LHC.  And the success of those and other predictions underlies our confidence that the Standard Model correctly describes most of what we know about particle physics.

But still, the equations of the strong nuclear force have only quarks and anti-quarks and gluons in them — no protons, neutrons, or other hadrons.  Our understanding of the real world would certainly be incomplete if we didn’t know why there are protons.  Well, it turns out that if we want to know whether protons and neutrons and other hadrons are actually predicted by the strong nuclear force’s equations, we have to test this notion using big computers. And that’s tricky, even trickier than you might guess.

Continued here


Did the LHC Just Rule Out String Theory?!

Over the weekend, someone said to me, breathlessly, that they’d read that “Results from the Large Hadron Collider [LHC] have blown string theory out of the water.”

Good Heavens! I replied. Who fed you that line of rubbish?!

Well, I’m not sure how this silliness got started, but it’s completely wrong. Just in case some of you or your friends have heard the same thing, let me explain why it’s wrong.

First, a distinction — one that is rarely made, especially by the more rabid bloggers, both those who are string lovers and those who are string haters. [Both types mystify me.] String theory has several applications, and you need to keep them straight. Let me mention two.

  1. Application number 1: this is the one you’ve heard about. String theory is a candidate (and only a candidate) for a “theory of everything” — a silly term, if you ask me, for what it really means is “a theory of all of nature’s particles, forces and space-time”. It’s not a theory of genetics or a theory of cooking or a theory of how to write a good blog post. But it’s still a pretty cool thing. This is the theory (i.e. a set of consistent equations and methods that describes relativistic quantum strings) that’s supposed to explain quantum gravity and all of particle physics, and if it succeeded, that would be fantastic.
  2. Application number 2: String theory can serve as a tool. You can use its mathematics, and/or the physical insights that you can gain by thinking about and calculating how strings behave, to solve or partially solve problems in other subjects. (Here’s an example.) These subjects include quantum field theory and advanced mathematics, and if you work in these areas, you may really not care much about application number 1. Even if application number 1 were ruled out by data, we’d still continue to use string theory as a tool. Consider this: if you grew up learning that a hammer was a religious idol to be worshipped, and later you decided you didn’t believe that anymore, would you throw out all your hammers? No. They’re still useful even if you don’t worship them.

BUT: today we are talking about Application Number 1: string theory as a candidate theory of all particles, etc.