Of Particular Significance

Greetings from Geneva, and CERN, the laboratory that hosts the Large Hadron Collider [LHC], where the Higgs particle was found by the physicists at the ATLAS and CMS experiments. Between jet lag, preparing a talk for Wednesday, and talking to many experimental and theoretical particle physicists from morning til night, it will be a pretty exhausting week.

The initial purpose of this trip is to participate in a conference held by the LHCb experiment, entitled “Implications of LHCb measurements and future prospects.” Its goal is to bring theoretical particle physicists and LHCb experimenters together, to exchange information about what has been and what can be measured at LHCb.

On this website I’ve mostly written about ATLAS and CMS, partly because LHCb’s measurements are often quite subtle to explain, and partly because the Higgs particle search, the highlight of the early stage of the LHC, was really ATLAS’s and CMS’s task. But this week’s activities give me a nice opportunity to put the focus on this very interesting experiment, which is quite different from ATLAS and CMS both in its design and in its goals, and to explain its important role.

ATLAS and CMS were built as general purpose detectors, whose first goal was to find the Higgs particle and whose second was to find (potentially rare) signs of any other high-energy processes that are not predicted by the Standard Model, the equations we use to describe all the known particles and forces of nature. Crudely speaking, ATLAS and CMS are ideal for looking for new phenomena in the 100 to 5000 GeV energy range (though we won’t reach the upper end of the range until 2015 and beyond.)

LHCb, by contrast, was built to study in great detail the bottom and charm quarks, and the hadrons (particles made from quarks, anti-quarks and gluons) that contain them. These quarks and their antiquarks are produced in enormous abundance at the LHC. They and the hadrons that contain them have masses in the 1.5 to 10 GeV/c² range… not much heavier than protons, and much lower than what ATLAS and CMS are geared to study. And this is why LHCb has been making crucial high-precision tests of the Standard Model using bottom- and charm-containing hadrons.  (Crucial, but not, despite repeated claims by the LHCb press office, capable of ruling out supersymmetry, which no single measurement can possibly do.)

Although this is the rough division of labor, it’s too simplistic to describe the experiments this way. ATLAS and CMS can do quite a lot of physics in the low-mass range, and in some measurements can compete well with LHCb. Less well-known is that LHCb may be able to do a small but critical set of measurements involving higher energies than it usually targets.

LHCb is very different from ATLAS and CMS in many ways, and the most obvious is its shape. ATLAS and CMS look like giant barrels centered on the location of the proton-proton collisions, and are designed to measure as many particles as possible that are produced in the collision of two protons. LHCb’s shape is more like a wedge, with one end surrounding the collision point.

Left: Cut-away drawing of CMS, which is shaped like a barrel with proton-proton collisions occurring at its center. ATLAS’s shape is similar. Right: Cut-away drawing of LHCb, which is shaped something like a wedge, with collisions occurring at one end.

This shape only allows it to measure those particles that go in the “forward” direction — close to the direction of one of the proton beams. (“Backward” would be near the other beam; the distinction between forward and backward is arbitrary, because the two proton beams have the same properties. “Central” would be far from either beam.) Unlike ATLAS and CMS, LHCb is not used to reconstruct the whole collision; many of the particles produced in the collision go into backward or central regions which LHCb can’t observe. This has some disadvantages, and in particular it put LHCb out of the running for the Higgs discovery. But a significant fraction of the bottom and charm quarks produced in proton-proton collisions go “forward” or “backward”, so a forward-looking design is fine if it’s bottom and charm quarks you’re interested in. And such a design is a lot cheaper, too. It also means that LHCb is well positioned to make some other measurements where the forward direction is important. I’ll give you one or two examples later in the week.

To make their measurements of bottom and charm quarks, LHCb makes use of the fact that these quarks decay after about a trillionth of a second (a picosecond) [or longer if, as is commonly the case, there is significant time dilation due to Einstein’s relativity effects on very fast particles]. This is long enough for them to travel a measurable distance — typically a millimeter or more. LHCb is designed to measure charged particles with terrific precision, allowing its physicists to infer a slight difference between the proton-proton collision point, from which most low-energy charged particles emerge, and the location where some other charged particles were produced in the decay of a bottom hadron or some other particle that travels a millimeter or more before decaying. The ability to do precision “tracking” of the charged particles makes LHCb sensitive to the presence of any as-yet unknown particles that might be produced and then decay after traveling a small or moderate distance. More on that later in the week.
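To get a feel for the numbers, here is a rough back-of-the-envelope sketch (the lifetime is a typical B-hadron value, but the 50 GeV energy is just an assumed, illustrative number): time dilation stretches a picosecond-scale lifetime into a flight distance of a few millimeters.

```python
# Rough estimate of how far a bottom hadron travels before decaying.
# Illustrative numbers only: a typical B-hadron lifetime (~1.5 picoseconds)
# and an assumed energy of 50 GeV for a B hadron produced at the LHC.

c = 3.0e8        # speed of light, in meters per second
tau = 1.5e-12    # proper lifetime, in seconds
mass = 5.3       # B-hadron mass, in GeV/c^2
energy = 50.0    # assumed B-hadron energy, in GeV

gamma = energy / mass                 # time-dilation factor
beta = (1.0 - 1.0 / gamma**2) ** 0.5  # speed as a fraction of c
distance = gamma * beta * c * tau     # average flight distance, in meters

print(f"gamma = {gamma:.1f}, flight distance = {1000 * distance:.1f} mm")
# With these numbers the hadron travels roughly 4 mm before decaying --
# consistent with the "millimeter or more" quoted above, and enough for
# LHCb's precision tracking to pick out the displaced decay point.
```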

A computer reconstruction of the tracks in a proton-proton collision, as measured by LHCb. Most tracks start at the proton-proton collision point at left, but the two tracks drawn in purple emerge from a different point about 15 millimeters away, the apparent location of the decay of a hadron, whose inferred trajectory is the blue line, and whose mass (measured from the purple tracks) indicates that it contained a bottom quark.

One other thing to know about LHCb: in order to make their precise measurements possible, and to deal with the fact that they don’t observe a whole collision, they can’t afford to have too many collisions going on at once. ATLAS and CMS have been coping with ten to twenty simultaneous proton-proton collisions; this is part of what is known as “pile-up”. But near LHCb the LHC beams are adjusted so that there are often just one, two or three simultaneous collisions. This has the downside that the amount of data LHCb collected in 2011 was about 1/5 of what ATLAS and CMS each collected, while for 2012 the number was more like 1/10. But LHCb can do a number of things to make up for this lower rate; in particular their trigger system is more forgiving than that of ATLAS or CMS, so there are certain things they can measure using data of a sort that ATLAS and CMS have no choice but to throw away.

POSTED BY Matt Strassler

ON October 14, 2013

Just in case you weren’t convinced by yesterday’s post that the shutdown, following on a sequester and a recession, is doing some real damage to this nation’s scientists, science, and future, here is another link for you.

Jonathan Lilly is an oceanographer, a senior research scientist at NorthWest Research Associates in Redmond, Washington, and I can vouch that he is a first-rate scientist and an excellent blogger.  He writes in an article entitled

Stories from the front: oceanographers navigate the government shutdown

about a wide range of damaging problems affecting this field of study.  What’s nice about this post, compared to my own general one from yesterday, is that he has a lot of specific detail.

Here are some other links, demonstrating the breadth and depth of the impact:

http://www.npr.org/blogs/health/2013/10/10/230750627/shutdown-imperils-costly-lab-mice-years-of-research

http://www.wired.com/wiredscience/2013/10/what-does-a-federal-shutdown-mean-for-conservation-and-ag-science/

http://www.forbes.com/sites/eliseackerman/2013/10/07/the-shutdown-versus-science-national-observatory-latest-victim-of-washington-politics/

http://www.wired.com/wiredscience/2013/10/government-shutdown-affects-biomedical-research/

POSTED BY Matt Strassler

ON October 11, 2013

Maybe you think this shutdown isn’t all that bad?  Perhaps you’re not talking to scientists, or thinking about their role in society. The effects of the government shutdown continue to ripple outward.  Scientific research doesn’t cope well with shutdowns.

http://www.ibtimes.com/us-government-shutdown-antarctic-research-program-5-other-shuttered-science-programs-1419918

In many fields, the research has to be maintained continuously; if you shut it down, even for a short period, all your work is wasted.

POSTED BY Matt Strassler

ON October 10, 2013

In sports, as in science, there are two very different types of heroes.  There are the giants who lead their teams and their sport, winning championships and accolades, for years, and whose fame lives on for decades: the Michael Jordans, the Pelés, the Lou Gehrigs, the Joe Montanas. And then there are the unlikely heroes, the ones who just happen to have a really good day at a really opportune time; the substitute player who comes on the field for an injured teammate and scores the winning goal in a championship; the fellow who never hits a home run except on the day it counts; the mediocre receiver who turns a short pass into a long touchdown during the Super Bowl.  We celebrate both types, in awe of the great ones, and in amused pleasure at the inspiring stories of the unlikely ones.

In science we have giants like Newton, Darwin, Boyle, Galileo… The last few decades of particle physics brought us a few, such as Richard Feynman and Ken Wilson, and others we’ll meet today.  Many of these giants received Nobel Prizes.   But then we have the gentlemen behind what is commonly known as the Higgs particle — the little ripple in the Higgs field, a special field whose presence and properties assure that many of the elementary particles of nature have mass, and without which ordinary matter, and we ourselves, could not exist.  Following discovery of this particle last year, and confirmation that it is indeed a Higgs particle, two of them, Francois Englert and Peter Higgs, have been awarded the 2013 Nobel Prize in physics.  Had he lived to see the day, Robert Brout would have been the third.

Some background reading from this site: my articles Why The Higgs Particle Matters and The Higgs FAQ 2.0; the particles of nature and what they would be like if the Higgs field were turned off; a link to video of my public talk entitled The Quest for the Higgs Boson; a post about why Higgs et al. didn’t win the 2012 Nobel prize, and another about how physicists became convinced since then that the newly discovered particle is really a Higgs particle.

The paper written by Brout and Englert; the two papers written by Higgs; the paper written by Gerald Guralnik, Tom Kibble and Carl Hagen; these tiny little documents, a grand total of five and one half printed pages — these were game-winning singles in the bottom of the 9th, soft goals scored with a minute to play, Hail-Mary passes by backup quarterbacks — crucial turning-point papers written by people you would not necessarily have expected to find at the center of things.  Brout, Englert, Higgs, Guralnik, Kibble and Hagen are (or rather, in Brout’s case, sadly, were) very fine scientists, intelligent and creative and clever, and their papers, written in 1964 when they were young men, are imperfect but pretty gems.  They were lucky: very smart but not extraordinary physicists who just happened to write the right paper at the right time. In each case, they did so

History in general, and history of science in particular, is always vastly more complex than the simple stories we tell ourselves and our descendants.  Making history understandable in a few pages always requires erasing complexities and subtleties that are crucial for making sense of the past.  Today, all across the press, there are articles explaining incorrectly what Higgs and the others did and why they did it and what it meant at the time and what it means now.  I am afraid I have a few over-simplified articles of my own. But today I’d like to give you a little sense of the complexities, to the extent that I, who wasn’t even alive at the time, can understand them.  And also, I want to convey a few important lessons that I think the Hi(gg)story can teach both experts and non-experts.  Here are a couple to think about as you read:

1. It is important for theoretical physicists, and others who make mathematical equations that might describe the world, to study and learn from imaginary worlds, especially simple ones.  That is because

  • 1a. one can often infer general lessons more easily from simple worlds than from the (often more complicated) real one, and
  • 1b. sometimes an aspect of an imaginary world will turn out to be more real than you expected!

2. One must not assume that research motivated by a particular goal depends upon the achievement of that goal; even if the original goal proves illusory, the results of the research may prove useful or even essential in a completely different arena.

My summary today is based on a reading of the papers themselves, on comments by John Iliopoulos, on a conversation with Englert, and on reading and hearing Higgs’ own description of the episode.

The story is incompletely but perhaps usefully illustrated in the figure below, which shows a cartoon of how four important scientific stories of the late 1950s and early 1960s came together. They are:

  1. How do superconductors (materials that carry electricity without generating heat) really work?
  2. How does the proton get its mass, and why are pions (the lightest hadrons) so much lighter than protons?
  3. Why do hadrons behave the way they do; specifically, as suggested by J.J. Sakurai (who died rather young, and after whom a famous prize is named), why are there photon-like hadrons, called rho mesons, that have mass?
  4. How does the weak nuclear force work?  Specifically, as suggested by Schwinger and developed further by his student Glashow, might it involve photon-like particles (now called W and Z) with mass?

These four questions converged on a question of principle: “how can mass be given to particles?”, and the first, third and fourth were all related to the specific question of “how can mass be given to photon-like particles?”  This is where the story really begins.  [Almost everyone in the story is a giant with a Nobel Prize, indicated with a parenthetic (NPyear).]

My best attempt at a cartoon history…

In 1962, Philip Anderson (NP1977), an expert on (among other things) superconductors, responded to suggestions and questions of Julian Schwinger (NP1965) on the topic of photon-like particles with mass, pointing out that a photon actually gets a mass inside a superconductor, due to what we today would identify as a sort of “Higgs-type” field made from pairs of electrons.  And he speculated, without showing it mathematically, that very similar ideas could apply to empty space, where Einstein’s relativity principles hold true, and that this could allow elementary photon-like particles in empty space to have mass, if in fact there were a kind of Higgs-type field in empty space.

In all its essential elements, he had the right idea.  But since he didn’t put math behind his speculation, not everyone believed him.  In fact, in 1964 Walter Gilbert (NP1980 for chemistry, due to work relevant in molecular biology — how’s that for a twist?) even gave a proof that Anderson’s idea couldn’t work in empty space!

But Higgs immediately responded, arguing that Gilbert’s proof had an important loophole, and that photon-like particles could indeed get a mass in empty space.

Meanwhile, about a month earlier than Higgs, and not specifically responding to Anderson and Gilbert, Brout and Englert wrote a paper showing how to get mass for photon-like particles in empty space. They showed this in several types of imaginary worlds, using techniques that were different from Higgs’ and were correct though perhaps not entirely complete.

A second paper by Higgs, written before he was aware of Brout and Englert’s work, gave a simple example, again in an imaginary world, that made all of this much easier to understand… though his example wasn’t perhaps entirely convincing, because he didn’t show much detail.  His paper was followed by important theoretical clarifications from Guralnik, Hagen and Kibble that assured that the Brout-Englert and Higgs papers were actually right.  The combination of these papers settled the issue, from our modern perspective.

And in the middle of this, as an afterthought added to his second paper only after it was rejected by a journal, Higgs was the first person to mention something that was, for him and the others, almost beside the point — that in the Anderson-Brout-Englert-Higgs-Guralnik-Hagen-Kibble story for how photon-like particles get a mass, there will also  generally be a spin-zero particle with a mass: a ripple in the Higgs-type field, which today we call a Higgs-type particle.  Not that he said very much!   He noted that spin-one (i.e. photon-like) and spin-zero particles would come in unusual combinations.  (You have to be an expert to even figure out why that counts as predicting a Higgs-type particle!)  Also he wrote the equation that describes how and why the Higgs-type particle arises, and noted how to calculate the particle’s mass from other quantities.  But that was it.  There was nothing about how the particle would behave, or how to discover it in the imaginary worlds that he was considering;  direct application to experiment, even in an imaginary world, wasn’t his priority in these papers.
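For readers who want to see the mathematics, here is the simple imaginary world of Higgs’ second paper written in today’s textbook notation (the standard “abelian Higgs model”; this is a modern paraphrase, not the notation or equation numbering of his 1964 paper): a photon-like field A interacting with a Higgs-type field φ.

```latex
\mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} + |D_\mu \phi|^2 - V(\phi),
\qquad D_\mu = \partial_\mu - i e A_\mu ,
\qquad V(\phi) = -\mu^2 |\phi|^2 + \lambda |\phi|^4 .
```

When μ² is positive, the Higgs-type field settles at a nonzero value v = μ/√λ in empty space. Writing φ = (v + h)/√2, the photon-like particle acquires a mass e·v, and the spin-zero ripple h — the Higgs-type particle — acquires a mass √(2λ)·v = √2·μ, fixed by the shape of the potential. That last relation is, in modern language, the content of Higgs’ afterthought.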

In his second paper, Higgs considers a simple imaginary world with just a photon-like particle and a Higgs-type field.  Equation 2b is the first place the Higgs-type particle explicitly appears in the context of giving photon-like particles a mass (equation 2c).  From Physical Review Letters, Volume 13, page 508

About the “Higgs-type” particle, Anderson says nothing; Brout and Englert say nothing; Guralnik et al. say something very brief that’s irrelevant in any imaginable real-world application.  Why the silence?  Perhaps because it was too obvious to be worth mentioning?  When what you’re doing is pointing out something really “important” — that photon-like particles can have a mass after all — the spin-zero particle’s existence is so obvious but so irrelevant to your goal that it hardly deserves comment.  And that’s indeed why Higgs added it only as an afterthought, to make the paper a bit less abstract and a bit easier for a journal to publish.  None of them could have imagined the hoopla and public excitement that, five decades later, would surround the attempt to discover a particle of this type, whose specific form in the real world none of them wrote down.

In the minds of these authors, any near-term application of their ideas would probably be to hadrons, perhaps specifically Sakurai’s theory of hadrons, which in 1960 predicted the “rho mesons” — photon-like hadrons with mass — which were then discovered in 1961.  Anderson, Brout-Englert and Higgs specifically mention hadrons at certain moments. But none of them actually considered the real hadrons of nature, as they were just trying to make points of principle; and in any case, the ideas that they developed did not apply to hadrons at all.  (Well, actually, that’s not quite true, but the connection is too roundabout to discuss here.)  Sakurai’s ideas had an element of truth, but fundamentally led to a dead end.  The rho mesons get their mass in another way.

Meanwhile, none of these people wrote down anything resembling the Higgs field which we know today — the one that is crucial for our very existence — so they certainly didn’t directly predict the Higgs particle that was discovered in 2012.   It was Steven Weinberg (NP1979) in 1967, and Abdus Salam (NP1979) in 1968, who did that.  (And it was Weinberg who stuck Higgs’ name on the field and particle, so that everyone else was forgotten.) These giants combined

  • the ideas of Higgs and the others about how to give mass to photon-like particles using a Higgs-type field, with its Higgs-type particle as a consequence…
  • …with the 1960 work of Sheldon Glashow (NP1979), Schwinger’s student, who like Schwinger proposed the weak nuclear force was due to photon-like particles with mass,…
  • …and with the 1960-1961 work of Murray Gell-Mann (NP1969) and Maurice Levy and of Yoichiro Nambu (NP2008) and Giovanni Jona-Lasinio, who showed how proton-like or electron-like particles could get mass from what we’d now call Higgs-type fields.

This combination gave the first modern quantum field theory of particle physics: a set of equations that describe the weak nuclear and electromagnetic forces, and show how the Higgs field can give the W and Z particles and the electron their masses. It is the primitive core of what today we call the Standard Model of particle physics.  Not that anyone took this theory seriously, even Weinberg.  Most people thought quantum field theories of this type were mathematically inconsistent — until in 1971 Gerard ‘t Hooft (NP1999) proved they were consistent after all.

The Hi(gg)story is populated with giants.  I’m afraid my attempt to tell the story has giant holes to match.  But as far as the Higgs particle that was discovered last year at the Large Hadron Collider is concerned, the unlikely heroes of the story are the relatively ordinary scientists who slipped in between the giants and actually scored the goals.

POSTED BY Matt Strassler

ON October 8, 2013

This post is a continuation of three previous posts: #1, #2 and #3.

When the Strong Nuclear Force is Truly Strong

Although I’ve already told you a lot about how we make predictions using the Standard Model of particle physics, there’s more to the story. The tricky quantum field theory that we run into in real-world particle physics is the one that describes the strong nuclear force, and the gluons and quarks (and anti-quarks) that participate in that force. In particular, for processes that involve

  • distances comparable to or larger than the proton‘s size, 100,000 times smaller than an atom, and/or
  • low-energy processes, with energies at or below the mass-energy (i.e. E=mc² energy) of a proton, about 1 GeV,

the force between quarks, gluons and anti-quarks becomes so “strong” (in a technical sense: strong enough that it makes these particles rush around at nearly the speed of light) that the methods I described previously do not work at all.

That’s bad, because how can one be sure our equations for the quarks and gluons — the quantum field theory equations of the strong nuclear force — are the correct ones, if we can’t check that these equations correctly predict the existence and the masses of the proton and neutron and other hadrons (a general term referring to any particles made from quarks, anti-quarks and gluons)?

Fortunately, there is a way to check our equations, by brute force. We simulate the behavior of the quark and gluon fields on a computer. Sounds simple enough, but you should not get the idea that this is easy. Even figuring out how to do this requires a lot of cleverness, and making the calculations fast and practical requires even more cleverness. Only expert theoretical physicists can carry out these calculations, and make predictions that are relevant directly for the real world.  Don’t try this at home.

The first step is to simplify the problem, and consider an imaginary world, an idealized world that is simpler than the real world. Since the strong nuclear force is extremely strong inside a proton, the electromagnetic and weak nuclear forces are small effects by comparison. So it makes sense to do the calculation in an imaginary world where the strong nuclear force is present but all other forces are turned off. If you put those unimportant forces in, you’d have a much more complicated computer problem and yet the answers would barely change. So including the other forces would be a big waste of time and effort.

Here we use an imaginary world as an idealization — a bit like treating the earth as a perfect sphere. Obviously the earth is not a sphere — it has mountains and valleys and tides and a slight bulge at the equator — but if you’re computing some simple properties of the earth’s effect on the moon, including these details will waste a lot of your time without affecting your calculation very much. The art of being a scientist requires knowing what you need to include in your calculations, and knowing what not to include because it makes no difference.  In fact we do this all the time in particle physics; gravity’s effect on measurements at the Large Hadron Collider [LHC] is tiny, so we do our calculations in an imaginary world without gravity, a harmless simplification.

Here’s another idealization: although there are six types (often called “flavors”) of quarks — up, down, strange, charm, bottom and top — the last three are heavier than a proton and consequently don’t play much of a role in the proton, or in the other low-mass hadrons that I’ll focus on here. So the imaginary, idealized, simplified world in which the calculations are carried out has (see Figure 1)

  • Three “flavors” of quark fields: up, down and strange, each with its own mass, and each with a charge (analogous to electric charge in the case of the electric force) which is whimsically called “color”.  Color can take three values, whimsically called “red”, “green” or “blue”. These fields give rise to both the quark particles and their antiparticles, called anti-quarks, which carry anti-color (anti-red, anti-blue, anti-green);
  • Eight gluon fields (each carrying a “color” and an “anti-color”.) [You might have guessed there’d be nine; but when color and anti-color are the same there are some little subtleties which aren’t relevant today, so I ask you to just accept this for now.]

So now we have a quantum field theory of three flavors of quarks with three possible colors, along with corresponding anti-quarks, and eight gluons which generate the strong nuclear force among the quarks, antiquarks and gluons. This isn’t the real world, but it is close enough to give us very accurate answers about the real world. And this is the one the experts actually put on a computer, to see if our equations do indeed predict that quarks, antiquarks and gluons form protons and other hadrons.
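Just as bookkeeping, here is a toy sketch of this simplified world’s ingredient list (illustrative only: the quark masses are the rough values quoted later in this post, and the “nine minus one” step is only a crude stand-in for the gluon-counting subtlety mentioned above):

```python
# Bookkeeping sketch of the simplified world's field content (Figure 1).
# Illustrative only: approximate quark masses in GeV/c^2, and a crude
# "9 minus 1 = 8" stand-in for the gluon-counting subtlety noted above.

quark_masses = {"up": 0.003, "down": 0.006, "strange": 0.08}
colors = ["red", "green", "blue"]

# 3 flavors x 3 colors = 9 colored quark fields (plus their anti-quarks)
quark_fields = [(flavor, color) for flavor in quark_masses for color in colors]

# 9 color/anti-color pairs, minus one combination (the subtlety noted
# above), leaves the 8 independent gluon fields
gluon_fields = [(c, "anti-" + a) for c in colors for a in colors][:-1]

print(len(quark_fields), "colored quark fields and", len(gluon_fields), "gluons")
```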

Fig. 1: The fields of the stripped-down world in which calculations of the proton mass and other hadron masses are done. Up, down and strange quark fields (responsible for both quarks and anti-quarks) interact with gluon fields (responsible for gluon particles.) Each of the quark fields has a “charge” (named, whimsically, red, green or blue) and each of the eight gluon fields has a color and an anti-color.

Does it work? Yes! In Figure 2 is a plot showing the experimentally measured and computer-calculated values of the masses of various hadrons found in nature. Each hadron’s measured mass is the vertical location of a horizontal black line; the hadron’s symbol appears below that line at the bottom of the plot. I’ve written the names of a few of the most famous hadrons on the plot:

  • the spin-zero pions,
  • the spin-1 rho mesons and omega meson,
  • the spin-1/2 “nucleons”, meaning the proton and the neutron, and
  • the spin-3/2 Delta particles.

The colored dots represent different computer calculations of the masses of these hadrons; the vertical colored bars show how uncertain each calculation is. You can see that, within the uncertainties of the calculations, the measurements and calculations agree. And thus we learn that indeed the quantum field theory of this idealized world

  • predicts that hadrons such as protons do exist
  • predicts the ones we observe, without a lot of extra ones or missing ones
  • predicts correctly the masses of these hadrons

from which we conclude that

  • the quantum field theory with the fields shown in Figure 1 has something to do with the real world
  • we were wise to choose the imaginary world of Figure 1 for our study, because clearly the idealizations we made didn’t affect our final results to an extent that they caused disagreements with the real world

    Fig. 2: The masses of various hadrons, whose names appear at bottom and whose measured masses appear as grey horizontal lines, as calculated by computer: each colored dot is a particular calculation, whose uncertainty is shown by a vertical bar. I have written the names of some famous hadrons.

All looks great! And it is. However, I’ve lied to you. I haven’t actually told you how hard it is to obtain these answers. So let me give you a little more insight into what you have to do to carry out these calculations. You have to go off into even more imaginary worlds.

How the Calculation is Really Done: Off In Imaginary Worlds

The imaginary world I’ve described so far is still not simple enough for the calculation to be possible. The actual calculations require that we make predictions in worlds very different from our own. Two simplifications have to do with something you’d think would be essential: space itself. In order to do the calculation, we have to imagine

  • that the world, rather than being enormous, is made of just a tiny little box — a box only large enough to hold a single proton or other hadron;
  • that space itself, rather than being continuous, forms a discrete grid, or lattice, in which the distances between points on the grid are somewhat but not enormously smaller than the distance across a proton.

This is schematically illustrated in Figure 3, though the grids used today are denser and the boxes a bit larger.  The size of a proton, relative to the finite grid of points, is indicated by the round circle.

Fig. 3: The calculations are done in a world whose space is a small grid. Note, however, that this picture of a 4 x 4 x 4 grid is a cartoon to make the idea clear; with modern computers, grids of 32 x 32 x 32 are not unusual.
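To make the “space as a grid” idea concrete, here is a toy sketch (nothing like a real lattice-QCD code, which stores quark and gluon variables on every site and link, includes a time direction, and does enormous statistical sampling): store one number per grid point and watch the cost grow as the grid becomes denser.

```python
# Toy illustration of "space as a grid": one complex number per lattice
# site. Real lattice QCD stores far more data per site and link, adds a
# time direction, and needs huge numbers of statistical samples -- this
# only shows how the site count grows with the grid size.
import numpy as np

for n in (4, 16, 32, 64):
    field = np.zeros((n, n, n), dtype=np.complex128)  # one value per site
    print(f"{n:>3} x {n} x {n} grid: {n**3:>8} sites, "
          f"{field.nbytes / 1e6:6.2f} MB for this single toy field")

# The memory for one toy field is modest; the real expense comes from the
# many variables per site and link, the time dimension, and the millions
# of samples needed -- which is why denser grids and larger boxes demand
# supercomputers.
```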

Advances in computer technology are certainly helping with this problem… the better and faster your computers, the denser you can make your grid and the larger you can make your box. But simulating a large chunk of the world, with space that is essentially continuous, is way out of reach right now. So this is something we have to accept, and deal with.  Unlike the idealizations that led us to study the quantum field theory in Figure 1, choosing to study the world on a finite grid does change the calculations substantially, and experts have to correct their answers after they’ve calculated them.

And there’s one more simplification necessary. The smaller the up, down and strange quark masses, the harder the calculation becomes. If these masses were zero, the calculation would simply be impossible. Even with the real world’s quark masses (the up quark mass is about 1/300 of a proton’s mass, the down quark about 1/150, and the strange quark about 1/12), calculations still aren’t really possible — and they weren’t even close to possible until rather recently. So calculations have to be done in an imaginary world with much larger quark masses, especially for the up and down quarks, than are present in the real world.

Fig. 4: Two types of imaginary worlds arise here. First, the real world is stripped down, with all irrelevant particles and forces dropped, giving the red imaginary world. Then this world’s space is made into the grid of Figure 3, and the up, down and strange quark masses are raised. In this purple imaginary world, calculations become practical, but they give incorrect answers; only by extrapolating (Figure 5) are useful predictions extracted.

So since we can’t calculate in the real world, but have to calculate in a world with a small spatial grid and heavier quarks, how can we hope to get reasonable answers for the hadron masses? Well, this is another place where the experts earn our respect. The trick is to learn how to extrapolate. For example:

  • Do the calculation for fields in a small box.
  • Then do the calculation again in a medium-sized box (which takes a lot longer.)
  • Then do the calculation in a larger box (still small, but big enough that it uses about as much computer time as you can spare.)

Now, if you know how going from a small to medium to larger box should change your answer, then you can infer, from the answers you obtain, what the answer would be in a huge box where the walls are so far away they don’t matter.

The experts do this, and they do the same thing for the space grid, computing with denser grids and extrapolating to a world where space is continuous. And they do the same thing for the quark masses: they start with moderately large quark masses, and they shrink them in several steps. And knowing from theoretical arguments what should happen to the hadron masses as the quark masses change, they can extrapolate from the ones they calculate to the ones that would be predicted if the quark masses were the real-world ones. You can see this in Figure 5. As the up and down quark masses are reduced, the pion mass gets smaller, and the “nucleon” (i.e. proton and neutron) masses become smaller too. (Also shown is the Omega hadron; this has three unpaired strange quarks, and you can see its mass doesn’t depend much on the up and down quark masses.) The experts take the actual calculations (colored dots), and draw a properly-shaped curve through all the dots. Then they go to the point on the horizontal axis where the quark masses equal their real-world values and the pion mass comes out agreeing with experiment, and they draw a vertical black line upward. The intersection of the black vertical line and the blue curved line (the black X mark) is then the prediction for what the proton and neutron mass should be in the real world. Well, you can see that the black X is pretty close, within about 0.030 GeV/c², to what we find in experiments: 0.938 and 0.939 GeV/c² for the proton and neutron mass.  And this is how all of the results shown in Figure 2 are obtained: extrapolating to the real world by calculating in a few imaginary ones.
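Here is a minimal sketch of that extrapolation step, in the spirit of Figure 5, with completely invented “calculated” points standing in for real lattice results (real analyses use theoretically motivated formulas and careful error estimates; this only illustrates the idea of fitting and extrapolating to the physical point).

```python
# Sketch of a quark-mass extrapolation, in the spirit of Figure 5.
# The "calculated" points below are invented for illustration; they are
# NOT real lattice-QCD results. We fit a straight line in the squared
# pion mass, a stand-in for the theoretically motivated formulas that
# experts actually use.
import numpy as np

# pretend lattice results: (pion mass)^2 in GeV^2, nucleon mass in GeV
mpi2_calc = np.array([0.16, 0.25, 0.36, 0.49])   # heavier-than-real pions
mN_calc   = np.array([1.10, 1.19, 1.30, 1.43])   # invented numbers

# fit m_N = a + b * m_pi^2, then read off the value at the real pion mass
b, a = np.polyfit(mpi2_calc, mN_calc, 1)
mpi_physical = 0.140                              # real-world pion mass, GeV
mN_extrapolated = a + b * mpi_physical**2

print(f"extrapolated nucleon mass ~ {mN_extrapolated:.2f} GeV/c^2")
# Compare with the measured proton and neutron masses, 0.938 and 0.939
# GeV/c^2 -- with these toy inputs the extrapolation lands within a few
# percent, mimicking the agreement described in the text.
```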

Fig. 5: Calculations (colored dots) are done with larger quark masses than in the real world, and the results are as much as 50% too large.  One must extrapolate to the smaller quark masses of the “real” or “physical” world (black dotted vertical line) to make predictions (black X’s). “N” stands for “nucleon”, meaning both protons and neutrons.

The Importance of Such Calculations

This is a tremendous success story. The equations of the strong nuclear force were first written down correctly in 1973. Calculations like this were just becoming possible in the mid-1980s. Only in the 1990s did the agreement start to become impressive. And now, with modern computer power, it’s become almost routine to see results like this.

More than that, these methods have become essential tools. There are many important predictions made for experiments which are partly made with the methods I described in my previous post and partly using these computer calculations. For example, they are extremely important for precise predictions of the decays of hadrons with one heavy quark, such as B and D mesons, which I have written about here and here. If we didn’t have such precise predictions, we couldn’t use measurements of these decays to check for unknown phenomena that are absent from the Standard Model.

But There’s Still So Much That We Can’t Compute

Despite all this success, the limitations of the method are profound. Although computers are fine for learning the masses of hadrons, and some of their other properties, and quite a few other interesting things, they are terrible for understanding everything that can happen when two protons (or other hadrons) bump into each other.  Basically, computer techniques can’t handle things that change rapidly over time.

For example, the data in Figure 6 show two of the simplest things you’d like to know:

  • how does the probability that two protons will collide change, if you increase the energy of the collision?
  • what is the probability, if they collide, that they will remain intact, rather than breaking apart into a spray of other hadrons?

We can measure the answer (the black points are data, the black curve is an attempt to fit a smooth curve to the data.) But no one can predict this curve by starting with the quantum field theory of the strong nuclear force — not using successive approximation, fancy math, brute force computer simulation, string theory, or any other method currently available. [Experts: there are plenty of attempts to model these curves (look up “pomeron”.) But the models involve independent equations that can’t actually be derived from or clearly related to the quantum field theory equations for quarks and gluons.]

Fig. 6: The probability for two protons to collide (upper data points, “total”) and to collide without breaking (lower data points, “elastic”), as a function of the energy of one proton as viewed by the other proton.  Data are taken from many experiments, including the LHC at the far right.  The curve shows an attempt to fit the data, but this data cannot currently be predicted starting from the equations for quarks and gluons.

At the LHC, when a quark from one proton hits a quark from another proton, we can predict, using the successive approximation (“perturbative”) methods described in my previous post, what happens to the quarks. But what happens to the other parts of the two protons when the two quarks strike each other? We can’t even begin to predict that, either with successive approximation or with computers.

My point? The quantum field theory of the strong nuclear force allows us to make many predictions. But still, many very basic natural phenomena for which the strong nuclear force is responsible cannot currently be predicted using any known method.

Stay Tuned. It’s going to get worse.

Continued here

POSTED BY Matt Strassler

ON October 7, 2013

This year’s Nobel Prize, presumably to be given for the prediction of the particle known today as the “Higgs boson”, will be awarded next week.  But in the meantime, the American Physical Society has made a large number of awards.  A few of them are to people whose work I know about, so I thought I’d tell you just a little about them.

The J. J. Sakurai prize went to Professors Zvi Bern, Lance Dixon and David Kosower, for the work that I have already described on this website here and here.  Dixon, a wide-ranging expert in particle physics, quantum field theory and string theory, was a young professor at the Stanford Linear Accelerator Center when I was a Stanford graduate student.  He taught an excellent course on string theory, and provided a lot of scientific advice and insight outside the classroom.  Bern and Kosower were young scientists using string theory to learn about how to do computations in quantum field theory, and their surprising results formed the starting point for my Ph. D. thesis (which has their names in its title.)   The range of their work is hard to describe in a paragraph, but let’s just say that no one is surprised that they were awarded a prize of this magnitude.

The Dannie Heineman Prize for Mathematical Physics was awarded to my former colleague Greg Moore, a professor at Rutgers University.  “For eminent contributions to mathematical physics with a wide influence in many fields, ranging from string theory to supersymmetric gauge theory, conformal field theory, condensed matter physics and four-manifold theory.”  Allow me to translate:

  • string theory: you’ve heard about it, probably
  • supersymmetric gauge theory: quantum field theories with supersymmetry, which I’ll be writing about soon
  • conformal field theory: basically, quantum field theories that are scale invariant
  • condensed matter physics: the study of solids and liquids and their mechanical and electrical properties, and lots of other things too, in which quantum field theory is sometimes a useful tool
  • four-manifold theory: the mathematics of spaces which have four spatial dimensions, or three spatial dimensions and one time dimension.  These spaces are very interesting to mathematicians, and also, they’re interesting because we live in one.

This is not the complete range of Moore’s work by any means.  Unfortunately this website doesn’t yet have pages that can put his work in proper context, but perhaps I’ll return to it later.  But again, no surprise here to see Moore’s name on this award.

The Tom W. Bonner Prize in Nuclear Physics was awarded to experimental physicist William A. Zajc, currently chairman of the Columbia University physics department.  Zajc has been heavily involved in one of the most surprising discoveries of the past fifteen years: that a hot dense fireball of quarks, anti-quarks and gluons (produced in the collision of two relatively large atomic nuclei) behaves in a very unexpected way, more like a very low-viscosity liquid than like a gas.  I’ve known him partly because of his interest in the attempts to apply string theory to certain quantum field theories that are perhaps relevant in the modeling of this novel physical system… something I’ll also probably be writing about in the relatively near future.

And the W.K.H. Panofsky Prize in Experimental Particle Physics went to Kam-Biu Luk (Berkeley) and Yifang Wang (Director of China’s Institute of High Energy Physics): For their leadership of the Daya Bay experiment, which produced the first definitive measurement of the theta-13 angle of the neutrino mixing matrix.  For the same experiment, the Henry Primakoff Award for Early-Career Particle Physics went to Daniel A. Dwyer of Lawrence Berkeley Laboratory.  I wrote about the Daya Bay measurement here; their result is one of the major measurements in particle physics in the past few years.

I wish I knew more about the other recipients outside my areas of expertise, but other bloggers will have to cover those stories.

Anyway, no surprises, but some very deserving scientists.  Let’s see if next Tuesday brings the same result.

POSTED BY Matt Strassler

ON October 3, 2013
