Category Archives: Higgs
One of the concepts that’s playing a big role in contemporary discussions of the laws of nature is the notion of “vacua”, the plural of the word “vacuum”. I’ve just completed an article about what vacua are, and what it means for a universe to have multiple vacua, or for a theory that purports to describe a universe to predict that it has multiple vacua. In case you don’t want to plunge right in to that article, here’s a brief summary of why this is interesting and important.
Outside of physics, most people think of a vacuum as being the absence of air. For physicists thinking about the laws of nature, “vacuum” means space that has been emptied of everything — at least, emptied of everything that can actually be removed. That certainly means removing all particles from it. But even though vacuum implies emptiness, it turns out that empty space isn’t really that empty. There are always fields in that space, fields like the electric and magnetic fields, the electron field, the quark field, the Higgs field. And those fields are always up to something.
First, all of the fields are subject to “quantum fluctuations” — a sort of unstoppable jitter that nothing in our quantum world can avoid. [Sometimes these fluctuations are referred to as “virtual particles”; but despite the name, those aren’t particles. Real particles are well-behaved, long-lived ripples in those fields; fluctuations are much more random.] These fluctuations are always present, in any form of empty space.
Second, and more important for our current discussion, some of the fields may have average values that aren’t zero. [In our own familiar form of empty space, the Higgs field has a non-zero average value, one that causes many of the known elementary particles to acquire a mass (i.e. a rest mass).] And it’s because of this that the notion of vacuum can have a plural: forms of empty space can differ, even for a single universe, if the fields of that universe can take different possible average values in empty space. If a given universe can have more than one form of empty space, we say that “it has more than one vacuum”.
There are reasons to think our own universe might have more than one form of vacuum — more than just the one we’re familiar with. It is possible that the Standard Model (the equations used to describe all of the known elementary particles, and all the known forces except gravity) is a good description of our world, even up to much higher energies than our current particle physics experiments can probe. Physicists can predict, using those equations, how many forms of empty space our world would have. And their calculations show that our world would have (at least) two vacua: the one we know, along with a second, exotic one, with a much larger average value for the Higgs field. (Remember, this prediction is based on the assumption that the Standard Model’s equations apply in the first place.) In that exotic form of empty space, an electron would have a much larger mass than the electrons we know and love (and need!).
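The idea of multiple vacua, meaning several distinct minima of the energy as a function of a field’s average value, can be illustrated with a toy calculation. The sketch below uses a quartic potential with made-up coefficients (purely illustrative, not the actual Standard Model potential) and finds its local minima numerically; in this toy example the second vacuum sits at a larger field value and at lower energy, loosely echoing the exotic second vacuum described above.

```python
import numpy as np

# Toy potential V(phi) = phi^2 - 3*phi^3 + phi^4 (hypothetical coefficients).
# It has two local minima: one at phi = 0 and one at phi = 2.
phi = np.linspace(-1.0, 3.0, 40001)
V = phi**2 - 3.0*phi**3 + phi**4

# A grid point is a local minimum if V is lower there than at both neighbors.
is_min = (V[1:-1] < V[:-2]) & (V[1:-1] < V[2:])
for p, v in zip(phi[1:-1][is_min], V[1:-1][is_min]):
    print(f"vacuum at phi ~ {p:.2f}, V = {v:.2f}")
```

Running this prints two vacua: one near phi = 0 (our analogue of the familiar vacuum) and one at phi = 2 with lower energy, the analogue of a second, exotic form of empty space.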
The future of the universe, and our understanding of how the universe came to be, might crucially depend on this second, exotic vacuum. Today’s article sets the stage for future articles, which will provide an explanation of why the vacua of the universe play such a central role in our understanding of nature at its most elemental.
For those who haven’t heard: Professor Gerry Guralnik died. Here’s the New York Times obituary, which contains a few physics imperfections (though the most serious mistake in an earlier version was corrected, thankfully), but hopefully avoids any errors about Guralnik’s life. Here’s another press release, from Brown University.
Guralnik, with Tom Kibble and Carl Hagen, wrote one of the four 1964 papers which represent the birth of the idea of the “Higgs” field, now understood as the source of mass for the known elementary particles — an idea that was confirmed by the discovery of a type of “Higgs” particle in 2012 at the Large Hadron Collider. (I find it sad that the obituary is sullied with a headline that contains the words “God Particle” — a term that no physicist involved in the relevant research ever used, and which was invented in the 1990s, not as science or even as religion, but for $$$… by someone who was trying to sell his book.) The other three papers (the first by Robert Brout and Francois Englert, the second and third by Peter Higgs) were rewarded with a Nobel Prize in 2013; it was given just to Englert and Higgs, Brout having died too early, in 2011. Though Guralnik, Hagen and Kibble won many other prizes, they were not awarded a Nobel for their work, a decision that will remain forever controversial.
But at least Guralnik lived long enough to learn, as Brout sadly did not, that his ideas were realized in nature, and to see the consequences of these ideas in real data. In the end, that’s the real prize, and one that no human can award.
It’s been a quiet couple of weeks on the blog, something which often indicates that it’s been anything but quiet off the blog. Such was indeed the case recently.
For one thing, I was in Canada last week. I had been kindly invited to give two talks at the University of Western Ontario, one of Canada’s leading universities for science. One of the talks, the annual Nerenberg lecture (in memory of Professor Morton Nerenberg) is intended for the general public, so I presented a lecture on The 2013 Nobel Prize: The 50-Year Quest for the Higgs Boson. While I have given a talk on this subject before (an older version is on-line) I felt some revisions would be useful. The other talk was for members of the applied mathematics department, which hosts a diverse group of academics. Unlike a typical colloquium for a physics department, where I can assume that the vast majority of the audience has had university-level quantum mechanics, this talk required me to adjust my presentation for a much broader scientific audience than usual. I followed, to an extent, my website’s series on Fields and Particles and on How the Higgs Field Works, both of which require first-year university math and physics, but nothing more. Preparation of the two talks, along with travel, occupied most of my free time over recent days, so I haven’t been able to write, or even respond to readers’ questions, unfortunately.
I also dropped in at Canada’s Perimeter Institute on Friday, when it was hosting a small but intense one-day workshop on the recent potentially huge discovery by the BICEP2 experiment of what appears to be a signature of gravitational waves from the early universe. This offered me an opportunity to hear some of the world’s leading experts talking about the recent measurement and its potential implications (if it is correct, and if the simplest interpretation of it is correct). Alternative explanations of the experiment’s results were also mentioned. Also, there was a lot of discussion about the future, both short-term and long-term. Quite a few measurements will be made in the next six to twelve months that will shed further light on the BICEP2 measurement, and on its moderate conflict with the simplest interpretation of certain data from the Planck satellite. Further down the line, a very important step will be to reduce the amount of B-mode polarization that arises from the gravitational lensing of E-mode polarization, a method called “delensing”; this will make it easier to observe the B-mode polarization from gravitational waves (which is what we’re interested in) even at rather small angular scales (high “multipoles”). Looking much further ahead, we will be hearing a lot of discussion about huge new space-based gravitational wave detectors such as BBO [Big Bang Observatory]. (Actually the individual detectors are quite small, but they are spaced at great distances.) These can potentially measure gravitational waves whose wavelength is comparable to the size of the Earth’s orbit or even larger, which is still much smaller than those apparently detected by BICEP2 in the polarization of the cosmic microwave background. Anyway, assuming what BICEP2 has really done is discover gravitational waves from the very early universe, this subject now has a very exciting future, and there is lots to do, to discuss and to plan.
I wish I could promise to provide a blog post summarizing carefully what I learned at the conference. But unfortunately, that brings me to the other reason blogging has been slow. While I was away, I learned that the funding situation for science in the United States is even worse than I expected. Suffice it to say that this presents a crisis that will interfere with blogging work, at least for a while.
Familiar throughout our international culture, the “Big Bang” is well-known as the theory that scientists use to describe and explain the history of the universe. But the theory is not a single conceptual unit, and there are parts that are more reliable than others.
It’s important to understand that the theory — a set of equations describing how the universe (more precisely, the observable patch of our universe, which may be a tiny fraction of the universe) changes over time, and leading to sometimes precise predictions for what should, if the theory is right, be observed by humans in the sky — actually consists of different periods, some of which are far more speculative than others. In the more speculative early periods, we must use equations in which we have limited confidence at best; moreover, data relevant to these periods, from observations of the cosmos and from particle physics experiments, is slim to none. In more recent periods, our confidence is very, very strong.
In my “History of the Universe” article [see also my related articles on cosmic inflation, on the Hot Big Bang, and on the pre-inflation period; also a comment that the Big Bang is an expansion, not an explosion!], the following figure appears, though without the colored zones, which I’ve added for this post. The colored zones emphasize what we know, what we suspect, and what we don’t know at all.
Notice that in the figure, I don’t measure time from the start of the universe. That’s because I don’t know how or when the universe started (and in particular, the notion that it started from a singularity, or worse, an exploding “cosmic egg”, is simply an over-extrapolation to the past and a misunderstanding of what the theory actually says.) Instead I measure time from the start of the Hot Big Bang in the observable patch of the universe. I also don’t even know precisely when the Hot Big Bang started, but the uncertainty on that initial time (relative to other events) is less than one second — so all the times I’ll mention, which are much longer than that, aren’t affected by this uncertainty.
I’ll now take you through the different confidence zones of the Big Bang, from the latest to the earliest, as indicated in the figure above.
I have to admit that this post is really only important for experimentalists interested in searching for non-Standard Model decays of the Higgs particle. I try to keep these technical posts very rare, but this time I do need to slightly amend a technical point that I made in an article a few weeks ago.
In my last post, I expressed the view that a particle accelerator with proton-proton collisions of (roughly) 100 TeV of energy, significantly more powerful than the currently operational Large Hadron Collider [LHC] that helped scientists discover the Higgs particle, is an obvious and important next step in our process of learning about the elementary workings of nature. And I described how we don’t yet know whether it will be an exploratory machine or a machine with a clear scientific target; it will depend on what the LHC does or does not discover over the coming few years.
What will it mean, for the 100 TeV collider project and more generally, if the LHC, having made possible the discovery of the Higgs particle, provides us with no more clues? Specifically, over the next few years, hundreds of tests of the Standard Model (the equations that govern the known particles and forces) will be carried out in measurements made by the ATLAS, CMS and LHCb experiments at the LHC. Suppose that, as it has so far, the Standard Model passes every test that the experiments carry out? In particular, suppose the Higgs particle discovered in 2012 appears, after a few more years of intensive study, to be, as far as the LHC can reveal, a Standard Model Higgs — the simplest possible type of Higgs particle?
Before we go any further, let’s keep in mind that we already know that the Standard Model isn’t all there is to nature. The Standard Model does not provide a consistent theory of gravity, nor does it explain neutrino masses, dark matter or “dark energy” (also known as the cosmological constant). Moreover, many of its features are just things we have to accept without explanation, such as the strengths of the forces, the existence of “three generations” (i.e., that there are two heavier cousins of the electron, two for the up quark and two for the down quark), the values of the masses of the various particles, etc. However, even though the Standard Model has its limitations, it is possible that everything that can actually be measured at the LHC — which cannot measure neutrino masses or directly observe dark matter or dark energy — will be well-described by the Standard Model. What if this is the case?
Michelson and Morley, and What They Discovered
In science, giving strong evidence that something isn’t there can be as important as discovering something that is there — and it’s often harder to do, because you have to thoroughly exclude all possibilities. [It’s very hard to show that your lost keys are nowhere in the house — you have to convince yourself that you looked everywhere.] A famous example is the case of Albert Michelson, in his two experiments (one in 1881, a second with Edward Morley in 1887) trying to detect the “ether wind”.
Light had been shown to be a wave in the 1800s; and like all waves known at the time, it was assumed to be a wave in something material, just as sound waves are waves in air, and ocean waves are waves in water. This material was termed the “luminiferous ether”. As we can detect our motion through air or through water in various ways, it seemed that it should be possible to detect our motion through the ether, specifically by looking for the possibility that light traveling in different directions travels at slightly different speeds. This is what Michelson and Morley were trying to do: detect the movement of the Earth through the luminiferous ether.
Both of Michelson’s measurements failed to detect any ether wind, and did so expertly and convincingly. And for the convincing method that he invented — an experimental device called an interferometer, which had many other uses too — Michelson won the Nobel Prize in 1907. Meanwhile the failure to detect the ether drove both FitzGerald and Lorentz to consider radical new ideas about how matter might be deformed as it moves through the ether. Although these ideas weren’t right, they were important steps that Einstein was able to re-purpose, even more radically, in his 1905 equations of special relativity.
In Michelson’s case, the failure to discover the ether was itself a discovery, recognized only in retrospect: a discovery that the ether did not exist. (Or, if you’d like to say that it does exist, which some people do, then what was discovered is that the ether is utterly unlike any normal material substance in which waves are observed; no matter how fast or in what direction you are moving relative to me, both of us are at rest relative to the ether.) So one must not be too quick to assume that a lack of discovery is actually a step backwards; it may actually be a huge step forward.
Epicycles or a Revolution?
There were various attempts to make sense of Michelson and Morley’s experiment. Some interpretations involved tweaks of the notion of the ether. Tweaks of this type, in which some original idea (here, the ether) is retained, but adjusted somehow to explain the data, are often referred to as “epicycles” by scientists. (This is analogous to the way an epicycle was used by Ptolemy to explain the complex motions of the planets in the sky, in order to retain an earth-centered universe; the sun-centered solar system requires no such epicycles.) A tweak of this sort could have been the right direction to explain Michelson and Morley’s data, but as it turned out, it was not. Instead, the non-detection of the ether wind required something more dramatic — for it turned out that waves of light, though at first glance very similar to other types of waves, were in fact extraordinarily different. There simply was no ether wind for Michelson and Morley to detect.
If the LHC discovers nothing beyond the Standard Model, we will face what I see as a similar mystery. As I explained here, the Standard Model, with no other particles added to it, is a consistent but extraordinarily “unnatural” (i.e. extremely non-generic) example of a quantum field theory. This is a big deal. Just as nineteenth-century physicists deeply understood both the theory of waves and many specific examples of waves in nature and had excellent reasons to expect a detectable ether, twenty-first century physicists understand quantum field theory and naturalness both from the theoretical point of view and from many examples in nature, and have very good reasons to expect particle physics to be described by a natural theory. (Our examples come both from condensed matter physics [e.g. metals, magnets, fluids, etc.] and from particle physics [e.g. the physics of hadrons].) Extremely unnatural systems — that is, physical systems described by quantum field theories that are highly non-generic — simply have not previously turned up in nature… which is just as we would expect from our theoretical understanding.
[Experts: As I emphasized in my Santa Barbara talk last week, appealing to anthropic arguments about the hierarchy between gravity and the other forces does not allow you to escape from the naturalness problem.]
So what might it mean if an unnatural quantum field theory describes all of the measurements at the LHC? It may mean that our understanding of particle physics requires an epicyclic change — a tweak. The implications of a tweak would potentially be minor. A tweak might only require us to keep doing what we’re doing, exploring in the same direction but a little further, working a little harder — i.e. to keep colliding protons together, but go up in collision energy a bit more, from the LHC to the 100 TeV collider. For instance, perhaps the Standard Model is supplemented by additional particles that, rather than having masses that put them within reach of the LHC, as would inevitably be the case in a natural extension of the Standard Model (here’s an example), are just a little bit heavier than expected. In this case the world would be somewhat unnatural, but not too much, perhaps through some relatively minor accident of nature; and a 100 TeV collider would have enough energy per collision to discover and reveal the nature of these particles.
Or perhaps a tweak is entirely the wrong idea, and instead our understanding is fundamentally amiss. Perhaps another Einstein will be needed to radically reshape the way we think about what we know. A dramatic rethink is both more exciting and more disturbing. It was an intellectual challenge for 19th century physicists to imagine, from the result of the Michelson-Morley experiment, that key clues to its explanation would be found in seeking violations of Newton’s equations for how energy and momentum depend on velocity. (The first experiments on this issue were carried out in 1901, but definitive experiments took another 15 years.) It was an even greater challenge to envision that the already-known unexplained shift in the orbit of Mercury would also be related to the Michelson-Morley (non)-discovery, as Einstein, in trying to adjust Newton’s gravity to make it consistent with the theory of special relativity, showed in 1915.
My point is that the experiments that were needed to properly interpret Michelson-Morley’s result
- did not involve trying to detect motion through the ether,
- did not involve building even more powerful and accurate interferometers,
- and were not immediately obvious to the practitioners in 1888.
This should give us pause. We might, if we continue as we are, be heading in the wrong direction.
Difficult as it is to do, we have to take seriously the possibility that if (and remember this is still a very big “if”) the LHC finds only what is predicted by the Standard Model, the reason may involve a significant reorganization of our knowledge, perhaps even as great as relativity’s re-making of our concepts of space and time. Were that the case, it is possible that higher-energy colliders would tell us nothing, and give us no clues at all. An exploratory 100 TeV collider is not guaranteed to reveal secrets of nature, any more than a better version of Michelson-Morley’s interferometer would have been guaranteed to do so. It may be that a completely different direction of exploration, including directions that currently would seem silly or pointless, will be necessary.
This is not to say that a 100 TeV collider isn’t needed! It might be that all we need is a tweak of our current understanding, and then such a machine is exactly what we need, and will be the only way to resolve the current mysteries. Or it might be that the 100 TeV machine is just what we need to learn something revolutionary. But we also need to be looking for other lines of investigation, perhaps ones that today would sound unrelated to particle physics, or even unrelated to any known fundamental question about nature.
Let me provide one example from recent history — one which did not lead to a discovery, but still illustrates that this is not all about 19th century history.
One of the great contributions to science of Nima Arkani-Hamed, Savas Dimopoulos and Gia Dvali was to observe (in a 1998 paper I’ll refer to as ADD, after the authors’ initials) that no one had ever excluded the possibility that we, and all the particles from which we’re made, can move around freely in three spatial dimensions, but are stuck (as it were) as though to the corner edge of a thin rod — a rod as much as one millimeter wide, into which only gravitational fields (but not, for example, electric fields or magnetic fields) may penetrate. Moreover, they emphasized that the presence of these extra dimensions might explain why gravity is so much weaker than the other known forces.
Given the incredible number of experiments over the past two centuries that have probed distances vastly smaller than a millimeter, the claim that there could exist millimeter-sized unknown dimensions was amazing, and came as a tremendous shock — certainly to me. At first, I simply didn’t believe that the ADD paper could be right. But it was.
One of the most important immediate effects of the ADD paper was to generate a strong motivation for a new class of experiments that could be done, rather inexpensively, on the top of a table. If the world were as they imagined it might be, then Newton’s (and Einstein’s) law for gravity, which states that the force between two stationary objects depends on the distance r between them as 1/r², would increase faster than this at distances shorter than the width of the rod in Figure 1. This is illustrated in Figure 2.
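The change in the force law that these tabletop experiments were looking for can be sketched numerically. The snippet below is an illustrative toy, not ADD’s actual calculation: it assumes a crude matching of the two regimes at r = R, where with n extra dimensions of size R the gravitational force follows the familiar 1/r² for r much larger than R but steepens to 1/r^(2+n) for r much smaller than R.

```python
# Toy model (illustrative matching at r = R; not ADD's actual computation):
# gravity behaves as 1/r^2 outside the extra dimensions of size R,
# and as 1/(R^n * r^(2+n)) inside them, for n extra dimensions.
def gravity_force(r, R=1.0, n=2):
    """Force between two masses, in units where G*m1*m2 = 1."""
    if r >= R:
        return 1.0 / r**2              # ordinary Newton/Einstein behavior
    return 1.0 / (R**n * r**(2 + n))   # steeper growth at short distance

# At half the extra-dimension size (r = R/2, with n = 2), the force is
# 2^n = 4 times stronger than the pure 1/r^2 extrapolation would predict:
r = 0.5
print(gravity_force(r) / (1.0 / r**2))  # -> 4.0
```

This is the kind of short-distance enhancement the tabletop measurements searched for and did not find, pushing any such extra dimensions below a few hundredths of a millimeter.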
These experiments are not easy — gravity is very, very weak compared to electrical forces, and lots of electrical effects can show up at very short distances and have to be cleverly avoided. But some of the best experimentalists in the world figured out how to do it (see here and here). After the experiments were done, Newton/Einstein’s law was verified down to a few hundredths of a millimeter. If we live on the corner of a rod, as in Figure 1, it’s much, much smaller than a millimeter in width.
But it could have been true. And if it had, it might not have been discovered by a huge particle accelerator. It might have been discovered in these small inexpensive experiments that could have been performed years earlier. The experiments weren’t carried out earlier mainly because no one had pointed out quite how important they could be.
Ok Fine; What Other Experiments Should We Do?
So what are the non-obvious experiments we should be doing now or in the near future? Well, if I had a really good suggestion for a new class of experiments, I would tell you — or rather, I would write about it in a scientific paper. (Actually, I do know of an important class of measurements, and I have written a scientific paper about them; but these are measurements to be done at the LHC, and don’t involve an entirely new experiment.) Although I’m thinking about these things, I do not yet have any good ideas. Until I do, or someone else does, this is all just talk — and talk does not impress physicists.
Indeed, you might object that my remarks in this post have been almost without content, and possibly without merit. I agree with that objection.
Still, I have some reasons for making these points. In part, I want to highlight, for a wide audience, the possible historic importance of what might now be happening in particle physics. And I especially want to draw the attention of young people. There have been experts in my field who have written that non-discoveries at the LHC constitute a “nightmare scenario” for particle physics… that there might be nothing for particle physicists to do for a long time. But I want to point out that on the contrary, not only may it not be a nightmare, it might actually represent an extraordinary opportunity. Not discovering the ether opened people’s minds, and eventually opened the door for Einstein to walk through. And if the LHC shows us that particle physics is not described by a natural quantum field theory, it may, similarly, open the door for a young person to show us that our understanding of quantum field theory and naturalness, while as intelligent and sensible and precise as the 19th century understanding of waves, does not apply unaltered to particle physics, and must be significantly revised.
Of course the LHC is still a young machine, and it may still permit additional major discoveries, rendering everything I’ve said here moot. But young people entering the field, or soon to enter it, should not assume that the experts necessarily understand where the field’s future lies. Like FitzGerald and Lorentz, even the most brilliant and creative among us might be suffering from our own hard-won and well-established assumptions, and we might soon need the vision of a brilliant young genius — perhaps a theorist with a clever set of equations, or perhaps an experimentalist with a clever new question and a clever measurement to answer it — to set us straight, and put us onto the right path.
During the gap between the first run of the Large Hadron Collider [LHC], which ended in 2012 and included the discovery of the Higgs particle (and the exclusion of quite a few other things), and its second run, which starts a year from now, there’s been a lot of talk about the future direction for particle physics. By far the most prominent option, both in China and in Europe, involves the long-term possibility of a (roughly) 100 TeV proton-proton collider — that is, a particle accelerator like the LHC, but with 5 to 15 times more energy per collision.
Do we need such a machine?