Category Archives: The Scientific Process

BICEP2's Cosmic Polarization: Published, Reduced in Strength

I’m busy dealing with the challenges of being in a quantum superposition, but you’ve probably heard: BICEP2’s paper is now published, with some of its implicit and explicit claims watered down after external and internal review. The bottom line is as I discussed a few weeks ago when I described the criticism of the interpretation of their work (see also here).

  • There is relatively little doubt (but it still requires confirmation by another experiment!) that BICEP2 has observed interesting polarization of the cosmic microwave background (specifically: B-mode polarization that is not from gravitational lensing of E-mode polarization; see here for more about what BICEP2 measured).
  • But no one, including BICEP2, can say for sure whether it is due to ancient gravitational waves from cosmic inflation, or to polarized dust in the galaxy, or to a mix of the two; and the BICEP2 folks are explicitly less certain about this, in the current version of their paper, than in their original implicit and explicit statements.

And we won’t know whether it’s all just dust until there’s more data, which should start to show up in coming months, from BICEP2 itself, from Planck, and from other sources. However, be warned: the measurements of the very faint dust that might be present in BICEP2’s region of the sky are extremely difficult, and the new data might not be immediately convincing. To come to a consensus might take a few years rather than a few months.  Be patient; the process of science, being self-correcting, will eventually get it straight, but not if you rush it.

Sorry I haven’t time to say more right now.

The BICEP2 Dust-Up Continues

The controversy continues to develop over the interpretation of the results from BICEP2, the experiment that detected “B-mode” polarization in the sky, and was hailed as potential evidence of gravitational waves from the early universe, presumably generated during cosmic inflation. [Here’s some background info about the measurement].

Two papers this week (here and here) gave more detailed voice to the opinion that the BICEP2 team may have systematically underestimated the possible impact of polarized dust on their measurement.  These papers raise (but cannot settle) the question as to whether the B-mode polarization seen by BICEP2 might be entirely due to this dust — dust which is found throughout our galaxy, but is rather tenuous in the direction of the sky in which BICEP2 was looking.

I’m not going to drag my readers into the mud of the current discussion, both because it’s very technical and because it’s rather vague and highly speculative. Even the authors of the two papers admit they leave the situation completely unsettled.  But to summarize, the main purpose and effect of these papers seems to be to put the dust question squarely on the table, without settling it.

Dark Matter Debates

Last week I attended the Eighth Harvard-Smithsonian Conference on Theoretical Astrophysics, entitled “Debates on the Nature of Dark Matter”, which brought together leading figures in astronomy, astrophysics, cosmology and particle physics. Although there wasn’t much that was particularly new, it was a very useful conference for taking stock of where we are. I thought I’d bring you a few selected highlights that particularly caught my eye.

Will BICEP2 Lose Some of Its Muscle?

A scientific controversy has been brewing concerning the results of BICEP2, the experiment that measured polarized microwaves coming from a patch of the sky, and whose measurement has been widely interpreted as a discovery of gravitational waves, probably from cosmic inflation. (Here’s my post about the discovery, here’s some background so you can understand it more easily. Here are some of my articles about the early universe.)  On the day of the announcement, some elements of the media hailed it as a great discovery without reminding readers of something very important: it’s provisional!

From the very beginning of the BICEP2 story, I’ve been reminding you (here and here) that it is very common for claims of great scientific discoveries to disappear after further scrutiny, and that a declaration of victory by the scientific community comes much more slowly and deliberately than it often does in the press. Every scientist knows that while science, as a collective process viewed over time, very rarely makes mistakes, individual experiments and experimenters are often wrong.  (To its credit, the New York Times article contained some cautionary statements in its prose, and also quoted scientists making cautionary statements.  Other media outlets forgot.)

Doing forefront science is extremely difficult, because it requires near-perfection. A single unfortunate mistake in a very complex experiment can create an effect that appears similar to what the experimenters were looking for, but is a fake. Scientists are all well aware of this; we’ve all seen examples, some of which took years to diagnose. And so, as with any claim of a big discovery, you should view the BICEP2 result as provisional, until checked thoroughly by outside experts, and until confirmed by other experiments.

What could go wrong with BICEP2?  On purely logical grounds, the BICEP2 result, interpreted as evidence for cosmic inflation, could be problematic if any one of the following four things is true:

1) The experiment itself has a technical problem, and the polarized microwaves they observe actually don’t exist.

2) The polarized microwaves are real, but they aren’t coming from ancient gravitational waves; they are instead coming from dust (very small grains of material) that is distributed around the galaxy between the stars, and that can radiate polarized microwaves.

3) The polarization really is coming from the cosmic microwave background (the leftover glow from the Big Bang), but it is not coming from gravitational waves; instead it comes from some other unknown source.

4) The polarization is really coming from gravitational waves, but these waves are not due to cosmic inflation but to some other source in the early universe.

The current controversy concerns point 2.

What if the Large Hadron Collider Finds Nothing Else?

In my last post, I expressed the view that a particle accelerator with proton-proton collisions of (roughly) 100 TeV of energy, significantly more powerful than the currently operational Large Hadron Collider [LHC] that helped scientists discover the Higgs particle, is an obvious and important next step in our process of learning about the elementary workings of nature. And I described how we don’t yet know whether it will be an exploratory machine or a machine with a clear scientific target; it will depend on what the LHC does or does not discover over the coming few years.

What will it mean, for the 100 TeV collider project and more generally, if the LHC, having made possible the discovery of the Higgs particle, provides us with no more clues?  Specifically, over the next few years, hundreds of tests of the Standard Model (the equations that govern the known particles and forces) will be carried out in measurements made by the ATLAS, CMS and LHCb experiments at the LHC. Suppose that, as it has so far, the Standard Model passes every test that the experiments carry out? In particular, suppose the Higgs particle discovered in 2012 appears, after a few more years of intensive study, to be, as far as the LHC can reveal, a Standard Model Higgs — the simplest possible type of Higgs particle?

Before we go any further, let’s keep in mind that we already know that the Standard Model isn’t all there is to nature. The Standard Model does not provide a consistent theory of gravity, nor does it explain neutrino masses, dark matter or “dark energy” (also known as the cosmological constant). Moreover, many of its features are just things we have to accept without explanation, such as the strengths of the forces, the existence of “three generations” (i.e., that there are two heavier cousins of the electron, two for the up quark and two for the down quark), the values of the masses of the various particles, etc. However, even though the Standard Model has its limitations, it is possible that everything that can actually be measured at the LHC — which cannot measure neutrino masses or directly observe dark matter or dark energy — will be well-described by the Standard Model. What if this is the case?

Michelson and Morley, and What They Discovered

In science, giving strong evidence that something isn’t there can be as important as discovering something that is there — and it’s often harder to do, because you have to thoroughly exclude all possibilities. [It’s very hard to show that your lost keys are nowhere in the house — you have to convince yourself that you looked everywhere.] A famous example is the case of Albert Michelson, in his two experiments (one in 1881, a second with Edward Morley in 1887) trying to detect the “ether wind”.

Light had been shown to be a wave in the 1800s; and like all waves known at the time, it was assumed to be a wave in something material, just as sound waves are waves in air, and ocean waves are waves in water. This material was termed the “luminiferous ether”. As we can detect our motion through air or through water in various ways, it seemed that it should be possible to detect our motion through the ether, specifically by looking for the possibility that light traveling in different directions travels at slightly different speeds.  This is what Michelson and Morley were trying to do: detect the movement of the Earth through the luminiferous ether.
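
For a rough sense of the numbers involved (my own back-of-the-envelope sketch, not something from the original argument): if the Earth moves through the ether at speed v, then light making a round trip of length L along the direction of motion should take slightly longer than light making the same round trip across it, by approximately

\[
\Delta t \;\approx\; \frac{L}{c}\,\frac{v^2}{c^2} .
\]

With v the Earth’s orbital speed, about 30 km/s, the ratio v²/c² is only about 10⁻⁸, which is why Michelson needed an interferometer, an instrument that compares the two travel times through shifts in interference fringes, to have any hope of detecting the effect.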

Both of Michelson’s measurements failed to detect any ether wind, and did so expertly and convincingly. And for the convincing method that he invented — an experimental device called an interferometer, which had many other uses too — Michelson won the Nobel Prize in 1907. Meanwhile the failure to detect the ether drove both FitzGerald and Lorentz to consider radical new ideas about how matter might be deformed as it moves through the ether. Although these ideas weren’t right, they were important steps that Einstein was able to re-purpose, even more radically, in his 1905 equations of special relativity.

In Michelson’s case, the failure to discover the ether was itself a discovery, recognized only in retrospect: a discovery that the ether did not exist. (Or, if you’d like to say that it does exist, which some people do, then what was discovered is that the ether is utterly unlike any normal material substance in which waves are observed; no matter how fast or in what direction you are moving relative to me, both of us are at rest relative to the ether.) So one must not be too quick to assume that a lack of discovery is actually a step backwards; it may actually be a huge step forward.

Epicycles or a Revolution?

There were various attempts to make sense of Michelson and Morley’s experiment.   Some interpretations involved  tweaks of the notion of the ether.  Tweaks of this type, in which some original idea (here, the ether) is retained, but adjusted somehow to explain the data, are often referred to as “epicycles” by scientists.   (This is analogous to the way an epicycle was used by Ptolemy to explain the complex motions of the planets in the sky, in order to retain an earth-centered universe; the sun-centered solar system requires no such epicycles.) A tweak of this sort could have been the right direction to explain Michelson and Morley’s data, but as it turned out, it was not. Instead, the non-detection of the ether wind required something more dramatic — for it turned out that waves of light, though at first glance very similar to other types of waves, were in fact extraordinarily different. There simply was no ether wind for Michelson and Morley to detect.

If the LHC discovers nothing beyond the Standard Model, we will face what I see as a similar mystery.  As I explained here, the Standard Model, with no other particles added to it, is a consistent but extraordinarily “unnatural” (i.e. extremely non-generic) example of a quantum field theory.  This is a big deal. Just as nineteenth-century physicists deeply understood both the theory of waves and many specific examples of waves in nature  and had excellent reasons to expect a detectable ether, twenty-first century physicists understand quantum field theory and naturalness both from the theoretical point of view and from many examples in nature, and have very good reasons to expect particle physics to be described by a natural theory.  (Our examples come both from condensed matter physics [e.g. metals, magnets, fluids, etc.] and from particle physics [e.g. the physics of hadrons].) Extremely unnatural systems — that is, physical systems described by quantum field theories that are highly non-generic — simply have not previously turned up in nature… which is just as we would expect from our theoretical understanding.

[Experts: As I emphasized in my Santa Barbara talk last week, appealing to anthropic arguments about the hierarchy between gravity and the other forces does not allow you to escape from the naturalness problem.]

So what might it mean if an unnatural quantum field theory describes all of the measurements at the LHC? It may mean that our understanding of particle physics requires an epicyclic change — a tweak.  The implications of a tweak would potentially be minor. A tweak might only require us to keep doing what we’re doing, exploring in the same direction but a little further, working a little harder — i.e. to keep colliding protons together, but go up in collision energy a bit more, from the LHC to the 100 TeV collider. For instance, perhaps the Standard Model is supplemented by additional particles that, rather than having masses that put them within reach of the LHC, as would inevitably be the case in a natural extension of the Standard Model (here’s an example), are just a little bit heavier than expected. In this case the world would be somewhat unnatural, but not too much, perhaps through some relatively minor accident of nature; and a 100 TeV collider would have enough energy per collision to discover and reveal the nature of these particles.

Or perhaps a tweak is entirely the wrong idea, and instead our understanding is fundamentally amiss. Perhaps another Einstein will be needed to radically reshape the way we think about what we know.  A dramatic rethink is both more exciting and more disturbing. It was an intellectual challenge for 19th century physicists to imagine, from the result of the Michelson-Morley experiment, that key clues to its explanation would be found in seeking violations of Newton’s equations for how energy and momentum depend on velocity. (The first experiments on this issue were carried out in 1901, but definitive experiments took another 15 years.) It was an even greater challenge to envision that the already-known unexplained shift in the orbit of Mercury would also be related to the Michelson-Morley (non)-discovery, as Einstein, in trying to adjust Newton’s gravity to make it consistent with the theory of special relativity, showed in 1915.

My point is that the experiments that were needed to properly interpret Michelson-Morley’s result

  • did not involve trying to detect motion through the ether,
  • did not involve building even more powerful and accurate interferometers,
  • and were not immediately obvious to the practitioners in 1888.

This should give us pause. We might, if we continue as we are, be heading in the wrong direction.

Difficult as it is to do, we have to take seriously the possibility that if (and remember this is still a very big “if”) the LHC finds only what is predicted by the Standard Model, the reason may involve a significant reorganization of our knowledge, perhaps even as great as relativity’s re-making of our concepts of space and time. Were that the case, it is possible that higher-energy colliders would tell us nothing, and give us no clues at all. An exploratory 100 TeV collider is not guaranteed to reveal secrets of nature, any more than a better version of Michelson-Morley’s interferometer would have been guaranteed to do so. It may be that a completely different direction of exploration, including directions that currently would seem silly or pointless, will be necessary.

This is not to say that a 100 TeV collider isn’t needed!  It might be that all we need is a tweak of our current understanding, and then such a machine is exactly what we need, and will be the only way to resolve the current mysteries.  Or it might be that the 100 TeV machine is just what we need to learn something revolutionary.  But we also need to be looking for other lines of investigation, perhaps ones that today would sound unrelated to particle physics, or even unrelated to any known fundamental question about nature.

Let me provide one example from recent history — one which did not lead to a discovery, but still illustrates that this is not all about 19th century history.

An Example

One of the great contributions to science of Nima Arkani-Hamed, Savas Dimopoulos and Gia Dvali was to observe (in a 1998 paper I’ll refer to as ADD, after the authors’ initials) that no one had ever excluded the possibility that we, and all the particles from which we’re made, can move around freely in three spatial dimensions, but are stuck (as it were) as though to the corner edge of a thin rod — a rod as much as one millimeter wide, into which only gravitational fields (but not, for example, electric fields or magnetic fields) may penetrate.  Moreover, they emphasized that the presence of these extra dimensions might explain why gravity is so much weaker than the other known forces.
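
That last point can be made quantitative with a standard one-line estimate (a sketch of the usual reasoning, not a formula from the text above): if gravity alone spreads into n extra dimensions of size R, then the Planck scale M_Pl that we measure is related to the true fundamental scale M_* of gravity by roughly

\[
M_{\rm Pl}^2 \;\sim\; M_*^{\,n+2}\, R^{\,n} ,
\]

so even a modest M_* can masquerade as the enormous M_Pl ≈ 10¹⁹ GeV, provided R is large enough. In this picture gravity isn’t intrinsically feeble; it merely looks weak to us because it is diluted in the extra dimensions.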

Fig. 1: ADD’s paper pointed out that no experiment as of 1998 could yet rule out the possibility that our familiar three-dimensional world is a corner of a five-dimensional world, where the two extra dimensions are finite but perhaps as large as a millimeter.

Given the incredible number of experiments over the past two centuries that have probed distances vastly smaller than a millimeter, the claim that there could exist millimeter-sized unknown dimensions was amazing, and came as a tremendous shock — certainly to me. At first, I simply didn’t believe that the ADD paper could be right.  But it was.

One of the most important immediate effects of the ADD paper was to generate a strong motivation for a new class of experiments that could be done, rather inexpensively, on the top of a table. If the world were as they imagined it might be, then Newton’s (and Einstein’s) law for gravity, which states that the force between two stationary objects depends on the distance r between them as 1/r², would break down: the force would increase faster than 1/r² at distances shorter than the width of the rod in Figure 1.  This is illustrated in Figure 2.
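
In equations (a standard sketch, with n counting the extra dimensions; n = 2 for the scenario in Figure 1):

\[
F(r) \;\propto\; \frac{1}{r^{2}} \quad \text{for } r \gg R, \qquad
F(r) \;\propto\; \frac{1}{r^{2+n}} \quad \text{for } r \ll R,
\]

where R is the width of the rod. At separations well below R, the gravitational field lines can spread into all 3+n spatial dimensions, so the force strengthens more rapidly as r shrinks; that steeper growth is what the red curve in Figure 2 depicts.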

Fig. 2: If the world were as sketched in Figure 1, then Newton/Einstein’s law of gravity would be violated at distances shorter than the width of the rod in Figure 1. The blue line shows Newton/Einstein’s prediction; the red line shows what a universe like that in Figure 1 would predict instead. Experiments done in the last few years agree with the blue curve down to a small fraction of a millimeter.

These experiments are not easy — gravity is very, very weak compared to electrical forces, and lots of electrical effects can show up at very short distances and have to be cleverly avoided. But some of the best experimentalists in the world figured out how to do it (see here and here). After the experiments were done, Newton/Einstein’s law was verified down to a few hundredths of a millimeter.  If we live on the corner of a rod, as in Figure 1, it’s much, much smaller than a millimeter in width.

But it could have been true. And if it had, it might not have been discovered by a huge particle accelerator. It might have been discovered in these small inexpensive experiments that could have been performed years earlier. The experiments weren’t carried out earlier mainly because no one had pointed out quite how important they could be.

Ok Fine; What Other Experiments Should We Do?

So what are the non-obvious experiments we should be doing now or in the near future?  Well, if I had a really good suggestion for a new class of experiments, I would tell you — or rather, I would write about it in a scientific paper. (Actually, I do know of an important class of measurements, and I have written a scientific paper about them; but these are measurements to be done at the LHC, and don’t involve an entirely new experiment.)  Although I’m thinking about these things, I do not yet have any good ideas.  Until I do, or someone else does, this is all just talk — and talk does not impress physicists.

Indeed, you might object that my remarks in this post have been almost without content, and possibly without merit.  I agree with that objection.

Still, I have some reasons for making these points. In part, I want to highlight, for a wide audience, the possible historic importance of what might now be happening in particle physics. And I especially want to draw the attention of young people. There have been experts in my field who have written that non-discoveries at the LHC constitute a “nightmare scenario” for particle physics… that there might be nothing for particle physicists to do for a long time. But I want to point out that on the contrary, not only may it not be a nightmare, it might actually represent an extraordinary opportunity. Not discovering the ether opened people’s minds, and eventually opened the door for Einstein to walk through. And if the LHC shows us that particle physics is not described by a natural quantum field theory, it may, similarly, open the door for a young person to show us that our understanding of quantum field theory and naturalness, while as intelligent and sensible and precise as the 19th century understanding of waves, does not apply unaltered to particle physics, and must be significantly revised.

Of course the LHC is still a young machine, and it may still permit additional major discoveries, rendering everything I’ve said here moot. But young people entering the field, or soon to enter it, should not assume that the experts necessarily understand where the field’s future lies. Like FitzGerald and Lorentz, even the most brilliant and creative among us might be suffering from our own hard-won and well-established assumptions, and we might soon need the vision of a brilliant young genius — perhaps a theorist with a clever set of equations, or perhaps an experimentalist with a clever new question and a clever measurement to answer it — to set us straight, and put us onto the right path.

A 100 TeV Proton-Proton Collider?

During the gap between the first run of the Large Hadron Collider [LHC], which ended in 2012 and included the discovery of the Higgs particle (and the exclusion of quite a few other things), and its second run, which starts a year from now, there’s been a lot of talk about the future direction for particle physics. By far the most prominent option, both in China and in Europe, involves the long-term possibility of a (roughly) 100 TeV proton-proton collider — that is, a particle accelerator like the LHC, but with 5 to 15 times more energy per collision.

Do we need such a machine?

Learning Lessons From Black Holes

My post about what Hawking is and isn’t saying about black holes got a lot of readers, but also some criticism for having come across as too harsh on what Hawking has and hasn’t done. Looking back, I think there’s some merit in the criticism, so let me try to address it and flesh out one of the important issues.

Before I do, let me mention that I’ve almost completed a brief introduction to the “black hole information paradox”; it should be posted within the next day, so stay tuned for that. [Update: it’s done!]  It involves a very brief explanation of how, after having learned from Hawking’s 1974 work that black holes aren’t quite black (in that they slowly radiate particles), physicists are now considering whether black holes might even be less black than that (in that they might slowly leak what’s gone inside them, in scrambled form).

Ok. One of the points I made on Thursday is that there’s a big difference between what Hawking has written in his latest paper and something a physicist would call a theory, like the Theory of Special Relativity or Quantum Field Theory or String Theory. A theory may or may not apply to nature; it may or may not be validated by experiments; but it’s not a theory without some precise equations. Hawking’s paper is two pages long and contains no equations. I made a big deal about this, because I was trying to make a more general point (having nothing to do with Hawking or his proposal) about what qualifies as a theory in physics, and what doesn’t. We have very high standards in this field, higher than the public sometimes realizes.

A reasonable person could (and some did) point out that given Hawking’s extreme physical disability, a short equation-less paper is not to be judged harshly, since typing is a royal pain if you can’t even move. I accept the criticism that I was insensitive to this way of reading my post… and indeed I thereby obscured the point I was trying to make.  I should have been more deliberate in my writing, and emphasized that there are many levels of discussions about science, ranging across cocktail party conversation, wild speculation over a beer, a serious scientific proposal, and a concrete scientific theory. The way I phrased things obscured the fact that Hawking’s proposal, though short of a theory, still represents serious science.

But independent of Hawking’s necessarily terse style, it remains the case that his scientific proposal, though based on certain points that are precise and clear, is quite vague on other points… and there are no equations to back them up.  Of course that doesn’t mean the proposal is wrong!  And a vague proposal can have real scientific merit, since it can propel research in the right direction. Other vague proposals (such as Einstein’s idea that “space and time must be curved”) have sometimes led, after months or years, to concrete theories (Einstein’s equations of “General Relativity”, his theory of gravity). But many sensible-sounding vague proposals (such as “maybe the cosmological constant is zero because of an unknown symmetry”) lead nowhere, or even lead us astray. And the reason we should be so sensitive to this point is that the weakness of a vague proposal has already been dramatically demonstrated in this very context.

The recent flurry of activity concerning the fundamental quantum properties of black holes (which unfortunately, unlike their astrophysical properties, are not currently measurable) arose from the so-called firewall problem. And that problem emerged, in a 2012 paper by Almheiri, Marolf, Polchinski and Sully (AMPS, for short), from an attempt to put concrete equations behind a twenty-year-old proposal called “complementarity”, due mainly to Susskind, Thorlacius and Uglum; see also Stephens, ‘t Hooft and Whiting.

As a black hole forms and grows, and then evaporates, where is the information about how it formed?  And is that information lost, copied, or retained? (Only if it is retained, and not lost or copied, can standard quantum theory describe a black hole.) Complementarity is the notion that the answer depends on the point of view of the observer who’s asking the question. Observers who fall into the black hole think (and measure!) that the information is deep inside. Observers who remain outside the black hole think (and measure!) that the information remains just outside, and is eventually carried off by the Hawking radiation by which the black hole evaporates.  And both are right!  Neither sees the information lost or copied, and thus quantum theory survives.

For this apparently contradictory situation to be possible, there are certain conditions that must hold. Remarkably, a number of these have been shown to be true (at least in special circumstances)! But as of 2012, some others still had not been shown. In short, the proposal, though fairly well-grounded, remained a bit vague about some details.

And that vagueness was the Achilles heel that, after 20 years, brought it down.

The firewall problem pointed out by AMPS shows that complementarity doesn’t quite work. It doesn’t work because one of its vague points turns out to have an inherent and subtle self-contradiction. [Their argument is far too complex for this post, so (at best) I'll have to explain it another time, if I can think of a way to do so...]

By the way, if you look at the AMPS paper, you’ll see it too doesn’t contain many equations. But it contains more than zero… and they are pithy, crucial, and to the point. (Moreover, there are a lot more supporting equations than it first appears; these are relegated to the paper’s appendices, to keep the discussion from looking cluttered.)

So while I understand that Hawking isn’t going to write out long equations unless he’s working with collaborators (which he often does), even the simplest quantitative issues concerning his proposal are not yet discussed or worked out. For instance, what is (even roughly) the time scale over which information begins leaking out? How long does the apparent horizon last? It would be fine if Hawking, working this out in his head, stated the answers without proof, but we need to know the answers he has in mind if we’re to seriously judge the proposal. It’s very far from obvious that any proposal along the lines that Hawking is suggesting (and others that people with similar views have advanced) would actually solve the information paradox without creating other serious problems.
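
For orientation, here are the standard benchmark scales against which any such proposal would be judged (textbook results about black holes, not anything taken from Hawking’s paper): a black hole of mass M radiates at the Hawking temperature, and evaporates over a time, of order

\[
T_H \;=\; \frac{\hbar c^3}{8\pi G M k_B}, \qquad
t_{\rm evap} \;\sim\; \frac{G^2 M^3}{\hbar c^4} ,
\]

and in most versions of the information debate the critical moment is the “Page time”, roughly halfway through the evaporation, by which point information must begin emerging if it escapes via the Hawking radiation. A concrete version of Hawking’s proposal would, at a minimum, have to say how its leakage time compares with these scales.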

When regarding a puzzle so thorny and subtle as the black hole information paradox, which has resisted solution for forty years, physicists know they should not rely solely on words and logical reasoning, no matter how brilliant the person who originates them. Progress in this area of theoretical research has occurred, and consensus (even partial) has only emerged, when there was both a conceptual and a calculational advance. Hawking’s old papers on singularities (with Penrose) and on black hole evaporation are classic examples; so is the AMPS paper. If anyone, whether Hawking or someone else, can put equations behind Hawking’s proposal that there are no real event horizons and that information is redistributed via a process involving (non-quantum) chaos, then — great! — the proposal can be properly evaluated and its internal consistency can be checked. Until then, it’s far too early to say that Hawking’s proposal represents a scientific theory.

Visiting the University of Maryland

Along with two senior postdocs (Andrey Katz of Harvard and Nathaniel Craig of Rutgers) I’ve been visiting the University of Maryland all week, taking advantage of end-of-academic-term slowdowns to spend a few days just thinking hard, with some very bright and creative colleagues, about the implications of what we have discovered (a Higgs particle of mass 125-126 GeV/c²) and have not discovered (any other new particles or unexpected high-energy phenomena) so far at the Large Hadron Collider [LHC].

The basic questions that face us most squarely are:

Is the naturalness puzzle

  1. resolved by a clever mechanism that adds new particles and forces to the ones we know?
  2. resolved by properly interpreting the history of the universe?
  3. nonexistent due to our somehow misreading the lessons of quantum field theory?
  4. altered dramatically by modifying the rules of quantum field theory and gravity altogether?

If (1) is true, it’s possible that a clever new “mechanism” is required.  (Old mechanisms that remove or ameliorate the naturalness puzzle include supersymmetry, little Higgs, warped extra dimensions, etc.; all of these are still possible, but if one of them is right, it’s mildly surprising we’ve seen no sign of it yet.)  Since the Maryland faculty I’m talking to (Raman Sundrum, Zakaria Chacko and Kaustubh Agashe) have all been involved in inventing clever new mechanisms in the past (with names like Randall-Sundrum [i.e. warped extra dimensions], Twin Higgs, Folded Supersymmetry, and various forms of Composite Higgs), it’s a good place to be thinking about this possibility.  There’s good reason to focus on mechanisms that, unlike most of the known ones, do not lead to new particles that are affected by the strong nuclear force. (The Twin Higgs idea that Chacko invented with Hock-Seng Goh and Roni Harnik is an example.)  The particles predicted by such scenarios could easily have escaped notice so far, and be hiding in LHC data.

Sundrum (some days anyway) thinks the most likely situation is that, just by chance, the universe has turned out to be a little bit unnatural — not a lot, but enough that the solution to the naturalness puzzle may lie at higher energies outside LHC reach.  That would be unfortunate for particle physicists who are impatient to know the answer… unless we’re lucky and a remnant from that higher-energy phenomenon accidentally has ended up at low-energy, low enough that the LHC can reach it.

But perhaps we just haven’t been creative enough yet to guess the right mechanism, or alter the ones we know of to fit the bill… and perhaps the clues are already in the LHC’s data, waiting for us to ask the right question.

I view option (2) as deeply problematic.  On the one hand, there’s a good argument that the universe might be immense, far larger than the part we can see, with different regions having very different laws of particle physics — and that the part we live in might appear very “unnatural” just because that very same unnatural appearance is required for stars, planets, and life to exist.  To be over-simplistic: if, in the parts of the universe that have no Higgs particle with mass below 700 GeV/c², the physical consequences prevent complex molecules from forming, then it’s not surprising we live in a place with a Higgs particle below that mass.   [It’s not so different from saying that the earth is a very unusual place from some points of view — rocks near stars make up a very small fraction of the universe — but that doesn’t mean it’s surprising that we find ourselves in such an unusual location, because a planet is one of the few places that life could evolve.]

Such an argument is compelling for the cosmological constant problem.  But it’s really hard to come up with an argument that a Higgs particle with a very low mass (and corresponding low non-zero masses for the other known particles) is required for life to exist.  Specifically, the mechanism of “technicolor” (in which the Higgs field is generated as a composite object through a new, strong force) seems to allow for a habitable universe, but with no naturalness puzzle — so why don’t we find ourselves in a part of the universe where it’s technicolor, not a Standard Model-like Higgs, that shows up at the LHC?  Sundrum, formerly a technicolor expert, has thought about this point (with David E. Kaplan), and he agrees this is a significant problem with option (2).

By the way, option (2) is sometimes called the “anthropic principle”.  But it’s neither a principle nor “anthro-” (human-) related… it’s simply a bias (not in the negative sense of the word, but simply in the sense of something that affects your view of a situation) from the fact that, heck, life can only evolve in places where life can evolve.

(3) is really hard for me to believe.  The naturalness argument boils down to this:

  • Quantum fields fluctuate;
  • Fluctuations carry energy, called “zero-point energy”, which can be calculated and is very large;
  • The energy of the fluctuations of a field depends on the corresponding particle’s mass;
  • The particle’s mass, for the known particles, depends on the Higgs field;
  • Therefore the energy of empty space depends strongly on the Higgs field.

Unless one of these five statements is wrong (good luck finding a mistake — every one of them involves completely basic issues in quantum theory and in the Higgs mechanism for giving masses) then there’s a naturalness puzzle.  The solution may be simple from a certain point of view, but it won’t come from just waving the problem away.
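
To see why the second step gives something “very large”, here is the standard back-of-the-envelope version (my sketch of the usual textbook estimate, in units with ħ = c = 1, where Λ is the energy scale up to which we trust the calculation): each field mode of momentum k contributes zero-point energy ½√(k² + m²), so the energy density of empty space picks up a contribution

\[
\rho \;\sim\; \int^{\Lambda} \frac{d^3k}{(2\pi)^3}\,\frac{1}{2}\sqrt{k^2+m^2}
\;\sim\; \Lambda^4 \;+\; \#\,\Lambda^2 m^2 \;+\; \cdots
\]

Since the masses m of the known particles are proportional to the value of the Higgs field, the Λ²m² term makes ρ depend strongly on the Higgs field, which is precisely the fifth bullet above.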

(4) I’d love for this to be the real answer, and maybe it is.  If our understanding of quantum field theory and Einstein’s gravity leads us to a naturalness problem whose solution should presumably reveal itself at the LHC, and yet nature refuses to show us a solution, then maybe it’s a naive use of field theory and gravity that’s at fault. But it may take a very big leap of faith, and insight, to see how to jump off this cliff and yet land on one’s feet.  Sundrum is well-known as one of the most creative and fearless individuals in our field, especially when it comes to this kind of thing. I’ve been discussing some radical notions with him, but mostly I’ve been enjoying hearing his many past insights and ideas… and about the equations that go with them.   Anyone can speculate, but it’s the equations (and the predictions, testable at least in principle if not in practice, that you can derive from them) that transform pure speculations into something that deserves the name “theoretical physics”.