Category Archives: The Scientific Process

What if the Large Hadron Collider Finds Nothing Else?

In my last post, I expressed the view that a particle accelerator with proton-proton collisions of (roughly) 100 TeV of energy, significantly more powerful than the currently operational Large Hadron Collider [LHC] that helped scientists discover the Higgs particle, is an obvious and important next step in our process of learning about the elementary workings of nature. And I described how we don’t yet know whether it will be an exploratory machine or a machine with a clear scientific target; it will depend on what the LHC does or does not discover over the coming few years.

What will it mean, for the 100 TeV collider project and more generally, if the LHC, having made possible the discovery of the Higgs particle, provides us with no more clues?  Specifically, over the next few years, hundreds of tests of the Standard Model (the equations that govern the known particles and forces) will be carried out in measurements made by the ATLAS, CMS and LHCb experiments at the LHC. Suppose that, as it has so far, the Standard Model passes every test that the experiments carry out? In particular, suppose the Higgs particle discovered in 2012 appears, after a few more years of intensive study, to be, as far as the LHC can reveal, a Standard Model Higgs — the simplest possible type of Higgs particle?

Before we go any further, let’s keep in mind that we already know that the Standard Model isn’t all there is to nature. The Standard Model does not provide a consistent theory of gravity, nor does it explain neutrino masses, dark matter or “dark energy” (also known as the cosmological constant). Moreover, many of its features are just things we have to accept without explanation, such as the strengths of the forces, the existence of “three generations” (i.e., that there are two heavier cousins of the electron, two for the up quark and two for the down quark), the values of the masses of the various particles, etc. However, even though the Standard Model has its limitations, it is possible that everything that can actually be measured at the LHC — which cannot measure neutrino masses or directly observe dark matter or dark energy — will be well-described by the Standard Model. What if this is the case?

Michelson and Morley, and What They Discovered

In science, giving strong evidence that something isn’t there can be as important as discovering something that is there — and it’s often harder to do, because you have to thoroughly exclude all possibilities. [It’s very hard to show that your lost keys are nowhere in the house — you have to convince yourself that you looked everywhere.] A famous example is the case of Albert Michelson, in his two experiments (one in 1881, a second with Edward Morley in 1887) trying to detect the “ether wind”.

Light had been shown to be a wave in the 1800s; and like all waves known at the time, it was assumed to be a wave in something material, just as sound waves are waves in air, and ocean waves are waves in water. This material was termed the “luminiferous ether”. As we can detect our motion through air or through water in various ways, it seemed that it should be possible to detect our motion through the ether, specifically by looking for the possibility that light traveling in different directions travels at slightly different speeds.  This is what Michelson and Morley were trying to do: detect the movement of the Earth through the luminiferous ether.
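
For scale, here is the standard textbook estimate (not spelled out in the post): for an interferometer arm of length L, light making a round trip parallel to an ether wind of speed v would take slightly longer than light making the same round trip on a perpendicular arm:

```latex
t_{\parallel} = \frac{2L/c}{1 - v^2/c^2}, \qquad
t_{\perp} = \frac{2L/c}{\sqrt{1 - v^2/c^2}}, \qquad
\Delta t = t_{\parallel} - t_{\perp} \approx \frac{L}{c}\,\frac{v^2}{c^2}.
```

With v taken to be the Earth’s orbital speed of about 30 km/s, the fractional effect v²/c² is about 10⁻⁸: tiny, but an interferometer converts such a time difference into a shift of interference fringes, which is what made Michelson’s method sensitive enough to look for it.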

Both of Michelson’s measurements failed to detect any ether wind, and did so expertly and convincingly. And for the convincing method that he invented — an experimental device called an interferometer, which had many other uses too — Michelson won the Nobel Prize in 1907. Meanwhile the failure to detect the ether drove both FitzGerald and Lorentz to consider radical new ideas about how matter might be deformed as it moves through the ether. Although these ideas weren’t right, they were important steps that Einstein was able to re-purpose, even more radically, in his 1905 equations of special relativity.

In Michelson’s case, the failure to discover the ether was itself a discovery, recognized only in retrospect: a discovery that the ether did not exist. (Or, if you’d like to say that it does exist, which some people do, then what was discovered is that the ether is utterly unlike any normal material substance in which waves are observed; no matter how fast or in what direction you are moving relative to me, both of us are at rest relative to the ether.) So one must not be too quick to assume that a lack of discovery is actually a step backwards; it may actually be a huge step forward.

Epicycles or a Revolution?

There were various attempts to make sense of Michelson and Morley’s experiment.   Some interpretations involved  tweaks of the notion of the ether.  Tweaks of this type, in which some original idea (here, the ether) is retained, but adjusted somehow to explain the data, are often referred to as “epicycles” by scientists.   (This is analogous to the way an epicycle was used by Ptolemy to explain the complex motions of the planets in the sky, in order to retain an earth-centered universe; the sun-centered solar system requires no such epicycles.) A tweak of this sort could have been the right direction to explain Michelson and Morley’s data, but as it turned out, it was not. Instead, the non-detection of the ether wind required something more dramatic — for it turned out that waves of light, though at first glance very similar to other types of waves, were in fact extraordinarily different. There simply was no ether wind for Michelson and Morley to detect.

If the LHC discovers nothing beyond the Standard Model, we will face what I see as a similar mystery.  As I explained here, the Standard Model, with no other particles added to it, is a consistent but extraordinarily “unnatural” (i.e. extremely non-generic) example of a quantum field theory.  This is a big deal. Just as nineteenth-century physicists deeply understood both the theory of waves and many specific examples of waves in nature  and had excellent reasons to expect a detectable ether, twenty-first century physicists understand quantum field theory and naturalness both from the theoretical point of view and from many examples in nature, and have very good reasons to expect particle physics to be described by a natural theory.  (Our examples come both from condensed matter physics [e.g. metals, magnets, fluids, etc.] and from particle physics [e.g. the physics of hadrons].) Extremely unnatural systems — that is, physical systems described by quantum field theories that are highly non-generic — simply have not previously turned up in nature… which is just as we would expect from our theoretical understanding.

[Experts: As I emphasized in my Santa Barbara talk last week, appealing to anthropic arguments about the hierarchy between gravity and the other forces does not allow you to escape from the naturalness problem.]

So what might it mean if an unnatural quantum field theory describes all of the measurements at the LHC? It may mean that our understanding of particle physics requires an epicyclic change — a tweak.  The implications of a tweak would potentially be minor. A tweak might only require us to keep doing what we’re doing, exploring in the same direction but a little further, working a little harder — i.e. to keep colliding protons together, but go up in collision energy a bit more, from the LHC to the 100 TeV collider. For instance, perhaps the Standard Model is supplemented by additional particles that, rather than having masses that put them within reach of the LHC, as would inevitably be the case in a natural extension of the Standard Model (here’s an example), are just a little bit heavier than expected. In this case the world would be somewhat unnatural, but not too much, perhaps through some relatively minor accident of nature; and a 100 TeV collider would have enough energy per collision to discover and reveal the nature of these particles.

Or perhaps a tweak is entirely the wrong idea, and instead our understanding is fundamentally amiss. Perhaps another Einstein will be needed to radically reshape the way we think about what we know.  A dramatic rethink is both more exciting and more disturbing. It was an intellectual challenge for 19th century physicists to imagine, from the result of the Michelson-Morley experiment, that key clues to its explanation would be found in seeking violations of Newton’s equations for how energy and momentum depend on velocity. (The first experiments on this issue were carried out in 1901, but definitive experiments took another 15 years.) It was an even greater challenge to envision that the already-known unexplained shift in the orbit of Mercury would also be related to the Michelson-Morley (non)-discovery, as Einstein, in trying to adjust Newton’s gravity to make it consistent with the theory of special relativity, showed in 1915.

My point is that the experiments that were needed to properly interpret Michelson-Morley’s result

  • did not involve trying to detect motion through the ether,
  • did not involve building even more powerful and accurate interferometers,
  • and were not immediately obvious to the practitioners in 1888.

This should give us pause. We might, if we continue as we are, be heading in the wrong direction.

Difficult as it is to do, we have to take seriously the possibility that if (and remember this is still a very big “if”) the LHC finds only what is predicted by the Standard Model, the reason may involve a significant reorganization of our knowledge, perhaps even as great as relativity’s re-making of our concepts of space and time. Were that the case, it is possible that higher-energy colliders would tell us nothing, and give us no clues at all. An exploratory 100 TeV collider is not guaranteed to reveal secrets of nature, any more than a better version of Michelson-Morley’s interferometer would have been guaranteed to do so. It may be that a completely different direction of exploration, including directions that currently would seem silly or pointless, will be necessary.

This is not to say that a 100 TeV collider isn’t needed!  It might be that all we need is a tweak of our current understanding, and then such a machine is exactly what we need, and will be the only way to resolve the current mysteries.  Or it might be that the 100 TeV machine is just what we need to learn something revolutionary.  But we also need to be looking for other lines of investigation, perhaps ones that today would sound unrelated to particle physics, or even unrelated to any known fundamental question about nature.

Let me provide one example from recent history — one which did not lead to a discovery, but still illustrates that this is not all about 19th century history.

An Example

One of the great contributions to science of Nima Arkani-Hamed, Savas Dimopoulos and Gia Dvali was to observe (in a 1998 paper I’ll refer to as ADD, after the authors’ initials) that no one had ever excluded the possibility that we, and all the particles from which we’re made, can move around freely in three spatial dimensions, but are stuck (as it were) to the edge of a thin rod — a rod as much as one millimeter wide, into which only gravitational fields (but not, for example, electric fields or magnetic fields) may penetrate.  Moreover, they emphasized that the presence of these extra dimensions might explain why gravity is so much weaker than the other known forces.

Fig. 1: ADD’s paper pointed out that no experiment as of 1998 could yet rule out the possibility that our familiar three-dimensional world is a corner of a five-dimensional world, where the two extra dimensions are finite but perhaps as large as a millimeter.

Given the incredible number of experiments over the past two centuries that have probed distances vastly smaller than a millimeter, the claim that there could exist millimeter-sized unknown dimensions was amazing, and came as a tremendous shock — certainly to me. At first, I simply didn’t believe that the ADD paper could be right.  But it was.

One of the most important immediate effects of the ADD paper was to generate a strong motivation for a new class of experiments that could be done, rather inexpensively, on the top of a table. If the world were as they imagined it might be, then Newton’s (and Einstein’s) law for gravity, which states that the force between two stationary objects depends on the distance r between them as 1/r², would increase faster than this at distances shorter than the width of the rod in Figure 1.  This is illustrated in Figure 2.

Fig. 2: If the world were as sketched in Figure 1, then Newton/Einstein’s law of gravity would be violated at distances shorter than the width of the rod in Figure 1. The blue line shows Newton/Einstein’s prediction; the red line shows what a universe like that in Figure 1 would predict instead. Experiments done in the last few years agree with the blue curve down to a small fraction of a millimeter.
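
The expected deviation can be stated quantitatively; this is a standard dimensional-analysis result, assumed here rather than taken from the post. With n compact extra dimensions of size R into which gravity can spread, Gauss’s law in 3+n spatial dimensions gives

```latex
F(r) \;\propto\; \frac{1}{r^{\,2+n}} \quad (r \ll R),
\qquad
F(r) \;\propto\; \frac{1}{r^{2}} \quad (r \gg R).
```

For the two millimeter-sized dimensions of the ADD scenario (n = 2), the force between two test masses would grow as 1/r⁴ once r drops below a millimeter, which is the steepening indicated by the red curve in Figure 2.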

These experiments are not easy — gravity is very, very weak compared to electrical forces, and lots of electrical effects can show up at very short distances and have to be cleverly avoided. But some of the best experimentalists in the world figured out how to do it (see here and here). After the experiments were done, Newton/Einstein’s law was verified down to a few hundredths of a millimeter.  If we live on the corner of a rod, as in Figure 1, it’s much, much smaller than a millimeter in width.

But it could have been true. And if it had, it might not have been discovered by a huge particle accelerator. It might have been discovered in these small inexpensive experiments that could have been performed years earlier. The experiments weren’t carried out earlier mainly because no one had pointed out quite how important they could be.

Ok Fine; What Other Experiments Should We Do?

So what are the non-obvious experiments we should be doing now or in the near future?  Well, if I had a really good suggestion for a new class of experiments, I would tell you — or rather, I would write about it in a scientific paper. (Actually, I do know of an important class of measurements, and I have written a scientific paper about them; but these are measurements to be done at the LHC, and don’t involve an entirely new experiment.)  Although I’m thinking about these things, I do not yet have any good ideas.  Until I do, or someone else does, this is all just talk — and talk does not impress physicists.

Indeed, you might object that my remarks in this post have been almost without content, and possibly without merit.  I agree with that objection.

Still, I have some reasons for making these points. In part, I want to highlight, for a wide audience, the possible historic importance of what might now be happening in particle physics. And I especially want to draw the attention of young people. There have been experts in my field who have written that non-discoveries at the LHC constitute a “nightmare scenario” for particle physics… that there might be nothing for particle physicists to do for a long time. But I want to point out that on the contrary, not only may it not be a nightmare, it might actually represent an extraordinary opportunity. Not discovering the ether opened people’s minds, and eventually opened the door for Einstein to walk through. And if the LHC shows us that particle physics is not described by a natural quantum field theory, it may, similarly, open the door for a young person to show us that our understanding of quantum field theory and naturalness, while as intelligent and sensible and precise as the 19th century understanding of waves, does not apply unaltered to particle physics, and must be significantly revised.

Of course the LHC is still a young machine, and it may still permit additional major discoveries, rendering everything I’ve said here moot. But young people entering the field, or soon to enter it, should not assume that the experts necessarily understand where the field’s future lies. Like FitzGerald and Lorentz, even the most brilliant and creative among us might be suffering from our own hard-won and well-established assumptions, and we might soon need the vision of a brilliant young genius — perhaps a theorist with a clever set of equations, or perhaps an experimentalist with a clever new question and a clever measurement to answer it — to set us straight, and put us onto the right path.

A 100 TeV Proton-Proton Collider?

During the gap between the first run of the Large Hadron Collider [LHC], which ended in 2012 and included the discovery of the Higgs particle (and the exclusion of quite a few other things), and its second run, which starts a year from now, there’s been a lot of talk about the future direction for particle physics. By far the most prominent option, both in China and in Europe, involves the long-term possibility of a (roughly) 100 TeV proton-proton collider — that is, a particle accelerator like the LHC, but with 5 to 15 times more energy per collision.

Do we need such a machine? Continue reading

Learning Lessons From Black Holes

My post about what Hawking is and isn’t saying about black holes got a lot of readers, but also some criticism for having come across as too harsh on what Hawking has and hasn’t done. Looking back, I think there’s some merit in the criticism, so let me try to address it and flesh out one of the important issues.

Before I do, let me mention that I’ve almost completed a brief introduction to the “black hole information paradox”; it should be posted within the next day, so stay tuned for that. [UPDATE: it’s done!]  It involves a very brief explanation of how, after having learned from Hawking’s 1974 work that black holes aren’t quite black (in that they slowly radiate particles), physicists are now considering whether black holes might be even less black than that (in that they might slowly leak what’s gone inside them, in scrambled form).

Ok. One of the points I made on Thursday is that there’s a big difference between what Hawking has written in his latest paper and something a physicist would call a theory, like the Theory of Special Relativity or Quantum Field Theory or String Theory. A theory may or may not apply to nature; it may or may not be validated by experiments; but it’s not a theory without some precise equations. Hawking’s paper is two pages long and contains no equations. I made a big deal about this, because I was trying to make a more general point (having nothing to do with Hawking or his proposal) about what qualifies as a theory in physics, and what doesn’t. We have very high standards in this field, higher than the public sometimes realizes.

A reasonable person could (and some did) point out that given Hawking’s extreme physical disability, a short equation-less paper is not to be judged harshly, since typing is a royal pain if you can’t even move. I accept the criticism that I was insensitive to this way of reading my post… and indeed I thereby obscured the point I was trying to make.  I should have been more deliberate in my writing, and emphasized that there are many levels of discussions about science, ranging across cocktail party conversation, wild speculation over a beer, a serious scientific proposal, and a concrete scientific theory. The way I phrased things obscured the fact that Hawking’s proposal, though short of a theory, still represents serious science.

But independent of Hawking’s necessarily terse style, it remains the case that his scientific proposal, though based on certain points that are precise and clear, is quite vague on other points… and there are no equations to back them up.  Of course that doesn’t mean the proposal is wrong!  And a vague proposal can have real scientific merit, since it can propel research in the right direction. Other vague proposals (such as Einstein’s idea that “space and time must be curved”) have sometimes led, after months or years, to concrete theories (Einstein’s equations of “General Relativity”, his theory of gravity). But many sensible-sounding vague proposals (such as “maybe the cosmological constant is zero because of an unknown symmetry”) lead nowhere, or even lead us astray. And the reason we should be so sensitive to this point is that the weakness of a vague proposal has already been dramatically demonstrated in this very context.

The recent flurry of activity concerning the fundamental quantum properties of black holes (which unfortunately, unlike their astrophysical properties, are not currently measurable) arose from the so-called firewall problem. And that problem emerged, in a 2012 paper by Almheiri, Marolf, Polchinski and Sully (AMPS, for short), from an attempt to put concrete equations behind a twenty-year-old proposal called “complementarity”, due mainly to Susskind, Thorlacius and Uglum; see also Stephens, ‘t Hooft and Whiting.

As a black hole forms and grows, and then evaporates, where is the information about how it formed?  And is that information lost, copied, or retained? (Only if it is retained, and not lost or copied, can standard quantum theory describe a black hole.) Complementarity is the notion that the answer depends on the point of view of the observer who’s asking the question. Observers who fall into the black hole think (and measure!) that the information is deep inside. Observers who remain outside the black hole think (and measure!) that the information remains just outside, and is eventually carried off by the Hawking radiation by which the black hole evaporates.  And both are right!  Neither sees the information lost or copied, and thus quantum theory survives.

For this apparently contradictory situation to be possible, there are certain requirements that must be true. Remarkably, a number of these have been shown to be true (at least in special circumstances)! But as of 2012, some others still had not been shown. In short, the proposal, though fairly well-grounded, remained a bit vague about some details.

And that vagueness was the Achilles heel that, after 20 years, brought it down.

The firewall problem pointed out by AMPS shows that complementarity doesn’t quite work. It doesn’t work because one of its vague points turns out to have an inherent and subtle self-contradiction. [Their argument is far too complex for this post, so (at best) I'll have to explain it another time, if I can think of a way to do so...]

By the way, if you look at the AMPS paper, you’ll see it too doesn’t contain many equations. But it contains more than zero… and they are pithy, crucial, and to the point. (Moreover, there are a lot more supporting equations than it first appears; these are relegated to the paper’s appendices, to keep the discussion from looking cluttered.)

So while I understand that Hawking isn’t going to write out long equations unless he’s working with collaborators (which he often does), even the simplest quantitative issues concerning his proposal are not yet discussed or worked out. For instance, what is (even roughly) the time scale over which information begins leaking out? How long does the apparent horizon last? It would be fine if Hawking, working this out in his head, stated the answers without proof, but we need to know the answers he has in mind if we’re to seriously judge the proposal. It’s very far from obvious that any proposal along the lines that Hawking is suggesting (and others that people with similar views have advanced) would actually solve the information paradox without creating other serious problems.

When regarding a puzzle so thorny and subtle as the black hole information paradox, which has resisted solution for forty years, physicists know they should not rely solely on words and logical reasoning, no matter how brilliant the person who originates them. Progress in this area of theoretical research has occurred, and consensus (even partial) has only emerged, when there was both a conceptual and a calculational advance. Hawking’s old papers on singularities (with Penrose) and on black hole evaporation are classic examples; so is the AMPS paper. If anyone, whether Hawking or someone else, can put equations behind Hawking’s proposal that there are no real event horizons and that information is redistributed via a process involving (non-quantum) chaos, then — great! — the proposal can be properly evaluated and its internal consistency can be checked. Until then, it’s far too early to say that Hawking’s proposal represents a scientific theory.

Visiting the University of Maryland

Along with two senior postdocs (Andrey Katz of Harvard and Nathaniel Craig of Rutgers) I’ve been visiting the University of Maryland all week, taking advantage of end-of-academic-term slowdowns to spend a few days just thinking hard, with some very bright and creative colleagues, about the implications of what we have discovered (a Higgs particle of mass 125-126 GeV/c²) and have not discovered (any other new particles or unexpected high-energy phenomena) so far at the Large Hadron Collider [LHC].

The basic questions that face us most squarely are:

Is the naturalness puzzle

  1. resolved by a clever mechanism that adds new particles and forces to the ones we know?
  2. resolved by properly interpreting the history of the universe?
  3. nonexistent due to our somehow misreading the lessons of quantum field theory?
  4. altered dramatically by modifying the rules of quantum field theory and gravity altogether?

If (1) is true, it’s possible that a clever new “mechanism” is required.  (Old mechanisms that remove or ameliorate the naturalness puzzle include supersymmetry, little Higgs, warped extra dimensions, etc.; all of these are still possible, but if one of them is right, it’s mildly surprising we’ve seen no sign of it yet.)  Since the Maryland faculty I’m talking to (Raman Sundrum, Zakaria Chacko and Kaustubh Agashe) have all been involved in inventing clever new mechanisms in the past (with names like Randall-Sundrum [i.e. warped extra dimensions], Twin Higgs, Folded Supersymmetry, and various forms of Composite Higgs), it’s a good place to be thinking about this possibility.  There’s good reason to focus on mechanisms that, unlike most of the known ones, do not lead to new particles that are affected by the strong nuclear force. (The Twin Higgs idea that Chacko invented with Hock-Seng Goh and Roni Harnik is an example.)  The particles predicted by such scenarios could easily have escaped notice so far, and be hiding in LHC data.

Sundrum (some days anyway) thinks the most likely situation is that, just by chance, the universe has turned out to be a little bit unnatural — not a lot, but enough that the solution to the naturalness puzzle may lie at higher energies outside LHC reach.  That would be unfortunate for particle physicists who are impatient to know the answer… unless we’re lucky and a remnant from that higher-energy phenomenon accidentally has ended up at low-energy, low enough that the LHC can reach it.

But perhaps we just haven’t been creative enough yet to guess the right mechanism, or alter the ones we know of to fit the bill… and perhaps the clues are already in the LHC’s data, waiting for us to ask the right question.

I view option (2) as deeply problematic.  On the one hand, there’s a good argument that the universe might be immense, far larger than the part we can see, with different regions having very different laws of particle physics — and that the part we live in might appear very “unnatural” just because that very same unnatural appearance is required for stars, planets, and life to exist.  To be over-simplistic: if, in the parts of the universe that have no Higgs particle with mass below 700 GeV/c², the physical consequences prevent complex molecules from forming, then it’s not surprising we live in a place with a Higgs particle below that mass.   [It's not so different from saying that the earth is a very unusual place from some points of view -- rocks near stars make up a very small fraction of the universe --- but that doesn't mean it's surprising that we find ourselves in such an unusual location, because a planet is one of the few places that life could evolve.]

Such an argument is compelling for the cosmological constant problem.  But it’s really hard to come up with an argument that a Higgs particle with a very low mass (and corresponding low non-zero masses for the other known particles) is required for life to exist.  Specifically, the mechanism of “technicolor” (in which the Higgs field is generated as a composite object through a new, strong force) seems to allow for a habitable universe, but with no naturalness puzzle — so why don’t we find ourselves in a part of the universe where it’s technicolor, not a Standard Model-like Higgs, that shows up at the LHC?  Sundrum, formerly a technicolor expert, has thought about this point (with David E. Kaplan), and he agrees this is a significant problem with option (2).

By the way, option (2) is sometimes called the “anthropic principle”.  But it’s neither a principle nor “anthro-” (human-) related… it’s simply a bias (not in the negative sense of the word, but simply in the sense of something that affects your view of a situation) from the fact that, heck, life can only evolve in places where life can evolve.

(3) is really hard for me to believe.  The naturalness argument boils down to this:

  • Quantum fields fluctuate;
  • Fluctuations carry energy, called “zero-point energy”, which can be calculated and is very large;
  • The energy of the fluctuations of a field depends on the corresponding particle’s mass;
  • The particle’s mass, for the known particles, depends on the Higgs field;
  • Therefore the energy of empty space depends strongly on the Higgs field.

Unless one of these five statements is wrong (good luck finding a mistake — every one of them involves completely basic issues in quantum theory and in the Higgs mechanism for giving masses) then there’s a naturalness puzzle.  The solution may be simple from a certain point of view, but it won’t come from just waving the problem away.
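
The chain of reasoning above can be made quantitative; what follows is the standard back-of-the-envelope version, with a cutoff Λ imposed on the momenta of the fluctuations (an assumption for illustration, not a detail from the post). The zero-point energy density of a field whose particle has mass m is roughly

```latex
\rho_{\rm vac} \;\sim\; \int^{\Lambda} \frac{d^3k}{(2\pi)^3}\,\frac{1}{2}\sqrt{k^2+m^2}
\;\sim\; \frac{\Lambda^4}{16\pi^2} \;+\; \frac{m^2\,\Lambda^2}{16\pi^2} \;+\; \cdots
```

Since the masses m of the known particles are proportional to the Higgs field, the m²Λ² term is where the Higgs-field dependence enters; the same quadratic sensitivity feeds back into the Higgs particle’s own mass-squared, roughly δm² ~ g²Λ²/16π². So for Λ far above the TeV scale, a 125 GeV/c² Higgs requires cancellations of about one part in (Λ/1 TeV)². That, in one line, is the naturalness puzzle.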

(4) I’d love for this to be the real answer, and maybe it is.  If our understanding of quantum field theory and Einstein’s gravity leads us to a naturalness problem whose solution should presumably reveal itself at the LHC, and yet nature refuses to show us a solution, then maybe it’s a naive use of field theory and gravity that’s at fault. But it may take a very big leap of faith, and insight, to see how to jump off this cliff and yet land on one’s feet.  Sundrum is well-known as one of the most creative and fearless individuals in our field, especially when it comes to this kind of thing. I’ve been discussing some radical notions with him, but mostly I’ve been enjoying hearing his many past insights and ideas… and about the equations that go with them.   Anyone can speculate, but it’s the equations (and the predictions, testable at least in principle if not in practice, that you can derive from them) that transform pure speculations into something that deserves the name “theoretical physics”.

Wednesday: Sean Carroll & I Interviewed Again by Alan Boyle

Today, Wednesday December 4th, at 8 pm Eastern/5 pm Pacific time, Sean Carroll and I will be interviewed again by Alan Boyle on “Virtually Speaking Science”.   The link where you can listen in (in real time or at your leisure) is

http://www.blogtalkradio.com/virtually-speaking-science/2013/12/05/alan-boyle-matt-strassler-sean-carroll

What is “Virtually Speaking Science”?  It is an online radio program that presents, according to its website:

  • Informal conversations hosted by science writers Alan Boyle, Tom Levenson and Jennifer Ouellette, who explore the often-volatile landscape of science, politics and policy, the history and economics of science, science denial and its relationship to democracy, and the role of women in the sciences.

Sean Carroll is a Caltech physicist, astrophysicist, writer, speaker, and blogger at Preposterous Universe who recently completed an excellent and now prize-winning popular book (which I highly recommend) on the Higgs particle, entitled “The Particle at the End of the Universe”.  Our interviewer Alan Boyle is a noted science writer, author of the book “The Case for Pluto”, winner of many awards, and currently NBC News Digital’s science editor [at the blog “Cosmic Log”].

Sean and I were interviewed in February by Alan on this program; here’s the link.  I was interviewed on Virtually Speaking Science once before, by Tom Levenson, about the Large Hadron Collider (here’s the link).  Also, my public talk “The Quest for the Higgs Particle” is posted on their website (here’s the link to the audio and to the slides).

The Fast and Glamorous Life of a Theoretical Physicist

Ah, the fast-paced life of a theoretical physicist!  I have just finished giving a one-hour talk at a workshop in Rome for experts on the ATLAS experiment, one of the two general-purpose experiments at the Large Hadron Collider [LHC]. Tomorrow morning I’ll be talking with a colleague at the Rutherford Appleton Lab in the U.K., an expert from CMS (the other general-purpose experiment at the LHC). Then it’s off to San Francisco, where tomorrow (Wednesday, 5 p.m. Pacific Time, 8 p.m. Eastern), at the Exploratorium, I’ll be joined by Caltech’s Sean Carroll, an expert on cosmology and particle physics whose book on the Higgs boson discovery just won a nice prize, and we’ll be discussing science with science writer Alan Boyle, as we did back in February. [You can click here to listen in to Wednesday's event.]  Next, on Thursday I’ll be at a meeting hosted at Stony Brook, on Long Island in New York State, discussing a Higgs-particle-related scientific project with theoretical physics colleagues as far-flung as Hong Kong.  On Friday I shall rest.

“How does he do it?”, you ask. Hey, a private jet is a wonderful thing! Simple, convenient, no waiting at the gate; I highly recommend it! However — I don’t own one. All I have is Skype, and other Skype-like software.  My words will cross the globe, but my body won’t be going anywhere this week.

We should not take this kind of communication for granted! If the speed of light were 186,000 miles (300,000 kilometers) per hour, instead of 186,000 miles (300,000 kilometers) per second, ordinary life wouldn’t obviously change that much, but we simply couldn’t communicate internationally the way we do. It’s 4100 miles (6500 kilometers) across the earth’s surface to Rome; light takes about 0.02 seconds to travel that distance, so that’s the fastest anything can travel to make the trip. But if light traveled 186,000 miles per hour, then it would take over a minute for my words to reach Rome, making conversation completely impossible. A back-and-forth conversation would be difficult even between New York and Boston — for any signal to travel the 200 miles (300 kilometers) would require four seconds, so you’d be waiting for 8 seconds to hear the other person answer your questions. We’d have similar problems — slightly less severe — if the earth were as large as the sun.  And someday, as we populate the solar system, we’ll actually have this problem.
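The numbers above are easy to check; here is a small Python sketch using the same round figures (186,000 miles per second versus per hour, 4100 miles to Rome, 200 miles from New York to Boston):

```python
# Light-travel delays for the real speed of light versus the hypothetical
# "186,000 miles per HOUR" world described in the text.

REAL_C_MI_PER_S = 186_000          # actual speed of light, roughly, in miles/second
SLOW_C_MI_PER_S = 186_000 / 3600   # 186,000 miles per hour, converted to miles/second

def one_way_delay(miles, speed_mi_per_s):
    """Seconds for a light signal to cover the given distance."""
    return miles / speed_mi_per_s

for route, miles in [("here to Rome", 4100), ("New York to Boston", 200)]:
    real = one_way_delay(miles, REAL_C_MI_PER_S)
    slow = one_way_delay(miles, SLOW_C_MI_PER_S)
    print(f"{route}: {real:.3f} s now, {slow:.1f} s with slow light "
          f"({2 * slow:.1f} s for a round trip)")
```

Running it confirms the delays quoted above: about 0.02 seconds to Rome today, but well over a minute with the slow speed of light, and nearly eight seconds for a New York–Boston round trip.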

So think about that next time you call or Skype or otherwise contact a distant friend or colleague, and you have a conversation just as though you were next door, despite your being separated half-way round the planet. It’s a small world (and a fast one) after all.

Comet ISON Befuddles the Experts

Take a ball of loosely aggregated rock and ice, the nucleus of a comet, fresh from the distant reaches of the solar system.  Throw it past the sun, really fast, but so close that the sun takes up a large fraction of the sky.  What’s going to happen?  The answer: nobody knows for sure.  Yesterday we actually got to see this experiment carried out by nature.  And what happened?  After all the photographs and other data, nobody knows for sure.  Comet ISON dimmed sharply and virtually disappeared, then, in part, reappeared [see the SOHO satellite's latest photo below, showing a medium-bright comet-like smudge receding from the sun, which is blacked out to protect the camera.]  What is its future, and how bright will it be in the sky when it starts to be potentially visible at dawn in a day or two?  Nobody knows for sure.

[Note Added --- Now we know: the comet did not survive, and the bright spot that appeared shortly after closest approach to the sun appears to have been all debris, without a cometary "core", or nucleus, to produce the additional dust and gas to maintain the comet's appearance.  Farewell, ISON!  Click here to see the video of the comet's pass by the sun, its brief flare after passage, and the ensuing fade-out.]

I could not possibly express this better than was done last night in a terrific post by Karl Battams, who has been blogging for NASA’s Comet ISON Observing Campaign. He playfully calls ISON “Schroedinger’s Comet”, in honor of Schroedinger’s Cat, referring to a famous and conceptually puzzling thought-experiment of Erwin Schroedinger, in which a cat is (in a sense) put in a quantum state in which it is neither/both alive nor/and dead. Linking the comet and the cat is a matter of poetic metaphor, not scientific analogy, but the metaphor is a pretty one.

Battams’ post beautifully captures the slightly giddy mindset of a scientist in the midst of intellectual chaos, one whose ideas, expectations and understanding are currently strewn about the solar system. He brings you the experience of being flooded with data and being humbled by it… a moment simultaneously exciting and frustrating, when scientists know they’re about to learn something important, but right now haven’t the faintest idea what’s going on.

Why Scientists Can Be Happy Even When They Find Nothing

Appropriate for General Readership

Last week, the LUX experiment reported its results in its search for the dark matter that (speaking roughly) makes up 25% of the stuff in the universe (see here for the first report and here for some Q&A).  [See this article, specifically the "Dark Matter Underfoot" section, for some nontechnical discussion about how experiments like LUX work.]  Shortly thereafter, a number of articles in the media made a big deal out of the fact that, simultaneously,

  1. the LUX experiment did not find evidence of dark matter
  2. yet scientists at the LUX experiment appeared to be quite happy

as though this were contradictory and mystifying. Actually, if you think about it carefully, this is perfectly normal and typical, and not the slightest bit surprising. But to make sense of it, you do also have to understand the levels of “happiness” that the LUX scientists are expressing.

The point is that whenever scientists do an experiment whose goal is to look for something whose precise details aren’t known, there are two stories running simultaneously:

  1. The scientists are trying to do the best experiment that they can, so that their search is as thorough and as expansive as it can possibly be with the equipment they have available.
  2. The scientists are hoping that the thing that they are looking for (or perhaps something else equally or more interesting) will be within reach of their search.

Notice that humans have control over the first story. The wiser they are at designing their experiment, and the more skillful they are in carrying it out, the more effective their search will be. But they have no control over the second story. Whether their prey lies within their reach, or whether it lies far beyond, requiring the technology of the distant future, is up to nature, not humans. In short, story #1 is about skill and talent, but story #2 is about luck. Even a great experiment can’t do the impossible, and even one that doesn’t work quite as well as it was supposed to can be fortunate.

Of course, there is some interplay between the stories. A disaster in story #1 precludes a happy ending in story #2; if the experiment doesn’t work, there won’t be any discoveries! And the better the outcome in story #1, the more likely a success in story #2; a more thorough search is more likely to get lucky.

The LUX researchers, in order to make a discovery, have to be lucky in several ways, as I described on Thursday.

  • Dark matter (at least some of it) has to be made from particles which are heavier than protons and have uniform properties;
  • These particles have to be rather smoothly distributed through the Milky Way galaxy, rather than bound up in clumps the way ordinary matter is, so that some of them are likely, just by chance, to be passing through the earth;
  • And they have to interact with ordinary matter at a rate that is not insanely small — no less than a millionth of the interaction rate of high-energy neutrinos with ordinary matter.

None of these things is necessarily true, given what we know about dark matter from our measurements of the heavens. And if any one of them is false, no detector similar to LUX will ever find dark matter; we’ll need other methods, some of which are already under way.

Now, in this context, what’s the worst thing that could happen to a group of scientists who’ve built an experiment? The worst thing that could happen is that after spending several years preparing the experiment, they find it simply doesn’t work. This can happen! These are very difficult experiments requiring very special and remarkable techniques, and every now and then, in the history of such experiments, an unexpected problem arises that can’t be solved without a complete redesign, which is usually too expensive and in any case means years of delay. Or something just explodes and ruins the experiment. Something like this is extremely depressing and often deeply embarrassing.

So if instead the experiment works, the scientists who designed, built and ran it are of course very relieved and reasonably happy. And if, because of a combination of hard work and cleverness, it works better than they expected and as well as they could have hoped, they’re of course enormously pleased, and proud of their work!

Now what could make them happier still — even ecstatic, to the point of staying up late drinking entire bottles of champagne? A discovery, of course. Discovering what they’re looking for, or perhaps something they weren’t even looking for, if it is truly novel and of fundamental importance.  If that happens, then they won’t care as much if their experiment worked better than expected… because, if you’re an experimental scientist, there’s nothing, nothing at all, better than discovering something new about nature.

So with this perspective, I think the LUX scientists’ emotions (as conveyed by Richard Gaitskell of Brown University, the project’s leader, during his talk) are actually very easy to understand. They are very happy because their experiment works better than they expected and as well as they hoped… maybe even better than that. For this, they get the high respect and admiration of their colleagues. But make no mistake: they’d certainly be a lot happier — overjoyed and humbled — if they’d discovered dark matter. For that, they’d get a place in the history books, major prizes (perhaps a Nobel, if the Nobel Committee could figure out who to give it to), lasting fame, and the almost unimaginable feeling of having uncovered something about nature that no human previously knew, and that (barring a complete collapse of civilization) will never be forgotten. So yes, they’re happy. But not nearly as happy as can be. They’re frustrated, too, just like the rest of us, that nothing’s shown up yet.

However, they’re also hopeful. Since they’ve built such a good experiment, and since they’ve only run it for such a short time so far, they’ll have another very reasonable shot at finding dark matter when they run it for about a full year, in 2014. Not only will they run it longer, they’ll surely also learn, from their experience so far, to be smarter about how they run it. So expect, at the very least, powerful new limits on dark matter from them in eighteen months or so. And maybe, just maybe, something more.