The huge Milner prizes for nine well-known scientists, and the controversy they generated, have motivated me to relate a story. It happened during the theorist/experimentalist workshop that was held in early August (see also here) at the Perimeter Institute in Waterloo, Canada. And it illuminates something that many scientists, science commentators and science journalists, as well as science fans in the public, seem to be unaware of, but ought to know.
Before I start, I want to make one thing clear. I am by no means a flag waver for the string theory community; the theory’s been spectacularly over-hyped, and the community’s political control of high-energy physics in many U.S. physics departments has negatively impacted many scientific careers, including my own. On the other hand, I am also not going to tell you that string theory, as a theory, is somehow evil incarnate; I have done a certain amount of string theory research, and not only have I learned a great deal from it that I could not have learned any other way, doing the research had a positive effect on my career. So I feel it is unfortunate that string theory has been a political football, with two violent teams trying to kick the ball toward their opponents’ goal posts. From my perspective, the game is irrational and preposterous, reasonable people refused long ago to play it, and it is high time the ball were grabbed by the referee and placed quietly in the middle of the field where it belongs.
My story takes place on the evening of Friday, August 3rd, following the second day of the workshop, which brought together theorists and CMS experimentalists for discussions concerning research strategies at the Large Hadron Collider [LHC]. (CMS and ATLAS are the two general purpose experiments at the LHC, and their co-discovery of a Higgs-like particle was announced July 4th.) I was sitting at a square white table on the Perimeter Institute’s ground floor, illuminated by sunset light pouring in through the Institute’s plate-glass windows. Aside from me, those at the dinner table included six members of the CMS experiment, among them Joe Incandela, the current spokesman of CMS, who a month before had the great privilege of presenting CMS’s new discovery to the world. The eighth person at the table, sitting to my left between me and Incandela, was a theorist, David Kosower, an American working as a senior professor in France, at Saclay.
Discussion ranged widely, but at some point Incandela began describing how important the work of Kosower and his colleagues in the BlackHat collaboration had been in helping confirm the validity of a measurement technique that Incandela’s group (his postdoctoral researchers and students, and perhaps some of his faculty colleagues as well) had been trying to employ. This technique formed a crucial part of their effort to search at CMS for signs of new undetectable particles [i.e., particles that, like neutrinos, pass through CMS (and ATLAS) without leaving any trace]. (This is often billed as a “search for supersymmetry”, but in fact is a much more general way to look for many types of new particles that would be essentially invisible to CMS.)
Now, what is “BlackHat?” The Standard Model, the set of equations we use to describe all the known particles and non-gravitational forces in nature, works very well for predicting the processes that occur at the LHC — as far as we can tell. A crucial limiting factor in our ability to tell, however, is our ability to calculate. Many processes that we observe occurring at the LHC are quite complicated, and the rates at which such processes take place often cannot currently be calculated to better than 50% precision. This means that if a new phenomenon were causing a certain process to occur 20% more often than in the Standard Model, it is quite possible that we would not yet know it, due to an imprecise Standard Model prediction. Over the years, theorists like Kosower and his colleagues, and their competitors pursuing other approaches, have gradually been calculating more and more complex processes at the precision levels needed for top-notch measurements at the LHC. And BlackHat is the computer program that Kosower and his friends have written to translate all of their theoretical insights and methods into actual predictions for the LHC experimentalists to use.
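To see concretely why precision matters so much, here is a minimal numerical sketch. All the numbers are invented for illustration (they do not come from any real CMS measurement, and the 5% figure is simply an assumed target for improved calculations); the point is only the arithmetic of how a large theoretical uncertainty swallows a 20% excess.

```python
# Illustrative numbers only: how the uncertainty of a Standard Model
# prediction determines whether a 20% excess is visible at all.

def significance(observed, predicted, rel_uncertainty):
    """Rough size of an excess, in units of the prediction's theoretical
    uncertainty (statistical errors ignored for simplicity)."""
    sigma = predicted * rel_uncertainty
    return (observed - predicted) / sigma

sm_rate = 1000.0                  # hypothetical Standard Model event count
with_new_physics = 1.2 * sm_rate  # the same process, enhanced 20% by new particles

# With a 50% theory uncertainty, the excess is a mere 0.4 sigma -- indistinguishable
# from an imperfect calculation.
print(significance(with_new_physics, sm_rate, 0.50))  # 0.4

# Shrink the theory uncertainty to an assumed 5%, and the same excess becomes
# a striking 4 sigma effect.
print(significance(with_new_physics, sm_rate, 0.05))  # 4.0
```

The deviation from the Standard Model is identical in both cases; only the quality of the theoretical calculation changes, which is why work like BlackHat’s feeds directly into what the experiments can claim to have seen or excluded.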
What is truly remarkable is that today BlackHat and its competitors can do calculations that, as recently as 2005 or so, were thought to lie far beyond the reach of theorists during the entire LHC era. What happened? Well, there was a revolution in calculational techniques… and it has allowed measurements such as that carried out by Incandela and his group to be significantly more precise, in turn allowing us to know much more about what is present, and absent, in the data produced by the LHC.
The revolution, at least as far as BlackHat specifically is concerned, actually had two stages. It started in the 1990s, when various new techniques allowed theorists sometimes to abandon the famous but extremely awkward method of Feynman diagrams for calculations. Many of these techniques were developed by Kosower along with Lance Dixon (professor at the SLAC laboratory outside Stanford University, where I was a graduate student), Zvi Bern (now a professor at the University of California at Los Angeles [UCLA]) and David Dunbar (now a professor at Swansea University in the UK).
But there was another key advance that occurred in the middle of the last decade. If you ask Bern, Kosower or Dixon, which you can do in part by reading their Scientific American article on BlackHat from May, 2012, they will tell you that one of the key developments was a technique called “on-shell recursion”. What this term means is very technical. Where it comes from is fascinating.
Here’s a slide from one of Bern’s recent talks, given in 2011 at the Institute for Nuclear Theory at the University of Washington.
You see that he refers to this technique as “a very general machinery” whose “power comes from [its] generality”. And he cites a paper, whose authors are Ruth Britto, Freddy Cachazo, Bo Feng and Ed Witten — three young people collaborating with the famed string theory grandmaster, winner of the Milner Prize and the Fields Medal, among many other awards. (The youngsters were then postdoctoral researchers working mainly on string theory at the Institute for Advanced Study in Princeton, where Witten and three other Milner prize-winners are faculty. Today Cachazo is a professor at the Perimeter Institute, Britto is a professor at Saclay, and Feng is a professor at Zhejiang University in China. Sadly, none of them took jobs in the United States.)
And Bern goes on: here’s a later slide, where he lists some key developments.
Second among them is the “realization of the remarkable power of complex momenta in generalized cuts” in another paper of Britto, Cachazo and Feng, which draws its “Inspiration from Witten’s twistor string paper.”
Bern is referring to Witten’s 2003 paper entitled “Perturbative Gauge Theory As a String Theory in Twistor Space.” What is that all about?
“Perturbative Gauge Theory” refers to a class of quantum field theories of which the Standard Model itself (in the context of the LHC) is an example; Witten was studying a simpler one, the so-called “maximally supersymmetric gauge theory”, which is the spherical cow [i.e. the simplest case: simplistic in some ways, but much easier to study] of perturbative gauge theories. “String Theory” is just what you think it is, and “Twistor Space” — well, whatever it is (and I won’t try to explain it here) it’s not the real world.
It was a remarkable paper, as Witten’s so often have been, drawing together numerous theoretical ideas (including some that played a role in the above-mentioned 1990s calculational revolution) into one unexpected place, and showing how rich their implications could be. But Witten’s paper had nothing directly to do with calculations relevant to the LHC; it had to do with using string theory in a weird and little-studied context to carry out calculations in the vastly simpler case of the maximally supersymmetric gauge theory. The whole thing lies, it would seem, infinitely far from experiments.
Yet the results Witten obtained led to the Britto, Cachazo, and Feng papers, including the one written with Witten himself, which managed to pull from the string theory developments some key insights that were needed for more general calculations. From there we follow the results to BlackHat, whose leaders all did some amount of string theory early in their careers but who turned by choice to practical calculations and invented many new techniques themselves. They’re exactly the sort of people you’d expect to dismiss these efforts by string theorists as naive. But no. They credit Britto et al. prominently for a key insight that makes BlackHat possible. And finally we arrive back at the dinner table, with me listening to Joe Incandela, who, fresh from the completion of the CMS paper on the observation of the Higgs-like particle, is praising BlackHat for its contribution to searches for new phenomena in CMS data. It’s only a few steps from Incandela to Witten — from experiment to the most apparently-useless edge of string theory.
So it’s unfair to Witten, when he is given a prize for some reason or other, to denigrate his theoretical work as something that cannot be tested experimentally — as though being “testable” were the only possible criterion for determining whether theoretical work has value for physics.
Here’s a theory that’s false as a theory of the world: a theory of only gluons, the particles that are associated with the strong nuclear force in the same way that photons are associated with the electromagnetic force. (This theory is often called “pure Yang-Mills theory.”) Obviously it does not apply to reality; the proton is made from quarks and antiquarks and gluons, not from gluons alone. Yet the study of this theory, both numerically using computers and conceptually, has given significant insight into how non-perturbative gauge theory works. Since non-perturbative gauge theory is what forms protons and neutrons, these insights have been helpful as a step along a longer road to develop better calculational tools in the non-perturbative context. So even though pure Yang-Mills theory is not a theory of nature, we nevertheless find it useful to study it very carefully.
The same standards should be applied to string theory. It may or may not be, as its hype-sters suggest, the “theory of everything” (a glib moniker that actually means “a single, unified theory of all the elementary forces and particles of nature, including the known particles and forces and also [at least] dark matter, so-called `dark energy’, and quantum gravity”). But that’s hardly the only important problem where string theory has something to contribute. String theory has made a number of important hard problems (in non-perturbative gauge theory, for instance) much easier to solve; it has helped address several long-standing conceptual puzzles in theoretical physics; and it has inspired many new ideas that have had application well outside of string theory. Consequently it has value, perhaps less than its proponents may claim, but certainly more than asserted by its detractors.
Indeed, the most extreme critics of string theory, who would perhaps argue that string theory should have been long ago cast out of theoretical physics, as a mathematical construct with no value for science, are in an increasingly untenable position. Would it really have been a good thing for high-energy physics at the LHC if no physicists had ever worked on twistor strings? This is not the only tough question an absolutist critic would have to answer.
Yet the detractors’ complaints have some merit. Personally, I feel string theory’s possible application to “everything” has been wildly over-promoted; for this purpose, string theory cannot be tested at present, and that situation might continue for a very long time, perhaps centuries. Meanwhile we have too many string theorists teaching at the top U.S. universities, and not enough theorists doing other aspects of high-energy physics, including Standard Model predictions such as those carried out by the BlackHat folks. As a result, far too few particle physics theorists were trained at top universities in the U.S. in recent years, and our theoretical LHC research is now spread very thin. But does this mean that string theory is a total waste of time or that Witten’s work is undeserving of high praise and recognition? I’ve given you just one of many reasons (and not by any means the strongest one) why the answer is clearly no; perhaps I’ll give you other reasons in future articles. (Assuming I survive to write them.)
I have found the war over string theory a very ugly thing to watch, and it’s been constantly unpleasant and professionally very damaging to be stuck in the cross-fire, as I’ve been for well over twenty years. Although both the virulently pro-string and anti-string camps have important and intellectually honest points to make, it seems to me that it’s time for both of them to accept a United Nations-monitored cease-fire. While they’ve been carrying out a destructive war on the field of battle, and fighting for public sympathy, an apolitical and pragmatic process has long been underway, often unseen and unsung, in which a certain peace and mutual appreciation has been established, leaving the belligerents on both sides out of date and out of step with reality. String theory is an essential tool in the toolbox of the theoretical physicist, and it’s here to stay — not because it’s necessarily the theory of “everything,” but because it has proven over the decades to be profoundly and broadly useful.
So on many levels I view it as inappropriate to criticize the Milner prize for Witten on the grounds that Witten’s work on string theory can’t be tested; it is not unusual for critically important theoretical work to lie more than one step away from experiment. Still, don’t those who take those steps, such as Britto, Cachazo and Feng, and the BlackHat folks, deserve more credit than they’re getting? It’s clear that the practical benefit for high-energy physics would have been far greater if Milner had given three million dollars to the BlackHat collaboration, and to others pursuing similar aims. Is Milner reading this? Or maybe someone else with a deep pocket, and perhaps a greater commitment to the actual process of listening to what nature has to say? Ensuring that every drop of information is squeezed from the LHC’s data is arguably the highest priority in high-energy physics right now, and doing so is difficult and personnel-intensive. Private funding supporting the research of those who do the most important calculations could really make a difference.