One of the most prominent theoretical physicists of our time, Professor Joe Polchinski of the University of California, Santa Barbara, who has made lasting contributions to our understanding of quantum field theory, of gravity, and of string theory, gave a couple of talks at the Institute for Advanced Study in Princeton this week. The two presentations manifested a certain amusing (anti-)parallel; the first was on a puzzle that was thought to have been mostly solved 20 years ago, but turns out to have only been partly resolved; the second was related to a puzzle that was thought to have been solved last year, but turns out to have been partly solved over 20 years ago.

*In the middle of all of this, it was announced that Polchinski was one of several people awarded one of these new-fangled Fundamental Physics Prizes that are getting lots of attention — specifically, one of the Frontiers Prizes, if you’re keeping score. You can read about that elsewhere. Here we’ll try to keep our focus on the science.*

Polchinski’s first talk, on Monday, based on work done with Ahmed Almheiri, Don Marolf, and James Sully, addressed a long-standing question regarding the interplay of quantum field theory (the equations we use to describe the behavior of elementary fields and particles) and black holes (objects whose gravity is so strong that even light cannot escape, yet which will evaporate away, by spitting particles off their edges [their “horizons”]). In the 1970s Stephen Hawking, in his work showing that black holes evaporate, argued that quantum mechanics is violated in this process; the information about how the black hole is produced is lost as it evaporates, which is impossible in a standard (“unitary”) quantum theory. But in 1993, Leonard Susskind, Larus Thorlacius and John Uglum argued that the information is not lost, that there is no problem with quantum mechanics, and that an obvious contradiction is avoided because quantum phenomena as seen by an observer falling into the black hole are different — *complementary* — to those seen by an observer who remains outside and far from the black hole. “Complementarity” was a nice idea, and it was bolstered by discoveries in string theory (the field theory/string theory [or “AdS/CFT”] correspondence) that made it apparent that the evaporation of black holes **can** be described by quantum mechanics. This gave strong evidence that Hawking’s original way of thinking couldn’t be right, but still, the field theory/string theory correspondence didn’t really clarify exactly how complementarity would work. And now it turns out that it doesn’t quite work as expected, if at all; Polchinski and friends have shown it leads to a paradox that has even puzzled Susskind. The debate as to what it all means extended for many hours at the IAS, and continued on Tuesday. Clearly the apparent paradoxes at the nexus of black holes and quantum mechanics will be with us for some time to come.

Polchinski’s second talk, on Tuesday, addressed a long-standing problem in quantum field theory: whether there exist quantum field theories with a certain special property. Since the answer is “no”, I won’t trouble you with the details. *(For the experts: the question is whether there exist any unitary scale-invariant field theories, in three spatial and one time dimension, that are not also conformally invariant.)* The claim by Polchinski and his co-authors Markus Luty and Riccardo Rattazzi (famous physicists in their own right) is related to an important advance in the field that I described last year, by Komargodski *(another prize winner)* and Schwimmer. *[Note Added: following a comment below, which I encourage you to read, I am urged to mention Fortin, Grinstein and Stergiou, who first claimed to have discovered scale-invariant theories that are not conformally invariant (going back as far as this paper), and then backed off, instead claiming only that they have discovered conformally invariant theories which have cyclically-varying couplings. There were also changes in the Luty et al. paper in their second version, apparently reflecting ongoing discussion between the two sets of authors. Both 2nd versions appeared on November 9th. I am unclear whether disagreements between the groups remain.]* Among the interesting observations Polchinski made in the talk is that the breakthrough work of Komargodski and Schwimmer was partially presaged by work 20 years ago by Hugh Osborn (also famous), and by Osborn with Ian Jack. Osborn seems to have buried his main result in a long, complex and difficult paper that most people didn’t understand. Well, this kind of thing happens sometimes… In any case, this earlier work deserves some credit, though there’s also much to be said for Komargodski and Schwimmer’s argument, which is more widely applicable and, importantly, is much easier to understand both technically and conceptually.
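For the experts, the distinction at stake can be phrased in terms of the trace of the stress tensor (a standard textbook statement, summarized in my own notation rather than taken from the talk): scale invariance requires only that the trace be the divergence of a “virial current,” while conformal invariance requires that the trace vanish, possibly after an “improvement” of the stress tensor.

```latex
% Scale invariance: the trace of the stress tensor is a total derivative
% of some "virial current" V^mu:
T^{\mu}{}_{\mu} \;=\; \partial_{\mu} V^{\mu}

% Conformal invariance: the trace vanishes (possibly after improving T):
T^{\mu}{}_{\mu} \;=\; 0
```

The question of the talk is whether, in four dimensions and with unitarity, the first condition secretly always implies the second.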

In attendance were a remarkable number of the world’s experts, along with a healthy number of young experts-in-training. After the talks they clustered in groups and engaged in lengthy conversations, trying to interpret the more confusing elements. For me it was a bit like old times; Monday’s audience included four of my colleagues from my early years as a Rutgers University postdoctoral researcher (in the mid-90s) and many people who were then in Princeton. I was bemused that the conversations concerning black holes seemed as confused and confusing now as they were two decades ago. Sometimes progress is made in a slowly ascending spiral, requiring a visit to old territory before further advances can be made.

## 26 Responses

I think the information loss problem is thrown into starker relief if you think almost in more classical terms of high-frequency radiation entering a system – higher frequency = more information storage capacity in a smaller space. After being filtered through chemical/biological systems etc., a small number of high-energy photons are converted to a larger number of low-frequency photons, so the information is “smeared out”, kind of like a holographic plate being expanded (or red-shifted). The info is still there; it’s just a matter of whether you can get it, since any information processing procedure requires entropic gradients to make it work – that’s my take anyway.

Dear Matt,

The question of whether scale invariance implies conformal invariance was not settled by Osborn and collaborators, nor by Komargodski and Schwimmer. It was not settled by Luty, Polchinski and Rattazzi, nor by Fortin, Grinstein and Stergiou, as can be seen from the first versions of their respective preprints. The question was settled by both Luty, Polchinski and Rattazzi and Fortin, Grinstein and Stergiou, as can be seen from the second versions of their respective preprints.

The proof of the strong version of the c-theorem (at weak coupling) is due to Osborn and collaborators while the proof of the weak version of the c-theorem (non-perturbative) is due to Komargodski and Schwimmer.

The understanding that non-conformal scale-invariant theories live on RG recurrent behaviors of a specific type of beta-functions is due to Fortin, Grinstein and Stergiou and this discovery allowed for the proof that scale implies conformal invariance by both Fortin, Grinstein and Stergiou as well as Luty, Polchinski and Rattazzi.

Thank you for this clarification.

Dear Matt,

Note that the new CFTs found by Fortin, Grinstein and Stergiou behave in the same way as usual CFTs; the cyclic behavior is not physical for any CFT (it has physical implications only for non-conformal scale-invariant theories).

The new CFTs were at first thought to be only scale-invariant due to some subtleties. These subtleties were already described by Osborn and collaborators, but their work was not recognized.

In version 2 of the most recent Fortin, Grinstein and Stergiou paper, the abstract states “we study some properties of these new, `cyclic’ CFTs”. But if the cyclicity is “not physical”, then what is “new” about these theories? And why call them cyclic if there is no physical consequence to the cyclicity? Is this just due to a cyclic redefinition of fields as a function of scale?

First let me make a distinction between two beta-functions, the usual beta-functions which are usually denoted by beta and the “new” beta-functions denoted by B.

Osborn, and Jack and Osborn, already found that the condition for conformality is B = 0, not beta = 0 as people usually assume.

Until the work of Fortin, Grinstein and Stergiou, all CFTs were fixed points of beta, i.e. beta = 0. Fortin, Grinstein and Stergiou found CFTs which are fixed points of B, i.e. B = 0, and in that sense they are new CFTs.

As also discovered by them, fixed points of beta are fixed points of B (and thus honest CFTs) but the converse is not necessarily true. Fixed points of B which are not fixed points of beta correspond to RG recurrent behaviors in terms of the usual beta-functions beta (not B, obviously), hence the name “cyclic CFTs” for these new CFTs. Because of this unexpected property they were first thought to be non-conformal scale-invariant theories (see the work of Fortin, Grinstein and Stergiou for the connection between non-conformal scale-invariant theories and RG recurrent behaviors).
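A schematic summary of the distinction above, in my own hedged notation (see Jack–Osborn and the papers under discussion for the precise definitions): the “new” beta-functions B differ from the usual ones by a term generated by an element S of the flavor symmetry algebra acting on the couplings g.

```latex
% "New" beta-functions (schematic; S is an element of the flavor
% symmetry algebra, g^I are the couplings):
B^{I} \;=\; \beta^{I} \;-\; (S\,g)^{I}

% Conformality requires B^I = 0, not beta^I = 0. At a point with
% B^I = 0 but beta^I \neq 0, the running of the couplings is a pure
% flavor rotation:
\mu \frac{d g^{I}}{d \mu} \;=\; \beta^{I} \;=\; (S\,g)^{I}
```

Since a flavor rotation of the couplings can be undone by a scale-dependent field redefinition, the trajectory is a closed orbit in coupling space (hence “cyclic”) with no cyclic imprint on correlation functions, consistent with what is said below.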

As you point out the cyclicity can be seen as a cyclic redefinition of fields as a function of scale (this point is tricky when discussing non-conformal scale-invariant theories though, if they were to exist).

However, the cyclic behavior is not physical in the sense that correlation functions do not exhibit the cyclic behavior (this could happen for non-conformal scale-invariant theories however). For example, both usual CFTs and cyclic CFTs have two-point correlation functions with normal power law behaviors.

Nevertheless, the cyclic CFTs exhibit a new operator in their spectrum which is always trivial in usual CFTs, and that also makes them new.

From your post it seems you attended the talks, so it is amazing that the contributions of Fortin, Grinstein and Stergiou were not even mentioned.

> do you have any resources I could read on it, or just a simple explanation of it?

This is a place to start, though not a place to end:

http://en.wikipedia.org/wiki/Black_hole_information_paradox

Many thanks. My few vague questions are now many more precise questions. Such is the nature of progress.

I should also mention that Susskind wrote a popular book on the subject, fancifully titled

*The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics*.

Here is a funny, entertaining and educational video lecture from Lenny Susskind about the black hole war:

http://www.youtube.com/watch?v=pf0D8A0jRiY

Right. And a lot of that work was based on the notion of complementarity — which is now in serious question. Which doesn’t mean Hawking was right all along; it would appear, for the moment, that they are probably both wrong.

The real problem is that complementarity was always a good idea, but it was never actually shown to be correct. And it was never shown exactly where Hawking’s calculation goes awry. The technical difficulty of resolving these problems is enormous.

What we know is that in the field theory/string theory (“AdS/CFT”) correspondence, quantum mechanics (in particular, in its implications for probability) is fine, at least for distant observers, and also, there are black holes. Hawking would have said otherwise. So it seems Hawking is wrong, but what is right is even less clear than before.

I must second the question posed by Gastón E. Nusimovich; indeed, let me be more specific.

Firstly, what IS information? It sounds like one of those words that has a far more precise meaning than the public use, and I should like to get my thinking straight regarding it.

Second, I assume that there is some sort of conservation law, akin to that for energy that means information can never be destroyed, and this is why the problem exists, do you have any resources I could read on it, or just a simple explanation of it?

I’m not an expert in either classical entropy or quantum entropy, but as far as I understand all this, it is not per se about a conservation law; it is more that entropy, in some ways, behaves like a transformation on information, like, say, encryption or lossless compression.

In that sense, natural processes are irreversible (entropy does not decrease: it either stays the same or increases after a natural process happens; mostly, it will increase).

The information of the system affected by entropy might get shuffled and mangled in many “obscure” ways, but it is never lost.

If that were to be the case (and we don’t know that for sure!), information of the system “black hole” should not be lost just because it (the black hole) evaporates.

Kind regards, GEN

Curiouser and curiouser. I’ve always thought increasing entropy destroyed information; a building is an ordered structure made of many materials in a complex arrangement, yet it breaks down, if not maintained, into a pile of rubble that could have been an almost endless number of things. Or is this an example of what you were saying? I do remember being told that the information content of a message is greater if it can be one of a number of messages as opposed to just one, which would suggest white noise has the highest information content.
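To make that last point concrete (a generic textbook illustration, not tied to anything specific in this thread): the Shannon entropy of a message source is maximized when all messages are equally likely, which is the technical sense in which white noise carries the most information capacity. A minimal sketch:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum p_i * log2(p_i), skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A source that always emits the same message carries no information.
certain = shannon_entropy([1.0])           # 0.0 bits

# A fair coin: one bit per symbol.
coin = shannon_entropy([0.5, 0.5])         # 1.0 bit

# Uniform over 8 messages ("white noise" on a small alphabet): 3 bits,
# the maximum possible for 8 outcomes.
uniform8 = shannon_entropy([1/8] * 8)      # 3.0 bits

# Any biased source over the same 8 outcomes falls below that maximum.
biased = shannon_entropy([0.7, 0.1, 0.1, 0.05, 0.03, 0.01, 0.005, 0.005])
```

So the intuition in the comment is right in this technical sense: the less predictable the source, the higher its entropy.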

The definition of information is something I do not understand well enough to convey accurately. Maybe an expert reader can give it a shot.

However, there is a very definite meaning to the information-loss problem.

According to Hawking’s calculation, if you form a black hole of mass M by throwing a big pile of elephants together, a large amount of dark matter, huge assemblages of planets, or a giant star, in the long run what will come out of the black hole as it evaporates away to nothing is radiation that is the same as you would get from a hot boring object, or “black-body” (http://hyperphysics.phy-astr.gsu.edu/hbase/mod6.html), characterized only by the late history of the black hole. All information about what was used to build the black hole is lost in the black hole’s singularity, a place [actually a time!] of extreme conditions deep inside the black hole, beyond the black hole’s horizon (which is the point of no return, and is believed to be a place of non-extreme conditions). [The details here depend on the exact type of black hole; here I refer to the case of the simplest, non-rotating, chargeless black hole.] No matter how you define information, it is clear that information you had at the start is completely gone at the end.

The quantum mechanics language for this is that a state with definite properties (a “pure state”), the one that is used to make the black hole, evolves over time into a state with indefinite properties (a “mixed state”) such as describes thermal black-body radiation. The initial state has zero entropy; the final state has non-zero entropy. This cannot happen in quantum mechanics because of unitarity, or probability conservation [that the sum of the probabilities of all possible outcomes is always 1, at all times]. If you start with zero entropy and you don’t lose information about the state somehow, then you end with zero entropy. Somehow, entropy is created in black hole formation and decay (according to Hawking), and therefore information of some sort has been lost. Where? How? Or is Hawking wrong?

Yet in AdS/CFT it appears that quantum mechanics can indeed describe the formation and decay of a black hole… in which case no entropy is created and no information is lost (though it may certainly be scrambled and hard to reconstruct!)
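The pure-vs-mixed distinction can be made numerically concrete (a standard illustration, not part of the original discussion): the von Neumann entropy S = −Tr(ρ ln ρ) is exactly zero for a pure state and positive for a mixed one, and unitary evolution can never change it. A sketch:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), computed from the eigenvalues of the density matrix."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 * ln 0 -> 0
    return float(-np.sum(evals * np.log(evals)))

# A pure state |psi> = (|0> + |1>)/sqrt(2): rho = |psi><psi| has entropy zero.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(psi, psi)

# A maximally mixed one-qubit state (a toy stand-in for thermal radiation):
# rho = I/2, with entropy ln 2.
rho_mixed = 0.5 * np.eye(2)

S_pure = von_neumann_entropy(rho_pure)    # ~ 0
S_mixed = von_neumann_entropy(rho_mixed)  # ~ ln 2 ≈ 0.693

# Unitary evolution preserves eigenvalues of rho, so S(U rho U†) = S(rho);
# this is why "pure evolves to mixed" would violate quantum mechanics.
```

The black hole version of the puzzle is exactly this: Hawking’s calculation takes a zero-entropy initial state to a nonzero-entropy final state.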

The real problem here actually isn’t the singularity, but the structure of the spacetime of a black hole, and the fact that the black hole evaporates away. The singularity is so far from the horizon that even if some fancy physics makes the singularity not so extreme, it remains extremely difficult to imagine how that new fancy physics can fix quantum mechanics, so that, when the black hole evaporates, the information about how it was produced can come back out, making it possible that the final state is a pure state, containing all the information about the pure state from which it was formed, so that no entropy is produced. And no place else in the black hole (including the horizon) appears to be so extreme that any new fancy physics is needed. Hence the paradox.

Right. The problem is neatly defined for me now.

I have a question then that may help me frame this. If I have a particle and antiparticle and they meet and become two photons, where is the information about the two particles I started with? If the two initial particles had sufficient speed (and thus energy), then even lightweight particles like electrons would produce two photons that could also have been produced by a number of other colliding particles; we would only be able to determine the energy of the two initial particles and not their identity.

This must obviously be wrong, so where is the error in my reasoning?

Here are some arguments saying that the mixed state density matrix can be corrected by an invisibly small amount (due to QG effects) such that the observed Hawking radiation is pure again for a pure initial state:

http://motls.blogspot.com/2012/12/hawking-radiation-pure-and-thermal.html?m=1

Matt: I have two questions regarding black holes.

Q1) Why does a black hole’s entropy correspond to its surface area at the Schwarzschild radius? (i.e., the larger the surface area, the larger the entropy.)

I read that a black hole cannot split into two because the split would reduce its entropy (= surface area). However, it can still evaporate and reduce its entropy, because the evaporation (radiation) process dumps the entropy into space. Is this true? How/why does the Hawking radiation have much higher entropy than the not-yet-evaporated black hole? How can you calculate the total entropy of a (small) black hole, and the total entropy after it has completely evaporated? Can you use the Boltzmann formula to compute them?

Q2) I read that a spinning black hole looks asymmetric due to a relativistic effect. Does this mean that an electron also looks asymmetric due to its (intrinsic) spin? Is the relativistic effect applicable to any system/object/particle?
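For reference on Q1, the standard formulas (not from the talk) are the Bekenstein–Hawking entropy, which is proportional to the horizon area, and the Schwarzschild radius, which makes that area grow like M²; together they explain why splitting a hole in two would lower the total entropy:

```latex
% Bekenstein-Hawking entropy and Schwarzschild horizon area:
S_{BH} \;=\; \frac{k\, c^{3} A}{4 G \hbar},
\qquad
A \;=\; 4\pi r_{s}^{2}, \quad r_{s} \;=\; \frac{2 G M}{c^{2}}
\;\;\Rightarrow\;\; S_{BH} \propto M^{2}

% Splitting a hole of mass M into two holes of mass M/2 would give total
% entropy proportional to 2\,(M/2)^{2} = M^{2}/2 < M^{2}, so the split
% is entropically forbidden.
```

Evaporation is consistent with the second law because the emitted Hawking radiation carries more entropy than the hole it came from, so the total entropy of hole plus radiation still goes up even as the hole’s own entropy shrinks.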

I have a question regarding these subjects:

What kind of experimental data supports all these arguments and counter-arguments regarding information theory (Shannon entropy) and black hole evaporation?

Nice article (I’m still evaluating the links now) 🙂

I hope the comments will stay nice and reasonable, since some really dangerous s-words are mentioned in the article …

So can one say all scale invariant theories have to be conformal invariant too?

Does this also hold for classical field theories?

At Physics SE, we had some discussion about the firewalls and black holes too:

http://physics.stackexchange.com/q/38005

(However, I can no longer honestly recommend taking part in Physics SE, since powerful people responsible for running the whole Stack Exchange network (which consists of many sites for different topics) interfere too negatively and too strongly with the daily business of the physics community there …)

> So can one say all scale invariant theories have to be conformal invariant too?

The dimensionality of the system is important: this statement seems not to hold in 4−epsilon dimensions, and Matt was careful to specify “unitary scale-invariant field theories, in three spatial and one time dimension”. I’m more familiar with the work by Fortin, Grinstein and Stergiou (such as arXiv:1202.4757 and arXiv:1208.3674) than with that by Luty, Polchinski and Rattazzi, but I have not actually looked carefully at any of these papers.