*For More Advanced Non-Experts*

*[This is part 6 of a series, which begins here.]*

I’ve explained in earlier posts how we can calculate many things in the quantum field theory that is known as the “Standard Model” of particle physics, itself an amalgam of three simpler quantum field theories.

When forces are “weak”, in the technical sense, calculations can generally be done by a method of successive approximation (called “perturbation theory”). When forces are very “strong”, however, this method doesn’t work. Specifically, for processes involving the strong nuclear force, in which the distances involved are larger than a proton and the energies smaller than the mass-energy of a proton, some other method is needed. (See Figure 1 of Part 5.)

One class of methods involves directly simulating, using a computer, the behavior of the quantum field theory equations for the strong nuclear force. More precisely, we simulate a simplified version of the real world: the imaginary world shown in Figure 1 below, where

- the weak nuclear force and the electromagnetic force are turned off,
- the electron, muon, tau, neutrinos, W, Z and Higgs particles are ignored, and
- the three heavier types of quarks are also ignored.

(See Figure 4 of Part 4 for more details.) This makes the calculations a lot simpler. And their results allow us, for instance, to understand why quarks and anti-quarks and gluons form the more complex particles called hadrons, of which protons and neutrons are just a couple of examples. Unfortunately, computer simulations are still nowhere near powerful enough for the calculation of some of the most interesting processes in nature… and won’t be for a long time.

Another method I mentioned involves the use of an effective quantum field theory which describes the “objects” that the original theory produces at low energy. But that only works if you know what those objects are; in the real world [and the similar imaginary world of Figure 1] we know from experiment that those objects are pions and other low-mass hadrons, but for a generic quantum field theory we don’t know what the corresponding objects are.

This brings us to today’s story. Our success with the Standard Model might give you the impression that we basically understand quantum field theory and how to make predictions using it, with a few exceptions. But this would be far, far from the truth. As far as we can tell, much (if not most) of quantum field theory remains deeply mysterious.

Is this merely an academic problem of no interest for the real world? That is partly a question of whether any of these poorly understood quantum field theories actually is playing a role in particle physics… which is something we may not know until we complete our study of nature many centuries hence. But at a certain level, we already know the answer is “no, it’s not academic”. Theories of a similar type have a role to play in “condensed matter” physics, and if we understood quantum field theory thoroughly, it might allow for a number of the most difficult puzzles in that subject to be resolved.

To illustrate some of the issues, I’m going to give you some examples of imaginary worlds described by quantum field theories that don’t appear in the Standard Model, and tell you something of what we do and don’t know about them.

**Modifying the Strong Nuclear Force: Additional Low-Mass Quarks**

First, let’s talk about a case very similar to the imaginary world of Figure 1. The only difference is that instead of having three “flavors” of low-mass quarks — up, down and strange — imagine a world that has eight. Or seven. Or nine. What happens in such a world? What do the equations of the corresponding quantum field theory predict?

Does this imaginary world behave much like the real world, with proton-like and neutron-like objects, and pion-like objects that become massless when the quarks are massless? Or does this theory have the property that, when the quarks are massless, it is scale-invariant, a feature I described in the second post in this series? As the quark masses are reduced to zero, does the proton mass in such a world remain substantial, or does it drop to zero too? Are there other features that are very different from the real world? Is there a simple effective quantum field theory that describes how this world behaves at long distances and low energies?

*We don’t know.*

We can probably figure this case out, perhaps in the next few years, using computer simulations; armed with recent advances in both computer technology and in simulation techniques, people are already trying. We can certainly simulate the case where the quark masses are relatively big, and then, step by step, lower the quark masses. Each step makes the calculation harder and more unstable, but we can gain insights into what is happening as the quark masses go to zero by looking at how the theory changes — and in particular, how the masses of the hadrons change — when the quark masses are gradually reduced. Some of my friends with very big computers are working hard to figure this out, but despite 15 years of effort, we still don’t have clear answers… yet.

More generally, we can consider similar imaginary worlds where the number of colors is any integer N > 1, and where the number of types of quarks (with N colors each) is any integer Q > 0. The equations for all these theories are similar, but their behavior can be quite different. For a large class of these imaginary worlds (typically those with N>3 and Q between about 2N and 4N) almost nothing is known about their behavior. Except for N=2 and 3, there have been (to my knowledge) no attempts to run computer simulations. So an enormous amount remains unknown, though potentially knowable, at least in part.
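For readers who like formulas: the boundary of the weakly coupled regime can at least be located perturbatively. Below is a minimal sketch using the standard one- and two-loop beta-function coefficients for SU(N) gauge theory with Q quark flavors; note that the rough “between about 2N and 4N” window quoted above is a nonperturbative guess, not an output of this code.

```python
from fractions import Fraction

def b0(N, Q):
    """One-loop beta coefficient for SU(N) with Q fundamental quark flavors."""
    return Fraction(11 * N, 3) - Fraction(2 * Q, 3)

def b1(N, Q):
    """Two-loop beta coefficient (same conventions)."""
    return Fraction(34 * N**2, 3) - Q * (Fraction(13 * N, 3) - Fraction(1, N))

def af_bound(N):
    """Asymptotic freedom is lost (b0 <= 0) once Q exceeds 11N/2."""
    return Fraction(11 * N, 2)

def banks_zaks_edge(N):
    """Q at which b1 changes sign: for Q between this value and 11N/2,
    perturbation theory predicts a weakly coupled IR fixed point."""
    return Fraction(34 * N**3, 13 * N**2 - 3)

# The case discussed in the text: N = 3 colors.
print(af_bound(3))                # 33/2: beyond 16 flavors, asymptotic freedom is gone
print(float(banks_zaks_edge(3)))  # ≈ 8.05: Q = 8 or 9 sits right at the perturbative edge
```

That Q = 8 lands so close to the edge of the perturbative conformal window is one way to see why the simulations mentioned above are delicate: the theory is near-conformal, and its behavior changes slowly with scale.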

**Modifying the Strong Nuclear Force: A Chiral Theory**

Here’s another imaginary world to think about. This one has something a little different. As I discussed in this article, quarks (as well as electrons and other matter particles in the Standard Model) are really made from two half-quarks, which I named quark-right and quark-left *(the reason for the names is technical)*. So we can consider a theory that has seven quarks-left which carry color (and their anti-quarks-right that carry anti-color), and it has a single half-“sextet”-right which carries two colors (and an anti-sextet-left that carries two anti-colors). It’s not obvious that this makes sense, but… trust me. This is called a “chiral” theory because the objects that carry color aren’t the mirror image of the things that carry anti-color.

The key feature of such a theory is that these half-quarks and half-sextet particles *can’t be given a mass*; to give a mass to a quark-left, you need a quark-right, and similarly a sextet-right needs a sextet-left. And in this world, these partners are missing. So these particles have to remain massless.
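Why seven quarks-left and one sextet-right, rather than some other combination? A consistency condition (gauge anomaly cancellation) fixes the count. Here is a small sketch of that bookkeeping; the coefficient N + 4 for the two-index symmetric representation is a standard result, and the function and model names are my own illustrative choices.

```python
def anomaly(rep, N):
    """Cubic gauge anomaly coefficient of a left-handed fermion in a
    representation of SU(N), in units where the fundamental is +1.
    A 'bar_' prefix denotes the conjugate representation (opposite sign)."""
    base = {"fund": 1, "sym2": N + 4, "antisym2": N - 4}
    if rep.startswith("bar_"):
        return -base[rep[4:]]
    return base[rep]

def total_anomaly(model, N):
    """Sum over (multiplicity, representation) pairs; must vanish for consistency."""
    return sum(mult * anomaly(rep, N) for mult, rep in model)

# The chiral model from the text, written entirely in left-handed language:
# seven left-handed color triplets, plus one right-handed sextet
# (equivalent to one left-handed anti-sextet).
N = 3
model = [(7, "fund"), (1, "bar_sym2")]
print(total_anomaly(model, N))  # 0: the gauge anomaly cancels, so the theory makes sense
```

For N = 3 the sextet contributes N + 4 = 7 units of anomaly, which is exactly why seven quarks-left are needed; the same pattern (N + 4 fundamentals plus one conjugate two-index symmetric) works for any N.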

How does this imaginary world behave? I don’t know, and worse, *I don’t know how to find out*. At sufficiently low energy and long distance, successive approximation won’t work in this imaginary world, any more than it works for the real world’s strong nuclear force. And computers can’t be used to do calculations, because the calculational techniques currently available simply don’t work when quarks (or other spin-1/2 particles) are massless. And finally, no one knows how to guess a suitable effective quantum field theory to describe the long-distance and low-energy behavior of this imaginary world; neither experiment nor computer simulation is available to give us a hint. It’s not even clear that an effective quantum field theory with nicely behaved equations exists at all. (We’ll see soon that there are cases where we know no such theory exists.)

In short, even though the equations of this quantum field theory are known and look very similar to the ones that describe the strong nuclear force in the real world, the physical phenomena that result from those equations are a mystery. This is not unusual. *For the majority of quantum field theories for which the equations are known, this is the situation!*

**The Generic Situation: Theories with Unknown Equations**

And then… there are the quantum field theories for which we don’t even know any useful equations, or any way to characterize them by the types of fields and particles that they have. We know (or perhaps suspect) these quantum field theories exist, but our knowledge is indirect, and the standard types of equations used in quantum field theory don’t work for them.

[For some of these quantum field theories, we do know some relevant equations that could *in principle*, with careful adjustment, be used. But often these equations are as hard as the ones described in the previous section, for which neither computer simulation nor successive approximation can work, so *in practice* the careful adjustment can’t be done, and no calculations are possible.]

How, you may rightly wonder, could we possibly even *know* that there are quantum field theories for which no useful equations can currently be written down? That is a story I’ll tell soon. For today, suffice it to say that we have no idea whether this set of theories is a tiny minority of quantum field theories, or the overwhelming majority. There simply aren’t any techniques to tell us the answer. Indications are that they are very common. Still, without the equations, how can we hope to learn anything about them?

This is one place where **supersymmetry** — not in its potentially direct real-world application, as a means by which to address the naturalness problem of the Standard Model, but rather as a tool for studying the properties of otherwise incomprehensible imaginary worlds — comes into play.

*To Be Continued…*

Here I may say that it is appropriate that I ask: would you please clarify what is knowable and what is unknowable in this whole situation?

Because quantum field theories are precisely specified mathematical structures, everything about them that could be reflected in a physical experiment is in principle knowable.

Nothing in a quantum field theory that corresponds to something that can be measured is in principle unknowable. The problem is a lack of practical tools.

You wrote: “When forces are “weak”, in the technical sense, calculations can generally be done by a method of successive approximation (called “perturbation theory”). When forces are very “strong”, however, this method doesn’t work. Specifically, for processes involving the strong nuclear force, in which the distances involved are larger than a proton and the energies smaller than the mass-energy of a proton, some other method is needed.”

It is funny, but the usual electromagnetic field is strong, especially its low-frequency part, and perturbation theory does not work (IR catastrophe). It does not work in the sense that truncating the series fails. We need to sum up all soft terms, without truncation. Summing up all soft terms of a series means obtaining a non-perturbative solution and using it as another, more physical, initial approximation in the total series. However, in strong-force theories we follow the same way of switching the interaction on (a gauge principle) as if it were weak. No wonder we have calculation problems here too.

The two problems are different, though it is easy to confuse them the first time you encounter them. Really this discussion belongs in Part 4 of this series.

The first one is a pure infrared effect, one that means you have to be careful to ask physical questions when you use perturbation theory. The forces are not strong; but it is possible to ask nonsensical questions about the physics, by trying to count the number of massless photons. You will certainly fail. If you choose the right, physically meaningful questions, which may require reorganizing the perturbation calculation somewhat, you will still get good answers if the forces are “weak”. That’s why perturbation theory works so well for QED — at ALL accessible energies.

The second one is an effect that affects perturbation theory even if you *do* ask sensible, physical questions. It’s completely distinct from the first one; it comes from the forces really and truly becoming “strong”. That’s why perturbation theory does not work at all for QCD at energies below 1 GeV, no matter what physical question you choose to ask.

I disagree, because a strong perturbation in the IR is the same as a strong perturbation when the coupling is large. In the IR the coupling is a product of the small alpha and a large dimensionless factor depending on the IR photon frequency. In both cases the partial sums do not help calculate, and summation to the end is needed. In the IR problem we can do it (fortunately) and in the strong-force case we cannot, but the structure of the equations is the same – a poor zeroth-order approximation that has a huge correction (perturbation). It factually means that we do not understand what to start with. Otherwise we would choose a better initial approximation and a really small perturbation.

Sorry; you’re talking to one of the world’s experts here, and you are completely and profoundly mistaken. You are confusing a specific feature of the strong nuclear force [that it becomes strong in the infrared and weak in the ultraviolet] with a general one, and you are revealing a profound misunderstanding of strong dynamics in quantum field theory. For instance, in a scale-invariant strongly interacting field theory, the problems are just as bad in the UV and in the IR. Still other theories are strongly interacting in the UV and free in the IR. Meanwhile the infrared problem you refer to in the context of massless photons is **always** an infrared problem, associated with massless particles. These are basic issues of quantum field theory that every graduate student who passes through my course learns thoroughly over a period of a few weeks. I suggest you reconsider your understanding of quantum field theory very carefully.

Sorry, Matt, but mistaken are you. You speak of massless photons (thresholdless excitations) and nonsensical questions as if it helped to write down a better initial approximation. No, it does not. You start from the same poor initial approximation; that is why the perturbative corrections are big. And do not say that perturbation theory works in QED – perturbation theory means, first of all, usefulness of a truncated series. The IR problem makes us sum up all the terms just because the truncated version is useless (wrong, does not work). Try to understand me – we are practically obliged to use another initial approximation for the strong (IR) part of the interaction in QED. The coupling constant is contained in this approximation in a non-perturbative way; you just did not think of it this way.

Yes, yes — you don’t need to lecture me on resummation, I teach all that in class too. We’re not disagreeing about QED, just about how to express it — you’re focused on the technicality of resummation, I’m making the conceptual point that perturbation theory works, though it does require you first do the resummation so that you’re asking something physical. This is all rather technical — important technical points with physical meaning, of course, but far beyond the scope of this website.

We’re disagreeing about whether what happens in QED is similar to what happens in QCD. It’s not. What happens in QCD is that you can do lots of resummation, but it doesn’t help at all, because the effects of strong forces CANNOT be resummed, for deep conceptual reasons. The sources of the issues in QED and QCD have some overlap, but the main issue in QCD has a completely different source from the thing you’re so excited about in QED. And that’s why (in this non-technical website, where I certainly can’t go around pointing out that there are multiple sources of large logarithms in QCD calculations and that you have to distinguish them carefully) I said things a bit more simply than I would if I were talking to experts.

Oh, the joys of physics blogging ;-)

You think that as a leading expert on one of the hardest and most esoteric areas of physics you can give up some of your precious time to “enlightening the masses” and at least get some gratitude, but it turns out that a non-negligible proportion of the “masses” consists of people whose Nobel prizes have been “stolen” (by ‘t Hooft, etc.) and who are bent on enlightening you about that… ;-)

Don’t the practical difficulties with QCD calculations have as much to do with the fact that gluons interact with each other in a way that photons do not, as with the strength of the force per se?

No. If the number of quarks were 20 instead of 3, QCD would be scarcely more difficult than QED.

So the availability of physical experiments actually decides what is unknowable; and of course I mean w.r.t. humans, not the equations as a representation system.

No — the IN PRINCIPLE availability of the experiments. That is to say, imagine people 1 million years in the future with extraordinary technology that lies far beyond anything you and I could imagine; for any experiment they could ever do in the FUTURE, the corresponding calculation in quantum field theory is knowable TODAY — in principle. It may not be known in practice. But the difference between known TODAY and knowable TODAY should be clear; knowable TODAY means there is every reason to expect it WILL be known in the future, while unknowable TODAY means it can never, in principle, be known.

And this applies to imaginary worlds described by quantum field theory as well as the real one.

Please think carefully about this before you shoot off a reply. You’re not being logical.

Then the IN PRINCIPLE unavailability of experiment, even in ten million years, is the limit for being knowable.

My point is: the unknowable is a fact of being limited.

You don’t have any idea how long the universe will last, and you’re confusing yourself just the same way Zeno did with his paradox and others do with trying to count the number of angels on the head of a pin. You can go discuss this with philosophers; as a physicist I won’t discuss this further.

Let me give you an example of the unknowable today; it can never be known in principle how I feel in any particular situation… All subjective sensations and feelings are unknowable to the outsider in principle. Remember, my friend: physics is not the totality of being; physics is but a measurable – with limits – aspect of the phenomenal.

This is *exactly* as I understood your initial question; and in this precise sense, everything about a quantum field theory that corresponds to anything that could possibly, ever, in any imaginable time or place, be measured is KNOWABLE. I answered your question the first time, but you weren’t listening. No more on this, please, or I’ll start deleting your comments.

Matt: Nice article. Questions: (1) What is the motivation behind introducing a number of quarks beyond 6? Is there any success at all?

(2) I thought the reason to introduce left and right quarks was parity violation in the weak interaction. Is this needed in strong-interaction studies by themselves? Can you give a review reference to this stuff? Thanks.

1) The motivation for studying imaginary worlds is always the same: they might not, in fact, turn out to be entirely imaginary, and they can give you insight into aspects of the real world.

2) Left- and right-quarks are an essential part of how quarks (and electrons and all similar spin-1/2 particles) are constructed. The only thing that is required by the weak nuclear force is that the left- and the right-quarks (and electrons) have *different* properties, rather than the same ones.

Any discussion of “chiral symmetry” should address this. Georgi’s book on the weak interactions is one example.

Thanks. So do I understand that the left-right decomposition of quarks is useful in purely strong interactions also? If it is not too technical for this blog, can you briefly explain for what purpose? Of course the answer may be in Georgi’s book!

It’s not just useful; it’s intrinsic to the nature of quarks. In technical speak: if you take quarks with a mass and you reduce the mass to zero, you discover that quarks break up into two distinct representations of the Lorentz group, one left-handed and one right-handed in chirality; so you should understand quarks with mass as built by tying together two massless half-quarks.

Thanks. I see your point. That would be similar to how the neutrino and anti-neutrino got their handedness.

There are at least 10 elements with a single stable isotope, such as Be-9, F-19 and Na-23, and all have an odd number of hadrons. Does any quantum field theory predict these?

The quantum field theory of the strong nuclear force should predict all of them; that doesn’t mean the prediction is easy, because it is a big step from the strong nuclear force to the effective theory of pions to the full details of nuclear physics.

That answer sounds as if QFT does not know what the three quarks and gluons in hydrogen or a neutron are doing either, as of today. Is that correct? P.S. Thank you for all of your answers.

When it comes to QFT calculations, one soon encounters divergent solutions, both IR & UV. My question is: are these divergences a result of the conceptual foundations of QFT, or of the limited mathematical tool set at the disposal of the theoretical physicist in our present era to describe natural phenomena?

All of the divergences (i.e. infinities) that appear in quantum field theory calculations are artifacts of asking unphysical questions. All physical questions have finite answers. What one is taught, in great detail, in quantum field theory class is how to identify sensible physical questions and organize the calculations so that all divergences cancel, as they must.

That said, the infinities *reflect* different things. Some of them reflect deeply important things about how quantum field theory works, others are simple mathematical details. But that’s a very, very big subject; it takes a full year of quantum field theory class, often more, to go through all the examples.

If you’d ask me, I would answer yes: these divergences are a result of the conceptual foundations (blunders) of classical theory and QFT. I have proofs.

If we can isolate the source of the divergences, then it is possible to build a logically consistent QFT free from trivial solutions. However, as Matt points out, the best way to avoid divergent solutions is first of all not to load trivial inputs into the calculations. Having said that, I’m curious to know 1 or 2 conceptual blunders in QFT that are sources of infinities, from what you are suggesting.

The first (physical) blunder is our (mis)taking an inclusive (average, “macroscopic”) picture for an elastic one, a compound system for an elementary one. If we deal in reality with an electron permanently coupled with its own electromagnetic field (a compound system), we should write our theory correspondingly, not consider a “free” electron and then switch this interaction on. I explained it in simple terms here: http://arxiv.org/abs/1110.3702

A better (more physical) formulation (in particular, a better initial approximation) “solves” both divergence problems at one stroke – they are absent in calculations. And the physics is simple and usual, with no appeal to non-observable and weird stuff.

The second (psychological) blunder is our stubbornly sticking to our current formulation with its obviously nonsensical ideology (bare stuff, renormalizations, nonsensical initial approximation, etc.). This has all been practiced for so long that it seems it has always been, and should be, so. Inventing excuses is not advancing physics.

Richard Feynman was not happy with renormalization techniques and considered them fudge factors whose time would end once an elegant solution is found. I will be reading your paper to find out how effective your solutions are.

Hi Vladimir, using your formulation can you calculate, for example, the equivalent of radiative corrections to the electron magnetic moment, without any renormalization “tricks”?

Hi JR,

“Hi Vladimir, Using your formulation, can you calculate, for example, the equivalent of radiative corrections to the electron magnetic moment, without any renormalization “tricks”?”

Yes, I can, but I have not done it. A similar estimate was already done by T. Welton in his paper Phys. Rev. 74, 1157–1167, Nov. 1948. He obtained the right dimension and value, but the opposite sign, since his estimate was non-relativistic.

I think JR is alluding to the suggestion that you should illustrate the potency of your solution by giving real-world examples. This would certainly attract the attention of peers.

Dear Stuart,

I myself would love to get down to the real calculations, believe me. Unfortunately, I am very down to earth at my present work, which leaves no time/energy for my dream. I am still at the level of the non-relativistic “electronium” (http://arxiv.org/abs/0806.2635). That is why I decided to write a toy model to be understood by anyone.

Would it be possible to use domain wall fermions for studying the model you described (with chiral quarks in the sextet representation)? While computer simulations would be difficult in practice, they wouldn’t be impossible in principle, right (once anomalies are properly cancelled)? And I’m wondering whether an approach like mean field theory might provide some insight, or has this already been tried?

I don’t think this is possible… maybe an expert will set me straight, but my memory is that this cannot be done.

Could you give a reference where the model is described? I’d be interested in looking into the technical details.

Well, this chiral quantum field theory is no different from many others, conceptually; I wouldn’t focus on it specifically. Any similar theory, with fermions to which ordinary masses cannot be added, is worth a similar look.

The state of the art in 2011 is described here http://arxiv.org/pdf/1103.4588v2.pdf . I’ll look it over again. You can see that people have made a lot of progress in the last few years, but I believe some key problems have still not been solved. It is possible that my view is too pessimistic…

Thank you. I’m a bit more optimistic, since I have done some work on this topic. I just couldn’t immediately see what the charge assignments in the sextet are, so as to cancel the anomalies of the other massless fermions. I’ll have to work them out. Many thanks for a wonderful series of articles.

For experts: Domain wall fermions are great for studying global chiral symmetries of vector-like gauge theories such as QCD. In contrast, the model described above is a chiral gauge theory. Chiral gauge theories generically have complex-valued euclidean actions, so the exp[-S] that one wants to identify as a probability distribution in numerical calculations is also complex.

This “phase problem” is like a sign problem on steroids. While there’s no proof that it’s impossible to solve the phase problem through some clever new technique, I don’t think anybody is very optimistic at present.
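The collapse described here can be seen in a toy numerical experiment. This is only an illustrative sketch, not a lattice calculation: observables under a complex action are computed by reweighting, O = ⟨O e^{iθ}⟩ / ⟨e^{iθ}⟩, and the Gaussian phase distribution below is an assumption standing in for the real, correlated, volume-dependent phases.

```python
import random
import cmath

def average_phase(n_samples, phase_width):
    """Estimate |<e^{i theta}>| from n_samples draws of the action's phase theta.
    As phase fluctuations grow, the true average shrinks exponentially toward
    zero, while the per-sample noise stays O(1): the signal drowns in noise."""
    random.seed(0)  # fixed seed so the toy experiment is reproducible
    total = sum(cmath.exp(1j * random.gauss(0.0, phase_width))
                for _ in range(n_samples))
    return abs(total / n_samples)

# Mild phase fluctuations: the average phase is near 1, no problem.
print(average_phase(10000, 0.1))
# Strong fluctuations: the true average is ~e^{-50}, but the estimate is
# pure statistical noise of size ~1/sqrt(n_samples) — exponentially more
# samples would be needed for a meaningful answer.
print(average_phase(10000, 10.0))
```

For a Gaussian phase of width σ the true average is e^{-σ²/2}, so the cost of beating the noise grows exponentially in σ²; in a field theory σ² grows with the spacetime volume, which is why resources run out so fast.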

I’m also not familiar with this model, and curious to know where it comes from. At first I thought it might be something out of arXiv:1309.5948, but I don’t see it after a quick glance through. It sounds almost like the model in arXiv:1310.3653 except that uses the two-index anti-symmetric rep.

PS. While I’m writing, I’ll mention that there is a small community of lattice folks carrying out calculations for SU(N) with N>3 (mostly focused on N<8 since computational costs scale ~N^3). A recent review is arXiv:1309.3638.

Thanks — this confirms what I believed to be true. Seems the ol’ memory still works…

Regarding the PS: For Q=0 there was work by Teper et al. for N>3; but now you’re saying there’s been work for Q>0? Excellent.

There is such a “phase problem” if the anomalies are not cancelled. In the electroweak theory these cancellations occur between the quarks and the leptons, within each family. A chiral gauge theory that does not have such a cancellation mechanism is inconsistent. With domain wall fermions it is possible to realize this cancellation. It just hasn’t been done, since these have been used to study QCD, where there isn’t such an issue, since the theory doesn’t have a local, but a global, chiral symmetry.

Gauge anomalies must cancel in order for the theory to exist. The phase comes from the chiral couplings, not anomalies, and is a problem for any consistent chiral gauge theory. Ginsparg–Wilson (e.g. overlap or domain wall) fermions seem to be a necessary but not sufficient ingredient for trying to tackle these systems head-on.

As far as I’m aware, the state of the art in chiral lattice gauge theory is trying to deal with two-dimensional U(1) models, which sounds trivial but turns out to be bloody tough. arXiv:1211.6947 is a recent example, and arXiv:1003.5896 is a related review that baldly states “we do not yet have a method of approximating an arbitrary chiral gauge theory by latticizing and then simulating it on a computer—even in principle.” The electroweak theory is an obvious target in four dimensions, but I don’t believe folks exploring that have had much luck (I’m looking at hep-lat/9705022 and arXiv:0709.3658).

For N>3 lattice calculations, Q>0 work is indeed underway. Most (but not all) of it employs the quenched approximation, but large-N arguments imply that quenching should be a less-significant problem as N increases.

About quenching: this is surely false, if properly considered. If you hold Q fixed, then increasing N improves the quenched approximation. But the interesting regime is increasing N and increasing Q with Q/N fixed. Then the quenched approximation does not improve, and in particular, in the most interesting regime of all, 2N < Q < 4N, more or less, the quenched approximation is a disaster.

So I would say that no, this does not represent significant effort yet. Which is fine; we should get the answer for N=3, Q=8 first.

I agree. But I’d add that in all the work to date the anisotropy of the gauge couplings hasn’t been taken into account. So it is possible to set out a calculation, but it is quite non-trivial.

So are you saying this is merely a matter of setting up the calculation and doing it? I am confused as to what is possible in practice with current techniques.

It’s a bit more technical than can be presented in a paragraph here, but let me try. Domain wall fermions allow you to define fermions with one chirality on one boundary and the opposite chirality on the other boundary, separated by the extra dimension. If you’ve got a chiral theory, you need to cancel the anomalies on *each* boundary *and* you need the existence of a scaling limit. This means that you need the appropriate charge assignments between *different* fermion species: for each species, separately, there will be a “phase problem”, which will cancel out between the species.

In this way you can decouple the boundaries from the bulk. To obtain this scaling limit, you need to tune the anisotropy between the couplings (along the boundary and in the bulk). This is particularly the case for the 4+1 → 4 dimensional models. This is the part that has not been taken into account in the literature mentioned by Schaich.

Sorry, I should have specified that the quenched calculations consider fixed Q=2 or 2+1, to explore “large-N QCD”. These account for most N>3 lattice studies, and it is indeed the N=4 work exploring near-conformality that has used dynamical fermions (in two-index reps — arXiv:1202.2675 and arXiv:1307.2425). Given (1) how difficult it is to study near-conformal gauge theories just with N=2 & 3, and (2) the extent to which existing N>2 lattice calculations appear to agree with simple large-N scaling arguments, it is not at all clear whether much larger-scale studies would really be justified. The point is not to simulate every single model, but to gain some intuition for the range of possibilities, and one can argue that N<5 suffices for that purpose.

Switching gears: formulating chiral gauge theories on the lattice has been a "holy grail" of the field for decades. It's an infamously hard problem, and was part of the motivation for the original development of the domain wall and overlap fermion formulations. As some of the papers I've already cited discuss, even that technology doesn't suffice to allow practical lattice calculations of chiral gauge theories. I'll add one more review to the pile, arXiv:0912.2560 by David Kaplan, one of the inventors of domain wall fermions: "there is currently no practical way to regulate general nonabelian chiral gauge theories on the lattice. (There has been a lot of papers in this area, however, in the context of domain wall – overlap – Ginsparg-Wilson fermions…) If a solution to putting chiral gauge theories on the lattice proves to be a complicated and not especially enlightening enterprise, then it probably is not worth the effort (unless the LHC finds evidence for a strongly coupled chiral gauge theory!)"

In short, in practice, with current techniques, there seems to be no way to study chiral gauge theories on the lattice. Anybody who is able to develop a robust formulation of chiral lattice gauge theories will thereby become one of the most renowned lattice gauge theorists in the world. So, Stam, if you believe you have a solution, publish it in PRL, not in comments on Matt's blog.

Do you recall if there are any proofs that the theory is stable and theoretically consistent, and can produce “physical” (although, of course, counterfactual) output within particular ranges of N>3 and Q>6? Similarly, do we know of any combinations that just can’t work?

The theories are certainly consistent theoretically for any N and Q, with the subtlety that if Q > (11/2)N, some additional physics is required at exponentially high energy scales. Stability is a little more subtle, but I believe stability is proven. If there were spin-zero fields around, that would be trickier.
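For the curious, the (11/2)N threshold quoted here is the standard one-loop result for an SU(N) gauge theory with Q quark flavors in the fundamental representation: the one-loop beta-function coefficient is b0 = (11N − 2Q)/3, and the coupling is asymptotically free only when b0 > 0, i.e. Q < (11/2)N. A tiny sanity-check script (textbook formula, not anything specific to this thread):

```python
# One-loop beta-function coefficient for SU(N) gauge theory with Q quark
# flavors in the fundamental representation: b0 = (11*N - 2*Q) / 3.
# The coupling grows toward low energy (asymptotic freedom) only if b0 > 0,
# which is exactly the condition Q < (11/2)*N.

def b0(N, Q):
    """One-loop beta-function coefficient (11N - 2Q)/3."""
    return (11 * N - 2 * Q) / 3

def asymptotically_free(N, Q):
    return b0(N, Q) > 0

# Real-world QCD: N=3, Q=6 quark flavors; 6 < 16.5, so asymptotically free.
print(asymptotically_free(3, 6))   # True
# Past the bound: N=3, Q=17 > 16.5, so the coupling shrinks at low energy.
print(asymptotically_free(3, 17))  # False
```

This is why "additional physics at exponentially high energy scales" is needed above the bound: there the coupling instead grows at high energy, so the theory cannot stand on its own to arbitrarily short distances.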

Prof. Strassler,

Once again, kudos to you for educating the public on intricate and difficult topics, such as the ones pertaining to quantum field theories.

Where do you find the energy and motivation to go through all these efforts?

As a non-technical but physics-interested reader I would first like to say thank you. Invalid/philosophical/religious questions are perhaps not the most interesting for your target audience. Amazing website, thank you. Kind regards, Johan

Matt,

I know that you already discussed it in part 2, but can you elaborate more about scale invariance? I would also like to know about conformal symmetry and non-conformal dual theories… And :-) well, maybe that’s enough for one question, but also about the dilaton… Thanks a LOT!

It’s coming, but not immediately.

1. While I’d agree that the path from QCD to condensed matter physics is often unknown and important to figure out, I’m not convinced that it is appropriate or helpful to describe the relationship as “mysterious”. I don’t currently know the relationship between my financial transactions for the year to date and my tax liability for 2013 either, but the fact that it hasn’t been calculated and will take immense time and suffering to do so doesn’t make it mysterious, just unknown. Mystery implies a reason to expect surprising and counterintuitive results, and the likelihood that some of the key clues have not yet been discovered, neither of which is obviously true in the QCD to condensed matter bridge.

2. ” For a large class of these imaginary worlds (typically those with N>3 and Q between about 2N and 4N) almost nothing is known about their behavior. Except for N=2 and 3, there have been (to my knowledge) no attempts to run computer simulations. So an enormous amount remains unknown, though potentially knowable, at least in part.”

I have to say that a lack of knowledge of theories with more QCD colors than exist in real life, and a lack of knowledge of variants of QCD in which, e.g. right handed quarks are absent, both of which are obviously not our physical reality, don’t strike me as a horrible tragedy, any more than our lack of good knowledge of how many angels can fit on a pinhead.

We live in an N=3, Q=6 (but for many practical purposes 5 or 3) world, in which all quarks come in left- and right-handed parity versions. While one can imagine using variants with other numbers as a means-to-an-end tool to approximate N=3, Q=6 (or 5 or 3), that tool doesn’t make sense to use if it is harder to do the math for higher N and Q systems than for lower N and Q systems, as seems to be the case.

3. I do think that there has been a fair amount of theoretical literature regarding N=4 systems, perhaps analytic rather than lattice-simulation based, but still something about which a fair amount is known (mostly trying to shoehorn lepton number as a fourth QCD color in furtherance of GUT theories).

1) I don’t think you understood me. It’s not a QCD-to-condensed-matter bridge I’m talking about. If we understood quantum field theory more thoroughly — specifically, theories NOT like QCD — we’d simply do the computations in the appropriate condensed matter theories, without appeal to QCD.

It turned out in the 1990s that there were many, many surprising things about Quantum Field Theory that we didn’t know in the 70s and 80s. Some of them are currently being explored in the condensed matter physics context (see the work of Subir Sachdev at Harvard, for instance.) We’re still learning new ones.

2) With apologies, I feel you are thinking like a pedestrian here; see the previous paragraph. We have many, many puzzles in particle physics. For all you or I know, the solution to those puzzles will require understanding how quantum field theory works for some N and Q that we don’t currently have a grasp of. So if we restrict ourselves to the N and Q that we have already found in the real world, and never explore other cases, we may miss a key secret of the universe. That’s WHY particle physicists spend time exploring imaginary worlds — because one of them might give us an insight we need in the real one. An example: before we knew there were three generations of quarks, Kobayashi and Maskawa wondered “gosh, what would happen if there were three generations instead of two?” By your logic, they were wasting their time and there was not much value in their effort. Well, they have Nobel prizes now, so …

3. I don’t think anything is known about N=4, 8 < Q < 16 (the boundaries being a bit fuzzy).

I cannot help but wonder why it is so much easier for nature to calculate these things than it is for us, even with big computers.

Either nature is a far superior computer, or it has found a startlingly simple and elegant solution. I’m more inclined to believe it is the latter.

Having said that, nature has a tendency to hide the solution in plain sight, i.e. the truth is so overt that it is covert. IMHO quantum gravity is the key. Gravity is the ultimate gauge field, and in quantum gravity lies the much sought-after chiral lattice gauge theory.

I have worked on a lattice gauge theory in which the gauge boson is a composite graviton that constitutes the lattice cells of the discretized space-time. I call this boson the “nexus graviton”. You can download the paper by following this URL: http://dx.doi.org/10.4236/ijaa.2013.33028

A follow-up is currently in its second month of review at a mainstream journal on particle physics. Here I make corrections to the first paper and expand the scope to include the origins of Milgrom’s acceleration constant.

P.S. In the first paper, 2π is missing in the term k = 2π/r.

Kristoffer, I think this is a very good question, although I would phrase it differently. Pondering this question can take you on a tour through many features of nature and the theories we have about it. It is therefore very hard to give a good answer. I will try to give a (surely inadequate) sketch:

Computers are specially prepared pieces of our universe. We put the computer into a state that corresponds in some meaningful way to information we have, then we let the computer “do its job” according to the laws of nature such that it gets into a state that corresponds to information we want. The question is: why is this process so very inefficient when it comes to calculating in detail the behavior of even the tiniest part of (another piece of) nature? Part of the answer is:

Computers are designed to “hide” some essential features of nature:

Continuity. Nature is (or at least appears, down to unmeasurably small distances and times) continuous. In a computer all the bits are nicely separated and there is only a certain number of them. We need to find good arguments that the finite amount of numbers we can store in a computer, and the finite number of calculation steps, can tell us something about the infinite amount we would need to describe a piece of nature accurately. Also, each bit in a computer is designed to stabilize after each step at ‘0’ or ‘1’. (That’s a big point, see below.)

Parallelism. Nature is “highly parallel”. For example, an electromagnetic field changes at every point in space at the same time, according to some equations. Most parts of a computer are designed to store or transport bits without modifying them. Even if you have a million cores, that’s only a small number of locations where calculations happen, compared to “every point in space”.

Quantum mechanics! All the above pales in comparison to this one. In a vague sense you can see it as a higher order of parallelism: nature “tries every possible combination of everything at once” (see “path integral formulation”). This is much worse than “at every point in space”; it’s more like “for all points in space, all possible combinations of values at these points”! Computers, on the other hand, are designed to stabilize each bit after each step at a well-defined ‘0’ or ‘1’, basically by constantly dumping the vast majority of what goes on inside, in the form of heat, into the environment, such that what remains is nicely organized and observable. This “destroys” the delicate quantum information. Thus a straightforward simulation of a quantum field is hopeless. Only by using ingenious mathematical tricks can the smallest problems be transformed into something manageable by the biggest computers.
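To put a rough number on why straightforward simulation is hopeless: storing the full quantum state of a system requires one complex amplitude per basis state, and the number of basis states grows exponentially with the number of degrees of freedom. A quick illustration (simple counting for two-level systems, assuming 16 bytes per double-precision complex amplitude):

```python
# Memory needed to store the full quantum state of n two-level systems:
# 2^n complex amplitudes, 16 bytes each at double precision.
# The cost doubles with every added degree of freedom, which is why
# brute-force simulation of quantum systems stalls almost immediately.

def state_vector_bytes(n):
    """Bytes required to store the state vector of n two-level systems."""
    return (2 ** n) * 16

for n in (10, 30, 50):
    print(n, state_vector_bytes(n) / 1e9, "GB")
# n = 30 already needs about 17 GB; n = 50 needs about 18 million GB,
# far beyond any classical machine -- yet nature "runs" such systems
# (and vastly larger ones) effortlessly.
```

A quantum field is far worse still: it has (in principle) degrees of freedom at every point in space, which is why the lattice tricks described in the post are needed to get anything manageable at all.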

I strongly disagree with this concept of a computer. Just quote Chomsky and Turing if you want to describe theoretical limits. Just talk about qubits if you believe in the future. Just say that nothing is more powerful than the math used in physics… But don’t mess with performance tuning, ICs, DSPs, GPUs or virtual grids, unless you want to be forgiven just for the other interesting things you say ;-)

In my response I did not mean a computer as a concept. I meant a real conventional (non-quantum) computer as a physical system, as you can buy it off the shelf, or as it is installed in data centers. An interesting thing is that the workings of the transistors that make up the heart of such a computer can only be explained by using quantum mechanics, but at the level at which the computer executes the algorithms we program, it appears as a classical system. That’s what I meant by saying the computer is designed in such a way that it “hides quantum mechanics”, although it is itself of course a quantum mechanical system like everything else. A quantum computer is different in that it is (or will be) designed to take advantage of the fact that it is a quantum mechanical system.

Theoretical computer science of course takes a very different view, completely abstracting the concept of a computer from its physical realization. Maybe that’s the viewpoint you missed in my response, but it was not what I was aiming at.

Yes, I missed that. Now that you write about transistors I’d be very interested in a follow-up, but there’s no space here. I’m not sure I agree with your traditional picture, but maybe I misunderstood you about this too… Transistors hiding QM is funny; I studied them at university as an electronic engineer and had only one exam on QM. I’m working in big IT… “Regretfully,” I didn’t choose math or physics ;-)

All (classical) computers are the same if you abstract away memory size and speed. Quantum computers are only different in that they may be able to break the extended Church-Turing thesis.

Look into boson sampling for a real physical system that a classical computer can’t handle in a reasonable amount of time. If the quantum field theory that best describes nature involves some form of boson sampling in order to make predictions, then classical computers will be useless. Worse, it may even be impossible to derive the theory if you don’t have quantum computers to guide you.
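For context: the classical hardness of boson sampling is tied to computing matrix permanents, which (unlike determinants) have no known polynomial-time algorithm. Even the best exact methods, such as Ryser's inclusion-exclusion formula sketched below, cost roughly O(2^n · n²). This is a standard algorithm, offered purely as an illustration of where the exponential wall sits:

```python
# Ryser's formula for the permanent of an n x n matrix:
#   perm(M) = sum over nonempty column subsets S of
#             (-1)^(n - |S|) * prod over rows i of (sum of M[i][j] for j in S)
# Exponentially many subsets -> exponential classical cost, which is the
# root of boson sampling's hardness for classical computers.
from itertools import combinations

def permanent(M):
    """Exact permanent via Ryser's inclusion-exclusion formula."""
    n = len(M)
    total = 0.0
    for r in range(1, n + 1):                  # subset sizes 1..n
        for cols in combinations(range(n), r):  # all column subsets of size r
            prod = 1.0
            for row in M:                       # product over rows of row-sums
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - r) * prod
    return total

# Permanent of the all-ones 3x3 matrix is 3! = 6
print(permanent([[1, 1, 1]] * 3))
```

Determinants, by contrast, fall to Gaussian elimination in O(n³) steps; the sign cancellations that make that trick work are exactly what the permanent lacks.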

@ppnl The inequality of P, BPP and BQP has not been proved yet.

I disagree about the continuity of nature. Nature has a tendency to manifest in quantized forms; this, in my view, is the overarching principle of nature. This is why we seek quantum gravity: we are guided by this fundamental principle. The assumption that nature is continuous leads to all sorts of divergences in QFT. If you input this assumption when formulating a quantum theory, you are going against the very essence of QFT and are bound to obtain divergent solutions.

Sorry for the off-topic, just a quick question: quantum gravity is stuck now, isn’t it?

Quantum gravity is not stuck!? Maybe you say it is stuck because of the Fermi-LAT findings. These findings just show that assuming the lattice spacing in lattice gauge theories tends to zero is wrong, because we then reintroduce continuity, which is against the spirit of quantization, and we are bound to get wrong predictions.

Oops… Ehm… Sorry for not quoting him; I’m one of his thousands of followers on Twitter. I took it for granted, posting on his blog, but the comment is from http://www.scientificamerican.com/article.cfm?id=black-hole-firewall-paradox

Hmm… Do you notice in the article that the specialists are confused as to what form quantum gravity should take? But nevertheless they still seek it.

If I throw a ball in the air, it does not have to calculate a parabola.

Kristoffer: Nature does not calculate these things; Nature was created thus.

And the proof is our inability to calculate what is already there.

There would be a certain beautiful topological symmetry if we had this sequence of forces: non-directional (like Higgs), unidirectional (like gravity), bidirectional (like electric force) and tridirectional (like strong force). These would be the simplest figures you can draw from 1, 2, 3 or 4 dots: [.] [..] […] [.:.], with one dot as the vector vertex and others as vector ends. Alas, the electric force is only an apparent force and originates from electroweak symmetry breaking, and the strong force picture is complicated by antiquarks. To add insult to injury, we have no technology to properly probe the gravitational force, and even the unification of electroweak and strong forces is only a dream. Our world is really a hard problem.

Matt,

Thanks for these blogs. They give us non-experts some insight into what’s going on in these complex fields. My question is (sorry if it’s simplistic): when you talk of the quark masses going down to zero, are you considering this in relation to some modification of the Higgs mechanism (which I understand is responsible for imparting the masses in the first place), or its removal? I.e., what’s the mechanism by which the quarks are no longer massive?

Pingback: Background independence in a nutshell: the dynamics of a tetrahedron by Rovelli et al | quantumtetrahedron

Pingback: If it’s fission you lack « How my heart speaks

Pingback: Quantum Field Theory, String Theory, and Predictions (Part 7) | Of Particular Significance

Prof Strassler,

Having recently discovered your blog, I have been slowly reading through your many topics. Pedagogically I think they’re wonderful, explaining things “more correctly” than even some of Feynman’s popularized explanations.

By profession & training I am an engineer, but have taken introductory non-relativistic QM courses in the 1970s … but never any advanced QM or QFT stuff.

My questions today are: “WHAT is a quantum field?” I.e., at each point in space, what is it describing?

And does it have some physical significance, or is it an abstract non-physical thing, analogous to the magnetic vector potential?

For classical vector fields (eg, an electric or wind velocity field), or scalar fields (eg, temperature), it’s pretty clear what’s being described as a function of position & time. But it’s not clear to me what a quantum field’s physical interpretation is.

Forgive me if this has already been asked and answered; with the 100s of comments I may have missed it.

Very lucid explanations, can’t wait for more.