*[This is the third post in a series; here’s #1 and #2.]*

The quantum field theory that we use to describe the known particles and forces is called the “Standard Model”, whose structure is shown schematically in Figure 1. It involves an interweaving of three quantum field theories — one for the electromagnetic force, one for the weak nuclear force, and one for the strong nuclear force — into a single more complex quantum field theory.

We particle physicists are extremely fortunate that this particular quantum field theory is just one step more complicated than the very simplest quantum field theories. If this hadn’t been the case, we might still be trying to figure out how it works, and we wouldn’t be able to make detailed and precise predictions for how the known elementary particles will behave in our experiments, such as those at the Large Hadron Collider [LHC].

In order to make predictions for processes that we can measure at the LHC, using the equations of the Standard Model, we employ a method of *successive approximation* (with the jargon name “method of perturbations”, or “perturbation theory”). It’s a very common method in math and science, in which

- we make an initial rough estimate,
- and then correct the estimate,
- and then correct the correction,
- etc.,

until we have a prediction that is precise enough for our needs.
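As a concrete (if hypothetical) illustration of this estimate-then-correct loop, here is a toy problem, not Standard Model physics: solving the equation x = 1 + g·x² by starting from the rough estimate x = 1 and repeatedly correcting it. The equation and the function names are mine, chosen only to show the pattern.

```python
import math

def exact_answer(g):
    # the root of g*x^2 - x + 1 = 0 that approaches 1 as g -> 0
    return (1 - math.sqrt(1 - 4 * g)) / (2 * g)

def successive_approximations(g, steps=4):
    # rough estimate first, then correct, then correct the correction, ...
    x = 1.0                      # initial rough estimate
    estimates = [x]
    for _ in range(steps):
        x = 1 + g * x**2         # feed the previous estimate back in
        estimates.append(x)
    return estimates

g = 0.05                          # a small ("weak") coupling
est = successive_approximations(g)
# the error shrinks at every step, so we stop as soon as we are
# precise enough for our needs
```

With g = 0.05 the fourth estimate already agrees with the exact answer to about five decimal places; each extra correction buys more precision, which is exactly how one decides when to stop.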

What are those needs? Well, the precision of any measurement, in any context, is always limited by having

- a finite amount of data (so small statistical flukes are common)
- imperfect equipment (so small mistakes are inevitable).

What we need, for each measurement, is a prediction a little more precise than the measurement will be, but not much more so. In the difficult environment of the LHC, where measurements are really hard, we often need only the first correction to the original estimate; sometimes we need the second (see Figure 2).

Until recently the calculations were done by starting with Feynman’s famous diagrams, but the diagrams are not as efficient as one would like, and new techniques have made them mostly obsolete for really hard calculations.

The method of successive approximation works *as long as all the forces involved are rather “weak”*, in a technical sense. Now this notion of “weak” is complicated enough (and important enough) that I wrote a whole article on it, which those who really want to understand this should read. The brief summary suitable for today is this: suppose you took two particles that are attracted to each other by a force, and allowed them to glom together, like an electron and a proton, to form an atom-like object. Then *if the relative velocity of the two particles is small compared to the speed of light, the force is weak*. The stronger the force, the faster the particles will move around inside their “atom”. (For more details, see this article.)

For a weak force, the method of successive approximation is very useful, because the correction to the initial estimate is small, and the correction to the correction is smaller, and the correction to the correction to the correction is even smaller. So for a weak force, the first or second correction is usually enough; one doesn’t have to calculate forever in order to get a sufficiently precise prediction. The “stronger” the force, in this technical sense, the harder you have to work to get a precise prediction, because the corrections to your estimate are larger.

If a force is truly strong, though, everything breaks down. In that case, the correction to the estimate is as big as the estimate, and the next correction is again just as big, so no method of successive approximation will get you close to the answer. In short, *for truly strong forces, you need a completely different approach* if you are to make predictions.
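To make the contrast concrete, here is a toy expansion (my illustration, not a Standard Model computation): the series √(1+g) = 1 + g/2 − g²/8 + g³/16 − …. When g is small, each correction is much smaller than the last; when g is large, the corrections never settle down and truncating the series is hopeless.

```python
def correction_sizes(g):
    # magnitudes of the first three corrections in
    # sqrt(1+g) = 1 + g/2 - g**2/8 + g**3/16 - ...
    return [abs(g / 2), abs(g**2 / 8), abs(g**3 / 16)]

weak = correction_sizes(0.1)    # a "weak" coupling: corrections shrink
strong = correction_sizes(3.0)  # a "strong" coupling: corrections do not
```

For g = 0.1 the corrections are 0.05, 0.00125, 0.0000625: each one tiny compared to the last. For g = 3 they are 1.5, 1.125, 1.6875: as big as the estimate itself, so no truncation of the series gets you close to the answer.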

In the Standard Model, the electromagnetic force and the weak nuclear force are “weak” in all contexts. However, the strong nuclear force is (technically) “strong” for any processes that involve distances comparable to or larger than a proton’s size (about 100,000 times smaller than an atom) or energies comparable to or smaller than a proton’s mass-energy (about 1 GeV). For such processes, successive approximation does not work at all; it can’t be used to calculate a proton’s size or mass or internal structure. In fact *the first step in that method would estimate that quarks and anti-quarks and gluons are free to roam independently and the proton should not exist at all*… which is so obviously completely wrong that no method of correcting it will ever give the right answer. I’ll get back to how we show protons are predicted by these equations, using big computers, in a later post.

But there’s a remarkable fact about the strong nuclear force. As I said, at distances the size of a proton or larger, the strong nuclear force is so strong that successive approximation doesn’t work. Yet, at distances shorter than this, **the force actually becomes “weak”**, *in the technical sense*, and successive approximation *does* work there.

Let me make sure this is absolutely clear, because “weak” in the colloquial sense is different from “weak” in the technical sense I’m using here. Suppose you put two quarks very close together, at a distance r smaller than the radius R of a proton. In Figure 3 I’ve plotted how big the strong nuclear force (purple) and the electromagnetic force (blue) would be between two quarks, as a function of the distance between them. Notice both forces are very strong (colloquially) at short distances (r << R), but (I assert) both forces are weak (technically) there. The electromagnetic force is much the weaker of the two, which is why its curve is lower in the plot.

Now if you move the two quarks apart a bit (increasing r, but still with r << R), both forces become smaller; in fact both decrease almost like 1/r², which would be your first, naive estimate, the same as in your high school science class. If this naive estimate were correct, both forces would maintain the same strength (technically) at all distances r.

But this isn’t quite right. Since the 1950s, it has been well known that the correction to this estimate (computed using successive approximation methods) makes the electromagnetic force decrease *just a little faster* than that; it becomes a tiny bit weaker (technically) at longer distances. In the 1960s, that’s what most people thought any force described by quantum field theory would do. But they were wrong. In 1973, David Politzer, and David Gross and Frank Wilczek, showed that **for the quantum field theory of quarks and gluons, the correction to the naive estimate goes the other direction**; it makes the force decrease just a little more **slowly** than 1/r². *[Gerard ‘t Hooft had also calculated this, but apparently without fully recognizing its importance.]*

It is the small, accumulating excess above the naive estimate — the gradual deviation of the purple curve from its naive 1/r² form — that leads us to say that this force becomes technically “stronger” and “stronger” at larger distances. Once the distance r becomes comparable to a proton’s size R, the force becomes so “strong” that successive approximation methods fail. As shown in the figure, we have some evidence that the force becomes constant for r >> R, independent of distance. It is this effect that, as we’ll see next time, is responsible for the existence of protons and neutrons, and therefore of all ordinary matter.
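The shape of the purple curve can be mimicked with a toy formula (a sketch consistent with the logarithmic running described here, not a full QCD calculation; the constant and the function names are mine): for r much smaller than the proton radius R, take F(r) ∝ 1/(r² log(R/r)). Then the “technical” strength F·r² is constant for the naive 1/r² law, but creeps upward with r in the toy model, eventually blowing up as r approaches R, which is where successive approximation gives out.

```python
import math

R = 1.0  # stand-in for the proton's radius, in arbitrary units

def naive_force(r):
    # the naive first estimate: a pure 1/r^2 law
    return 1.0 / r**2

def toy_strong_force(r):
    # toy model valid only for r << R: the logarithm makes the force
    # fall slightly more slowly than 1/r^2 as r grows
    return 1.0 / (r**2 * math.log(R / r))

def technical_strength(force, r):
    # "technical" strength = F * r^2 (constant for the naive law)
    return force(r) * r**2

# the technical strength of the toy strong force grows slowly with r:
strengths = [technical_strength(toy_strong_force, r)
             for r in (1e-6, 1e-3, 0.1, 0.5)]
```

At r = 10⁻⁶ R the technical strength is about 0.07; at r = 0.5 R it has grown to about 1.4. The growth is only logarithmic, which is why the force stays technically “weak” over such an enormous range of short distances.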

So: *at very short distances and high energies, the strong nuclear force is a somewhat “weak” force*: still stronger than the electromagnetic and weak nuclear forces, but similar to them. And therefore, successive approximation **can** tell you what happens when a quark inside one speeding proton hits a quark in a proton speeding the other direction, as long as the quarks collide with energy far more than 1 GeV. *If this weren’t true, we could make scarcely any predictions at the LHC, or at similar proton-proton and proton-antiproton colliders!* (We do also need to know about the proton’s structure, but we don’t calculate that: we simply measure it in other experiments.) In particular, we would never have been able to calculate how often we should be making top quarks, as in Figure 2. And we would not have been able to calculate what the Standard Model, or any other quantum field theory, predicts for the rate at which Higgs particles are produced, so we’d never have been sure that the LHC would either find or exclude the Standard Model Higgs particle. Fortunately, it *is* true, and that is why precise predictions can be made, for so many processes, at the LHC. And the success of those and other predictions underlies our confidence that the Standard Model correctly describes most of what we know about particle physics.

But still, the equations of the strong nuclear force have only quarks and anti-quarks and gluons in them — no protons, neutrons, or other hadrons. Our understanding of the real world would certainly be incomplete if we didn’t know why there are protons. Well, it turns out that *if we want to know whether protons and neutrons and other hadrons are actually predicted by the strong nuclear force’s equations, we have to test this notion using big computers*. And that’s tricky, even trickier than you might guess.

*Continued here*


## 103 Responses


Interesting Read


“And in fact we expect QED to break down, due to quantum gravity …” Are there any physicists who DO NOT expect QED to break down due to quantum gravity? Consider this idea: In terms of measurement, QED breaks down due to quantum gravity if and only if Heisenberg’s uncertainty principle needs to be replaced by an uncertainty principle involving both h-bar and alpha-prime. Is there any empirical proof that the preceding idea is wrong?

I appreciate the series of articles that you have started. I was just wondering about a few things discussed here. Refering to the statement: ‘Then they go to the point on the horizontal axis where the quark masses equal their real-world values and the pion mass comes out agreeing with experiment, and they draw a vertical black line upward….’

Now from what I know, we can’t measure the exact masses of quarks due to their confinement properties (which restricts them from being observed as free particles). So from what I guess shouldn’t it be more of a calculation to check the consistency? Something like this: ‘Then they go to the point on the vertical axis where the hadron mass (say Omega’s mass) equal their real-world values. The values of quark masses are noted at this point, and they draw a vertical black line upward at this point and check if the predicted value of other hadrons (say nucleons) is same as the real world values’


I didn’t get a lot out of this post myself, and I found myself unhappy about what looks like some mixing-up of forces and fields. For example, the electron has its electromagnetic field Fuv, as does the positron, and their field interactions result in linear and/or rotational force denoted by E and B. I guess I share Vladimir’s sentiment that you start with QED and move on to quarks and QCD later.

Dear Professor, Question: In Fig.1, why is the indication of the particle weights important?

If the grand theory that specifies the 19 or so parameters of the SM itself has more than 100 parameters, then we will need a super grand theory with 10,000 parameters to specify those 100. Can physics proceed along such a virtual road to reality? Or is it crunch time for physics, as New Scientist magazine said!?

More simple explanation for a layman, thank you Prof,

/the equations of the strong nuclear force have only quarks and anti-quarks and gluons in them — no protons, neutrons, or other hadrons./

Protons and neutrons were identified inside nuclei by atomic spectra and their spin ħ; then the electrons, which do not satisfy 1ħ — so electrons spin around the nucleus (a 3D degree of freedom) as a cloud, obeying conservation of angular momentum and the Pauli exclusion principle.

Gluons are the virtual particles of QCD, turning colored particles into colorless ones (hadronization). Color confinement is verified by the failure to find free quarks. What makes the gluons confine in hadrons?

Because the gluons’ relative mass constitutes most of matter’s mass, and the quantities that are compared to experiment are the masses of the hadrons, is there again a missing link?

I am skeptical about the mathematics (arithmetic) of lattice QCD. Is the way out string theory’s “flux tubes”, and compactification along with them?

Our space-time geometry has a fundamental tendency to expand. Tendency to expand compresses itself (strong force). Additionally, there appears energy in the confinement volumes (equals to their rest mass) against the internal stresses. The electromagnetic field is the flow of strains like extending wrinkles in our expanding geometry.

Which exert constant force when stretched. Due to this force, quarks and gluons are confined within composite particles called hadrons ?

The gluons are another form of vacuum energy, not mass-energy ?

Excellent choice for Fig 1, the Standard Model graphic. While it isn’t as detailed as some (lacking explicit numerical values of electromagnetic charge and isospin), its emphasis on the boson interactions between types of particles is great.

Hi Prof Matt,

Thank you for the latest threads. I have several comments/questions, Please bear with me if they are longish and boring.

I can assure everybody here that without a minimum (which does not mean small) amount of proper textbook understanding of these subjects, I doubt very much that an iota of understanding can be achieved. I know that from my own experience as a “bright” young EE (with a Masters) struggling to understand from “popular” writings.

After many ad-hoc years of learning from textbooks on my own, I feel I am now in a superposition ranging from undergraduate to postdoc, with appropriate probabilities of understanding.

Second point: QFT from its inception was already plagued with many problems, some of which already came from standard QM. The Lagrangian was always guessed; really, what is left for the theory to do? Also, a cutoff is introduced, and there go your mass and coupling predictions, great! I can go on and on. But you will tell me to look at all the good things; I say yes, but it cannot be the base for the truly fundamental theory. And, as GUTs and strings have their base in QFT, I say they are doomed except for some miracle. I am unconvinced regarding QCD, and I can hardly swallow the electroweak theory.

Let me add: QED is better, but only more so than Gauss’s law. But now we have virtual particles: what on earth are these? OK, just some math; representing what? So much for a fundamental theory.

Even more bizarre: E=mc² is a property of particles, but it is derived from SR and not QFT; how much stranger can it get?

Do you have any good answers?

I mean: does the strong force (and the other forces) depend on a specification-defining mechanism dictating its properties, i.e., strength, range, coupling, etc., or are its specifications brute facts? What can physics tell us?

Hello Professor Strassler. In this post:

http://profmattstrassler.com/articles-and-posts/particle-physics-basics/mass-energy-matter-etc/matter-and-energy-a-false-dichotomy/

you say that energy is something that objects have, that energy is not an object itself. But I have heard string theorists like Brian Greene say on science tv shows that if string theory is true then everything is made of strings. And what are these strings? Brian Greene says they are strings of energy. He says these strings are made of energy. But if strings can be made of energy, wouldn’t that mean energy is stuff? And wouldn’t that mean matter is energy at the fundamental level?

A truly excellent article; accurate, detailed and understandable. Yet, every story can be told from different angles and with different *languages*. This SM story consists of the following key points.

a. There are two types of particles, quarks and leptons.

b. Quarks have a special attribute, quark colors while leptons are colorless.

c. Both quarks and leptons have three *generations*.

d. The relationships among those particles are mediated with three different forces.

This 4-point-story can in fact be told with a very simple *language (not model)*.

i. In order to describe the quark colors, we can use the music-chair language which consists of two types of strings. Line-string is as a chair with three seats which can be identified with three *hidden colors (red, yellow, blue)* while those colors are identified with its locations (center = yellow, red = left to center, blue = right of center). Yet, when this Line-string joins to become a Ring-string, its hidden colors disappear, becoming colorless. That is, this Line/Ring-string language is a 4-color-system (red, yellow, blue and white).

ii. We can use *genecolors* as the language *codes* to represent the SM generations, and it also forms a 4-color-system (1, 2, 3 and white). Together with the quark colors, it forms a 7-color-language system (red, yellow, blue, 1, 2, 3 and white).

iii. This Line/Ring-strings (music-chairs) are all identical (chair-symmetry). When a *seater* (who sits on those chairs) is introduced into the above *language* system, the chair-symmetry breaks down into 48 types of sitting patterns.

Thus, a language (not model) system can be constructed with only two parts, line/ring-strings and one type of seater. And, this simple language can *describe* the SM story. There is no right or wrong about any *language* which can only be good (giving a good description about the story) or bad (not giving a good description).

With the above language, anyone is able to complete the *String-unification (reproducing the 48 SM particles)*. For your convenience, it is available at http://tienzengong.wordpress.com/2013/08/24/g-string-and-dark-energy/

Is the behavior of the strong force — the purple-curve configuration — contingent upon a causal mechanism, or is it just a brute fact (till now, at least)?

Ill defined question. What are your assumptions? Of course it is contingent! It’s certainly contingent on the existence of the universe… so what do you take for granted, and what do you consider contingent?

The coupling among photons and quarks is contingent; what would the shape of the universe be if there were no such coupling?

By the way, this is the kind of physics that the specialist would feel great awe facing.

Now we are into ill-defined speculation; what would the earth be like if rocks did not exist? I have no idea.

If any observation can be written in many possible ways mathematically, then physics is not physical! It is mathematical!? Yes????

Do I have your permission to proceed in asking, or should I stop?

Physics is about PREDICTION. A prediction cannot be rewritten in multiple ways. A book falls to the ground if you let go of it; it doesn’t matter whether you describe that in terms of gravitational fields, gravitons, curvature of spacetime, or something else.

The most complete and simple description of the world we have is in terms of fields. That is not, however, likely to be unique. However, the prediction of the electron’s response to the magnetic field will not change even if the description of how to obtain it changes.

Re. your 11.25 AM reply: you said “they just do not merge”, but why? As far as I understand, a photon field can “merge” with a quark field in all possible ways simultaneously; then, physically, why do they merge only in very specific spatio-temporal locations?

The fields all exist everywhere at all times. Sometimes and some places they are zero; sometimes and some places they are non-zero and constant; sometimes they are rippling (i.e. they have particles in them) — they do many different things, and how they affect each other depends on what they are doing in a particular place and time.

Re. your 11.28 AM reply to me — many thanks for your effort — can I say that accordingly we cannot leap to any particular ontology from any particular math representation?

I think that’s correct; there is no unique way to look at the world, because you can change the way you write the math down without changing any of the predictions.

Hi Matt. Will you mention (anytime in this series) that the most precise QFT devised by particle physicists (QED) only makes sense below a Landau pole, and so is mathematically somehow a “non-theory”? How in fact can string theory correct this aspect? Thanks

String Theory isn’t necessary to fix it. Grand unified field theories are examples of field theories that fix it. Lots of other field theories can fix it. Presumably any quantum gravity theory will fix it. It’s an exponentially small problem with lots of possible solutions, so we don’t worry much about it.

I would like to challenge you Matt. Can you sacrifice few hours and take up my challenge? You need to read one (or two) scientific paper(s) and say with arguments why wouldn’t it work. You can comment through any platform you want (this blog, my blog, email, snail mail, commercials, carrier pigeon etc).

There is also Juno Earth flyby coming… 🙂

Not right now; I have to finish a paper, which is taking longer than expected but is urgent.

Here in this post you are being specific about field theories that match reality, right? So, how is it solved by beyond-SM approaches while keeping QED’s successful description of electrically charged particles? Thanks

Well, these are field theories that *appear*, so far, to match reality. Remember the Higgs particle is not yet known for sure to be what the Standard Model predicts.

As I said, there are a ton of different ways to solve the problem and the problem is exponentially small. There’s absolutely no effect on QED’s success at the energies we can measure today, or next century.

And in fact we expect QED to break down, due to quantum gravity, long before this exponentially tiny issue would become important. In other words, no one thinks this problem in the theory called QED is actually a problem in the real world. Remember, QED *in isolation* is a theory of an imaginary world, not the real world. Sure, QED in isolation is very useful for many current experiments. But the problem you’re describing could only become important for experiments for which QED would not be expected to be sufficient anyway.

QED, as a model, is important because we construct other theories by analogy with QED. And QED problems (Landau pole, IR problems) are inherited in our next-generation theories.

Are quantum fields THE unique interpretation of observations, or do other possible ones exist?

There is never a *unique* interpretation of any observation, because any math you write down can be rewritten in other ways. Newton’s laws have been written down at least four different ways, in terms of forces, energy, action and Hamilton-Jacobi waves. But all of these different ways give *exactly* the same predictions, because all you’ve done in switching from one to the other is rewrite the math.

Up to now, no one has rewritten the math of fields and quantum field theory in a language which I find more intuitive than the one I use on this website. For obvious reasons, it isn’t helpful in explaining things to the public (or even to beginning students) to use multiple ways of speaking simultaneously.

Chemistry is a global sequence of field interactions. Now the point is: what is the mechanism that regulates the types, sequence, and spatio-temporal aspects of all the simultaneous interactions that could take over at once?

Since all quantum fields co-exist in the same one space, what prevents all fields from merging, rendering the universe a global, simultaneous, all-interactions arena of total chaos?

Just think of gravity and electromagnetism; their fields coexist just fine. There’s no need for something to actively “prevent” their merging; merging just isn’t something they do.

When we say there are 8 types of gluons, is that an observational fact or a mathematical requirement? And if the latter, is it a unique requirement or are other alternatives possible?

Partially observational fact, partially math requirement; and unique if you want your calculations to be simple.

What would change in QCD if Zc(3900) tetraquark particle is confirmed?

Nothing. No more than discovery of a new molecule would change atomic physics.

OK , let it be so.

Then why don’t the quarks merge? What keeps the boundary of each quark?

Quarks are ripples in fields, and do not have an intrinsic size; your question is not meaningful as stated.

I think you are trying to understand why protons and neutrons do not pile on top of each other?

Now it is clear! Thanks.

If I understand, in Fig 2, you are not interested in showing asymptotic freedom at really short distances, where the mutual force between quarks is supposed to go to zero. Is the distance of asymptotic freedom anywhere close to the distances probed at the LHC?

The force becomes (technically!) “weaker” and “weaker” EXTREMELY SLOWLY, so slowly that the actual pull between the quarks actually does NOT go to zero at short distances, but becomes larger and larger. Confusing, huh? Well, to a good approximation, the formula is

F = constant / (r^2 log[R/r])

Notice that the *technical* “strength” of the force is proportional to

F r^2 = constant / log[R/r]

and this goes to zero, very, very slowly.

But the *actual* strength of the force, measured by how hard you have to pull to separate the quarks, never goes to zero. In fact, it goes to infinity (more slowly than expected, however) as r → 0.

This is a very interesting point which I did not realize before. From what you are saying, it seems that all four forces have a 1/r² dependence, except that the strong and weak forces have an exponential cutoff at large distances, and the strong force has an additional log dependence at small distances to take care of asymptotic freedom. Do you think this is due to the fact that we are in a 3-dimensional world and 1/r² is required by geometry?

I don’t know about Matt but I do 😉 And there are perfectly good explanations for those limit conditions.

Yes, the fact that we live in three spatial dimensions is associated with the fact that forces have roughly 1/r^2 force laws. It’s just Gauss’s law from first-year physics.

The strong nuclear force does not have an exponential cut-off at long distances; it becomes constant, as in the figure. But because it is so strong, quarks are never found by themselves. The *residual* strong nuclear force between *hadrons*, such as protons and neutrons and pions, has an exponential fall-off.

Thanks. Yes, I knew about the implications of Gauss’s law for gravity and EM in a 3-dimensional world. But how do you write Gauss’s law for forces with log and exponential factors floating around? Has anyone carefully considered this?

@S.E.Z.: Allow me to add: what would we measure if an imaginary LHC were in a no-gravity location (an imaginary spaceship, for example)?

No difference.

But you said there would be in extreme cases. You can’t have it both ways 🙂

No difference or negligible difference? Any difference in a very strong gravity location?

negligible.

@Parlyne: A very delicate philosophical point; if we can calculate what we can never observe, how can this be called empirical science?

Remember the multi/meta/extra/hyper universe ?

We observe many things in our experiments, both directly and indirectly, which check our calculations, equations and viewpoints. What are you worried about here?

But in a collision there is huge energy outside the strong-force part of the process; how can we isolate the effect of the latter? Let us face reality: what I have understood till now is that we are looking through a kind of fog, not in clear light. Am I right?

A very good question. This is a point I decided not to cover. To prove that you can do this — isolate the quark-quark collision from the rest of the proton-proton collision — is not trivial, and even today a mathematical proof is lacking. There are limited cases where you can prove that this is legitimate in quantum field theory. In the end, the best evidence that it makes sense to do this is that it works! I.e., if you assume that this separation is possible, the answers you get agree with data again and again. So this is a case where we know it is true, from the success of hundreds and thousands of measurements, but it would certainly be best if a very smart person can think of a good way to prove it once and for all.

I am going to reserve my major comments to the second part of this, for me you are just getting to the interesting bit, the computations involved.

But I do want to make one observation and ask one question.

Observation: from a pure mathematics point of view, the Fig. 1 diagram has always seemed unsatisfactory in its apparently arbitrary complexity (lack of simplicity). From what you have said before, the Higgs field is like the other quantum fields, I assume (i.e., not continuous).

Question: and, perhaps “of course,” there is no mention of gravitational mass, although you mention the non-gravitational forces. I am ignoring for a moment the quantum attempts to include gravitation or notions such as gravitons and asking the naive question from the point of view of GR:

Does this model imply that gravitational mass is entirely independent, and that the mass/energy of these particles is invariant with respect to it?

For example, if the LHC were located in a region of lesser or greater gravitational mass (and assuming that it remained operational and functionally the same under all conditions), would the mass/energy values of the particles remain exactly the same? Is any gravitational-mass compensation made at all in the results from the LHC?

I assume the answer is negative on the basis that the effect is insignificant and the relocation of the LHC would be noticed only in the trajectories of the particles. Or, perhaps not?

1) In Einstein’s theory, gravitational mass is simply an effect of the fact that all energy gravitates. All particles have energy, so they gravitate — even photons. And all particles that have mass [by which I mean what some call “rest mass”], and therefore energy E=mc^2 even when standing still, gravitate accordingly. There’s no need to put gravitational mass in as an additional thing; the particles already have energy, and that’s all you need, once gravity is included, for them to gravitate.

2) the effect of being in the gravitational field of the earth or sun is certainly negligible; the magnetic fields and electric fields that keep the protons in their orbit inside the LHC beam-pipes are much, much, much stronger than any gravitational effects. There would perhaps be important but easy-to-calculate (and not very surprising) effects on LHC physics very near the edge of a black hole, where gravitational forces could distort the orbits of the particles in the ring.

So I think you’re trying to draw distinctions that really don’t exist, and look for effects that really aren’t there. It’s all very simple; gravity pulls on things and if it is strong enough it can change their trajectories. Nothing else dramatic happens unless you’re close to a gravitational singularity or trying to avoid falling into a black hole or something else like that which is dramatic on its own, with or without particle physics.

That is one interpretation of GR but it is not the only one – for example, a holistic interpretation of the field equations states that wherever there is a gravitational effect there is gravitational mass, i.e., all mass/energy calculations are inertial mass + gravitational mass, even at small scales.

I interpret what you say to suggest that, if this were a valid interpretation, the gravitational mass would indeed vary were the LHC relocated, but that the effect is too small at these scales to be of concern in normal particle-oriented calculations; its variance would be manifest in the extreme cases, and then, according to this view, the masses of these particles do indeed vary, as they must in matter accumulations at larger scales.

So, in fact, I suggest that there is a distinction too many in your account. Unless you deny the equivalence principle.

As you say, some of this is semantics: what do you mean by “mass”? Particle physicists take a point of view different from most gravitational theorists. But this is just shuffling facts around from one definition to another. If you ask a precise physical question about how objects behave, rather than about how to think about why they behave that way, I can probably answer that without any confusion over terminology.

I mean mass in the conventional sense of E=mc^2. In which m is defined as inertial + local gravitational mass (per the equivalence principle).

The following is a naive thought experiment based on this view.

The example is the well-known double slit experiment, which appears to show dualistic behavior even when you independently fire individual particles at a screen and yet still get the interference pattern. A pattern that disappears if you turn on detectors either at a slit or after the particle has passed the slit (in order to determine which slit the particle passed through).

If the above interpretation of the field equations were correct then the gravitational mass provides a medium that would be disrupted by the high-energy particle, providing a wave (that persists awhile) in the post-slit gravitational mass domain. A second particle now passes through the other slit and will produce an interference pattern in the post-slit gravitational mass. Let us also suggest that if you turn on a detector, one of the kind noted above, it too will have an effect upon the gravitational mass that essentially removes the wave effect, because of the energy required to perform the detection.

Now I imagine by your account that even if this were the case, you will say, the effect of the wave in the gravitational mass post slit would not be sufficient to alter the trajectory of the subsequent high-energy particles (in order to produce the interference pattern). Yet the model suggests otherwise.

This would then suggest that gravitational mass plays a significant role at the particle level that is trajectory altering.

There are other significant cosmological consequences of this view and its consideration arises in my work only as an illustration of the necessary conclusions of applying an exact method that forces a consistency between Einstein’s advocated epistemology and his equations.

I think we understand that if the above model is correct then the results from the LHC would need to be re-evaluated.

So, without extensive justifications, let me summarize other results related to the discussion above that fall out of this interpretation of Einstein’s equations – and may explain why the seemingly innocent gravitational mass possibly has the effect suggested by the double slit experiment.

The most counter intuitive result in this interpretation seems so only because of modern pedagogy and it simply explains why light necessarily appears to be constant: light does not, in fact, move – it is a static (continuous) field (just as GR thinks of gravitation). It is only free inertial mass that “moves” (and, of course, has a covariant effect upon the light field, its source). You get gravitational mass from the distortion and free inertial mass arises in the extreme case of light field distortion in GR (that I am sure you can interpret appropriately).

What you do to get this result is isolate space-time to its epistemic cause, as Einstein intended, and replace it in GR with the above mentioned light field. The gravitational effect is then a distortion of the light field giving you the medium of gravitational mass discussed above.

The light field is necessarily finite in this interpretation in order to consider ideal initial conditions.

The result is that mass/energy (as particles) and all associated forces are shown to be the product of this light field distortion. The covariant effect of matter and motion (particles etc.) is the movement of free inertial mass passing through the light field (a medium of gravitational mass).

Particle physics and electrodynamics then become second-order considerations.

Now, from my point of view this result is simply a purely mathematical interpretation of the current models by applying greater epistemic rigor to the known results. I like the model because of its simplicity.

And I will be quite happy for you to say, “Nah, that’s crazy because …”

Either way, an article that discusses the distinctions between GR and gravitational mass and its implications to QFT seems necessary to dig us out of the imaginary world.

BTW: in this interpretation it is a straightforward matter to combine Einstein’s two greatest equations.

Of course, when I say “inertial mass” in the above, I am referring to the intrinsic inertial mass with zero gravitational mass. I believe this is what you are speaking of when you say “particle physicists take a point of view different from most gravitational theorists.” Particle theory takes no direct account of gravitational mass in its mass calculations.

“Particle theory takes no direct account of gravitational mass in its mass calculations”

It does; in scattering experiments one constructs a centre-of-mass frame which is inertial, and relies upon that. Inertial frames are what are “special” in SR and one can effectively fully recover SR in infinitesimal patches of the smooth manifold (more precisely where the region of spacetime covered by the coordinate system goes to zero in size).
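To illustrate what “constructing a centre-of-momentum frame” amounts to in practice: the invariant mass of a set of particles is the total energy available in that frame, and it comes out the same in any inertial frame. A minimal sketch (illustrative Python; the beam numbers are loosely modeled on 7 TeV LHC beams, and the function name `invariant_mass` is my own):

```python
import math

# Four-momenta written as (E, px, py, pz) in GeV, natural units with c = 1.
# Two head-on protons, transverse momenta neglected (illustrative only).
m_p = 0.938  # proton rest mass, GeV
E_beam = 7000.0
pz_beam = math.sqrt(E_beam**2 - m_p**2)

p1 = (E_beam, 0.0, 0.0, +pz_beam)
p2 = (E_beam, 0.0, 0.0, -pz_beam)

def invariant_mass(*momenta):
    """sqrt(E^2 - |p|^2) for the summed four-momentum: the total
    energy in the centre-of-momentum frame, frame-independent."""
    E = sum(p[0] for p in momenta)
    px = sum(p[1] for p in momenta)
    py = sum(p[2] for p in momenta)
    pz = sum(p[3] for p in momenta)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

print(invariant_mass(p1, p2))  # 14000 GeV: the collision energy
print(invariant_mass(p1))      # 0.938 GeV: a single proton's rest mass
```

Note gravity appears nowhere in this bookkeeping; for a single particle the invariant mass is just its rest mass, which is the sense in which the frame is “special”.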

Strictly speaking the mathematics of GR forbids this; you can only unambiguously determine the velocities of two clocks when they are at exactly the same location in the coordinate system. However, where curvature is very small, the inaccuracies will also be very small.

General coordinate transformation is all one needs to map between an almost arbitrary laboratory frame of reference (certainly any that would support human life) and the centre-of-momentum frame of an experimental event.

Casually speaking, experimenters and their equipment (including that generating, accelerating, and aiming the particles) will fail due to distortions like spaghettification in much gentler spacetime curvature than the microscopic particles they are testing will.

Read again what you quoted Brody. I said “in mass calculations.”

Given the meaning of “Brody’s” name I offer an immediate troll warning.

Fine, then allow me to ask: do we know exactly what really happens at the boundary of the proton w.r.t. the strong force in the case of many adjacent protons and neutrons?

I don’t think this is a well-posed question. Are you asking why nuclei form at all? That we know. Are you asking whether we have an understanding of how nuclei behave with a precision of 1%? The answer is no.

@Parlyne: there is a contradiction here; you say the purple curve is between two quarks, so how do you explain its extension beyond the proton radius, where no free quark can exist?

It’s perfectly possible to calculate what you would see in cases that couldn’t actually be physically realized in any straightforward way. (And, in fact, such situations can exist for very brief periods of time — for instance just after a collision.)

Actually, I will answer this question more carefully, probably in my next post.

You summed it all up when you wrote: “So: at very short distances and high energies, the strong nuclear force is a somewhat “weak” force, stronger still than the electromagnetic and weak nuclear forces, but similar to them.”

Amazing how clear you make these concepts. Thanks so much.

These are a really great set of articles thus far. Just wanted to second the request above that you discuss renormalisation in one of your posts.

Matt, It would be nice if you could devote a post to renormalization in QFT.

Someday…

I think it is an important issue since bare particles interact (do they?) differently than dressed particles. It is not clear to what case your curves belong 😉

Heavy stable dressed particles, if you want to be precise.

If a particle (charge) is dressed, it is smeared quantum mechanically so its elastic potential is much softer than a Coulombian or other kind of singularity at r=0. See, for example, the second atomic form-factor here: http://arxiv.org/abs/0806.2635
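As a generic illustration of that softening (my own sketch, not the specific form factor of the linked paper): a Gaussian-smeared charge gives a potential that stays finite at r = 0, unlike the point-charge Coulomb singularity. In Gaussian-style units with a unit charge:

```python
import math

# Potential of a unit charge smeared into a Gaussian of width sigma:
#   V(r) = erf(r / (sqrt(2) * sigma)) / r
# This stays finite as r -> 0 (it tends to sqrt(2/pi)/sigma),
# whereas the point-charge Coulomb potential 1/r blows up.
def smeared_potential(r, sigma=1.0):
    return math.erf(r / (math.sqrt(2.0) * sigma)) / r

def coulomb(r):
    return 1.0 / r

for r in (0.01, 0.5, 3.0):
    print(r, smeared_potential(r), coulomb(r))
# At small r the smeared potential levels off near 0.80 (= sqrt(2/pi)),
# while Coulomb diverges; at large r the two agree.
```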

Somebody obtained exact dressed particle solutions? Never heard of it.

You once mentioned that if protons and/or neutrons come too close to each other then the strong force is repulsive; otherwise they will merge. How may this be shown on the purple curve? I would like to see its equation.

Not in any simple way. The purple curve is the force between two quarks. To understand the forces between protons and neutrons at this fundamental a level requires an understanding of the structure of protons and neutrons which, in turn, depends on the forces between the quarks in a non-trivial way. (To put it in an extraordinarily imprecise way, since the whole picture of the interaction in terms of forces between quarks is tied up in what doesn’t really work right at the scales necessary to talk about protons and neutrons.)

This is an even more complicated effect, not visible on the curve or through any simple equation.

Matt, it is nice to see that LHC data are in agreement with the SM. However, in article #1 you used the word “inconsistency” for the SM. Perhaps you will explain that in a future article. But let me ask a question here anyway. To understand renormalizability of the SM, I looked into the Standard Model Primer (Burgess and Moore). They say that apart from the phenomena of neutrino oscillations, which may require new physics, the SM is renormalizable. Do you agree with this statement? Then, is the inconsistency coming from lack of understanding of why the Higgs mass and CC are small? Previously I had understood that ’t Hooft proved renormalizability of only the weak-EM interaction, but the strong interactions were still up in the air. If this is too technical for this blog, I will appreciate a reference for it. Thanks.

UGH! that was a *typo*! It should have said “The Standard Model withOUT any Higgs…” Sorry for confusing you.

The Standard Model (without neutrino masses) is renormalizable. There is no inconsistency once the Higgs particle is put into the theory.

Is the landscape of validity of successive iterations, and the landscape of invalid ones, dictated by some sort of principles, or is it totally random?

In any particular renormalizable quantum field theoretic model the successive corrections are specified exactly. The only questions are whether we can actually do the computations, whether we know any external parameters involved well enough for the correction to be useful, and whether the successive approximations actually decrease in size.

Sorry, I don’t understand the question. It’s certainly not a random landscape; the mathematics is rather well-understood. As I said, if the forces are weak, successive approximation is a useful tool; if the forces are “strong” then it can’t work. Sometimes the method can fail for specific reasons, but the reasons are pretty well-understood.
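As a toy illustration of why successive approximation works for weak forces and fails for strong ones (my own sketch, with `g` standing in for a coupling strength): summing the series 1 + g + g² + … approximates 1/(1−g), but only when g is small do the successive corrections shrink.

```python
# Successive approximation to 1/(1 - g) by summing 1 + g + g^2 + ...
# For a "weak" coupling (g small) each correction is smaller than the last
# and the estimate settles down; for g >= 1 the corrections grow and the
# method never converges.
def successive(g, n_terms):
    estimate, correction = 0.0, 1.0
    for _ in range(n_terms):
        estimate += correction
        correction *= g  # each new correction is g times the previous one
    return estimate

print(successive(0.1, 5))  # close to the exact answer 1/(1 - 0.1) = 1.111...
print(successive(1.5, 5))  # the terms grow; adding more only makes it worse
```

The real perturbative series in quantum field theory is vastly more complicated, but the basic dichotomy — shrinking corrections for weak coupling, growing ones for strong coupling — is the same.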

You never explained the physical meaning of the nuclear strong force being undefined when r >> R. I mean, do we really know how the purple curve is extended for r very much greater than R?

We will get to this.